
How To Adopt A Collaborative Problem-Solving Approach Through 'Yes, And' Thinking

Forbes Coaches Council


After more than 24 years of coaching, I've noticed that teams and organizations still rely on traditional problem-solving techniques that have become obsolete or ineffective. For example, individuals still attempt to dissect problems on their own in the hope of arriving at a solution by themselves.

I also notice a pattern of clients operating in silos. They tend to equate solving problems by themselves with independence and initiative. This works only to a degree: as problems become more complex, solo solving becomes ineffective. Instead, teams should tap into the increasingly diverse and multidisciplinary pool that makes up today's workforce. This is useful not only for performance and productivity but also for problem solving.

I have found Alexander Hancock's collaborative problem-solving approach to be an effective way of achieving clients' objectives. Collaborative problem solving occurs as you work with other people to exchange information, ideas or perspectives. The essence of this type of collaboration is “yes, and” thinking – building on and valuing each other's ideas.

Any individual, team or company can take advantage of this approach. I have found it most effective for companies facing problems that involve team members from different departments, backgrounds and personalities. It is also an approach closely associated with the coaching profession.

In any situation, when someone comes to you as a leader with a problem to discuss, your role is to help them look for the causes and discover solutions. Your role is not to resolve the problem alone but to guide them through the collaborative problem-solving approach.

Attitudes For Collaborative Problem Solving

Hancock provides the list below of attitudes that are best paired with the approach:

• Win-win abundance thinking: Collaboration allows you to work with others to develop solutions that benefit you both. The key is to believe that a synergistic solution is possible before you create it. It is not "you vs. me" — we can both succeed. Develop an "abundance mentality" — there is enough for everyone. “If you win, we all win.”

• Patience: Collaboration takes time. You need to recognize that you are both helping one another reach a resolution, and it may take more than one meeting to get there. You will often need to work together over time to arrive at a solution you both find satisfying.

• “Yes, and” thinking: Move away from polarized (either/or) thinking, and develop a “yes, and” way of thinking. This means supporting a suggested idea and building on it to make it better.

Benefits Of Collaborative Problem Solving

Collaborative problem solving opens communication and builds trust in the relationship as you and your co-collaborator discover that you are both working toward a shared outcome. This increases joint commitment to the relationship and to the organization. It also signals a commitment to helping others reach their goals and objectives, and to improving everyone's performance for the company or organization. Collaborative communication also encourages finding creative solutions, which increases the likelihood that others will take ownership of an issue and its solution.

Collaborative Problem-Solving Techniques

There are techniques that can help you engage in collaborative communication. Here are a few examples:

• Build on and connect ideas, rather than discarding one idea and looking for another one.

• Explore the strengths and drawbacks of each idea, then compare and weigh them against one another.

• Convert drawbacks into new possibilities, and look for ways to integrate those possibilities into an existing idea.

• When sharing your own opinion, make sure you offer it as a suggestion and not as a directive. The intention of collaborative problem solving is to provide a catalyst for exploration and consideration, instead of having the other person accept your advice or direction.

The collaborative problem-solving approach paves the way to open communication, trust, better planning and the smooth implementation of a plan or strategy.


Collaborative Problem Solving: What It Is and How to Do It


Problems arise. That's a well-known fact of life and business. When they do, it may seem more straightforward to take individual ownership of the problem and immediately run with trying to solve it. However, the most effective problem-solving solutions often come through collaborative problem solving.

What Is Collaborative Problem Solving?

As defined by Webster's Dictionary, to collaborate is to work jointly with others or together, especially in an intellectual endeavor. Collaborative problem solving (CPS), then, is essentially solving problems by working together as a team. While problems can be and are solved individually, CPS often brings about the best resolution to a problem while also developing a team atmosphere and encouraging creative thinking.

How to Solve Problems as a Team

Because collaborative problem solving involves multiple people and ideas, there are some techniques that can help you stay on track, engage efficiently, and communicate effectively during collaboration.

  • Set Expectations. From the very beginning, expectations for openness and respect must be established for CPS to be effective. Everyone participating should feel that their ideas will be heard and valued.
  • Provide Variety. Seek out a range of perspectives, including people outside the organization who are affected by the problem. This may mean involving various levels of leadership, from the ground floor to the top of the organization, or bringing someone from bookkeeping into a marketing problem-solving session. A perspective from someone not involved in the day-to-day of the problem can often provide valuable insight.
  • Communicate Clearly.  If the problem is not well-defined, the solution can't be. By clearly defining the problem, the framework for collaborative problem solving is narrowed and more effective.
  • Expand the Possibilities.  Think beyond what is offered. Take a discarded idea and expand upon it. Turn it upside down and inside out. What is good about it? What needs improvement? Sometimes the best ideas are discarded ones that have been reworked.
  • Encourage Creativity.  Out-of-the-box thinking is one of the great benefits of collaborative problem solving. Solutions may be proposed that could never work as stated, yet a small nugget from that creative thought can evolve into the perfect solution.
  • Provide Positive Feedback. There are many reasons participants may hold back in a collaborative problem-solving meeting. Fear of performance evaluation, lack of confidence, lack of clarity, and hierarchy concerns are just a few of the reasons people may not initially participate in a meeting. Positive public feedback early on in the meeting will eliminate some of these concerns and create more participation and more possible solutions.
  • Consider Solutions. Once several possible ideas have been identified, discuss the advantages and drawbacks of each one until a consensus is reached.
  • Assign Tasks.  A problem identified and a solution selected is not a problem solved. Once a solution is determined, assign tasks to work towards a resolution. A team that has been invested in the creation of the solution will be invested in its resolution. The best time to act is now.
  • Evaluate the Solution. Reconnect as a team once the solution is implemented and the problem is solved. What went well? What didn't? Why? Collaboration doesn't necessarily end when the problem is solved. The solution to the problem is often the next step towards a new collaboration.

Celebrating Success as a Team

The burden that is lifted when a problem is solved is enough victory for some. However, a team that plays together should celebrate together. It's not only collaboration that brings unity to a team. It's also the shared celebration of a unified victory—the moment you look around and realize that your success was collective.

We can help

Check out MindManager to learn more about how you can ignite teamwork and innovation by providing a clearer perspective on the big picture with a suite of sharing options and collaborative tools.



How to Encourage Collaborative Communication in the Workplace

ClickUp Contributor

January 24, 2024

Picture this. The marketing team launched a viral campaign based on a cat video without informing the sales team. The sales team promoted a limited-edition product that nobody in the marketing team knew existed. Meanwhile, the IT team created an overly secure website such that even the CEO couldn’t find the contact page. 

This is what a lack of collaborative communication looks like. It spreads chaos and increases frustration to the point that whatever efforts you make go unnoticed and unappreciated.

So, let’s look at how to improve collaborative communication in the workplace and explore tools that can improve teamwork, encourage positive conversations, and boost productivity!

What Is Collaborative Communication? 


Collaborative communication is a dynamic, open, and inclusive approach to communication that prioritizes mutual understanding and shared goals. 

It’s not just about sharing information but actively using knowledge, involving everyone in the conversation, valuing diverse perspectives, and working together toward solutions.

By fostering an open and inclusive environment where everyone feels valued and heard, you can unlock the full potential of your team and achieve great things together.

Importance of Collaborative Communication

Collaborative communication is the core of a thriving workplace. It fosters innovation, builds trust, and drives success. Here’s why it’s so crucial:

  • Positive company culture: Open communication leads to a positive work environment where everyone feels comfortable expressing themselves, sharing knowledge, and contributing their unique talents
  • Power of collective minds: Sharing diverse perspectives and knowledge leads to comprehensive solutions beyond what one person could ever create alone
  • Innovation: A collaborative environment creates a safe space for brainstorming and out-of-the-box thinking. This sparks creativity, leading to new ideas and improvements that might have remained dormant in a siloed workplace
  • Strong relationships and trust: Do you want to build mutual respect and understanding among colleagues? Effective communication is the key. When you share ideas openly, listen deeply, and value each other’s contributions, trust and companionship naturally emerge
  • Efficiency and productivity: Clear communication eliminates ambiguities and misunderstandings, simplifies workflows, and prevents rework. With collaborative problem-solving, your team can find the most efficient solutions, optimize processes, and maximize productivity
  • Employee engagement and motivation: Feeling valued, heard, and part of a supportive team boosts employee morale and engagement. This empowers people, giving them a sense of ownership and responsibility, which fuels their motivation to contribute
  • Adaptability and agility: Collaborative communications enable teams to share project information and insights in real time, making them flexible and responsive to emerging challenges and opportunities

Important Elements of Effective Collaborative Communication

Effective collaborative communication hinges on several key elements. Here are some absolute essentials for establishing collaborative communication in your organization.

  • Streamlined communication: No one likes misunderstandings. To deliver messages clearly and concisely, you should avoid jargon and ambiguities. Create an environment where people can share ideas, concerns, and updates without fear of judgment
  • Empathy and understanding: Listening actively to others, understanding their perspectives, and acknowledging their contributions will help you build collaborative communication. Considering others’ viewpoints and being sensitive to their feelings and concerns is also important
  • Feedback and continuous improvement: It’s essential to frame feedback as suggestions for improvement, not personal attacks. Use feedback and suggestions to enhance and refine existing ideas and promote collaboration
  • Clearly defined goals: Collaborative communication requires you to ensure everyone understands the team’s goals and how their work contributes to the overall success
  • Choosing the right tools: Collaborative communication happens well when you use the most appropriate communication channels for different situations, like face-to-face meetings for complex discussions and digital platforms for sharing updates and resources 
  • Using technology effectively: Use technology to enhance communication in a team while avoiding barriers and information overload

Now that you know the core elements to improve collaborative communication, let’s give you a detailed guide for your teams to interact harmoniously. 

Steps to Improve Collaborative Communication

There are several ways to improve collaborative communication. Remember, though, that it is a continuous process: it requires active effort from top leadership to lay the foundation and from the entire organization to follow through.

Here’s a step-by-step blueprint to improve team collaboration and foster positive productivity. 

Step 1: Embrace Instant Collaboration

Using communication tools that let you talk to your team members in real time will create an environment where people can ask for help openly, limit bottlenecks, and work toward finding solutions together.

Use ClickUp, an all-in-one project management tool, to help your organization embrace collaborative communication.

ClickUp Chat View

With ClickUp’s Chat View , you can create dedicated channels for specific projects or topics, like “#marketing-brainstorm” or “#client-support.” 

Mention relevant teammates (@username) for immediate attention and keep conversations organized. You can also comment or react with emojis to inject fun and personality into your chats. 

Team members can use the chat view for quick questions, clarifications, and feedback instead of relying solely on emails or comments.

What about collaboratively creating docs, presentations, and more? ClickUp Docs are ideal for when you and your colleagues need to create and edit your docs together in real time. You can also connect your Docs to tasks, checklists, and workflows to execute ideas, even with your remote and hybrid teams. Within Docs,  ClickUp’s Collaboration Detection automatically lets you track when others are commenting, editing, and even viewing the same task as you. 

Protect your Docs with privacy and edit controls. Create links to shared documents, connect them to your workflows, and manage permissions for team, guest, or public access.

Step 2: Foster Transparency

ClickUp Tasks

Promote transparency and accountability by assigning work that is visible to everyone in your organization. 

With ClickUp Tasks , clearly define ownership and deadlines for each task. Use priority tags and custom fields to highlight urgent tasks or specific requirements. Use comments to provide feedback, ask questions, or share relevant information directly on the task. 

Encourage teammates to regularly update task status and share progress reports within the task itself. This keeps everyone informed and prevents last-minute surprises from derailing project outcomes. 
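If your team also automates parts of this workflow, task creation can be scripted. Below is a minimal sketch using ClickUp's public REST API (v2) to create a task with an owner, a deadline, and a priority; the token, list ID, and user ID are placeholders, and the endpoint and field names should be verified against ClickUp's current API documentation.

```python
import requests

API_TOKEN = "pk_your_token_here"  # placeholder personal API token
LIST_ID = "901200000000"          # placeholder list ID

# Task payload mirroring the advice above: clear ownership, deadline, priority.
task = {
    "name": "Draft Q3 launch announcement",
    "assignees": [12345678],    # placeholder ClickUp user ID of the task owner
    "due_date": 1735689600000,  # deadline as a Unix timestamp in milliseconds
    "priority": 2,              # 1 = urgent, 2 = high, 3 = normal, 4 = low
}

response = requests.post(
    f"https://api.clickup.com/api/v2/list/{LIST_ID}/task",
    headers={"Authorization": API_TOKEN, "Content-Type": "application/json"},
    json=task,
)
response.raise_for_status()
print("Created task:", response.json()["id"])
```

Creating tasks this way keeps ownership and deadlines visible in the same place the rest of the team already works.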

Step 3: Make Problem-Solving Fun

Brainstorming is a powerful collaborative communication tool that sparks collective creativity in your team. It enhances group dynamics and fosters a fun, collaborative environment.

ClickUp Whiteboards

ClickUp Whiteboards are a powerful tool for encouraging team collaboration while promoting creativity. They offer a visual space for collaborative teams to get things done creatively. 

Ditch text walls and let ideas flow freely with ClickUp Mind Maps . Visually connect thoughts, build relationships on each other’s ideas, and easily see the bigger picture.

Jot down ideas, questions, and reminders with colorful sticky notes. Arrange, move, and prioritize them as your brainstorming evolves.

The best part? You can instantly turn whiteboard brainstorming into actionable tasks. Simply click and drag any element (sticky note, shape, text) onto a ClickUp task card, complete with due dates and assignees.

Step 4: Work Toward Shared Goals

When you work toward shared goals, challenges become opportunities for successful collaboration and collective problem-solving. Working this way promotes valuing and respecting diverse viewpoints, and people are encouraged to understand other team members’ goals and how their work contributes to the overall success of the project team.

ClickUp Goal View

ClickUp Goal View helps you define overarching goals and break them into smaller, actionable objectives for each team member. 

Visually representing these goals and objectives keeps it transparent and ensures everyone is aligned on priorities. Easy tracking of goal progress toward each objective reinforces shared responsibility and eliminates confusion.

You can also mention individual contributions to team performance and milestones. This public recognition strengthens morale, encourages healthy competition, and reinforces the importance of teamwork.

Examples of Collaborative Communication

To foster collaborative communication, use the tools and technologies available to nurture a workplace culture where everyone is respected and heard. This helps reduce employee turnover and increase company goodwill.

Here are some ideas to induce a better culture using collaboration tools that simplify tasks and create an enjoyable work environment.

Scenario 1: Brainstorming for a marketing campaign

Suppose there’s a new social media campaign your marketing team needs to brainstorm on. 

You can capture ideas, competitor analysis, and target audience insights using ClickUp Mind Maps and ClickUp Whiteboards. Simply assign ideas to team members so they can build on them.

ClickUp Mind Maps

You can create a dedicated “#social campaign” channel with a chat view option for real-time discussion and feedback.

Create docs outlining initial campaign ideas, assign research and design tasks, and set deadlines. Once the work is completed, mention people to give feedback and solve issues within the deadline. 

Scenario 2: Planning a product launch

Say your product development team needs to plan and track the launch of a new software feature.

Launching a product is an exciting process with many moving pieces and stakeholders. To ensure your launch is successful, create a clear timeline, project schedule, and checklist of critical tasks.

Align teams, tasks, timelines, and resources for your next product launch with the ClickUp Product Launch Checklist Template

Choose from a library of pre-built ClickUp templates. The ClickUp Product Launch Checklist Template will streamline your workflows, keep things organized, and minimize problems with real-time progress tracking. 

It will also help you visualize your data via ClickUp Gantt Charts or the ClickUp Board View , which can be accessed by your specified team members wherever they are. 

Use collaboration agreement templates to make working together easier.

Pairing ClickUp AI with your suite of tools and templates will help you generate product ideas, visualize roadmaps, analyze workloads, and recommend appropriate due dates and priorities for your tasks. This will reduce manual labor so the team can focus on the product launch.

Scenario 3: Remote team collaboration

When you have a remote, geographically dispersed team, efficient coordination becomes more critical and challenging than ever.

That’s why choosing the right platform to manage daily interactions is essential. 

ClickUp Remote Team Project Management Software

With ClickUp Remote Team Project Management Software , stay organized, simplify processes, and collaborate with team members from anywhere. Increase productivity, track progress, assign tasks, and monitor deadlines in one centralized platform.

Want to have a team call? Instantly start or schedule your Zoom meetings from within tasks. Get notified to join a meeting in progress and receive the meeting agenda details with a recording link afterward.

With ClickUp Emails , you can send and receive emails, create and automate tasks, collaborate on messages with your team, and manage emails by integrating them within your workflow.

You can also track time globally from any device, monitor the time taken to complete tasks, and access comprehensive progress reports for an entire year. 

Teamwork Makes the Dream Work

Collaboration is crucial to success. When teams work together effectively, they can achieve audacious goals.

By now, you’ve likely realized that collaborative communication isn’t rocket science; yet a lack of team communication can wreck your workplace environment.

Sign up on ClickUp today to utilize the right team collaboration tools, encourage active participation in project team meetings, and foster a safe and open work environment. Gradually make your team ready to tackle any challenges together.

Common FAQs

1. What does collaborative communication mean?

Collaborative communication is all about working together to share information, ideas, and feedback to achieve a common goal. It’s more than just sending emails or having meetings; it’s about actively listening, building trust, and valuing everyone’s contributions.

2. Why is collaborative communication important?

Collaborative communication is not just a nice-to-have; it’s the cornerstone of a great workplace. By embracing open communication, valuing diverse perspectives, and fostering a supportive environment, organizations unlock the true potential of their teams and pave the way for sustainable success. 

3. How can I make my team more collaborative in communication?

Turning your team into a communication powerhouse takes effort and dedication, but with the right strategies, you can transform them into collaborative communication pros! Here are some actionable tips:

  • Encourage top-level management to be approachable and open to feedback
  • Encourage everyone to pay full attention when someone is speaking and ask clarifying questions to ensure understanding
  • Dedicate channels, threads, or documents for specific projects or topics to avoid information overload and context-switching
  • Allow team members to contribute at their own pace, especially if your team is in different time zones
  • Equip your team with the skills they need to communicate effectively, including active listening, conflict resolution, and giving and receiving feedback
  • Break down silos by creating opportunities for cross-functional teams to work together on projects


Collaborative Problem Solving

"Collaborative Problem Solving manual cover photo"

The key to solving shared problems relies, in large measure, on the communication skills of the people associated with those problems. Although designed specifically for use in educational settings, Collaborative Problem Solving outlines the communication skills needed to solve problems within any group. These skills include listening actively, reflecting another person’s statements back to that person, asking questions, and summarizing. The skills are then incorporated within a problem-solving process that can be used to structure meetings between two or more people, especially those working together to support specific students. Numerous examples of this process are presented throughout the book, as are opportunities to practice the skills.

This process is appropriate for any educator, parent, or person interested in improving communication while working with others to help students succeed. It is especially useful for teachers in cooperative teaching relationships and in collaborative relationships between general and special educators.

Teacher or Student Feedback on Collaborative Problem Solving

Co-author Ann Knackendoffel says, "The response from teachers in the field has been very supportive. They have been much more visionary than I was when I first created Collaborative Problem Solving and found a multitude of uses for the model beyond just their collaboration with general and special education teachers. They have found the process helpful with working one-on-one with colleagues, but they have been most enthusiastic about how it helps when they have multiple participants in a meeting such as a student improvement or IEP meeting. Teachers have also reported it useful when they are meeting with parents or working with students. They find the Problem-Solving Worksheet keeps them on task and focused on the problem rather than straying off into areas that detract from solution finding."

Author(s): E. Ann Knackendoffel, Suzanne M. Robinson, Donald D. Deshler, and Jean B. Schumaker

Publication Info: Edge Enterprises, 1992

Available from Edge Enterprises, Inc.



Creativity, Critical Thinking, Communication, and Collaboration: Assessment, Certification, and Promotion of 21st Century Skills for the Future of Work and Education

Branden Thornhill-Miller

1 Faculty of Philosophy, University of Oxford, Oxford OX2 6GG, UK

2 International Institute for Competency Development, 75001 Paris, France

Anaëlle Camarda

3 LaPEA, Université Paris Cité and Univ Gustave Eiffel, 92100 Boulogne-Billancourt, France

4 Institut Supérieur Maria Montessori, 94130 Nogent-Sur-Marne, France

Maxence Mercier

Jean-Marie Burkhardt

5 LaPEA, Univ Gustave Eiffel and Université Paris Cité, CEDEX, 78008 Versailles, France

Tiffany Morisseau

6 Strane Innovation, 91190 Gif-sur-Yvette, France

Samira Bourgeois-Bougrine

Florent Vinchon

Stephanie El Hayek

7 AFNOR International, 93210 Saint-Denis, France

Myriam Augereau-Landais

Florence Mourey

Cyrille Feybesse

8 Centre Hospitalier Guillaume Regnier, Université de Rennes 1, 35200 Rennes, France

Daniel Sundquist

Todd Lubart

Associated Data

Not Applicable.

Abstract

This article addresses educational challenges posed by the future of work, examining “21st century skills”, their conception, assessment, and valorization. It focuses in particular on key soft skill competencies known as the “4Cs”: creativity, critical thinking, collaboration, and communication. In a section on each C, we provide an overview of assessment at the level of individual performance, before focusing on the less common assessment of systemic support for the development of the 4Cs that can be measured at the institutional level (i.e., in schools, universities, professional training programs, etc.). We then present the process of official assessment and certification known as “labelization”, suggesting it as a solution both for establishing a publicly trusted assessment of the 4Cs and for promoting their cultural valorization. Next, two variations of the “International Institute for Competency Development’s 21st Century Skills Framework” are presented. The first of these comprehensive systems allows for the assessment and labelization of the extent to which development of the 4Cs is supported by a formal educational program or institution. The second assesses informal educational or training experiences, such as playing a game. We discuss the overlap between the 4Cs and the challenges of teaching and institutionalizing them, both of which may be assisted by adopting a dynamic interactionist model of the 4Cs—playfully entitled “Crea-Critical-Collab-ication”—for pedagogical and policy-promotion purposes. We conclude by briefly discussing opportunities presented by future research and new technologies such as artificial intelligence and virtual reality.

1. Introduction

There are many ways of describing the massive educational challenges faced in the 21st century. With the appearance of computers and digital technologies, new means of interacting between people, and a growing competitiveness on the international level, organizations are now requiring new skills from their employees, leaving educational systems struggling to provide appropriate ongoing training. Indeed, according to the World Economic Forum’s 2020 “Future of Jobs Report”, studying 15 industries in 26 advanced and emerging countries, up to 50% of employees will need some degree of “reskilling” by 2025 ( World Economic Forum 2020 ). Although many national and international educational efforts and institutions now explicitly put the cultivation of new kinds of skills on their educational agendas, practical means of assessing such skills remains underdeveloped, thus hampering the valorization of these skills and the development of guidance for relevant pedagogy ( Care et al. 2018 ; Vincent-Lancrin et al. 2019 ; for overviews and discussion of higher education in global developmental context, see Blessinger and Anchan 2015 ; Salmi 2017 ).

This article addresses some of these challenges and related issues for the future of education and work, by focusing on so-called “21st Century Skills” and key “soft skills” known as the “4Cs” (creativity, critical thinking, communication, and collaboration), more particularly. It begins with a brief discussion of these skills, outlining their conceptual locations and potential roles in the modern educational context. A section on each “C” then follows, defining the C, summarizing research and methods for its scientific assessment at the individual level, and then outlining some means and avenues at the systemic level for fostering its development (e.g., important aspects of curriculum, institutional structure, or of the general environment, as well as pedagogical methods) that might be leveraged by an institution or program in order to promote the development of that C among its students/trainees. In the next section, the certification-like process of “labelization” is outlined and proposed as one of the best available solutions both for valorizing the 4Cs and moving them towards the center of the modern educational enterprise, as well as for benchmarking and monitoring institutions’ progress in fostering their development. The International Institute for Competency Development’s 4Cs Framework is then outlined as an example of such a comprehensive system for assessing and labelizing the extent to which educational institutions and programs support the development of the 4Cs. We further demonstrate the possibility of labelizing and promoting support for the development of the 4Cs by activities or within less formal educational settings, presenting a second framework for assessment of the 4Cs in games and similar training activities. Our discussion section begins with the challenges to implementing educational change in the direction of 21st century skills, focusing on the complex and overlapping nature of the 4Cs. Here, we propose that promoting a “Dynamic Interactionist Model of the 4Cs” not only justifies grouping them together, but it might also assist more directly with some of the challenges of pedagogy, assessment, policy promotion, and ultimately, institutionalization, faced by the 4Cs and related efforts to modernize education. We conclude by suggesting some important future work for the 4Cs individually and also as an interrelated collective of vital skills for the future of education and work.

“21st Century Skills”, “Soft Skills”, and the “4Cs”

For 40 years, so-called “21st century skills” have been promoted as those necessary for success in a modern work environment that the US Army War College ( Barber 1992 ) has accurately described as increasingly “VUCA”—“volatile, uncertain, complex and ambiguous”. Various lists of skills and competencies have been formulated on their own or as part of comprehensive overarching educational frameworks. Although a detailed overview of this background material is outside the scope of this article (see Lamri et al. 2022 ; Lucas 2022 for summaries), one of the first prominent examples of this trend was the Partnership for 21st Century Skills (P21), whose comprehensive “Framework for 21st Century Learning” is presented in Figure 1 ( Battelle for Kids 2022 ). This framework for future-oriented education originated the idea of the “4Cs”, placing them at its center and apex as “Learning and Innovation Skills” that are in need of much broader institutional support at the foundational level in the form of new standards and assessments, curriculum and instructional development, ongoing professional development, and appropriately improved learning environments ( Partnership for 21st Century Skills 2008 ). These points are also consistent with the approach and assessment frameworks presented later in this article.

Figure 1. The P21 Framework for 21st Century Learning. (© 2019, Battelle for Kids. All Rights Reserved. https://www.battelleforkids.org/; accessed on 17 January 2023).

Other important organizations such as the World Economic Forum ( 2015 ) have produced similar overarching models of “21st century skills” with the 4Cs at their center, but the term “21st century skills” has been rightly criticized for several reasons: the skills referred to are not actually all unique to, or uniquely important to, the 21st century, and it is a term that is often used more as an advertising or promotional label for systems that sometimes conflate and confuse different kinds of skills with other concepts that users lump together ( Lucas 2019 ). Indeed, though there is no absolute consensus on the definition of a “skill”, they are often described as being multidimensional and involving the ability to solve problems in context and to perform tasks using appropriate resources at the right time and in the right combination ( Lamri and Lubart 2021 ). At its simplest, a skill is a “learned capacity to do something useful” ( Lucas and Claxton 2009 ), or an ability to perform a given task at a specified performance level, which develops through practice, experience, and training ( Lamri et al. 2022 ).

The idea of what skills “are”, however, has also evolved to some extent over time in parallel to the nature of the abilities required to make valued contributions to society. The digital and information age, in particular, has seen the replacement by machines of much traditional work sometimes referred to as “hard skills”—skills such as numerical calculation or driving, budget-formulating, or copyediting abilities, which entail mastery of fixed sets of knowledge and know-how of standard procedures, and which are often learned on the job. Such skills are more routine, machine-related, or technically oriented and not as likely to be centered on human interaction. In contrast, the work that has been increasingly valued in the 21st century involves the more complex, human interactive, and/or non-routine skills that Whitmore ( 1972 ) first referred to as “soft skills”.

Unfortunately, researchers, educators, and consultants have defined, redefined, regrouped, and expanded soft skills—sometimes labeling them “transversal competencies”, “generic competencies”, or even “life skills” in addition to “21st century skills”—in so many different ways within and across different domains of research and education (as well as languages and national educational systems) that much progress towards these goals has literally been “lost in translation” ( Cinque 2016 ).

Indeed, there is also a long-standing ambiguity and confusion between the terms “competency” (also competence) and “skill” due to their use across different domains (e.g., learning research, education, vocational training, personnel selection) as well as different epistemological backgrounds and cultural specificities ( Drisko 2014 ; Winterton et al. 2006 ; van Klink and Boon 2003 ). The term “competency” is, however, often used as a broader concept that encompasses skills, abilities, and attitudes, whereas, in a narrower sense, the term “skill” has been defined as “goal-directed, well-organized behavior that is acquired through practice and performed with economy of effort” ( Proctor and Dutta 1995, p. 18 ). For example, whereas the command of a spoken language or the ability to write are skills (hard skills, to be precise), the ability to communicate effectively is a competence that may draw on an individual’s knowledge of language, writing skills, practical IT skills, and emotional intelligence, as well as attitudes towards those with whom one is communicating ( Rychen and Hersch 2003 ). Providing high-quality customer service is a competency that relies on listening skills, social perception skills, and contextual knowledge of products. Beyond these potential distinctions, the term “competency” is predominant in Europe, whereas “skill” is more commonly used in the US. Yet it also frequently occurs that both are used as rough synonyms. For example, Voogt and Roblin ( 2012, p. 299 ) examine the “21st century competences and the recommended strategies for the implementation of these skills”, and Graesser et al. ( 2022, p. 568 ) state that twenty-first-century skills “include self-regulated learning, collaborative problem solving, communication (…) and other competencies”. In conclusion, the term “competencies” is often used interchangeably with “skills” (and can have a particularly large overlap with “soft skills”), but it is also often considered in a broader sense as a set of skills, knowledge, and attitudes that, together, meet a complex demand ( Ananiadoui and Claro 2009 ). From this perspective, one could argue that the 4Cs, as complex, “higher-order” soft skills, might best be labeled competencies. For ease and convenience, however, in this text, we consider the two terms interchangeable but favor the term “skills”, only using “competency” in some instances to avoid cumbersome repetition.

Even having defined soft skills as a potentially more narrow and manageable focus, we are still aware of no large-scale study that has employed a comprehensive enough range of actual psychometric measures of soft skills in a manner that might help produce a definitive empirical taxonomy. Some more recent taxonomic efforts have, however, attempted to provide additional empirical grounding for the accurate identification of key soft skills (see e.g., Joie-La Marle et al. 2022 ). Further, recent research by JobTeaser (see Lamri et al. 2022 ) surveying a large, diverse sample of young workers about a comprehensive, systematic list of soft skills as actually used in their professional roles represents a good step towards some clarification and mapping of this domain on an empirical basis. Despite the fact that both these studies necessarily involved assumptions and interpretive grouping of variables, the presence and importance of the 4Cs as higher-order skills is evident in both sets of empirical results.

Various comprehensive “21st century skills” systems proposed in the past without much empirical verification also seem to have been found too complex and cumbersome for implementation. The 4Cs, on the other hand, seem to provide a relatively simple, persuasive, targetable core that has been found to constitute a pedagogically and policy-friendly model by major organizations, and that also now seems to be gaining some additional empirical validity. Gathering support from researchers and industry alike, we suggest that the 4Cs can be seen as highest-level transversal skills—or “meta-competencies”—that allow individuals to remain competent and to develop their potential in a rapidly changing professional world. Thus, in the end, they may also be one of the most useful ways of summarizing and addressing the critical challenges faced by the future of work and education ( National Education Association 2011 ).

Taking them as our focus, we note, however, that the teaching and development of the 4Cs will require a complex intervention and mobilization of educational and socio-economic resources—both a major shift in pedagogical techniques and even more fundamental changes in institutional structures ( Ananiadoui and Claro 2009 ). One very important issue for understanding the 4Cs and their educational implementation related to this, which can simultaneously facilitate their teaching but be a challenge for their assessment, is the multidimensionality, interrelatedness, and transdisciplinary relevance of the 4Cs. Thus, we address the relationships between the Cs in the different C sections and later in our Discussion, we present a “Dynamic Interactionist Model of the 4Cs” that we hope will assist in their understanding, in the further development of pedagogical processes related to them, and in their public promotion and related policy. Ultimately, it is partly due to their complexity and interrelationships, we argue, that it is important and expedient that the 4Cs are taught, assessed, and promoted together.

2. The 4Cs, Assessment, and Support for Development

2.1. Creativity

In psychology, creativity is usually defined as the capacity to produce novel, original work that fits with task constraints and has value in its context (for a recent overview, see Lubart and Thornhill-Miller 2019 ). This basic definition, though useful for testing and measurement, is largely incomplete, as it does not contain any information about the individual or groups doing the creating or the nature of physical and social contexts ( Glăveanu 2014 ). Moreover, Corazza ( 2016 ) challenged this standard definition of creativity, arguing that as it focuses solely on the existence of an original and effective outcome, it misses the dynamics of the creative process, which is frequently associated with periods of creative inconclusiveness and limited occasions of creative achievements. To move away from the limitations of the standard definition of creativity, we can consider Bruner’s description of creativity as “figuring out how to use what you already know in order to go beyond what you currently think” (p. 183 in Weick 1993 ). This description echoes the notion of potential, which refers to a latent state that may be put to use if a person has the opportunity.

Creativity is a multifaceted phenomenon that can be approached from many different angles. There are three main frameworks for creativity studies: the 4Ps ( Rhodes 1961 ), the 5As ( Glăveanu 2013 ), and the 7Cs model ( Lubart 2017 ). These frameworks share at least four fundamental and measurable dimensions: the act of creating (process), the outcome of the creative process (product), the characteristics of creative actor(s) enacting the process (person), and the social and physical environment that enable or hinder the creative process (press). Contrary to many traditional beliefs, however, creativity can be trained and taught in a variety of different ways, both through direct, active teaching of creativity concepts and techniques and through more passive and indirect means such as the development of creativity-supporting contexts ( Chiu 2015 ; Thornhill-Miller and Dupont 2016 ). Alongside intelligence, with which it shares some common mechanisms, creativity is now recognized as an indispensable element for the flexibility and adaptation of individuals in challenging situations ( Sternberg 1986 ).

2.1.1. Individual Assessment of Creativity

Drawing upon previous efforts to structure creativity research, Batey ( 2012 ) proposed a taxonomic framework for creativity measurement that takes the form of a three-dimensional matrix: (a) the level at which creativity may be measured (the individual, the team, the organization, and the culture), (b) the facets of creativity that may be assessed (person/trait, process, press, and product), and (c) the measurement approach (objective, self-rating, other ratings). It is beyond the scope of this article to offer a literature review of all these dimensions, but for the purposes of this paper, we address some important aspects of individual-level and institutional-level assessment here.

Assessing creativity at an individual level encompasses two major approaches: (1) creative accomplishment based on production and (2) creative potential. Regarding the first approach focusing on creative accomplishment , there are at least four main assessment techniques (or tools representing variations of assessment techniques): (a) the historiometric approach, which applies quantitative analysis to historically available data (such as the number of prizes won or times cited) in an effort to understand eminent, field-changing creativity ( Simonton 1999 ); (b) the Consensual Assessment Technique (CAT) ( Amabile 1982 ), which offers a method for combining and validating judges’ subjective evaluations of a set of (potentially) creative productions or ideas; (c) the Creative Achievement Questionnaire ( Carson et al. 2005 ), which asks individuals to supply a self-reported assessment of their publicly recognizable achievement in ten different creative domains; and (d) the Inventory of Creative Activities and Achievements (ICAA) ( Jauk et al. 2014 ; Diedrich et al. 2018 ), which includes self-report scales assessing the frequency of engagement in creative activity and also levels of achievement in eight different domains.
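To make the CAT concrete: judges rate each production independently, and their ratings are typically checked for consistency (commonly with Cronbach's alpha) before being combined. The following is a simplified illustration of that consistency check, not a prescribed implementation; the rating data are hypothetical.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Inter-judge consistency for an (n_products x n_judges) rating matrix."""
    k = ratings.shape[1]                          # number of judges
    judge_vars = ratings.var(axis=0, ddof=1)      # each judge's rating variance
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of summed product scores
    return (k / (k - 1)) * (1 - judge_vars.sum() / total_var)

# Hypothetical data: 5 creative products, each rated 1-5 by 3 independent judges.
ratings = np.array([
    [4, 5, 4],
    [2, 1, 2],
    [3, 3, 4],
    [5, 4, 5],
    [1, 2, 1],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # high alpha: judges agree, so
                                                 # averaging their ratings is defensible
```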

The second major approach to individual assessment is based on creative potential, which measures the cognitive abilities and/or personality traits that are important for creative work. The two most popular assessments of creative potential are the Remote Associations Test (RAT) and the Alternative Uses Task (AUT). The RAT, which involves identifying the fourth word that is somehow associated with each of three given words, underscores the role that the ability to convergently associate disparate ideas plays as a key capacity for creativity. In contrast, the AUT, which requires individuals to generate a maximum number of ideas based on a prompt (e.g., different uses for a paperclip), is used to assess divergent thinking capacity. According to multivariate models of creative potential ( Lubart et al. 2013 ), there are cognitive factors (e.g., divergent thinking, mental flexibility, convergent thinking, associative thinking, selective combination), conative factors (openness, tolerance of ambiguity, intuitive thinking, risk taking, motivation to create), and environmental factors that all support creativity. Higher creative potential is predicted by having more of the ingredients for creativity. However, multiple different profiles among a similar set of these important ingredients exist, and their weighting for optimal creative potential varies according to the profession, the domain, and the task under consideration. For example, Lubart and Thornhill-Miller ( 2021 ) and Lubin et al. ( forthcoming ) have taken this creativity profiling approach, exploring the identification and training of the components of creative potential among lawyers and clinical psychologists, respectively. For a current example of this sort of comprehensive, differentiated measurement of creative potential in adults in different domains and professions, see CreativityProfiling.org. For a recent battery of tests that are relevant for children, including domain-relevant divergent-exploratory and convergent-integrative tasks, see Lubart et al. ( 2019 ). Underscoring the growing recognition of the importance of creativity assessment, measures of creative potential for students were introduced internationally for the first time in the PISA 2022 assessment ( OECD 2019a ).
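As an illustration of how divergent-thinking responses such as AUT data are often scored, the sketch below computes fluency (the number of ideas) and a simple statistical-infrequency measure of originality, crediting ideas produced by only a small fraction of the sample. The threshold and data are hypothetical; real scoring systems typically also normalize idea wording and may additionally rate flexibility and elaboration.

```python
from collections import Counter

def score_aut(responses, rarity_threshold=0.05):
    """Score AUT data: responses is a list of idea lists, one per participant."""
    n = len(responses)
    counts = Counter(idea for ideas in responses for idea in ideas)
    scores = []
    for ideas in responses:
        fluency = len(ideas)
        # originality credit for each idea produced by few other participants
        originality = sum(1 for idea in ideas
                          if counts[idea] / n < rarity_threshold)
        scores.append({"fluency": fluency, "originality": originality})
    return scores

# Hypothetical "uses for a paperclip" data, already normalized to canonical phrases.
sample = [
    ["hold paper", "bookmark", "reset phone"],
    ["hold paper", "earring"],
    ["hold paper", "lock pick", "sculpture wire"],
]
print(score_aut(sample, rarity_threshold=0.4))  # looser threshold for a tiny sample
```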

2.1.2. Institutional and Environmental Support for Development of Creativity

The structural support that institutions and programs can provide to promote the development of creativity can be described as coming through three main paths: (1) through design of the physical environment in a manner that supports creativity, (2) through teaching about creativity, the creative process, and creativity techniques, and (3) through training opportunities to help students/employees develop personal habits, characteristics, and other ingredients associated with creative achievement and potential.

Given the multi-dimensionality of the notion of creativity, the environment can positively influence and help develop creative capacities. Studies have shown that the physical environment in which individuals work can enhance their positive emotions and mood and thus their creativity. For example, stimulating working environments might have unusual furniture and spaces that have natural light, windows open to nature, plants and flowers, a relaxing atmosphere and colors in the room (e.g., green and blue), or positive sounds (e.g., calm music or silence), as well as inspiring and energizing colors (e.g., yellow, pink, orange). Furthermore, the arrangement of physical space to promote interpersonal exchange rather than isolation, as well as the presence of tools, such as whiteboards, that support and show the value of exchange, are also important (for reviews, see Dul and Ceylan 2011 ; Samani et al. 2014 ).

Although it has been claimed that “creativity is intelligence having fun” ( Scialabba 1984 ; Reiman 1992 ), for most people, opportunities for fun and creativity, especially in their work environment, appear rather limited. In fact, the social and physical environment often hinders creativity. Corazza et al. ( 2021 )’s theoretical framework concerning the “Space-Time Continuum”, related to support for creativity, suggests that traditional education systems are an example of an environment that is “tight” both in the conceptual “space” it affords for creativity and in the available time allowed for creativity to happen—essentially leaving little room for original ideas to emerge. Indeed, though world-wide data suggest that neither money nor mere time spent in class correlate well with educational outcomes, both policies and pedagogy that direct the ways in which time is spent make a significant difference ( Schleicher 2022 ). Research and common sense suggest that teachers, students, and employees need more space and time to invest energy in the creative process and the development of creative potential.

Underscoring the importance of teaching the creative process and creativity techniques is the demonstration, in a number of contexts, that groups of individuals who generate ideas without a specific method are often negatively influenced by their social environment. For example, unless guarded against, the presence of others tends to reduce the number of ideas generated and to induce a fixation on a limited number of ideas conforming to those produced by others ( Camarda et al. 2021 ; Goldenberg and Wiley 2011 ; Kohn and Smith 2011 ; Paulus and Dzindolet 1993 ; Putman and Paulus 2009 ; Rietzschel et al. 2006 ). To overcome these cognitive and social biases, different variants of brainstorming techniques have shown positive effects (for reviews of methods, see Al-Samarraie and Hurmuzan 2018 ; Paulus and Brown 2007 ). These include: using ( Osborn 1953 ) initial brainstorming rules (which aim to reduce spontaneous self-judgment of ideas and fear of this judgment by others); drawing attention to ideas generated by others by writing them down independently (e.g., the technique known as “brainwriting”); and requiring incubation periods between work sessions by forcing members of a problem-solving group to take breaks ( Paulus and Yang 2000 ; Paulus and Kenworthy 2019 ).

It is also possible to use design methods that are structured to guide the creative process and the exploration of ideas, as well as to avoid settling on uncreative solution paths ( Chulvi et al. 2012 ; Edelman et al. 2022 ; Kowaltowski et al. 2010 ; see Cotter et al. 2022 for a valuable survey of best practices for avoiding the suppression of creativity and fostering creative interaction and metacognition in the classroom). Indeed, many helpful design thinking-related programs now exist around the world and have been shown to have a substantial impact on creative outcomes ( Bourgeois-Bougrine 2022 ).

Research and experts suggest the utility of many additional creativity enhancement techniques (see, e.g., Thornhill-Miller and Dupont 2016 ), and the largest and most rapid effects are often attributed to these more method- or technique-oriented approaches ( Scott et al. 2004 ). More long-term institutional and environmental support for the development of creativity, however, should also include targeted training and understanding of personality and emotional traits associated with the “creative person” (e.g., empathy and exploratory habits that can expand knowledge, as well as increase tolerance of ambiguity, openness, and mental flexibility; see Lubart and Thornhill-Miller 2021 ). Complementing these approaches and focusing on a more systemic level, recent work conducted by the OECD exemplifies efforts aimed to foster creativity (and critical thinking) by focusing simultaneously on curriculum, educational activities, and teacher support and development at the primary, secondary, and higher education levels (see Vincent-Lancrin et al. 2019 ; Saroyan 2022 ).

2.2. Critical Thinking

Researchers, teachers, employers, and public policymakers around the world have long ranked the development of critical thinking (CT) abilities as one of the highest educational priorities and public needs in modern democratic societies ( Ahern et al. 2019 ; Dumitru et al. 2018 ; Pasquinelli et al. 2021 ). CT is central to better outcomes in daily life and general problem solving ( Hitchcock 2020 ), to intelligence and adaptability ( Halpern and Dunn 2021 ), and to academic achievement ( Ren et al. 2020 ). One needs to be aware of distorted or erroneous information in the media, of the difference between personal opinions and proven facts, and how to handle increasingly large bodies of information required to understand and evaluate information in the modern age.

Although much research has addressed both potentially related constructs, such as intelligence and wisdom, and lists of potential component aspects of human thought, such as inductive or deductive reasoning (for reviews of all of these, see Sternberg and Funke 2019 ), reaching a consensus on a definition has been difficult, because CT relies on the coordination of many different skills ( Bellaera et al. 2021 ; Dumitru et al. 2018 ) and is involved in, and sometimes described from the perspective of, many different domains ( Lewis and Smith 1993 ). Furthermore, as a transversal competency, having the skills to perform aspects of critical thinking in a given domain does not necessarily entail also having the metacognitive ability to know when to engage in which of its aspects, or having the disposition, attitude, or “mindset” that motivates one to actually engage in them—all of which are actually required to be a good critical thinker ( Facione 2011 ).

As pointed out by the American Philosophical Association's consensus definition, the ideal "critical thinker" is someone who is inquisitive, open-minded, flexible, fair-minded, and well-informed, and who thus understands different points of view and perspectives (Facione 1990b). These characteristics, one might note, are also characteristic of the "creative individual" (Facione 1990b; Lai 2011), as is the ability to imagine alternatives, which is often cited as a component of critical thinking ability (Facione 1990b; Halpern 1998). Conversely, creative production in any domain needs to be balanced by critical appraisal and thought at each step of the creative process (Bailin 1988). Indeed, it can be argued that creativity and critical thinking are inextricably linked and are often two sides of the same coin. Because they represent linked aspects of "good thought" that develop in parallel, it seems reasonable that, in practice, they should be taught and considered together (Paul and Elder 2006).

Given this complexity, many definitions of critical thinking have been offered. However, some more recent work has helpfully defined critical thinking as "the capacity of assessing the epistemic quality of available information and—as a consequence of this assessment—of calibrating one's confidence in order to act upon such information" (Pasquinelli et al. 2021). This definition, unlike others proposed in the field (for a review, see Bellaera et al. 2021; Liu et al. 2014), is specific (i.e., it limits the use of poorly defined concepts), as well as consensual and operational (i.e., it has clear and direct implications for the education and assessment of critical thinking skills; Pasquinelli et al. 2021; Pasquinelli and Bronner 2021). This approach thus assumes that individuals possess better or worse cognitive processes and strategies for judging the reliability of the information they receive, by determining, for example, what the arguments provided actually are. Are the arguments convincing? Is the source of information identifiable and reliable? Does the information conflict with other information held by the individual?

It should also be noted that being able to apply critical thinking is necessary to detect and overcome the cognitive biases that can constrain one's reasoning. When solving a problem, it is widely recognized that people tend to automatically apply strategies that have usually been relevant in similar, analogous situations already encountered. However, these heuristics (i.e., automatisms) can be a source of errors, particularly in tricky reasoning situations, as demonstrated in the fields of reasoning and arithmetic problem solving (Kahneman 2003) and even in divergent thinking tasks (Cassotti et al. 2016; for a review of biases, see Friedman 2017). Some cognitive biases can even be seen as normal ways of thinking and feeling, sometimes shaping human beliefs and ideologies in ways that make it completely normal—and even distinctly human—not to be objective (see Thornhill-Miller and Millican 2015). Nevertheless, mobilizing cognitive resources such as those involved in critical reasoning on logical bases usually makes it possible to overcome cognitive biases and adjust one's reasoning (West et al. 2008).

According to Pasquinelli et al. (2021), young children already possess cognitive functions underlying critical thinking, such as the ability to determine that information is false. However, until late adolescence, studies have demonstrated an underdevelopment of the executive functions involved in resistance to biased reasoning (Casey et al. 2008), as well as of some other higher-order skills that underlie the overall critical thinking process (Bloom 1956). According to Facione and the landmark American Philosophical Association task force on critical thinking (Facione 1990b; Facione 2011), these components of critical thinking can be organized into six measurable skills: the ability to (1) interpret information (i.e., its meaning and context); (2) analyze information (i.e., make sense of why it has been provided, identify pro and con arguments, and decide whether to accept its conclusion); (3) make inferences (i.e., determine the implications of the evidence, its reliability, and any undesirable consequences); (4) evaluate the strength of the information (i.e., its credibility and the trustworthiness of the person who provides it); (5) provide explanations (i.e., summarize findings, determine how the information can be interpreted, and offer verification of the reasoning); and (6) self-regulate (i.e., evaluate the strength of the methods applied, identify conflicts between different conclusions, clarify the conclusions, and check for missing elements).
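To illustrate how these six skills might organize a simple scoring record (for instance, when rating a single test response, as in the assessments discussed below), here is a minimal sketch in Python; the 0 to 5 scale per skill and the simple total are illustrative assumptions, not a published rubric.

```python
from dataclasses import dataclass, asdict

# Facione's (1990b, 2011) six measurable critical thinking skills, used here
# to structure a hypothetical scoring record; the 0-5 scale per skill and the
# simple total are illustrative assumptions, not a published scoring rule.
@dataclass
class CTSkillScores:
    interpretation: int
    analysis: int
    inference: int
    evaluation: int
    explanation: int
    self_regulation: int

    def total(self) -> int:
        """Sum the six skill scores into a single overall indicator."""
        return sum(asdict(self).values())

# Example: scores a rater might assign to one response (fictitious).
response = CTSkillScores(4, 3, 3, 5, 2, 3)
print(response.total())  # -> 20
```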

2.2.1. Individual Assessment of Critical Thinking

The individual assessment of critical thinking skills presents a number of challenges, because critical thinking is a multifaceted ability that involves specific knowledge in the different areas in which it is applied (Liu et al. 2014; Willingham 2008). However, the literature provides several tools with which to measure different facets of the cognitive functions and skills involved in the overarching critical thinking process (Lai 2011; Liu et al. 2014). Most assessments involve multiple-choice questions requiring reasoning within a particular situation based upon a constrained set of information provided. For example, in one of the most widely used tests, the California Critical Thinking Skills Test (Facione 1990a), participants are presented with everyday scenarios and must answer multiple questions targeting the six higher-order skills described previously. Similarly, the Watson–Glaser Critical Thinking Appraisal (Watson 1980; Watson and Glaser 2010) presents test takers with passages and scenarios measuring their competencies at recognizing assumptions, evaluating arguments, and drawing conclusions. Although the Watson–Glaser is one of the oldest and most frequently used assessments internationally for hiring and promotion in professional contexts, its construct validity, like that of many other measures of this challenging topic, has some limitations (Possin 2014).

Less frequently, case study or experiential methods of assessment are also used. This approach may involve asking participants to reflect on past experiences and to analyze the situations they faced, the way they behaved, the judgments and decisions they made, and the actions they then took (Bandyopadhyay and Szostek 2019; Brookfield 1997). These methods, often employed by teachers or employers with students and employees, usually involve the analysis of qualitative data, which can cast doubt on the reliability of the results. Consequently, various researchers have suggested ways to improve analytic methods and have emphasized the need to create more advanced evaluation methods (Brookfield 1997; Liu et al. 2014).

For example, Liu et al. (2014) reviewed current assessment methods and suggested that future work should improve the operational definition of critical thinking, aiming to assess it both in different specific contexts and in different formats. Specifically, assessments could be contextualized within the major areas addressed by education programs (e.g., social sciences, humanities, and/or natural sciences), and the tasks themselves should be as practically connected to the "real world" as possible (e.g., categorizing a set of features, opinions, or facts based on whether or not they support an initial statement). Moreover, as Brookfield (1997) argues, because critical thinking is a social process that takes place in specific contexts of knowledge and culture, it should be assessed as a social process, one involving a multiplicity of experiences, perceptions, and contributions. Thus, Brookfield makes three recommendations for improving the assessment of critical thinking that are still relevant today: (1) assess critical thinking in specific situations, so that one can study the process and the discourse related to it; (2) involve students/peers in the evaluation of critical thinking abilities, so that the evaluation is not provided only by the instructor; and (3) allow learners or participants in an experiment to document, demonstrate, and justify their engagement in critical thinking, because this learning perspective can provide insight into basic dimensions of the critical thinking process.

Finally, another more recent and less widely used form of assessment targets the specific executive functions that underlie logical reasoning, as well as individuals' ability to resist cognitive biases. This form of assessment usually relies on specific experimental laboratory tasks that vary depending on the particular executive function and the domain of interest (Houdé and Borst 2014; Kahneman 2011; West et al. 2008).

2.2.2. Institutional and Environmental Support for Development of Critical Thinking Skills

The executive functions underlying general critical thinking, the ability to overcome bias (Houdé 2000; Houdé and Borst 2014), and meta-cognitive processes (i.e., meta-information about our cognitive strategies) can all be trained and enhanced by educational programs (Abrami et al. 2015; Ahern et al. 2019; Alsaleh 2020; Bellaera et al. 2021; Uribe-Enciso et al. 2017; Popil 2011; Pasquinelli and Bronner 2021; Yue et al. 2017).

Educational programs and institutions can support the development of critical thinking in several different ways. The process of developing critical thinking centers on the interaction between personal dispositions (attitudes and habits), skills (evaluation, reasoning, self-regulation), and knowledge (general and specific knowledge, as well as experience) (Thomas and Lok 2015). It is specifically with regard to skills and knowledge that institutions are well suited to develop critical thinking, through pedagogical elements such as training in rhetoric, in evaluating the relevance of information (e.g., media literacy, where and how to check information on the internet, dealing with "fake news", etc.), in deductive thinking skills, and in inductive reasoning (Moore and Parker 2016). Tools such as case studies or concept mapping can also be used in conjunction with a problem-based learning method, in both individual and team contexts and in person or online (Abrami et al. 2015; Carmichael and Farrell 2012; Popil 2011; Thorndahl and Stentoft 2020).

According to Marin and Halpern (2011), training in critical thinking should include explicit instruction involving at least the four following components and objectives: (1) working on attitudes and encouraging individuals to think; (2) teaching and practicing critical thinking skills; (3) training for transfer between contexts, by identifying concrete situations in which to adopt the strategies learned; and (4) encouraging metacognition through reflection on one's thought processes. Supporting these propositions, Pasquinelli and Bronner (2021), in a French national educational report, proposed practical advice for creating workshops to stimulate critical thinking in school classrooms, advice that appears relevant even outside school intervention settings. For example, the authors suggest combining concrete examples and exercises with general and abstract explanations, rules, and strategies that can be transferred to areas beyond the one studied. They also suggest inviting learners to create examples of situations (e.g., case studies) in order to increase opportunities to practice and to have learners actively participate. Finally, they suggest making the process of reflection explicit by asking learners to pay attention to the strategies adopted by others, in order to stimulate the development of metacognition.

2.3. Communication

In its most basic definition, communication consists of exchanging information in order to change the epistemic context of others. In cooperative contexts, it aims at the smooth and efficient exchange of information contributing to the achievement of a desired outcome or goal (Schultz 2010). But human communication involves multiple dimensions. Both verbal and non-verbal communication can involve large quantities of information that have to be both formulated and deciphered with a range of purposes and intentions in mind (Jones and LeBaron 2002). These dimensions of communication have as much to do with the ability to express oneself, both orally and in writing, and the mastery of a language (linguistic competences), as with the ability to use this communication system appropriately (pragmatic skills; see Grassmann 2014; Matthews 2014), and with social skills, based on knowing how to behave in society and on the ability to connect with others and to understand their intentions and perspectives (Tomasello 2005).

Like the other Cs, communication skills are ranked by both students and teachers among the highest-priority skills to acquire in order to be ready for the workforce of 2030 (OECD 2019b; Hanover Research 2012). Teaching students how to communicate efficiently and effectively in all the new modalities of information exchange is an important challenge faced by all pedagogical organizations today (Morreale et al. 2017). All dimensions of communication (linguistic, pragmatic, and social) are part of what is taught in school curricula at different levels, but pragmatic and social competencies are rarely explicitly taught as such. Work on social/emotional intelligence (and on its role in students' personal and professional success) shows that these skills are both disparate and difficult to assess (Humphrey et al. 2007). Research on this issue is, however, becoming increasingly rigorous, with the potential to provide usable data for the development of science-based practice (Keefer et al. 2018). Teachers and pedagogical teams also have an important and changing role to play: they, too, need to master new information and communication technologies and the transmission of information through them (Zlatić et al. 2014).

Communication has obvious links with the three other Cs. Starting with critical thinking, sound communication implies fostering the conditions for a communicative exchange directed towards a common goal, which is, at least in educational and professional contexts, based on a fair evaluation of reality (Pornpitakpan 2004). Collaboration, too, has a strong link with communication, because successful collaboration depends heavily on the quality of the knowledge sharing and trust that emerge between group members. Finally, creativity involves the communication of an idea to an audience and can involve high-quality communication when creative work occurs in a team context.

2.3.1. Individual Assessment of Communication

Given the vast field of communication, an exhaustive list of its evaluation methods is difficult to establish. A number of methods have been reported in the literature for assessing an individual's ability to communicate non-verbally and verbally, but although these two aspects are intrinsically linked, they are rarely measured together with a single tool. Moreover, as Spitzberg (2003) pointed out, communication skills are supported by different abilities, classically conceptualized as motivational functions (e.g., confidence and goal-orientation), knowledge (e.g., content and procedural knowledge), or cognitive and socio-cognitive functions (e.g., theory of mind, verbal cognition, emotional intelligence, and empathy; McDonald et al. 2014; Rothermich 2020), implying different specific types of evaluation. Finally, producing and receiving communication involve different skills and abilities, which can also vary according to the context (Landa 2005).

To overcome these challenges, Spitzberg (2003) recommends the use of several assessment criteria. These include the clarity of the interaction; the understanding of what was involved in the interaction; the satisfaction of having interacted (expected to be higher when communication is effective); the efficiency of the interaction (the more competent someone is, the less effort, complexity, and fewer resources will be needed to achieve their goal); its effectiveness or appropriateness (i.e., its relevance to the context); and criteria relating to the quality of the dialogue (which involves coordination, cooperation, coherence, reciprocity, and mutuality in the exchange with others). Different forms of evaluation are also called for, such as self-report questionnaires; observer-report questionnaires filled out by parents, teachers, or other observers; and tasks involving role-playing games, scenarios, or videos (for a review of these assessment tools, see Cömert et al. 2016; Landa 2005; Sigafoos et al. 2008; Spitzberg 2003; van der Vleuten et al. 2019). Results from these tools must then be combined with results from others assessing underlying abilities, such as theory of mind and metacognition.

2.3.2. Institutional and Environmental Support for Development of Communication Skills

Although communication appears to be a key employability skill, the proficiency acquired during studies rarely meets the expectations of employers (Jackson 2014). Communication must therefore become a priority in the training of students, beyond the sectors in which it is already recognized as essential (e.g., medicine, nursing, engineering, etc.; Bourke et al. 2021; D'Alimonte et al. 2019; Peddle et al. 2018; Riemer 2007), and also through professional development (Jackson 2014). Training programs involving, for example, communication theory classes (Kruijver et al. 2000) and self-assessment tools that can be used in specific situations (Curtis et al. 2013; Rider and Keefer 2006) have shown convincingly positive results. The literature suggests that interactive approaches in small groups, in which competencies are practiced explicitly in an open and feedback-safe environment, are more effective (Bourke et al. 2021; D'Alimonte et al. 2019; AbuSeileek 2012; Fryer-Edwards et al. 2006). These can take different forms: project-based work, video reviews, and simulation or role-play games (see Hathaway et al. 2022 for a review; Schlegel et al. 2012). Finally, computer-assisted learning methods can be relevant for establishing a secure framework (especially, for example, when learning another language): anonymity helps to overcome anxiety or social blockages linked to fear of public speaking or of showing one's difficulties (AbuSeileek 2012). Each of these methods tackles one or more dimensions of communication, which must then be assessed as such, by means of tools specifically developed and adapted to the contexts in which these skills are expressed (e.g., see the two 4Cs evaluation grids for institutions and for games outlined in Section 4 and Section 5, below).

2.4. Collaboration

Collaborative problem solving—and, more generally, collaboration—has gained increasing attention in national and international assessments (e.g., PISA) as an educational priority encompassing social, emotional, and cognitive skills critical to efficiency, effectiveness, and innovation in the modern global economy (Graesser et al. 2018; OECD 2017). Understanding what makes collaboration effective is of crucial importance for professional practice and training (Détienne et al. 2012; Graesser et al. 2018), as evidenced by the long line of research on group or team collaboration over the past 40 years (for a review, see, e.g., Salas et al. 2004; Mathieu et al. 2017). Although there is no consensus on a definition of collaboration, scholars often see it as mutual engagement in a coordinated effort to achieve a common goal: it involves the sharing of goals, resources, and representations relating to the participants' joint activity, and other important aspects include mutual respect, trust, responsibilities, and accountability within situational rules and norms (Détienne et al. 2012).

In the teamwork research literature, skills are commonly described across three classes, most often labeled knowledge, behavior, and attitudes (e.g., Cannon-Bowers et al. 1995). Knowledge competencies refer to the skills needed to elaborate the knowledge content required for the group to process and successfully achieve its assigned task or goal. Behavior includes skills related to carrying out actions, coordination, communication, and interactions within the group, as well as with any other interlocutors relevant to the task at hand. Note here that effective collaboration involves skills that have also been identified elsewhere as essential competencies, including communication, creativity, and critical thinking. Finally, several attitudes have been evidenced or hypothesized as desirable competencies in the team context, for example, attitude towards teamwork, collective orientation, and cohesion/team morale. Another common distinction lies between teamwork and taskwork. Teamwork refers to the collaborative, communicative, or social skills required to coordinate the work among participants in order to achieve the task, whereas taskwork refers to specific aspects of solving the task, such as using the tools and knowing the procedures, policies, and other task-related activities (Salas et al. 2015; Graesser et al. 2018). Furthermore, collaborative competencies can have dimensions that are specific (to a group of people or to a task) and general (i.e., easily transferable to any group or team situation and to other tasks). For example, skills related to communication, information exchange, conflict management, maintaining attention and motivation, and leadership are present in, and transferable to, a large number of group work situations and tasks (team-generic and task-contingent skills). Other skills can, on the other hand, be more specific to a team or group, such as internal organization, motivation, and knowledge of the skills distributed within the team.

2.4.1. Individual Assessment of Collaboration

Assessing collaboration requires capturing the dynamic and multi-level nature of the collaboration process, which is not as easily quantifiable as group/team inputs and outputs (task performance, satisfaction, and changes at the group/team and individual levels). There are indeed multiple interactions between the context, the collaboration processes, the task processes, and their (various) outcomes (Détienne et al. 2012). The integrative concept of "quality of collaboration" (Burkhardt et al. 2009) encapsulates much of what is currently known about collaborative processes and what constitutes effective collaboration. According to this approach, collaborative processes can be grouped along several dimensions concerning communication processes (such as grounding), task-related processes (e.g., exchanges of knowledge relevant to the task at hand), and organization/coordination processes (Burkhardt et al. 2009). Communication processes are most important for ensuring the construction of a shared frame of reference within a group of collaborators. Task-related processes relate to how the group resolves the task at hand by sharing and co-elaborating knowledge, by confronting their various perspectives, and by converging toward negotiated solutions. Collaboration also involves group management activities, such as: (a) common goal management and coordination activities, e.g., allocation and planning of tasks; and (b) meeting/interaction management activities, e.g., ordering and postponing of topics in the meeting. Finally, the ability to pursue reflexive activity, in the sense of reflecting not only on the content of a problem or solution but also on one's collaboration and problem-solving strategies, is critical for the development of a team and supports it in changing and improving its practices. Graesser et al. (2018) identify collaborative skills based on the combination of these dimensions with steps in the problem-solving process.

A large body of methods developed to assess collaboration processes and collaborative tools has focused on quantifying a restricted subset of fine-grained interactions (e.g., the number of speakers' turns, the number of words spoken, the number of interruptions, or the number of grounding questions). This approach has at least two limitations. First, because these categories of analysis are often ad hoc with respect to the situation considered, they are difficult to apply across all situations and make comparison between studies difficult. Second, quantitative variations in most of these indicators are equivocal: an increase or decrease could signify either intensively interactive collaboration or, instead, major difficulties in establishing and/or maintaining the collaboration (Détienne et al. 2012). Alternatively, qualitative approaches based on multidimensional views of collaboration provide a more elaborated and nuanced view of collaboration and are useful for identifying potential relationships between distinct dimensions of collaboration and aspects of team performance, in order to identify processes that could be improved. Based on the method of Spada et al. (2005) in Computer-Supported Collaborative Learning (CSCL) research, Burkhardt et al. (2009) proposed a multi-dimensional rating scheme for evaluating the quality of collaboration (QC) in technology-mediated design. QC distinguishes seven dimensions, grouped along five aspects identified as central for collaboration in a problem-solving task such as design: communication (dimensions 1 and 2), task-oriented processes (3 and 4), group-oriented processes (5), symmetry in interaction—an orthogonal dimension—(6), and individual task orientation (7). This method has recently been adapted for use in assessing games as a support for learning collaborative skills.
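To make the structure of such a multi-dimensional rating scheme concrete, the sketch below encodes the seven QC dimensions and their five aspect groupings exactly as just described; the dimension labels (D1 to D7) and the 0 to 4 rating scale are illustrative placeholders, not the published instrument.

```python
from statistics import mean

# Aspect groupings of the seven quality-of-collaboration (QC) dimensions,
# following the description above (Burkhardt et al. 2009). "D1"..."D7" are
# placeholder labels; the published scheme gives each dimension a full name.
QC_ASPECTS = {
    "communication": ["D1", "D2"],
    "task-oriented processes": ["D3", "D4"],
    "group-oriented processes": ["D5"],
    "symmetry in interaction": ["D6"],  # orthogonal dimension
    "individual task orientation": ["D7"],
}

def aspect_profile(ratings: dict[str, float]) -> dict[str, float]:
    """Average a rater's per-dimension scores within each aspect.

    `ratings` maps each dimension (D1..D7) to a score; the scale assumed
    here (0 = very poor collaboration to 4 = very good) is illustrative.
    """
    return {aspect: mean(ratings[d] for d in dims)
            for aspect, dims in QC_ASPECTS.items()}

# Example: one rater's (fictitious) scores for a single design session.
session = {"D1": 3.0, "D2": 2.5, "D3": 3.5, "D4": 2.0,
           "D5": 3.0, "D6": 1.5, "D7": 2.5}
print(aspect_profile(session))
```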

2.4.2. Institutional and Environmental Support for Development of Collaboration and Collaborative Skills

Support for individuals' development of collaborative skills provided by institutions and programs can take a variety of forms: (a) the social impact of the physical structure of the organization, (b) the nature of the work required within the curriculum, (c) content within the curriculum focusing on collaboration and collaborative skills, and (d) the existence and promotion of extracurricular and inter-institutional opportunities for collaboration.

For instance, institutional support for collaboration has taken a variety of forms in fields such as healthcare, engineering, public participation, and education. Training and education programs such as Interprofessional Education or Team Sciences in the health domain (World Health Organization 2010; Hager et al. 2016; O'Carroll et al. 2021), Peer-Led Team Learning in the chemistry and engineering domains (Wilson and Varma-Nelson 2016), and Collaborative Problem Solving in education (Peña-López 2017; Taddei 2009) are notable examples.

Contextual support has recently arisen from the deployment of online digital media and new mixed realities in the workplace, in learning environments, and in society at large—a development obviously stimulated and accentuated by the COVID-19 pandemic. This has led many organizations to invest in support for synchronous and asynchronous collaboration (notably remote collaboration between employees, between students and educators, or among group members) in various ways, including the provision of communication hardware and software, computer-supported cooperative work and computer-supported collaborative learning platforms, training, practical guides, etc. Users can collaborate through heterogeneous hybrid collaborative interaction spaces that can be accessed through virtual or augmented reality, but also through simple video conferencing or even a voice-only or text-only interface. These new spaces for collaboration are, however, often difficult to use and less satisfactory than face-to-face interactions, suggesting the need for more research on collaborative activities and on how to support them (Faidley 2018; Karl et al. 2022; Kemp and Grieve 2014; Singh et al. 2022; Waizenegger et al. 2020).

A substantive body of literature on teams, collaborative learning, and computer-supported technologies provides evidence related to the individual, contextual, and technological factors impacting collaboration quality and efficiency. For example, teacher skills that are critical for enhancing collaboration include the abilities to plan, monitor, support, consolidate, and reflect upon student interaction in group work (Kaendler et al. 2016). Research also focuses on identifying the most relevant tasks and on evaluating the possibilities offered by technology to support, to assess (e.g., Nouri et al. 2017; Graesser et al. 2018), and/or to train the skills involved in pursuing effective and satisfying collaboration (see, e.g., Schneider et al. 2018; Doyle 2021; Ainsworth and Chounta 2021).

3. Labelization: Valorization of the 4Cs and Assessing Support for Their Development

Moving from the nature of the 4Cs and their individual assessment towards the ways in which institutions can support their development in individuals, we can now address the fundamentally important question of how best to support and promote this 21st century educational mission within and among institutions themselves. This also raises the question of the systemic recognition of educational settings that are conducive to the development of the 4Cs. In response to these questions, the nature and value of labelization are now presented.

A label is "a special mark created by a trusted third party and displayed on a product intended for sale, to certify its origin, to guarantee its quality and to ensure its conformity with the standards of practices in force" (Renard 2005). A label is therefore a way of informing the public about the objective properties and qualities of a product, service, or system. A label is usually easily identifiable and can be seen as proof that a product, service, company, or organization complies with defined criteria. Its effectiveness is therefore closely linked to the choice of requirements set out in its specifications, as well as to the independence and rigor of the body that verifies compliance with those criteria.

3.1. Labeling as a Means of Trust and Differentiation

As a sign of recognition established by a third party, a label or certification can constitute a proof of trust aimed at reassuring the final consumer. According to Sutter (2005), there are different means of signaling trust. First, the brand name of a product or service and its reputation can, in themselves, constitute a label when the brand name is recognized on the market. Second, various forms of self-declaration, such as internal company charters, though not assessed by a third party, show an internal commitment that can provide reassurance. Finally, there is certification or labeling, which is awarded by an external body and requires a third-party assessment by a qualified expert, according to criteria set out in a specific reference framework. It is this external body, a trusted third party, that guarantees the reliability of the label and constitutes a guarantee of credibility. Its objectivity and impartiality are meant to guarantee that the company, organization, product, or service meets defined quality or reliability criteria (Jahn et al. 2005).

Research on populations around the world (e.g., Amron 2018; Sasmita and Suki 2015) shows that consumers' buying decisions are heavily influenced by the trust they have in a brand. More specifically, third-party assurances and labelization have been shown to strongly influence customer buying intentions and purchasing behavior (e.g., Kimery and McCord 2002; Lee et al. 2004). Taking France as an example, research shows that quality certification is seen as "important" or "significant" by 76% of companies (Chameroy and Veran 2014), and decision makers feel more confident and are more willing to invest with the support of third-party approval than when their decision is based merely on a brand's reputation or its demonstrated level of social responsibility (Etilé and Teyssier 2016). Indeed, French companies with corporate social responsibility labels have been shown to have higher-than-average growth rates, and the adoption of quality standards is linked with a 7% increase in the share of export turnover (Restout 2020).

3.2. Influence on Choice and Adoption of Goods and Services

Studies diverge in this area, but building on the seminal work of Parkinson (1975), Chameroy and Veran (2014), in their research on the effect of labels on willingness to pay, found that in 75% of cases, products with labels were chosen and preferred over those without labels, demonstrating the impact of a label on customer confidence—provided that it is issued by a recognized third party. Brands with good reputations also tend to be preferred over cheaper new brands, because they are more accepted and valued by the individual's social network (Zielke and Dobbelstein 2007).

3.3. Process of Labelizing Products and Services

The creation of a label may result from a customer or market need, or from a request from a private industry sector or from the government. Creating a label involves setting up a working group (including stakeholders who are experts in the field, product managers, and a certification body) in order to elaborate a reference framework. This framework is then reviewed by a specialized committee and validated by the stakeholders. The standard includes evaluation criteria that must be clearly defined (Mourad 2017). An audit system is then set up by a trusted third party. It must include the drafting of an audit report, a system for making labeling decisions, and a system for identifying qualified assessors. The validity of the assessment process is reinforced by this double evaluation: a first level of audit carried out by a team of experts according to a clearly defined set of criteria, and a second level of decision making ensuring that the methodology and the result of the audit conform to the defined reference framework.

3.4. Labelization of 21st Century Skills

The world of education is particularly concerned with the need to develop and assess 21st century skills, because it represents the first link in the chain of skills acquisition, preparing the human resources of tomorrow. One important means of simultaneously offering a reliable, independent assessment of 21st century skills and valorizing them by making them a core target within an educational system (schools, universities, and teaching and training programs of all kinds) is labelization. Two examples of labelization processes related to 21st century skills were recently developed by the International Institute for Competency Development (2021; see iicd.net; accessed on 20 November 2022) working with international experts, teachers, and researchers from the University of Paris Cité (formerly Université Sorbonne Paris Cité), Oxford University, and AFNOR UK (an accredited certification body and part of AFNOR International, a subsidiary of the AFNOR group, the only standards body in France).

The last two or three decades have seen the simultaneous rise of international ranking systems and an interest in quality assurance and assessment in an increasingly competitive educational market (Sursock 2021). The aim of these labelization frameworks is to assist in the development of a "quality culture" in education by offering individual programs, institutions, and systems additional independent, reliable means of benchmarking, charting progress, and distinguishing themselves based on their capacity to support and promote the development of crucial skills. Importantly, the external perspectives provided by such assessment systems should be capable of being individually adapted and applied in a manner that resists becoming rigidly imposed external standards (Sursock and Vettori 2017). Similarly, as we have seen in the literature review, the best approach to understanding and assessing a particular C comes from combining different levels and perspectives in context. For example, important approaches to critical thinking have been made from educationally, philosophically, and psychologically focused vantage points (Lai 2011). Understandings of creativity are likewise the result of different approaches: the major models in the literature (e.g., the "4Ps" and "7Cs" models; see Lubart and Thornhill-Miller 2019) explicitly result from and include the objectives of different education-focused, process-focused, and "ingredient"- or component-focused approaches.

The two assessment frameworks outlined in the sections that follow were formulated with these different perspectives and objective needs in mind. Given the complexity and very different natures of their respective targets (one assesses entire formal educational contexts, such as institutions or programs, whereas the other targets the less multi-dimensional, informal educational activities represented by games), each framework's treatment of the individual Cs also represents what experts consider a target-appropriate balance of education- and curriculum-focused, process-focused, and component-focused assessment criteria.

4. The International Institute for Competency Development’s 21st Century Competencies 4Cs Assessment Framework for Institutions and Programs

One comprehensive attempt to operationalize programmatic-level and institutional-level support for the development of the 4Cs is the International Institute for Competency Development's 4Cs Assessment Framework (International Institute for Competency Development 2021). Based upon expert opinion and a review of the available literature, this evaluation grid is a practical tool that divides each of the 4Cs into three "user-friendly" but comprehensive components (see Table 1 and the definitions and further discussion in the sections that follow). Each of these components is then assessed across seven dimensions (see Table 2, below), designed to cover concisely the pedagogical process and the educational context. Examples for each point level are provided within the evaluation grid in order to offer additional clarity for educational stakeholders and expert assessors.

Table 1. Three different components of each C in IICD's 21st Century Skills 4Cs Assessment Framework.
Creativity: creative process; creative environment; creative product.
Critical thinking: critical thinking about the world; critical thinking about oneself (self-reflection); critical action and decision making.
Collaboration: engagement and participation; perspective taking and openness; social regulation.
Communication: message formulation; message delivery; message and communication feedback.

Table 2. Seven dimensions evaluated for the three different components of each C.

* Educational-level dependent and potentially less available for younger students or in some contexts.

The grid itself can be used in several important and different ways by different educational stakeholders: (1) by an institution itself, in its self-evaluation and possible preparation for a certification or labelization process; (2) as an explicit list of criteria for the external evaluation of an institution and its 4Cs-related programs; and (3) as a tool for targeting the long-term development of the institution, whether on its own or in dialogue with the labelization process.

4.1. Evaluation Grid for Creativity

Dropping the component of the "creative person", which is not relevant at the institutional level, this evaluation grid is based on Rhodes' (1961) classic "4P" model of creativity, which remains the most concise model today (Lubart and Thornhill-Miller 2019). The three "P" components retained are: creative process, creative environment, and creative product. Creative process refers to the acquisition of a set of tools and techniques that students can use to enhance the creativity of their thinking and work. Creative environment (also called "press" in the earlier literature) concerns how the physical and social surroundings of students can help them be more creative. Finally, creative product refers to the evaluation of actual "productions" (e.g., a piece of art, a text, a speech, etc.) generated through the creative process.

4.2. Evaluation Grid for Critical Thinking

Our evaluation grid divides critical thinking into three main components: critical thinking about the world, critical thinking about oneself (self-reflection), and critical action and decision making. The first component refers to having an evidence-based view of the exterior world, notably by identifying and evaluating sources of information and using them to question current understandings and solve problems. Self-reflection refers to thinking critically about one's own life situation, values, and actions; it presupposes autonomy of thought and a certain distance from, as well as the most objective observation possible of, one's own knowledge ("meta-cognition"). The third and final component, critical action and decision making, is about using critical thinking skills more practically in order to make appropriate life decisions and to be open to different points of view. This component also addresses soft skills and attitudes, such as appropriately trusting information.

Our evaluation framework for critical thinking was in part inspired by Barnett's (2015) "curriculum for critical being", a model that distinguishes two axes: one defined by qualitative differences in the level of criticality attained, and the other comprising three different domains of application: formal knowledge, the self, and the world. The first two components of our framework (and the seven dimensions on which they are rated) reflect and encompass these three domains. Similar to Barnett's proposal, our third rubric moves beyond the "skills-plus-dispositions" model of competency implicit in much theorizing about critical thinking and adds the importance of "action"—not just the ability to think critically and the disposition to do so, but the central importance of training and practicing "critical doing" (Barnett 2015). Critical thinking should also be exercised collectively by involving students in collective thinking, facilitating the exchange of ideas and civic engagement (Huber and Kuncel 2016).

4.3. Evaluation Grid for Collaboration

The first component of collaboration skills in the IICD grid is engagement and participation, which refers to active engagement in group work. Perspective taking and openness concerns the flexibility to work with and accommodate other group members and their points of view. The final component—social regulation—is about being able to work toward a common goal, notably through compromise and negotiation, as well as being aware of the different types of roles that group members can hold (Hesse et al. 2015; Rusdin and Ali 2019; Care et al. 2016). (These last two components include elements of leadership, character, and emotional intelligence as sometimes described in other soft-skill and competency-related systems.) Participation, social regulation, and perspective taking have been identified as central social skills in collaborative problem solving (Hesse et al. 2015), and with regard to social regulation in this context, recognizing and profiting from group diversity is key (Graesser et al. 2018). In describing an assessment of collaborative problem solving in an educational setting (with a task in which two or more students must collaborate in order to solve it, each using a different set of resources), two main underpinning skills were identified: the social skill of audience awareness ("how to adapt one's own behavior to suit the needs of the task and the partner's requirements", Care et al. 2016, p. 258) and the cognitive skill of planning and executing (developing a plan to reach a goal) (Care et al. 2016). The former is included in the perspective taking and openness rubric and the latter in the social regulation component of the IICD grid. Evans (2020) identified four main collaboration skills consistently mentioned in the scientific literature, all of which are assessed in the IICD grid: the ability to plan and make group decisions (example item from the IICD grid: teachers provide assistance to students to overcome differences and reach a common goal during group work); the ability to communicate about thinking with the group (assessed notably in the meta-reflection strand of the IICD grid); the ability to contribute resources, ideas, and efforts and to support group members (included notably in the engagement and participation and social regulation components); and, finally, the ability to monitor, reflect, and adapt individual and group processes to benefit the group (example item from the IICD grid: students use perspective-taking tools and techniques in group activities).

4.4. Evaluation Grid for Communication

The evaluation grid for communication is likewise composed of three components: message formulation, message delivery, and message and communication feedback. Message formulation refers to the ability to design and structure a message to be sent, such as outlining the content of an argument. Message delivery is about effectively transmitting the verbal and non-verbal aspects of a message. Finally, message and communication feedback refers to the ability of students and teachers to understand their audience, analyze their social surroundings, and interpret information in context. Other components of communication skills, such as theory of mind, empathy, and emotional intelligence, are also relevant and are included in the process of applying the grid. Thompson (2020) proposes a four-component operationalized definition of communication for its assessment in students. First, they describe a comprehension strand covering the understanding and selection of adequate information from a range of sources; message formulation in the IICD grid captures this dimension through its focus on content analysis and generation. Second, they mention the presentation of information and ideas in several different modes, verbal as well as non-verbal, adjusted to the intended audience; the message delivery component of the IICD grid focuses on these points. Third, they note the importance of communication technology and its advanced use. The IICD grid also covers the use of technology in its tools and techniques category, with, for example, an item that reads: students learn to effectively use a variety of formats of communication (social media, making a video, e-mail, letter writing, creating a document). Finally, Thompson (2020) describes the recognition of cultural and other differences as an important aspect of communication. The IICD grid aims to incorporate these aspects, notably in the meta-reflection category under each of the three components.
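Collecting the component names given in Sections 4.1 to 4.4, the sketch below shows one possible in-memory representation of the institutional grid for scoring purposes. It is a minimal illustration only: the seven dimensions rated for each component (Table 2) are not enumerated in this paper and are therefore left as anonymous slots, and the integer point scale is an assumption.

```python
# One possible representation of the IICD institutional evaluation grid,
# assembled from the component names given in Sections 4.1-4.4. The seven
# dimensions per component (Table 2) are unnamed placeholders here, and the
# point scale used for each dimension is an assumption.
FOUR_CS_COMPONENTS: dict[str, list[str]] = {
    "creativity": [
        "creative process", "creative environment", "creative product"],
    "critical thinking": [
        "critical thinking about the world",
        "critical thinking about oneself",
        "critical action and decision making"],
    "collaboration": [
        "engagement and participation",
        "perspective taking and openness",
        "social regulation"],
    "communication": [
        "message formulation", "message delivery",
        "message and communication feedback"],
}

N_DIMENSIONS = 7  # each component is rated across seven dimensions

def blank_scoresheet() -> dict[str, dict[str, list[int | None]]]:
    """Create an empty scoresheet with one rating slot per C, component,
    and dimension (None = not yet rated)."""
    return {c: {component: [None] * N_DIMENSIONS for component in components}
            for c, components in FOUR_CS_COMPONENTS.items()}

# Example: an assessor records a (hypothetical) rating for one dimension.
sheet = blank_scoresheet()
sheet["creativity"]["creative process"][0] = 3
```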

5. Assessing the 4Cs in Informal Educational Contexts: The Example of Games

5.1. The 4Cs in Informal Educational Contexts

So far, the focus has been on rather formal ways of nurturing the 4Cs. Although institutions and training programs are perhaps the most significant and necessary avenues of education, they are not the sole contexts in which learning and improvement of the 4Cs can occur. One other important potential learning context is game play. Games are activities that are present and participated in throughout human society—by people of all ages, genders, and socio-economic statuses (Bateson and Martin 2013; Huizinga 1949; Malaby 2007). This informal setting can also provide favorable conditions for improving the 4Cs (van Rosmalen et al. 2014) and should not be under-appreciated. Games provide a unique environment for learning, as they can foster a space in which to freely explore possibilities and one's own potential (de Freitas 2006). We argue that games are a significant potential pathway for the improvement of the 4Cs and, as such, merit the same attention as more formal ways of learning and developing competencies.

5.2. 4Cs Evaluation Framework for Games

Compared to schools and educational institutions, the focus of IICD's evaluation framework for games (see International Institute for Competency Development 2021) is narrower. It is thus fundamentally different from the institutional grid: games, complex and deep as they can sometimes be, cannot be compared directly to the complexity of a school curriculum and all the programs it contains. The evaluation of a game's effectiveness for training or improving a given C rests on the following principle: if a game presents affordances conducive to exercising a given skill, engaged playing of that game should help improve that skill.

The game’s evaluation grid is scored based on two criteria. For example, as a part of a game’s rating as a tool for the development of creativity, we determine the game must first meet two conditions. First, whether or not the game allows the opportunity for creativity to manifest itself: if creativity cannot occur in the game, it is obviously not eligible to receive ratings for that C. Second, whether or not creativity is needed in order to perform well in the game: if the players can win or achieve success in the game without needing creativity, this also means it cannot receive a rating for that C. If both conditions are met, however, the game will be considered potentially effective to improve creativity through the practice of certain components of creative behavior. This basic principle applies for all four of the Cs.

As outlined in Table 3, below, the evaluation grid for each of the four Cs is composed of five components relevant to games, and these components differ for each of the Cs. The grid works as follows: for each of the five components of each C, we evaluate the game on a list of sub-components using two yes/no scales: one for whether it is "possible" for that sub-component to manifest, and one for whether that sub-component is "required for success" in the game. This evaluation is done for all sub-components. After this, each general component is rated on the same two indicators: if 60% (i.e., three out of five) or more of its sub-components are positively rated as required, the general component is considered required. Then, the game is evaluated for its effectiveness in training and improving each of the 4Cs: if 60% or more of the components are positively rated as required, the game is labelized as having the potential to be effective for training and improving the corresponding C.

Table 3. Five different components evaluated for each C by the 4Cs assessment framework for games.
Creativity: originality; divergent thinking; convergent thinking; mental flexibility; creative dispositions.
Critical thinking: goal-adequate discernment; objective thinking; metacognition; elaborate reasoning; uncertainty management.
Collaboration: collaboration fluidity; well-argued deliberation and consensus-based decision; balance of contribution; organization and coordination; cognitive syncing, input, and support.
Communication: mastery of written and spoken language; verbal communication; non-verbal communication; social interactions; social cognition.
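As a worked illustration of the two-stage aggregation rule just described, the following sketch implements the 60% thresholds. The ratings are fictitious, and the reading that a sub-component counts as "required" only when it can also manifest in the game is our interpretation of the two eligibility conditions, not a published specification.

```python
from dataclasses import dataclass

@dataclass
class SubComponentRating:
    """The two yes/no judgments recorded for one sub-component of a C."""
    possible: bool              # can the sub-component manifest in the game?
    required_for_success: bool  # is it needed to perform well in the game?

def component_required(subs: list[SubComponentRating]) -> bool:
    """A general component counts as 'required' when at least 60% of its
    sub-components (i.e., three out of five) are rated as required; here a
    sub-component counts only if it can also manifest in the game."""
    n_required = sum(s.possible and s.required_for_success for s in subs)
    return n_required / len(subs) >= 0.6

def effective_for_c(components: list[list[SubComponentRating]]) -> bool:
    """A game is rated potentially effective for a given C when at least 60%
    of that C's five components are themselves rated as required."""
    n_required = sum(component_required(subs) for subs in components)
    return n_required / len(components) >= 0.6

# Example: one component with five sub-component ratings (fictitious).
component = ([SubComponentRating(True, True)] * 3
             + [SubComponentRating(True, False)] * 2)
assert component_required(component)  # 3/5 = 60% -> required
```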

The evaluation grid for creativity is based on the multivariate model of creative potential (see Section 2.1.1 and Lubart et al. 2013 for more information) and is composed of four cognitive factors and one conative factor: originality , divergent thinking , convergent thinking , mental flexibility , and creative dispositions . Originality refers to the generation of ideas that are novel or unexpected, depending on the context. Divergent thinking corresponds to the generation of multiple ideas or solutions. Convergent thinking refers to the combination of multiple ideas and the selection of the most creative idea. Mental flexibility entails changing perspectives on a given problem and breaking away from initial ideas. Finally, creative dispositions concerns multiple personality-related factors conducive to creativity, such as openness to experience or risk taking.

The evaluation grid for critical thinking echoes Halpern's (1998) and Marin and Halpern's (2011) considerations for teaching this skill, that is, it takes into account thinking skills, metacognition, and dispositions. The five components of the critical thinking grid are: goal-adequate discernment, objective thinking, metacognition, elaborate reasoning, and uncertainty management. Goal-adequate discernment entails the formulation of inferences and the discernment of contradictions when faced with a problem. Objective thinking corresponds to the suspension of one's own judgment and the analysis of affirmations and sources in the most objective manner possible. Metacognition, here, is about questioning and reassessing information, as well as awareness of one's own cognitive biases. Elaborate reasoning entails reasoning in a way that is cautious, thorough, and serious. Finally, uncertainty management refers to the dispositional propensity to tolerate ambiguity and accept doubt.

The evaluation grid for collaboration is based on the quality of collaboration (QC) method (Burkhardt et al. 2009; see Section 2.4.1 for more details) and is composed of the following five components: collaboration fluidity; well-argued deliberation and consensus-based decision; balance of contribution; organization and coordination; and cognitive syncing, input, and support. Collaboration fluidity entails the absence of speech overlap and the presence of a good flow in turn taking. Well-argued deliberation and consensus-based decision is about contributing to the discussion and the task at hand, as well as participating in discussions and arguments in order to reach a consensus. Balance of contribution refers to having equal or equivalent contributions to organization, coordination, and decision making. Organization and coordination refers to the effective management of roles, time, and "deadlines", as well as the attribution of roles depending on participants' skills. Finally, cognitive syncing, input, and support is about bringing ideas and resources to the group, as well as supporting and reinforcing other members of the group.

The five components used to evaluate communication in games cover linguistic, pragmatic, and social aspects. Linguistic skills per se are captured by the mastery of written and spoken language component, which assesses language comprehension and the appropriate use of vocabulary. Pragmatic skills are captured by the verbal and non-verbal communication components and refer to the efficient use of verbal and bodily signals in the context of the game to achieve one's communicative goals (Grassmann 2014; Matthews 2014). Finally, the grid also evaluates social skills with its two last components, social interactions and social cognition, which refer, respectively, to the ability to interact with others appropriately—including by complying with the rules of the game—and to the understanding of other people's mental states (Tomasello 2005).

6. Discussion and Conclusions

Each of the 4Cs is a broad, multi-faceted concept that is the subject of a tremendous amount of research and discussion by a wide range of stakeholders across different disciplines, professions, and parts of the educational establishment. The development of evaluation frameworks that allow support for the 4Cs to be assessed and publicly recognized by means of a label is an important step in promoting and fostering these skills in educational contexts. As illustrated by IICD's 4Cs Framework for educational institutions and programs, as well as its games/activities evaluation grid, the specific criteria used to detect support for each C can vary depending upon the educational context (e.g., formal and institution-level, or informal and activity-level). Yet considering the 4Cs together highlights some additional observations, current challenges, and opportunities for the future that are worthy of discussion.

6.1. Interrelationships between the 4Cs and a New Model for Use in Pedagogy and Policy Promotion

One very important issue for understanding the 4Cs and their educational implementation—one that can be simultaneously a help and a hindrance in teaching them, as well as a challenge in assessing them—is their multidimensionality and interrelatedness. In other words, the 4Cs are not entirely separate entities but instead, as Figure 2 shows, should be seen as four interlinked basic "elements" of future-oriented education that can help individuals in their learning process and, together, synergistically "bootstrap" the development of their cognitive potentials. Lamri and Lubart (2021), for example, found that a certain base level of creativity was a necessary but not sufficient condition for success in managerial tasks, and that high-level performance required a combination of all four Cs. Some thinkers have argued that one cannot be creative without critical thinking, and that critical thinking in turn requires creativity, for example, to come up with alternative arguments (see Paul and Elder 2006). Similarly, among many other interrelationships, there is no collaboration without communication—and even ostensibly individual creativity is a "collaboration" of sorts with the general culture and one's precursors in a given field. As a result, teaching (or teaching towards) one of the 4Cs without involving one or more of the others ranges from suboptimal to impossible, and this commingling also underscores the genuine need for, and appropriateness of, assessing them together.

Figure 2. “‘Crea-Critical-Collab-ication’: a Dynamic Interactionist Model of the 4Cs”. (Illustration of the interplay and interpenetration of creativity, critical thinking, collaboration, and communication shown in dimensional space according to their differing cognitive/individual vs. social/interpersonal emphases; © 2023, Branden Thornhill-Miller. All Rights Reserved. thornhill-miller.com; accessed on 20 January 2023.)

From this perspective, Thornhill-Miller (2021) proposed a “dynamic interactionist model of the 4Cs” and their interrelated contributions to the future of education and work. Presented in Figure 2, this model is meant to serve as a visual and conceptual aid for understanding the 4Cs and their interrelationships, thereby also promoting their better use and understanding in pedagogical and policy settings. In addition to suggesting the portmanteau “crea-critical thinking” as a new term for the substantial overlap between creative and critical thinking processes, the model’s title, “Crea-Critical-Collab-ication”, is a verbal representation of the fluid four-way interrelationship among the 4Cs shown visually in Figure 2 (a title meant to playfully repackage the 4Cs for pedagogical and policy uses). The model further suggests some rough dimensional differences in emphasis among the 4Cs: the frequently greater emphasis on cognitive and individual elements at play in creativity and critical thinking, in comparison to the social and interpersonal aspects more central to communication and collaboration (Thornhill-Miller 2021).
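As a rough computational paraphrase of the model’s dimensional claim, one might place each C on a single cognitive/individual versus social/interpersonal axis. The source describes this axis only qualitatively; the coordinates below are invented purely for illustration.

```python
# Invented coordinates expressing the model's claimed emphases:
# negative = leans cognitive/individual, positive = leans social/interpersonal.
# The article gives no quantitative positions; these numbers are illustrative.
EMPHASIS = {
    "creativity": -0.6,
    "critical thinking": -0.4,
    "collaboration": +0.7,
    "communication": +0.5,
}

def nearer_pole(skill: str) -> str:
    """Report which end of the axis a skill leans towards."""
    return "social/interpersonal" if EMPHASIS[skill] > 0 else "cognitive/individual"

for skill, value in EMPHASIS.items():
    print(f"{skill:18} {value:+.1f}  {nearer_pole(skill)}")
```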

Similarly focused on the need to promote a phase change towards future-oriented education, Lucas (2019) and colleagues have suggested conflating creative thinking and critical thinking in order to propose “3Cs” (creative thinking, communication, and collaboration) as new “foundational literacies” to add, symmetrically, to the 3Rs (Reading, wRiting, and aRithmetic) of previous educational eras. Although we applaud these efforts, from our applied research perspective we believe that the individual importance of, and distinct differences between, creative thinking and critical thinking support preserving both as separate constructs, in order to encourage the fullest development of each. Moreover, if only three categories were somehow required or preferable, one could argue that uniting communication and collaboration (as “collab-ication” suggests) would be preferable, particularly given that substantial aspects of communication are already covered within the 3Rs. In any case, we look forward to more such innovations and collaborations in this vibrant and important area of work at the crossroads of research, pedagogy, and policy development.

6.2. Limitations and Future Work

The rich literature on each of the 4Cs shows the positive effects of integrating these dimensions into educational and professional curricula. At the same time, the complexity of their definitions makes them difficult to assess, both in terms of reliability (an assessment must not vary from one measurement to another) and validity (a test must measure what it is intended to measure). However, applied research in this area is becoming increasingly rigorous, with a growing capacity to provide the tools needed for evidence-based practice. The development of these practices should involve interdisciplinary teams of teachers and other educational practitioners who are equipped and trained accordingly. Similarly, on the research side, further exploration and clarification of the subcomponents of the 4Cs and other related skills will be important. Recent efforts to clarify the conceptual overlap and hierarchical relations among soft skills for the future of education and work have been helpful and promising in this regard (e.g., Joie-La Marle et al. 2022; Lamri et al. 2022). But the more definitive taxonomy and measurement model that we currently lack may only be established through the large-scale administration of a comprehensive battery of skill-measuring psychometric tests to appropriate cross sections of society.
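As one concrete illustration of the reliability side of this challenge, the sketch below computes Cronbach’s alpha, a standard internal-consistency estimate for multi-item assessments. The ratings and the scale are fabricated for illustration and do not come from the article.

```python
def cronbach_alpha(scores: list[list[float]]) -> float:
    """Cronbach's alpha for scores[r][i] = rating of respondent r on item i.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores),
    using population variances throughout. Requires at least two items.
    """
    n_items = len(scores[0])

    def variance(xs: list[float]) -> float:
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([row[i] for row in scores]) for i in range(n_items)]
    total_var = variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Five respondents rated on four 4Cs-related items (invented data).
data = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]
print(round(cronbach_alpha(data), 2))  # -> 0.94; high alpha = consistent items
```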

The rapid development and integration of new technologies will also aid, and change, the contexts, resources, and implementation of the 4Cs. For example, recent developments make it clear that the 4Cs will be enhanced and changed by interaction with artificial intelligence, even as 4Cs-related skills will probably, for the same reason, increasingly constitute the core of available human work in the future (see, e.g., Ross 2018). Similarly, research on virtual reality and creativity suggests that VR environments can assist and expand individual and collaborative creativity (Bourgeois-Bougrine et al. 2022). Because VR technologies offer the possibility of enhanced and materially enriched communication, collaboration, and information availability, they allow not only for the enhancement of creativity techniques but also for similar expansions and improvements of almost all forms of human activity (see Thornhill-Miller and Dupont 2016), including the other three Cs.

6.3. Conclusion: Labelization of the 4Cs and the Future of Education and Work

Traditional educational approaches cannot meet the educational needs of our emergent societies if they do not teach, promote, and assess in line with the new learner characteristics and contexts of the 21st century (Sahin 2009). The future-oriented change and development that this shift requires in institutional practices, programming, and structure will likely meet significant resistance from comfortably entrenched (and often outdated) segments of traditional educational and training establishments; additional external evaluation and monitoring is rarely welcomed by workers in any context. We believe, however, that top-down processes from innovative and competition-conscious administrative levels will be met by bottom-up demands from students and education consumers to support these institutional changes. And we contend that efforts such as labelizing 4C processes will push educators and institutions towards more relevant offerings, oriented towards the future of work and helping to build a more successful future for all.

In the end, the 4Cs framework seems to be a manageable, focused model for modernizing education, and one worthy of its growing prevalence in the educational and research marketplace, for several reasons. These include the complexity and cumbersome nature of larger alternative systems and the 4Cs’ persuasive presence at the core of a number of early, industry-driven frameworks. The 4Cs have also benefitted from subsequent promotion by organizations such as the OECD and the World Economic Forum, as well as from some more direct support in recent empirical research. The promotion, teaching, and assessment of the 4Cs will require a complex social intervention and a mobilization of educational resources: a major shift in pedagogy and institutional structures. Yet the same evolving digital technologies that have largely created the need for these massive, rapid changes can also assist in implementing solutions (van Laar et al. 2017). To the extent that future research converges on such a model, one already found pedagogically useful and policy-friendly by many individuals and organizations, the 4Cs framework has the potential to become a manageable core for 21st-century skills and the future of education and work, one that stakeholders with various agendas can already begin building on for a better educational and economic future together.

Funding Statement

This research received no external funding.

Author Contributions

Conceptualization, B.T.-M. and T.L.; writing—original draft preparation, B.T.-M., A.C., M.M., J.-M.B., T.M., S.B.-B., S.E.H., F.V., M.A.-L., C.F., D.S., F.M.; writing—review and editing, B.T.-M., A.C., T.L., J.-M.B., C.F.; visualization, B.T.-M.; supervision, B.T.-M., T.L.; project administration, B.T.-M., T.L. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

B.T.-M. and T.L. are, respectively, unpaid academic co-founder of and project collaborator with the International Institute for Competency Development, whose labelization frameworks (developed in cooperation with AFNOR International and the LaPEA lab of Université Paris Cité and Université Gustave Eiffel) are used as examples in this review. S.E.H. and M.A.-L. are employees of AFNOR International. No funding was received to support this research or article, which reflects the views of the scientists and researchers and not their organizations or companies.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

  • Abrami Philip C., Bernard Robert M., Borokhovski Eugene, Waddington David I., Wade C. Anne, Persson Tonje. Strategies for Teaching Students to Think Critically: A Meta-Analysis. Review of Educational Research. 2015; 85 :275–314. doi: 10.3102/0034654314551063. [ CrossRef ] [ Google Scholar ]
  • AbuSeileek Ali Farhan. The Effect of Computer-Assisted Cooperative Learning Methods and Group Size on the EFL Learners’ Achievement in Communication Skills. Computers & Education. 2012; 58 :231–39. doi: 10.1016/j.compedu.2011.07.011. [ CrossRef ] [ Google Scholar ]
  • Ahern Aoife, Dominguez Caroline, McNally Ciaran, O’Sullivan John J., Pedrosa Daniela. A Literature Review of Critical Thinking in Engineering Education. Studies in Higher Education. 2019; 44 :816–28. doi: 10.1080/03075079.2019.1586325. [ CrossRef ] [ Google Scholar ]
  • Ainsworth Shaaron E., Chounta Irene-Angelica. The roles of representation in computer-supported collaborative learning. In: Cress Ulrike, Rosé Carolyn, Wise Alyssa Friend, Oshima Jun., editors. International Handbook of Computer-Supported Collaborative Learning. Springer; Cham: 2021. pp. 353–69. [ CrossRef ] [ Google Scholar ]
  • Alsaleh Nada J. Teaching Critical Thinking Skills: Literature Review. [(accessed on 1 November 2022)]; The Turkish Online Journal of Educational Technology. 2020 19 :21–39. Available online: http://files.eric.ed.gov/fulltext/EJ1239945.pdf [ Google Scholar ]
  • Al-Samarraie Hosam, Hurmuzan Shuhaila. A Review of Brainstorming Techniques in Higher Education. Thinking Skills and Creativity. 2018; 27 :78–91. doi: 10.1016/j.tsc.2017.12.002. [ CrossRef ] [ Google Scholar ]
  • Amabile Teresa M. Social Psychology of Creativity: A Consensual Assessment Technique. Journal of Personality and Social Psychology. 1982; 43 :997–1013. doi: 10.1037/0022-3514.43.5.997. [ CrossRef ] [ Google Scholar ]
  • Amron Amron. The influence of brand image, brand trust, product quality, and price on the consumer’s buying decision of MPV cars. European Scientific Journal. 2018; 14 :228–39. doi: 10.19044/esj.2018.v14n13p228. [ CrossRef ] [ Google Scholar ]
  • Ananiadou Katerina, Claro Magdalena. 21st Century Skills and Competences for New Millennium Learners in OECD Countries. OECD Publishing; Paris: 2009. OECD Education Working Papers, No. 41. [ CrossRef ] [ Google Scholar ]
  • Bailin Sharon. Achieving Extraordinary Ends: An Essay on Creativity. Springer; Dordrecht: 1988. [ CrossRef ] [ Google Scholar ]
  • Bandyopadhyay Subir, Szostek Jana. Thinking Critically about Critical Thinking: Assessing Critical Thinking of Business Students Using Multiple Measures. Journal of Education for Business. 2019; 94 :259–70. doi: 10.1080/08832323.2018.1524355. [ CrossRef ] [ Google Scholar ]
  • Barber Herbert F. Developing Strategic Leadership: The US Army War College Experience. Journal of Management Development. 1992; 11 :4–12. doi: 10.1108/02621719210018208. [ CrossRef ] [ Google Scholar ]
  • Barnett Ronald. The Palgrave Handbook of Critical Thinking in Higher Education. Palgrave Macmillan US; New York: 2015. A Curriculum for Critical Being; pp. 63–76. [ CrossRef ] [ Google Scholar ]
  • Bateson Patrick, Martin Paul. Play, Playfulness, Creativity and Innovation. Cambridge University Press; Cambridge: 2013. [ CrossRef ] [ Google Scholar ]
  • Batey Mark. The Measurement of Creativity: From Definitional Consensus to the Introduction of a New Heuristic Framework. Creativity Research Journal. 2012; 24 :55–65. doi: 10.1080/10400419.2012.649181. [ CrossRef ] [ Google Scholar ]
  • Battelle for Kids Framework for 21st Century Learning Definitions. 2022. [(accessed on 1 November 2022)]. Available online: http://static.battelleforkids.org/documents/p21/P21_Framework_DefinitionsBFK.pdf
  • Bellaera Lauren, Weinstein-Jones Yana, Ilie Sonia, Baker Sara T. Critical Thinking in Practice: The Priorities and Practices of Instructors Teaching in Higher Education. Thinking Skills and Creativity. 2021; 41 :100856. doi: 10.1016/j.tsc.2021.100856. [ CrossRef ] [ Google Scholar ]
  • Blessinger Patrick, Anchan John P. In: Democratizing Higher Education: International Comparative Perspectives. 1st ed. Blessinger Patrick, Anchan John P., editors. Routledge; London: 2015. [(accessed on 1 November 2022)]. Available online: https://www.routledge.com/Democratizing-Higher-Education-International-Comparative-Perspectives/Blessinger-Anchan/p/book/9781138020955 [ Google Scholar ]
  • Bloom Benjamin Samuel., editor. Taxonomy of Educational Objectives: The Classification of Educational Goals: Handbook I, Cognitive Domain. Longmans; New York: 1956. [ Google Scholar ]
  • Bourgeois-Bougrine Samira. The Palgrave Encyclopedia of the Possible. Springer International Publishing; Cham: 2022. Design Thinking. [ CrossRef ] [ Google Scholar ]
  • Bourgeois-Bougrine Samira, Bonnardel Nathalie, Burkhardt Jean-Marie, Thornhill-Miller Branden, Pahlavan Farzaneh, Buisine Stéphanie, Guegan Jérôme, Pichot Nicolas, Lubart Todd. Immersive Virtual Environments’ Impact on Individual and Collective Creativity: A Review of Recent Research. European Psychologist. 2022; 27 :237–53. doi: 10.1027/1016-9040/a000481. [ CrossRef ] [ Google Scholar ]
  • Bourke Sharon L., Cooper Simon, Lam Louisa, McKenna Lisa. Undergraduate Health Professional Students’ Team Communication in Simulated Emergency Settings: A Scoping Review. Clinical Simulation in Nursing. 2021; 60 :42–63. doi: 10.1016/j.ecns.2021.07.004. [ CrossRef ] [ Google Scholar ]
  • Brookfield Stephen D. Assessing Critical Thinking. New Directions for Adult and Continuing Education. 1997; 75 :17–29. doi: 10.1002/ace.7502. [ CrossRef ] [ Google Scholar ]
  • Burkhardt Jean-Marie, Détienne Françoise, Hébert Anne-Marie, Perron Laurence. Human-Computer Interaction—INTERACT 2009. Springer; Berlin/Heidelberg: 2009. Assessing the ‘Quality of Collaboration’ in Technology-Mediated Design Situations with Several Dimensions; pp. 157–60. [ CrossRef ] [ Google Scholar ]
  • Camarda Anaëlle, Bouhours Lison, Osmont Anaïs, Masson Pascal Le, Weil Benoît, Borst Grégoire, Cassotti Mathieu. Opposite Effect of Social Evaluation on Creative Idea Generation in Early and Middle Adolescents. Creativity Research Journal. 2021; 33 :399–410. doi: 10.1080/10400419.2021.1902174. [ CrossRef ] [ Google Scholar ]
  • Cannon-Bowers Janis, Tannenbaum Scott I., Salas Eduardo, Volpe Catherine E. Defining team competencies and establishing team training requirements. In: Guzzo Richard A., Salas Eduardo., editors. Team Effectiveness and Decision Making in Organizations. Jossey-Bass; San Francisco: 1995. pp. 333–80. [ Google Scholar ]
  • Care Esther, Scoular Claire, Griffin Patrick. Assessment of Collaborative Problem Solving in Education Environments. Applied Measurement in Education. 2016; 29 :250–64. doi: 10.1080/08957347.2016.1209204. [ CrossRef ] [ Google Scholar ]
  • Care Esther, Kim Helyn, Vista Alvin, Anderson Kate. Education System Alignment for 21st Century Skills: Focus on Assessment. Brookings Institution; Washington, DC: 2018. [ Google Scholar ]
  • Carmichael Erst, Farrell Helen. Evaluation of the Effectiveness of Online Resources in Developing Student Critical Thinking: Review of Literature and Case Study of a Critical Thinking Online Site. Journal of University Teaching and Learning Practice. 2012; 9 :38–55. doi: 10.53761/1.9.1.4. [ CrossRef ] [ Google Scholar ]
  • Carson Shelley H., Peterson Jordan B., Higgins Daniel M. Reliability, Validity, and Factor Structure of the Creative Achievement Questionnaire. Creativity Research Journal. 2005; 17 :37–50. doi: 10.1207/s15326934crj1701_4. [ CrossRef ] [ Google Scholar ]
  • Casey Betty J., Getz Sarah, Galvan Adriana. The Adolescent Brain. Developmental Review: DR. 2008; 28 :62–77. doi: 10.1016/j.dr.2007.08.003. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cassotti Mathieu, Camarda Anaëlle, Poirel Nicolas, Houdé Olivier, Agogué Marine. Fixation Effect in Creative Ideas Generation: Opposite Impacts of Example in Children and Adults. Thinking Skills and Creativity. 2016; 19 :146–52. doi: 10.1016/j.tsc.2015.10.008. [ CrossRef ] [ Google Scholar ]
  • Chameroy Fabienne, Veran Lucien. Immatérialité de La Qualité et Effet Des Labels Sur Le Consentement à Payer. Management International. 2014; 18 :32–44. doi: 10.7202/1025088ar. [ CrossRef ] [ Google Scholar ]
  • Chiu Fa-Chung. Improving Your Creative Potential without Awareness: Overinclusive Thinking Training. Thinking Skills and Creativity. 2015; 15 :1–12. doi: 10.1016/j.tsc.2014.11.001. [ CrossRef ] [ Google Scholar ]
  • Chulvi Vicente, Mulet Elena, Chakrabarti Amaresh, López-Mesa Belinda, González-Cruz Carmen. Comparison of the Degree of Creativity in the Design Outcomes Using Different Design Methods. Journal of Engineering Design. 2012; 23 :241–69. doi: 10.1080/09544828.2011.624501. [ CrossRef ] [ Google Scholar ]
  • Cinque Maria. ‘Lost in Translation’. Soft Skills Development in European Countries. Tuning Journal for Higher Education. 2016; 3 :389–427. doi: 10.18543/tjhe-3(2)-2016pp389-427. [ CrossRef ] [ Google Scholar ]
  • Cömert Musa, Zill Jördis Maria, Christalle Eva, Dirmaier Jörg, Härter Martin, Scholl Isabelle. Assessing Communication Skills of Medical Students in Objective Structured Clinical Examinations (OSCE) - A Systematic Review of Rating Scales. PLoS ONE. 2016; 11 :e0152717. doi: 10.1371/journal.pone.0152717. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Corazza Giovanni Emanuele. Potential Originality and Effectiveness: The Dynamic Definition of Creativity. Creativity Research Journal. 2016; 28 :258–67. doi: 10.1080/10400419.2016.1195627. [ CrossRef ] [ Google Scholar ]
  • Corazza Giovanni Emanuele, Darbellay Frédéric, Lubart Todd, Panciroli Chiara. Developing Intelligence and Creativity in Education: Insights from the Space–Time Continuum. In: Lemmetty Soila, Collin Kaija, Glăveanu Vlad, Forsman Panu., editors. Creativity and Learning. Springer International Publishing; Cham: 2021. pp. 69–87. [ CrossRef ] [ Google Scholar ]
  • Cotter Katherine N., Beghetto Ronald A., Kaufman James C. Creativity in the Classroom: Advice for Best Practices. In: Lubart Todd, Botella Marion, Bourgeois-Bougrine Samira, Caroff Xavier, Guégan Jérôme, Mouchiroud Christohe, Nelson Julien, Zenasni Franck., editors. Homo Creativus. Springer International Publishing; Cham: 2022. pp. 249–64. [ CrossRef ] [ Google Scholar ]
  • Curtis J. Randall, Back Anthony L., Ford Dee W., Downey Lois, Shannon Sarah E., Doorenbos Ardith Z., Kross Erin K., Reinke Lynn F., Feemster Laura C., Edlund Barbara, et al. Effect of Communication Skills Training for Residents and Nurse Practitioners on Quality of Communication with Patients with Serious Illness: A Randomized Trial. JAMA: The Journal of the American Medical Association. 2013; 310 :2271. doi: 10.1001/jama.2013.282081. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • D’Alimonte Laura, McLaney Elizabeth, Prospero Lisa Di. Best Practices on Team Communication: Interprofessional Practice in Oncology. Current Opinion in Supportive and Palliative Care. 2019; 13 :69–74. doi: 10.1097/SPC.0000000000000412. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • de Freitas Sara. Learning in Immersive Worlds: A Review of Game-Based Learning. JISC; Bristol: 2006. [(accessed on 1 November 2022)]. Available online: http://www.jisc.ac.uk/media/documents/programmes/elearninginnovation/gamingreport_v3.pdf [ Google Scholar ]
  • Détienne Françoise, Baker Michael, Burkhardt Jean-Marie. Perspectives on Quality of Collaboration in Design. CoDesign. 2012; 8 :197–99. doi: 10.1080/15710882.2012.742350. [ CrossRef ] [ Google Scholar ]
  • Diedrich Jennifer, Jauk Emanuel, Silvia Paul J., Gredlein Jeffrey M., Neubauer Aljoscha C., Benedek Mathias. Assessment of Real-Life Creativity: The Inventory of Creative Activities and Achievements (ICAA) Psychology of Aesthetics, Creativity, and the Arts. 2018; 12 :304–16. doi: 10.1037/aca0000137. [ CrossRef ] [ Google Scholar ]
  • Doyle Denise. Creativity in the Twenty First Century. Edited by Anna Hui and Christian Wagner. Springer International Publishing; Cham: 2021. Creative and Collaborative Practices in Virtual Immersive Environments; pp. 3–19. [ CrossRef ] [ Google Scholar ]
  • Drisko James W. Competencies and Their Assessment. Journal of Social Work Education. 2014; 50 :414–26. doi: 10.1080/10437797.2014.917927. [ CrossRef ] [ Google Scholar ]
  • Dul Jan, Ceylan Canan. Work Environments for Employee Creativity. Ergonomics. 2011; 54 :12–20. doi: 10.1080/00140139.2010.542833. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dumitru Daniela, Bigu Dragos, Elen Jan, Ahern Aoife, McNally Ciaran, O’Sullivan John. A European Review on Critical Thinking Educational Practices in Higher Education Institutions. UTAD; Vila Real: 2018. [(accessed on 2 November 2022)]. Available online: http://repositorio.utad.pt/handle/10348/8320 [ Google Scholar ]
  • Edelman Jonathan, Owoyele Babajide, Santuber Joaquin. Design Thinking in Education. Springer International Publishing; Cham: 2022. Beyond Brainstorming: Introducing Medgi, an Effective, Research-Based Method for Structured Concept Development; pp. 209–32. [ CrossRef ] [ Google Scholar ]
  • Etilé Fabrice, Teyssier Sabrina. Signaling Corporate Social Responsibility: Third-Party Certification versus Brands: Signaling CSR: Third-Party Certification versus Brands. The Scandinavian Journal of Economics. 2016; 118 :397–432. doi: 10.1111/sjoe.12150. [ CrossRef ] [ Google Scholar ]
  • Evans Carla. Measuring Student Success Skills: A Review of the Literature on Collaboration. National Center for the Improvement of Educational Assessment; Dover: 2020. [ Google Scholar ]
  • Facione Peter Arthur. The California Critical Thinking Skills Test–College Level. Technical Report# 1. Experimental Validation and Content Validity. [(accessed on 2 November 2022)]; 1990a Available online: https://files.eric.ed.gov/fulltext/ED327549.pdf
  • Facione Peter Arthur. Critical Thinking: A Statement of Expert Consensus for Purposes of Educational Assessment and Instruction. Research Findings and Recommendations. ERIC, Institute of Education Sciences; Washington, DC: 1990b. [(accessed on 2 November 2022)]. pp. 1–112. Available online: https://eric.ed.gov/?id=ED315423 [ Google Scholar ]
  • Facione Peter Arthur. Critical thinking: What it is and why it counts. Insight Assessment. 2011; 2007 :1–23. [ Google Scholar ]
  • Faidley Joel. Ph.D. dissertation. East Tennessee State University; Johnson City, TN, USA: 2018. Comparison of Learning Outcomes from Online and Face-to-Face Accounting Courses. [ Google Scholar ]
  • Friedman Hershey H. Cognitive Biases That Interfere with Critical Thinking and Scientific Reasoning: A Course Module. SSRN Electronic Journal. 2017:1–60. doi: 10.2139/ssrn.2958800. [ CrossRef ] [ Google Scholar ]
  • Fryer-Edwards Kelly, Arnold Robert M., Baile Walter, Tulsky James A., Petracca Frances, Back Anthony. Reflective Teaching Practices: An Approach to Teaching Communication Skills in a Small-Group Setting. Academic Medicine: Journal of the Association of American Medical Colleges. 2006; 81 :638–44. doi: 10.1097/01.ACM.0000232414.43142.45. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Glăveanu Vlad Petre. Rewriting the Language of Creativity: The Five A’s Framework. Review of General Psychology: Journal of Division 1, of the American Psychological Association. 2013; 17 :69–81. doi: 10.1037/a0029528. [ CrossRef ] [ Google Scholar ]
  • Glăveanu Vlad Petre. The Psychology of Creativity: A Critical Reading. Creativity Theories Research Applications. 2014; 1 :10–32. doi: 10.15290/ctra.2014.01.01.02. [ CrossRef ] [ Google Scholar ]
  • Goldenberg Olga, Wiley Jennifer. Quality, Conformity, and Conflict: Questioning the Assumptions of Osborn’s Brainstorming Technique. The Journal of Problem Solving. 2011; 3 :96–118. doi: 10.7771/1932-6246.1093. [ CrossRef ] [ Google Scholar ]
  • Graesser Arthur C., Sabatini John P., Li Haiying. Educational Psychology Is Evolving to Accommodate Technology, Multiple Disciplines, and Twenty-First-Century Skills. Annual Review of Psychology. 2022; 73 :547–74. doi: 10.1146/annurev-psych-020821-113042. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Graesser Arthur C., Fiore Stephen M., Greiff Samuel, Andrews-Todd Jessica, Foltz Peter W., Hesse Friedrich W. Advancing the Science of Collaborative Problem Solving. Psychological Science in the Public Interest. 2018; 19 :59–92. doi: 10.1177/1529100618808244. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grassmann Susanne. The pragmatics of word learning. In: Matthews Danielle., editor. Pragmatic Development in First Language Acquisition. John Benjamins Publishing Company; Amsterdam: 2014. pp. 139–60. [ CrossRef ] [ Google Scholar ]
  • Hager Keri, St Hill Catherine, Prunuske Jacob, Swanoski Michael, Anderson Grant, Lutfiyya May Nawal. Development of an Interprofessional and Interdisciplinary Collaborative Research Practice for Clinical Faculty. Journal of Interprofessional Care. 2016; 30 :265–67. doi: 10.3109/13561820.2015.1092951. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Halpern Diane F. Teaching Critical Thinking for Transfer across Domains: Disposition, Skills, Structure Training, and Metacognitive Monitoring. The American Psychologist. 1998; 53 :449–55. doi: 10.1037/0003-066X.53.4.449. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Halpern Diane F., Dunn Dana S. Critical Thinking: A Model of Intelligence for Solving Real-World Problems. Journal of Intelligence. 2021; 9 :22. doi: 10.3390/jintelligence9020022. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hanover Research A Crosswalk of 21st Century Skills. 2012. [(accessed on 15 August 2022)]. Available online: http://www.hanoverresearch.com/wp-content/uploads/2011/12/A-Crosswalk-of-21st-Century-Skills-Membership.pdf
  • Hathaway Julia R., Tarini Beth A., Banerjee Sushmita, Smolkin Caroline O., Koos Jessica A., Pati Susmita. Healthcare Team Communication Training in the United States: A Scoping Review. Health Communication. 2022:1–26. doi: 10.1080/10410236.2022.2036439. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hesse Friedrich, Care Esther, Buder Juergen, Sassenberg Kai, Griffin Patrick. A Framework for Teachable Collaborative Problem Solving Skills. In: Griffin Patrick, Care Esther., editors. Assessment and Teaching of 21st Century Skills. Springer Netherlands; Dordrecht: 2015. pp. 37–56. [ Google Scholar ]
  • Hitchcock David. Critical Thinking. In: Edward Nouri Zalta., editor. The Stanford Encyclopedia of Philosophy (Fall 2020 Edition) Stanford University; Stanford: 2020. [ Google Scholar ]
  • Houdé Olivier. Inhibition and cognitive development: Object, number, categorization, and reasoning. Cognitive Development. 2000; 15 :63–73. doi: 10.1016/S0885-2014(00)00015-0. [ CrossRef ] [ Google Scholar ]
  • Houdé Olivier, Borst Grégoire. Measuring inhibitory control in children and adults: Brain imaging and mental chronometry. Frontiers in Psychology. 2014; 5 :616. doi: 10.3389/fpsyg.2014.00616. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Huber Christopher R., Kuncel Nathan R. Does College Teach Critical Thinking? A Meta-Analysis. Review of Educational Research. 2016; 86 :431–68. doi: 10.3102/0034654315605917. [ CrossRef ] [ Google Scholar ]
  • Huizinga Johan. Homo Ludens: A Study of the Play-Elements in Culture. Routledge; London: 1949. [ Google Scholar ]
  • Humphrey Neil, Curran Andrew, Morris Elisabeth, Farrell Peter, Woods Kevin. Emotional Intelligence and Education: A Critical Review. Educational Psychology. 2007; 27 :235–54. doi: 10.1080/01443410601066735. [ CrossRef ] [ Google Scholar ]
  • International Institute for Competency Development 21st Century Skills 4Cs Labelization. 2021. [(accessed on 2 November 2022)]. Available online: https://icd-hr21.org/offers/21st-century-competencies/
  • Jackson Denise. Business Graduate Performance in Oral Communication Skills and Strategies for Improvement. The International Journal of Management Education. 2014; 12 :22–34. doi: 10.1016/j.ijme.2013.08.001. [ CrossRef ] [ Google Scholar ]
  • Jahn Gabriele, Schramm Matthias, Spiller Achim. The Reliability of Certification: Quality Labels as a Consumer Policy Tool. Journal of Consumer Policy. 2005; 28 :53–73. doi: 10.1007/s10603-004-7298-6. [ CrossRef ] [ Google Scholar ]
  • Jauk Emanuel, Benedek Mathias, Neubauer Aljoscha C. The Road to Creative Achievement: A Latent Variable Model of Ability and Personality Predictors. European Journal of Personality. 2014; 28 :95–105. doi: 10.1002/per.1941. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Joie-La Marle Chantal, Parmentier François, Coltel Morgane, Lubart Todd, Borteyrou Xavier. A Systematic Review of Soft Skills Taxonomies: Descriptive and Conceptual Work. 2022. [(accessed on 2 November 2022)]. [ CrossRef ]
  • Jones Stanley E., LeBaron Curtis D. Research on the Relationship between Verbal and Nonverbal Communication: Emerging Integrations. The Journal of Communication. 2002; 52 :499–521. doi: 10.1111/j.1460-2466.2002.tb02559.x. [ CrossRef ] [ Google Scholar ]
  • Kaendler Celia, Wiedmann Michael, Leuders Timo, Rummel Nikol, Spada Hans. Monitoring Student Interaction during Collaborative Learning: Design and Evaluation of a Training Program for Pre-Service Teachers. Psychology Learning & Teaching. 2016; 15 :44–64. doi: 10.1177/1475725716638010. [ CrossRef ] [ Google Scholar ]
  • Kahneman Daniel. A Perspective on Judgment and Choice: Mapping Bounded Rationality. The American Psychologist. 2003; 58 :697–720. doi: 10.1037/0003-066X.58.9.697. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kahneman Daniel. Thinking, Fast and Slow. Macmillan; New York: 2011. [ Google Scholar ]
  • Karl Katherine A., Peluchette Joy V., Aghakhani Navid. Virtual Work Meetings during the COVID-19 Pandemic: The Good, Bad, and Ugly. Small Group Research. 2022; 53 :343–65. doi: 10.1177/10464964211015286. [ CrossRef ] [ Google Scholar ]
  • Keefer Kateryna V., Parker James D. A., Saklofske Donald H. The Springer Series on Human Exceptionality. Springer International Publishing; Cham: 2018. Three Decades of Emotional Intelligence Research: Perennial Issues, Emerging Trends, and Lessons Learned in Education: Introduction to Emotional Intelligence in Education; pp. 1–19. [ Google Scholar ]
  • Kemp Nenagh, Grieve Rachel. Face-to-Face or Face-to-Screen? Undergraduates’ Opinions and Test Performance in Classroom vs. Online Learning. Frontiers in Psychology. 2014; 5 :1278. doi: 10.3389/fpsyg.2014.01278. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kimery Kathryn, McCord Mary. Third-Party Assurances: Mapping the Road to Trust in E-retailing. The Journal of Information Technology Theory and Application. 2002; 4 :63–82. [ Google Scholar ]
  • Kohn Nicholas W., Smith Steven M. Collaborative Fixation: Effects of Others’ Ideas on Brainstorming. Applied Cognitive Psychology. 2011; 25 :359–71. doi: 10.1002/acp.1699. [ CrossRef ] [ Google Scholar ]
  • Kowaltowski Doris C. C. K., Bianchi Giovana, de Paiva Valéria Teixeira. Methods That May Stimulate Creativity and Their Use in Architectural Design Education. International Journal of Technology and Design Education. 2010; 20 :453–76. doi: 10.1007/s10798-009-9102-z. [ CrossRef ] [ Google Scholar ]
  • Kruijver Irma P. M., Kerkstra Ada, Francke Anneke L., Bensing Jozien M., van de Wiel Harry B. M. Evaluation of Communication Training Programs in Nursing Care: A Review of the Literature. Patient Education and Counseling. 2000; 39 :129–45. doi: 10.1016/S0738-3991(99)00096-8. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lai Emily R. Critical thinking: A literature review. Pearson’s Research Reports. 2011; 6 :40–41. doi: 10.25148/lawrev.11.2.3. [ CrossRef ] [ Google Scholar ]
  • Lamri Jérémy, Lubart Todd. Creativity and Its’ Relationships with 21st Century Skills in Job Performance. Kindai Management Review. 2021; 9 :75–91. [ Google Scholar ]
  • Lamri Jérémy, Barabel Michel, Meier Olivier, Lubart Todd. Le Défi Des Soft Skills: Comment les Développer au XXIe Siècle? Dunod; Paris: 2022. [ Google Scholar ]
  • Landa Rebecca J. Assessment of Social Communication Skills in Preschoolers: Assessing Social Communication Skills in Children. Mental Retardation and Developmental Disabilities Research Reviews. 2005; 11 :247–52. doi: 10.1002/mrdd.20079. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lee Sang M., Choi Jeongil, Lee Sang-Gun. The impact of a third-party assurance seal in customer purchasing intention. Journal of Internet Commerce. 2004; 3 :33–51. doi: 10.1300/J179v03n02_03. [ CrossRef ] [ Google Scholar ]
  • Lewis Arthur, Smith David. Defining Higher Order Thinking. Theory into Practice. 1993; 32 :131–37. doi: 10.1080/00405849309543588. [ CrossRef ] [ Google Scholar ]
  • Liu Ou Lydia, Frankel Lois, Roohr Katrina Crotts. Assessing Critical Thinking in Higher Education: Current State and Directions for next-Generation Assessment: Assessing Critical Thinking in Higher Education. ETS Research Report Series. 2014; 2014 :1–23. doi: 10.1002/ets2.12009. [ CrossRef ] [ Google Scholar ]
  • Lubart Todd. The 7 C’s of Creativity. The Journal of Creative Behavior. 2017; 51 :293–96. doi: 10.1002/jocb.190. [ CrossRef ] [ Google Scholar ]
  • Lubart Todd, Thornhill-Miller Branden. Creativity: An Overview of the 7C’s of Creative Thought. Heidelberg: Heidelberg University Publishing. 2019 doi: 10.17885/HEIUP.470.C6678. [ CrossRef ] [ Google Scholar ]
  • Lubart Todd, Barbot Baptiste, Besançon Maud. Creative Potential: Assessment Issues and the EPoC Battery/Potencial Creativo: Temas de Evaluación y Batería EPoC. Estudios de Psicologia. 2019; 40 :540–62. doi: 10.1080/02109395.2019.1656462. [ CrossRef ] [ Google Scholar ]
  • Lubart Todd, Zenasni Franck, Barbot Baptiste. Creative potential and its measurement. International Journal of Talent Development and Creativity. 2013; 1 :41–51. [ Google Scholar ]
  • Lubart Todd, Thornhill-Miller Branden. Creativity in Law: Legal Professions and the Creative Profiler Approach. In: Masson Antoine, Robinson Gavin., editors. Mapping Legal Innovation: Trends and Perspectives. Springer International Publishing; Cham: 2021. pp. 1–19. [ CrossRef ] [ Google Scholar ]
  • Lubin Jeffrey, Hendrick Stephan, Thornhill-Miller Branden, Mercier Maxence, Lubart Todd. Creativity in Solution-Focused Brief Therapy. Forthcoming.
  • Lucas Bill. Why We Need to Stop Talking about Twenty-First Century Skills. Centre for Strategic Education; Melbourne: 2019. [ Google Scholar ]
  • Lucas Bill. Creative Thinking in Schools across the World. The Global Institute of Creative Thinking; London: 2022. [ Google Scholar ]
  • Lucas Bill, Claxton Guy. Wider Skills for Learning: What Are They, How Can They Be Cultivated, How Could They Be Measured and Why Are They Important for Innovation? NESTA; London: 2009. [ Google Scholar ]
  • Malaby Thomas M. Beyond Play: A New Approach to Games. Games and Culture. 2007; 2 :95–113. doi: 10.1177/1555412007299434. [ CrossRef ] [ Google Scholar ]
  • Marin Lisa M., Halpern Diane F. Pedagogy for developing critical thinking in adolescents: Explicit instruction produces greatest gains. Thinking Skills and Creativity. 2011; 6 :1–13. doi: 10.1016/j.tsc.2010.08.002. [ CrossRef ] [ Google Scholar ]
  • Mathieu John E., Hollenbeck John R., van Knippenberg Daan, Ilgen Daniel R. A Century of Work Teams in the Journal of Applied Psychology. The Journal of Applied Psychology. 2017; 102 :452–67. doi: 10.1037/apl0000128. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Matthews Danielle. Pragmatic Development in First Language Acquisition. Amsterdam: John Benjamins Publishing Company. 2014 doi: 10.1075/tilar.10. [ CrossRef ] [ Google Scholar ]
  • McDonald Skye, Gowland Alison, Randall Rebekah, Fisher Alana, Osborne-Crowley Katie, Honan Cynthia. Cognitive Factors Underpinning Poor Expressive Communication Skills after Traumatic Brain Injury: Theory of Mind or Executive Function? Neuropsychology. 2014; 28 :801–11. doi: 10.1037/neu0000089. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Moore Brooke Noel, Parker Richard. Critical Thinking. 20th ed. McGraw-Hill Education; New York: 2016. [ Google Scholar ]
  • Morreale Sherwyn P., Valenzano Joseph M., Bauer Janessa A. Why Communication Education Is Important: A Third Study on the Centrality of the Discipline’s Content and Pedagogy. Communication Education. 2017; 66 :402–22. doi: 10.1080/03634523.2016.1265136. [ CrossRef ] [ Google Scholar ]
  • Mourad Maha. Quality Assurance as a Driver of Information Management Strategy: Stakeholders’ Perspectives in Higher Education. Journal of Enterprise Information Management. 2017; 30 :779–94. doi: 10.1108/JEIM-06-2016-0104. [ CrossRef ] [ Google Scholar ]
  • National Education Association . Preparing 21st Century Students for a Global Society: An Educator’s Guide to the “Four Cs”. National Education Association; Alexandria: 2011. [ Google Scholar ]
  • Nouri Jalal, Åkerfeldt Anna, Fors Uno, Selander Staffan. Assessing Collaborative Problem Solving Skills in Technology-Enhanced Learning Environments—The PISA Framework and Modes of Communication. International Journal of Emerging Technologies in Learning (IJET) 2017; 12 :163. doi: 10.3991/ijet.v12i04.6737. [ CrossRef ] [ Google Scholar ]
  • O’Carroll Veronica, Owens Melissa, Sy Michael, El-Awaisi Alla, Xyrichis Andreas, Leigh Jacqueline, Nagraj Shobhana, Huber Marion, Hutchings Maggie, McFadyen Angus. Top Tips for Interprofessional Education and Collaborative Practice Research: A Guide for Students and Early Career Researchers. Journal of Interprofessional Care. 2021; 35 :328–33. doi: 10.1080/13561820.2020.1777092. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • OECD . PISA 2015 Assessment and Analytical Framework: Science, Reading, Mathematic, Financial Literacy and Collaborative Problem Solving. OECD Publishing; Paris: 2017. PISA 2015 collaborative problem-solving framework. [ CrossRef ] [ Google Scholar ]
  • OECD . Framework for the Assessment of Creative Thinking in PISA 2021: Third Draft. OECD; Paris: 2019a. [(accessed on 2 November 2022)]. Available online: https://www.oecd.org/pisa/publications/PISA-2021-creative-thinking-framework.pdf [ Google Scholar ]
  • OECD . Future of Education and Skills 2030: A Series of Concept Notes. OECD Learning Compass; Paris: 2019b. [(accessed on 2 November 2022)]. Available online: https://www.oecd.org/education/2030-project/teaching-and-learning/learning/learning-compass-2030/OECD_Learning_Compass_2030_Concept_Note_Series.pdf [ Google Scholar ]
  • Osborn A. F. Applied Imagination. Charles Scribner’s Sons; New York: 1953. [ Google Scholar ]
  • Parkinson Thomas L. The Role of Seals and Certifications of Approval in Consumer Decision-Making. The Journal of Consumer Affairs. 1975; 9 :1–14. doi: 10.1111/j.1745-6606.1975.tb00545.x. [ CrossRef ] [ Google Scholar ]
  • Partnership for 21st Century Skills . 21st Century Skills Education and Competitiveness: A Resource and Policy Guide. Partnership for 21st Century Skills; Tuscon: 2008. [ Google Scholar ]
  • Pasquinelli Elena, Bronner Gérald. Éduquer à l’esprit critique. Bases théoriques et indications pratiques pour l’enseignement et la formation. Ministère de l’Éducation Nationale, de la JEUNESSE et des Sports; Paris: 2021. Rapport du Conseil Scientifique de l’Éducation Nationale. [ Google Scholar ]
  • Pasquinelli Elena, Farina Mathieu, Bedel Audrey, Casati Roberto. Naturalizing Critical Thinking: Consequences for Education, Blueprint for Future Research in Cognitive Science. Mind, Brain and Education: The Official Journal of the International Mind, Brain, and Education Society. 2021; 15 :168–76. doi: 10.1111/mbe.12286. [ CrossRef ] [ Google Scholar ]
  • Paul Richard, Elder Linda. Critical thinking: The nature of critical and creative thought. Journal of Developmental Education. 2006; 30 :34–35. [ Google Scholar ]
  • Paulus Paul B., Yang Huei-Chuan. Idea Generation in Groups: A Basis for Creativity in Organizations. Organizational Behavior and Human Decision Processes. 2000; 82 :76–87. doi: 10.1006/obhd.2000.2888. [ CrossRef ] [ Google Scholar ]
  • Paulus Paul B., Kenworthy Jared B. Effective brainstorming. In: Paulus Paul B., Nijstad Bernard A., editors. The Oxford Handbook of Group Creativity and Innovation. Oxford University Press; New York: 2019. [ CrossRef ] [ Google Scholar ]
  • Paulus Paul B., Dzindolet Mary T. Social Influence Processes in Group Brainstorming. Journal of Personality and Social Psychology. 1993; 64 :575–86. doi: 10.1037/0022-3514.64.4.575. [ CrossRef ] [ Google Scholar ]
  • Paulus Paul B., Brown Vincent R. Toward More Creative and Innovative Group Idea Generation: A Cognitive-Social-Motivational Perspective of Brainstorming: Cognitive-Social-Motivational View of Brainstorming. Social and Personality Psychology Compass. 2007; 1 :248–65. doi: 10.1111/j.1751-9004.2007.00006.x. [ CrossRef ] [ Google Scholar ]
  • Peddle Monica, Bearman Margaret, Radomski Natalie, Mckenna Lisa, Nestel Debra. What Non-Technical Skills Competencies Are Addressed by Australian Standards Documents for Health Professionals Who Work in Secondary and Tertiary Clinical Settings? A Qualitative Comparative Analysis. BMJ Open. 2018; 8 :e020799. doi: 10.1136/bmjopen-2017-020799. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Peña-López Ismaël. PISA 2015 Results (Volume V): Collaborative Problem Solving. PISA, OECD Publishing; Paris: 2017. [ Google Scholar ]
  • Popil Inna. Promotion of Critical Thinking by Using Case Studies as Teaching Method. Nurse Education Today. 2011; 31 :204–7. doi: 10.1016/j.nedt.2010.06.002. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pornpitakpan Chanthika. The Persuasiveness of Source Credibility: A Critical Review of Five Decades’ Evidence. Journal of Applied Social Psychology. 2004; 34 :243–81. doi: 10.1111/j.1559-1816.2004.tb02547.x. [ CrossRef ] [ Google Scholar ]
  • Possin Kevin. Critique of the Watson-Glaser Critical Thinking Appraisal Test: The More You Know, the Lower Your Score. Informal Logic. 2014; 34 :393–416. doi: 10.22329/il.v34i4.4141. [ CrossRef ] [ Google Scholar ]
  • Proctor Robert W., Dutta Addie. Skill Acquisition and Human Performance. Sage Publications, Inc.; Thousand Oaks: 1995. [ Google Scholar ]
  • Putman Vicky L., Paulus Paul B. Brainstorming, Brainstorming Rules and Decision Making. The Journal of Creative Behavior. 2009; 43 :29–40. doi: 10.1002/j.2162-6057.2009.tb01304.x. [ CrossRef ] [ Google Scholar ]
  • Reiman Joey. Success: The Original Handbook. Longstreet Press; Atlanta: 1992. [ Google Scholar ]
  • Ren Xuezhu, Tong Yan, Peng Peng, Wang Tengfei. Critical Thinking Predicts Academic Performance beyond General Cognitive Ability: Evidence from Adults and Children. Intelligence. 2020; 82 :101487. doi: 10.1016/j.intell.2020.101487. [ CrossRef ] [ Google Scholar ]
  • Renard Marie-Christine. Quality Certification, Regulation and Power in Fair Trade. Journal of Rural Studies. 2005; 21 :419–31. doi: 10.1016/j.jrurstud.2005.09.002. [ CrossRef ] [ Google Scholar ]
  • Restout Emilie. Labels RSE: Un décryptage des entreprises labellisées en France. Goodwill Management. 2020. [(accessed on 2 November 2022)]. Available online: https://goodwill-management.com/labels-rse-decryptage-entreprises-labellisees/
  • Rhodes Mel. An Analysis of Creativity. The Phi Delta Kappan. 1961; 42 :305–10. [ Google Scholar ]
  • Rider Elizabeth A., Keefer Constance H. Communication Skills Competencies: Definitions and a Teaching Toolbox: Communication. Medical Education. 2006; 40 :624–29. doi: 10.1111/j.1365-2929.2006.02500.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Riemer Marc J. Communication Skills for the 21st Century Engineer. Global Journal of Engineering Education. 2007; 11 :89. [ Google Scholar ]
  • Rietzschel Eric F., Nijstad Bernard A., Stroebe Wolfgang. Productivity Is Not Enough: A Comparison of Interactive and Nominal Brainstorming Groups on Idea Generation and Selection. Journal of Experimental Social Psychology. 2006; 42 :244–51. doi: 10.1016/j.jesp.2005.04.005. [ CrossRef ] [ Google Scholar ]
  • Ross David. Why the Four Cs Will Become the Foundation of Human-AI Interface. 2018. [(accessed on 2 November 2022)]. Available online: https://www.gettingsmart.com/2018/03/04/why-the-4cs-will-become-the-foundation-of-human-ai-interface/
  • Rothermich Kathrin. Social Communication Across the Lifespan: The Influence of Empathy [Preprint] SocArXiv. 2020 doi: 10.31235/osf.io/adgmy. [ CrossRef ] [ Google Scholar ]
  • Rusdin Norazlin Mohd, Ali Siti Rahaimah. Practice of Fostering 4Cs Skills in Teaching and Learning. International Journal of Academic Research in Business and Social Sciences. 2019; 9 :1021–35. doi: 10.6007/IJARBSS/v9-i6/6063. [ CrossRef ] [ Google Scholar ]
  • Rychen Dominique Simone, Hersch Salganik Laura., editors. Key Competencies for a Successful Life and a Well-Functioning Society. Hogrefe and Huber; Cambridge: 2003. [ Google Scholar ]
  • Sahin Mehmet Can. Instructional Design Principles for 21st Century Learning Skills. Procedia, Social and Behavioral Sciences. 2009; 1 :1464–68. doi: 10.1016/j.sbspro.2009.01.258. [ CrossRef ] [ Google Scholar ]
  • Salas Eduardo, Stagl Kevin C., Burke C. Shawn. International Review of Industrial and Organizational Psychology. John Wiley & Sons, Ltd.; Chichester: 2004. 25 Years of Team Effectiveness in Organizations: Research Themes and Emerging Needs; pp. 47–91. [ CrossRef ] [ Google Scholar ]
  • Salas Eduardo, Shuffler Marissa L., Thayer Amanda L., Bedwell Wendy L., Lazzara Elizabeth H. Understanding and Improving Teamwork in Organizations: A Scientifically Based Practical Guide. Human Resource Management. 2015; 54 :599–622. doi: 10.1002/hrm.21628. [ CrossRef ] [ Google Scholar ]
  • Salmi Jamil. The Tertiary Education Imperative: Knowledge, Skills and Values for Development. Springer; Cham: 2017. [ Google Scholar ]
  • Samani Sanaz Ahmadpoor, Rasid Siti Zaleha Binti Abdul, bt Sofian Saudah. A Workplace to Support Creativity. Industrial Engineering & Management Systems. 2014; 13 :414–20. doi: 10.7232/iems.2014.13.4.414. [ CrossRef ] [ Google Scholar ]
  • Saroyan Alenoush. Fostering Creativity and Critical Thinking in University Teaching and Learning: Considerations for Academics and Their Professional Learning. OECD; Paris: 2022. [ CrossRef ] [ Google Scholar ]
  • Sasmita Jumiati, Suki Norazah Mohd. Young consumers’ insights on brand equity: Effects of brand association, brand loyalty, brand awareness, and brand image. International Journal of Retail & Distribution Management. 2015; 43 :276–92. doi: 10.1108/IJRDM-02-2014-0024. [ CrossRef ] [ Google Scholar ]
  • Schlegel Claudia, Woermann Ulrich, Shaha Maya, Rethans Jan-Joost, van der Vleuten Cees. Effects of Communication Training on Real Practice Performance: A Role-Play Module versus a Standardized Patient Module. The Journal of Nursing Education. 2012; 51 :16–22. doi: 10.3928/01484834-20111116-02. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schleicher Andreas. Why Creativity and Creative Teaching and Learning Matter Today and for Tomorrow’s World. GloCT in Collaboration with OECD CERI; Paris: 2022. Creativity in Education Summit 2022. [ Google Scholar ]
  • Schneider Bertrand, Sharma Kshitij, Cuendet Sebastien, Zufferey Guillaume, Dillenbourg Pierre, Pea Roy. Leveraging Mobile Eye-Trackers to Capture Joint Visual Attention in Co-Located Collaborative Learning Groups. International Journal of Computer-Supported Collaborative Learning. 2018; 13 :241–61. doi: 10.1007/s11412-018-9281-2. [ CrossRef ] [ Google Scholar ]
  • Schultz David M. Eloquent Science: A course to improve scientific and communication skills; Paper presented at the 19th Symposium on Education; Atlanta, GA, USA. January 18–21; 2010. [ Google Scholar ]
  • Scialabba George. Mindplay. Harvard Magazine. 1984; 16 :19. [ Google Scholar ]
  • Scott Ginamarie, Leritz Lyle E., Mumford Michael D. The Effectiveness of Creativity Training: A Quantitative Review. Creativity Research Journal. 2004; 16 :361–88. doi: 10.1080/10400410409534549. [ CrossRef ] [ Google Scholar ]
  • Sigafoos Jeff, Schlosser Ralf W., Green Vanessa A., O’Reilly Mark, Lancioni Giulio E. Communication and Social Skills Assessment. In: Matson Johnny L., editor. Clinical Assessment and Intervention for Autism Spectrum Disorders. Elsevier; Amsterdam: 2008. pp. 165–92. [ CrossRef ] [ Google Scholar ]
  • Simonton Dean Keith. Creativity from a Historiometric Perspective. In: Sternberg Robert J., editor. Handbook of Creativity. Cambridge University Press; Cambridge: 1999. pp. 116–34. [ CrossRef ] [ Google Scholar ]
  • Singh Pallavi, Bala Hillol, Dey Bidit Lal, Filieri Raffaele. Enforced Remote Working: The Impact of Digital Platform-Induced Stress and Remote Working Experience on Technology Exhaustion and Subjective Wellbeing. Journal of Business Research. 2022; 151 :269–86. doi: 10.1016/j.jbusres.2022.07.002. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Spada Hans, Meier Anne, Rummel Nikol, Hauser Sabine. Proceedings of the 2005 Conference on Computer Support for Collaborative Learning Learning 2005: The next 10 Years!—CSCL’05, Taipei, Taiwan, May 30–June 4. Association for Computational Linguistics; Morristown: 2005. A New Method to Assess the Quality of Collaborative Process in CSCL. [ Google Scholar ]
  • Spitzberg Brian H. Methods of interpersonal skill assessment. In: Greene John O., Burleson Brant R., editors. The Handbook of Communication and Social Interaction Skills. Lawrence Erlbaum Associates; Mahwah: 2003. [ Google Scholar ]
  • Sternberg Robert. Intelligence, Wisdom, and Creativity: Three Is Better than One. Educational Psychologist. 1986; 21 :175–90. doi: 10.1207/s15326985ep2103_2. [ CrossRef ] [ Google Scholar ]
  • Sternberg Robert J., Funke Joachim. The Psychology of Human Thought: An Introduction. Heidelberg University Publishing (heiUP); Heidelberg: 2019. [ CrossRef ] [ Google Scholar ]
  • Sursock Andrée. Quality assurance and rankings: Some European lessons. In: Hazelkorn Ellen, Mihut Georgiana., editors. Research Handbook on University Rankings. Edward Elgar Publishing; Cheltenham: 2021. pp. 185–96. [ CrossRef ] [ Google Scholar ]
  • Sursock Andrée, Vettori Oliver. Qualitätskultur. Ein Blick in Die Gelebte Praxis der Hochschulen. Agency for Quality Assurance and Accreditation; Vienna: 2017. [(accessed on 2 November 2022)]. Quo vadis, quality culture? Theses from different perspectives; pp. 13–18. Available online: https://www.aq.ac.at/de/ueber-uns/publikationen/sonstige-publikationen.php [ Google Scholar ]
  • Sutter Éric. Certification et Labellisation: Un Problème de Confiance. Bref Panorama de La Situation Actuelle. Documentaliste-Sciences de l Information. 2005; 42 :284–90. doi: 10.3917/docsi.424.0284. [ CrossRef ] [ Google Scholar ]
  • Taddei François. Training Creative and Collaborative Knowledge-Builders: A Major Challenge for 21st Century Education. OCDE; Paris: 2009. [ Google Scholar ]
  • Thomas Keith, Lok Beatrice. Teaching Critical Thinking: An Operational Framework. In: Davies Martin, Barnett Ronald., editors. The Palgrave Handbook of Critical Thinking in Higher Education. Palgrave Macmillan US; New York: 2015. pp. 93–105. [ CrossRef ] [ Google Scholar ]
  • Thompson Jeri. Measuring Student Success Skills: A Review of the Literature on Complex Communication. National Center for the Improvement of Educational Assessment; Dover: 2020. [ Google Scholar ]


  • Open access
  • Published: 16 April 2019

BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains

  • Linxing Jiang 1,
  • Andrea Stocco (ORCID: 0000-0001-8919-3934) 2,3,6,7,
  • Darby M. Losey 4,5,
  • Justin A. Abernethy 2,3,
  • Chantel S. Prat 2,3,6,7 &
  • Rajesh P. N. Rao 1,6,7

Scientific Reports volume 9, Article number: 6115 (2019)


Subjects: Human behaviour, Neural decoding, Problem solving

Abstract

We present BrainNet, which, to our knowledge, is the first multi-person non-invasive direct brain-to-brain interface for collaborative problem solving. The interface combines electroencephalography (EEG) to record brain signals and transcranial magnetic stimulation (TMS) to deliver information noninvasively to the brain. The interface allows three human subjects to collaborate and solve a task using direct brain-to-brain communication. Two of the three subjects are designated as “Senders” whose brain signals are decoded using real-time EEG data analysis. The decoding process extracts each Sender’s decision about whether to rotate a block in a Tetris-like game before it is dropped to fill a line. The Senders’ decisions are transmitted via the Internet to the brain of a third subject, the “Receiver,” who cannot see the game screen. The Senders’ decisions are delivered to the Receiver’s brain via magnetic stimulation of the occipital cortex. The Receiver integrates the information received from the two Senders and uses an EEG interface to make a decision about either turning the block or keeping it in the same orientation. A second round of the game provides an additional chance for the Senders to evaluate the Receiver’s decision and send feedback to the Receiver’s brain, and for the Receiver to rectify a possible incorrect decision made in the first round. We evaluated the performance of BrainNet in terms of (1) group-level performance during the game, (2) true/false positive rates of subjects’ decisions, and (3) mutual information between subjects. Five groups, each with three human subjects, successfully used BrainNet to perform the collaborative task, with an average accuracy of 81.25%. Furthermore, by varying the information reliability of the Senders by artificially injecting noise into one Sender’s signal, we investigated how the Receiver learns to integrate noisy signals in order to make a correct decision. We found that, like conventional social networks, BrainNet allows Receivers to learn to trust the Sender who is more reliable, in this case based solely on the information transmitted directly to their brains. Our results point the way to future brain-to-brain interfaces that enable cooperative problem solving by humans using a “social network” of connected brains.


Introduction

Direct brain-to-brain interfaces (BBIs) in humans 1 , 2 , 3 , 4 , 5 are interfaces which combine neuroimaging and neurostimulation methods to extract and deliver information between brains, allowing direct brain-to-brain communication. A BBI extracts specific content from the neural signals of a “Sender” brain, digitizes it, and delivers it to a “Receiver” brain. Because of ethical and safety considerations, existing human BBIs rely on non-invasive technologies, typically electroencephalography (EEG), to record neural activity and transcranial magnetic stimulation (TMS) to deliver information to the brain. For example, the first human BBI demonstrated by Rao and colleagues in 2013 2 decoded motor intention signals using EEG in the Sender and conveyed the intention via TMS directly to the motor cortex of the Receiver to complete a visual-motor task 1 . Stocco and colleagues 5 extended these results by showing that a Sender and a Receiver can iteratively exchange information using a BBI to identify an unknown object from a list, using a question-and-answer paradigm akin to “20 Questions.” Grau and colleagues 4 proposed a related but offline non-iterative BBI.

Early interest in human BBIs came from the potential for expanding human communication and social interaction capabilities 6 , 7 , 8 , 9 , 10 . However, previous BBIs have lacked several key features of real-world human communication. First, the degree of interactivity has been minimal; for example, in the case of the “20 Questions” BBI 5 , the Sender only responds to the question the Receiver chooses, and the Receiver’s performance does not affect the Sender’s decision. Second, their interface required physical action: the Receiver touched the screen to select a question. Thus, the communication loop was completed via a motor output channel rather than a brain interface. Third, all past human BBIs have only allowed two subjects. Human communication, on the other hand, has become increasingly dominated by means such as social media that allow multiple parties to interact in a network. The potential for BBIs that allow interactions between multiple humans has previously been theorized 3 , 11 but not demonstrated.

Here, we present BrainNet (Fig.  1 ), a next-generation BBI that addresses many of the limitations of past BBIs. First, BrainNet is designed to be a BBI for more than two human subjects; its current implementation allows two Senders and one Receiver to communicate, but it can be readily scaled up to include larger numbers of Senders. The Senders have the same role in observing the current state of the task and conveying their decisions to the Receiver. The Receiver has the role of integrating these independent decisions and deciding on a course of action. Second, BrainNet’s design incorporates a second round of interactions between the Senders and the Receiver, so that the action of the Receiver in the first round can be perceived by the Senders, giving them a second chance to convey (potentially corrective) decisions to the Receiver. Third, the Receiver is equipped with both TMS (to receive Senders’ decisions) and EEG (to perform an action in the task), thereby completely eliminating the need to use any physical movements to convey information. We report results from five groups, each with three human subjects (henceforth, “triad”), who successfully used BrainNet to perform a collaborative task based on a Tetris-like game.

Figure 1. Architecture of BrainNet. Two participants (“Sender 1” and “Sender 2”) each use a Brain-Computer Interface (BCI) based on EEG to convey information about a collaborative task (here, a Tetris-like game) directly to the brain of the third participant (“Receiver”). Information from each Sender is transmitted over the internet to the Receiver’s brain via a Computer-Brain Interface (CBI) based on TMS. After consciously processing the two inputs from the Senders, the Receiver uses a BCI based on EEG to execute an action in the task. The Senders see the result of this action on their screens (the same updated game state is shown on both screens, as indicated by the red arrow from one Sender’s screen to the other). The Senders then have another opportunity to convey to the Receiver’s brain new information to potentially rectify an incorrect choice in the first round. While our experiment only used two rounds, BrainNet allows an arbitrary number of interactions between the Senders and the Receiver as they collaborate to solve a task. BrainNet differs from a previous interface called “Brainet” 12, which combines recordings from multiple monkey brains to perform a common motor task but is unidirectional and does not use stimulation to communicate information back to any of the brains.

An important feature of communication in social networks is deciding which sources of information to pay attention to when deciding on a course of action 13 . To investigate whether BrainNet allows such a capability, we additionally explored whether the Receiver can learn the reliability of each Sender over the course of their brain-to-brain interactions. We varied the reliability of the information from one Sender compared to information from the other by injecting noise into the signals from one randomly chosen Sender. Our results show that like conventional social networks, BrainNet allows a Receiver to learn to trust the Sender who is more reliable, i.e., whose signal quality is not affected by our manipulation.

To measure the direct brain-to-brain communication capabilities of BrainNet, we asked each triad of participants to perform 16 trials of an iterative Tetris-like game. In each trial, one participant, designated as the Receiver, is in charge of deciding whether or not to rotate a block before it drops to fill a gap in a line at the bottom of the screen. Critically, the Receiver is prevented from seeing the bottom part of the screen and must rely on the counsel of the other two participants, designated as the Senders, who can see the screen in its entirety. These Senders are tasked with making the correct decision (rotate or not) based on the shape of the current block and the gap at the bottom, and informing the Receiver of the decision via the brain-to-brain interface. All members of the triad communicate their decisions through an EEG-based interface using steady state visually evoked potentials (SSVEPs; see Methods). The Senders’ decisions are delivered to the Receiver through two TMS pulses delivered sequentially to the occipital cortex, eliciting a phosphene for a “yes” decision or no phosphene for a “no” rotation decision for each Sender (see Methods). Each trial is composed of two rounds: the first round is as described above; after the first round, the Senders are given the opportunity to examine the Receiver’s decision, shown on their screen as the block (now potentially rotated) mid-way through its fall. The Senders are then given another chance to make new (possibly corrective) suggestions to the Receiver through the brain-to-brain interface. A successful completion of a trial thus requires accurate communication between the Senders and the Receiver across these two rounds (see Fig.  2 ). Further, to examine the issue of reliability of the Senders, our software randomly chooses one Sender to be less reliable by making the decision sent to the Receiver from that Sender incorrect in ten out of sixteen trials. The order of trials requiring the block to be rotated and trials not requiring rotation was pseudo-randomized, with the constraint that each half of the session contained 4 rotation and 4 non-rotation trials. Trials 8–12 for the first triad were excluded from all analysis due to a problem with the timestamp routine. We analyzed both the EEG and behavioral data from the subjects in the remaining trials.

Figure 2. Examples of Screens seen by the Receiver and the Senders across Two Rounds. The Receiver sees the three example screens on the left side and the Senders see the screens on the right side. (Top Row) Screens at the beginning of the trial. Note that the Receiver does not see the bottom line with the gap but the Senders do. The Receiver must rely on the Senders to decide whether or not the red block must be rotated to fill the gap and clear the line. (Middle Row) After the Receiver makes a decision in the first round (in this case, “Rotate”), the game state is updated to show the rotated block. (Bottom Row) After the second round, all participants see the results of the Receiver’s action and whether the line was cleared. In this example, the Receiver executed a corrective action to rotate the block again, thereby filling the gap with the bottom part of the block and clearing the line.

Results

Overall Performance

The simplest measure of overall performance of the interface is the proportion of correct block rotations (equivalently, the proportion of lines cleared, or the fraction achieved of the maximum theoretical score of 16 points) for each of the five triads of participants. Figure 3 shows the results. The mean accuracy across all triads was 0.8125, corresponding to 13 correct trials out of 16. A corresponding p-value was calculated using the binomial distribution, which confirmed that the mean performance was indeed higher than expected by chance (p = 0.002).
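To make this check concrete, here is a minimal sketch in Python (assuming SciPy >= 1.7) of a binomial test on 13 correct trials out of 16 against a 0.5 chance rate. How the paper pooled trials across triads is not fully specified here, so the printed p-value is illustrative rather than the paper's exact figure.

```python
# A one-sided binomial test of 13/16 correct against chance (p = 0.5).
from scipy.stats import binomtest

result = binomtest(k=13, n=16, p=0.5, alternative="greater")
print(f"P(>= 13/16 correct | chance = 0.5) = {result.pvalue:.4f}")
```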

Figure 3. Performance of Triads of Participants using BrainNet. The plot shows accuracy achieved by each of the five triads of participants. Accuracy was defined as the proportion of correct block rotations achieved by the triad. The dashed line shows the theoretical chance accuracy (0.5).

Another important metric is the mean performance of participants in the SSVEP task since both Senders and the Receiver in each triad had to use this method to share information. In the task, subjects focused their attention on a 17 Hz flashing LED to indicate a “Rotate” decision and a 15 Hz flashing LED to indicate a “Do Not Rotate” decision. Figure  4 shows that before and after the SSVEP task, the 17 Hz and 15 Hz average power values overlap, whereas during the task, the average power of the frequency corresponding to the correct answer in the trial is significantly larger than that of the frequency corresponding to the wrong answer (two-sample t -test; t (15) = 9.709, p  < 0.0001 for “Rotate” signal; t (15) = 10.725, p  < 0.0001 for “Do Not Rotate” signal). Since our SSVEP classifier compares the magnitude of power values to decode a Sender’s decision, the large difference in power values implies good performance for our EEG-based brain interfaces.

Figure 4. Average Power Spectra of EEG Signals across Subjects during the SSVEP Task. Power values were averaged across one-second epochs and across subjects. The plots show the average power values during the SSVEP task (between dashed lines) and, for comparison, the power values three seconds before and after the task. Note that before and after the task, the power values overlap for the two frequencies, whereas during the task, the power of the frequency corresponding to the correct answer is significantly larger.
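The sketch below illustrates how such a power comparison could be computed, assuming the 250 Hz sampling rate and one-second epochs described in the Methods. The `epochs` array is synthetic noise standing in for Oz-channel EEG, so the printed statistics are placeholders.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_ind

fs = 250
rng = np.random.default_rng(0)
epochs = rng.standard_normal((16, fs))         # 16 one-second placeholder epochs

freqs, psd = welch(epochs, fs=fs, nperseg=fs)  # 1 Hz frequency resolution
p17 = psd[:, np.argmin(np.abs(freqs - 17))]    # power at 17 Hz ("Rotate")
p15 = psd[:, np.argmin(np.abs(freqs - 15))]    # power at 15 Hz ("Do Not Rotate")

t, p = ttest_ind(p17, p15)                     # two-sample t-test, as in the text
print(f"t = {t:.3f}, p = {p:.4f}")
```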

As noted in previous studies from our group 1 , 5 , 14 , raw accuracy can be an inadequate measure of performance because it does not differentiate the kind of mistakes being made, i.e., whether they are misses or false positives. A better measure of performance can be obtained by calculating each triad’s Receiver Operating Characteristic (ROC) curve 15 , which plots the True Positive Rate versus the False Positive rate, and calculating the area under this curve (AUC; Fig.  5 ). Responding uniformly at random yields an AUC of 0.5 while the AUC of an ideal observer is 1.0. Because the distribution of AUC values is constrained between 0 and 1, it does not conform to the normal distribution. Thus, to properly conduct statistical tests of these values, we followed two complementary approaches. First, we conducted t -tests on the angular transformation (i.e., the arcsine square root transformation) of the AUC values, a common technique used to normalize data distributions 16 . Second, we entered the raw, untransformed values in a Wilcoxon test, a non-parametric test with a continuity correction. Both tests confirmed that the mean AUC value of 0.83 across all triads of participants was significantly higher than the performance expected by chance (one-sample t -test on angular transformed data: t (4) = 11.366, p  < 0.001; one-sample Wilcoxon test: V  = 15, p  = 0.031).

Figure 5. ROC Curves for the Five Triads of Participants. The plot shows the overall performance of each triad (blue dots) as well as the performances of the two types of Senders (“Good” versus “Bad”) in each triad (green and red dots). See text for details on the experimental design used to create a “Good” versus “Bad” Sender. The superscript on each dot denotes the triad number. Shaded areas represent the area under the curve (AUC) for each triad’s ROC curve. The dashed line denotes chance performance.

As Fig.  5 shows, the overall AUC value for each triad of brains is affected by the bad Sender’s performance, but not by much. The overall AUC values are smaller than the AUC values of the good Senders (two-sample t -test on angular transformed data: t (4) = −2.897, p  = 0.021; Wilcoxon test: W  = 2, p  = 0.036) but significantly larger than those of the bad Senders (two-sample t -test on angular transformed data: t (4) = 9.184, p  < 0.001; Wilcoxon test: W  = 25, p  = 0.008).
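A minimal sketch of the two complementary tests applied to the AUC values follows; the five AUCs below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_1samp, wilcoxon

aucs = np.array([0.80, 0.85, 0.78, 0.88, 0.84])   # hypothetical per-triad AUCs

# (1) One-sample t-test on the angular (arcsine square-root) transform,
# against the transform of chance-level AUC (0.5).
transformed = np.arcsin(np.sqrt(aucs))
t, p_t = ttest_1samp(transformed, np.arcsin(np.sqrt(0.5)))

# (2) One-sided Wilcoxon signed-rank test on the raw AUCs vs. chance,
# with a continuity correction.
v, p_w = wilcoxon(aucs - 0.5, alternative="greater", correction=True)

print(f"t-test: t = {t:.3f}, p = {p_t:.4f}")
print(f"Wilcoxon: V = {v}, p = {p_w:.4f}")
```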

Mutual Information Between Participants

An important measure of a brain-to-brain interface is the mutual information (MI) 17 transmitted between subjects, which is defined as:

$$MI(R;S)=\sum_{r,s} p_{R,S}(r,s)\,\log_{2}\frac{p_{R,S}(r,s)}{p_{R}(r)\,p_{S}(s)}$$

where \(r\) represents a decision made by the Receiver (0 or 1, corresponding to “do not rotate” or “rotate”), \(s\) represents a decision made by one of the Senders, \(p_{R}(r)\) represents the probability of the Receiver making the decision \(r\), \(p_{S}(s)\) represents the probability of one of the Senders making the decision \(s\), and \(p_{R,S}(r,s)\) represents the joint probability of the Receiver making the decision \(r\) and a Sender making the decision \(s\). Note that, in this case, chance performance corresponds to MI = 0.0 while perfect communication corresponds to MI = 1.0. Because mutual information values are also constrained between 0 and 1 and, therefore, are not normally distributed, we analyzed them using the statistical methods we applied to the AUC values (i.e., t-test on angular transformed data and Wilcoxon test with continuity correction).

Due to our experimental design, we expect significantly higher MI values (i.e., larger amounts of information being transferred) between a good Sender and the Receiver than between a bad Sender and the Receiver. This is corroborated by our results (Fig.  6 ).

Figure 6. Mutual Information transmitted between the Senders and the Receiver. Across all five triads of BrainNet participants, the mutual information transmitted between the Receiver and the “Good” Sender is significantly higher than that between the Receiver and the “Bad” Sender.

The information transmitted was significantly greater than the MI for chance performance for both the good Senders ( MI  = 0.336, t -test on angular transformed data: t (4) = 5.374, p  = 0.006; Wilcoxon test: V  = 15, p  = 0.031) and the bad Senders ( MI  = 0.051; t -test on angular transformed data: t (4) = 3.544, p  = 0.024; Wilcoxon test: V  = 15, p  = 0.031). The difference between good and bad Senders was also statistically significant (two-sided t -test on angular transformed data, t (8) = 5.187, p  = 0.002; Wilcoxon test: W  = 0, p  = 0.031), with the good Senders transmitting, on average, more information than the bad Senders.

For consistency with previous studies 1 , 5 , 6 , we have reported uncorrected estimates of MI. Given the relatively small number of samples, uncorrected MI values might overestimate the true amount of information shared by two participants. For this reason, we used a recently proposed method 18 to calculate the amount of bias in our estimates. Under the conditions of our experiment, the bias \(b\) can be approximated as \(b = -N_{R}/[2 \times N_{S} \times \log(2)]\), with \(N_{R}\) being the number of possible responses (in our case, \(N_{R} = 2\)) and \(N_{S}\) the number of samples (in our case, \(N_{S} = 32\) for each pair of participants). The bias thus estimated was found to be negligible (\(b = -0.045\)) and does not affect the results of any of our statistical tests.
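The sketch below shows one way to compute the MI estimate and the bias approximation from binary decision vectors; the two vectors themselves are hypothetical placeholders.

```python
import numpy as np

def mutual_information(r, s):
    """MI in bits between two binary decision vectors."""
    mi = 0.0
    for rv in (0, 1):
        for sv in (0, 1):
            p_rs = np.mean((r == rv) & (s == sv))   # joint probability
            p_r, p_s = np.mean(r == rv), np.mean(s == sv)
            if p_rs > 0:
                mi += p_rs * np.log2(p_rs / (p_r * p_s))
    return mi

r = np.array([0, 1, 1, 0, 1, 0, 1, 1])   # Receiver decisions (placeholder)
s = np.array([0, 1, 1, 0, 1, 1, 1, 0])   # Sender decisions (placeholder)

n_r, n_s = 2, len(r)                      # possible responses, sample count
bias = -n_r / (2 * n_s * np.log(2))       # b = -N_R / [2 * N_S * log(2)]
print(f"MI = {mutual_information(r, s):.3f} bits, bias ~ {bias:.3f}")
```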

Learning of Sender Reliability by Receiver

The differences in accuracy and mutual information between the “good” and “bad” Senders in the previous section suggest that the Receiver successfully learned which of the two Senders is a more reliable source of information. Confirming that this is indeed the case would bring BrainNet a step closer to conventional social networks where users utilize differential weighting for different sources of information. To further investigate this issue, we divided each experimental session into four consecutive blocks of four trials each. We quantified the time course of the Receiver’s learning process using two measures: (1) block-by-block estimates of the linear regression weights for the Receiver’s decisions versus each Sender’s decisions; and (2) the block-by-block correlation of decisions made by the Receiver and by each Sender 1 . Because of the small number of trials (N = 4) in each block, the decision vectors for Senders and Receivers were created by concatenating the decisions of participants with the same role (Receiver, good Sender, or bad Sender) across the five triads; this procedure captures group-level behavior and is less sensitive to outliers. Thus, if \(R\in {\mathbb{R}}^{20\times 1}\) is a decision vector for the five Receivers in a four-trial block (each decision is encoded as a 0 or 1), and \(S\in {\mathbb{R}}^{20\times 1}\) is a decision vector for one type of Sender (“good” or “bad”), the linear regression weights \(\beta\) can be estimated using the standard pseudoinverse method 19 as \(\beta = (S^{T}S)^{-1}S^{T}R\). The same concatenated vectors R and S were also used to estimate the Pearson correlation coefficients for each four-trial block.

As shown in Fig. 7, the time courses of both the beta weights and the correlation coefficients show a steep ascending trend for the good Sender, but not for the bad Sender. To test the difference between these trends, we estimated two simple linear trend models of the relationship between each measure and the block number, one for the good Sender and one for the bad Sender. The difference between the linear trend models’ slope coefficients \(\beta_{g}\) and \(\beta_{b}\) for the good and bad Senders, respectively, was then tested for statistical significance using the formula derived by Paternoster and colleagues 20:

$$Z = \frac{\beta_{g}-\beta_{b}}{\sqrt{SE\beta_{g}^{2}+SE\beta_{b}^{2}}}$$

where \(SE\beta_{g}^{2}\) and \(SE\beta_{b}^{2}\) are the variances of \(\beta_{g}\) and \(\beta_{b}\), respectively. The difference in linear trends was statistically significant for both measures (beta weight measure: Z = 5.87, p < 0.001; correlation coefficient measure: Z = 7.31, p < 0.001). These results strongly suggest that Receivers were able to learn which Sender was more reliable during the course of their brain-to-brain interactions with the two Senders.
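A minimal sketch, on hypothetical numbers, of the two analyses above: the pseudoinverse estimate of the regression weight for one block, and the Paternoster et al. Z-test comparing the two Senders' linear trends. The generated decision vectors and the per-block weights are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import linregress, norm

rng = np.random.default_rng(1)
R = rng.integers(0, 2, 20).astype(float)   # concatenated Receiver decisions
S = rng.integers(0, 2, 20).astype(float)   # concatenated Sender decisions

# beta = (S^T S)^{-1} S^T R, via the pseudoinverse of the column vector S.
beta = float(np.linalg.pinv(S.reshape(-1, 1)) @ R)

# Linear trends of per-block weights across the four blocks.
blocks = np.array([1, 2, 3, 4])
good = linregress(blocks, [0.30, 0.50, 0.70, 0.90])
bad = linregress(blocks, [0.20, 0.25, 0.20, 0.30])

# Z = (beta_g - beta_b) / sqrt(SE_g^2 + SE_b^2), two-sided p-value.
z = (good.slope - bad.slope) / np.sqrt(good.stderr**2 + bad.stderr**2)
print(f"beta = {beta:.3f}, Z = {z:.3f}, p = {2 * norm.sf(abs(z)):.4f}")
```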

Figure 7. Quantification of Learning of Sender’s Reliability by Receiver. (Left Panel) Evolution over time of linear regression weights (Beta) for the Receivers’ decision vector and the decision vector for each type of Sender for each 4-trial block (see text for details). (Right Panel) Evolution over time of the Pearson correlation coefficient between the decisions of Receivers and Senders of each type. Both plots exhibit ascending trends for the “Good” Sender but not the “Bad” Sender, suggesting that Receivers learned which Sender was more reliable during the course of their brain-to-brain interactions with the two Senders.

Discussion

This paper presents, to our knowledge, the first successful demonstration of multi-person non-invasive direct brain-to-brain interactions for collaboratively solving a task. We believe our brain-to-brain interface, which we call BrainNet, improves upon previous human brain-to-brain interfaces (BBIs) on three fronts: (1) BrainNet expands the scale of BBIs to multiple human subjects working collaboratively to solve a task. (2) BrainNet is the first BBI to combine brain recording (EEG) and brain stimulation (TMS) in a single human subject, eliminating the need to use any physical movements to convey information (although we did not explicitly instruct subjects to avoid eye movements when using the SSVEP interface, other researchers have shown that an SSVEP BCI can be operated without eye movements 21 , 22 ). With sufficient hardware, our system can be scaled to the case where every subject can both send and receive information using the brain interface. (3) Using only the information delivered by BrainNet, Receivers are able to learn the reliability of information conveyed to their brains by other subjects and choose the more reliable sender. This makes the information exchange mediated by BrainNet similar to real-life social communication, bringing us a step closer to a “social network of brains.”

Our results on combining information from multiple users build on previous work in the field of brain-computer interfaces (BCIs) linking the individual contributions of more than two brains to control a computer. In humans, researchers have studied “collaborative BCIs” (rather than BBIs) that pool information from multiple human brains to improve performance in a delayed saccade-or-reach task 23 ; however, subjects performed the task on different days and no brain stimulation was used to convey information directly to subjects’ brains. A different study 12 demonstrated that three non-human primates can jointly control a 3D virtual avatar arm using brain signals recorded with invasive electrodes implanted in the motor cortex; again, the goal was distributing a single task across multiple individuals linked to a common BCI without encoding any neural information for feedback and interaction via stimulation. More closely related to our study is the work of Pais-Vieira et al . 24 , who used implanted electrodes to both decode information from and transmit information to the somatosensory cortices of multiple rodents to demonstrate the possibility of distributing computations across multiple brains. The brains of the rats were linked to solve several computational problems including a weather forecasting task based on weather data from a local airport. However, the animals were entirely unaware of both the actual task being solved and of their collaboration with others; by contrast, in BrainNet, the participants are completely aware of the task and are conscious of being collaborators within a “network of brains.”

BrainNet could be improved in several ways: (1) From the first human BBI 1 to BrainNet, the level of information complexity has remained binary, i.e., only a bit of information is transmitted during each iteration of communication. Additionally, this low bit rate required a disproportionate amount of technical hardware and setup. To address the limitation of low bit rate, we are currently exploring the use of functional Magnetic Resonance Imaging (fMRI) to increase the bandwidth of human BBIs. Other approaches worth exploring include combining EEG and fMRI to achieve both high spatial and temporal resolution 25 for decoding, and using TMS to stimulate higher-order cortical areas to deliver more complex information such as semantic concepts. (2) We purposefully introduced a “bad” sender in BrainNet design to study whether the Receiver can learn which Sender is more reliable. It would be interesting to investigate whether the Receiver can learn the reliability of Senders in more natural scenarios where the unreliability originates from the noisy nature of a Sender’s brain recordings or from a Sender’s lack of knowledge, diminished attention, or even malicious intent. (3) From an implementation standpoint, BrainNet uses a typical server-client TCP protocol to transmit information between computers. However, the server is solely designed for BrainNet’s experimental task and is not a general-purpose server. A cloud-based BBI server could direct information transmission between any set of devices on the BBI network and make it globally operable through the Internet, thereby allowing cloud-based interactions between brains on a global scale. Such BBIs, when developed within an ethically-grounded framework, have the potential to not only open new frontiers in human communication and collaboration but also provide a new scientific tool to explore questions in neuroscience and gain a deeper understanding of the human brain.
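As an illustration of point (3), the sketch below shows the kind of minimal point-to-point TCP relay such a task-specific server performs. The port numbers, host name, and one-byte framing are hypothetical illustrations, not BrainNet's actual protocol.

```python
import socket

def relay_once(sender_port=9001, receiver_addr=("receiver.example", 9002)):
    # Accept one Sender connection and read its one-byte decision.
    with socket.create_server(("", sender_port)) as srv:
        conn, _ = srv.accept()
        decision = conn.recv(1)        # e.g. b"\x01" = rotate, b"\x00" = keep
        conn.close()
    # Forward the decision to the Receiver's machine for TMS delivery.
    with socket.create_connection(receiver_addr) as out:
        out.sendall(decision)
```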

Methods

Participants

Fifteen healthy participants (aged 18–35 years, average 22.7 years; eight female) took part in a controlled laboratory experiment. All participants were recruited through word of mouth, were fully informed about the experimental procedure and its potential risks and benefits, and gave written consent prior to the beginning of the experiment according to the guidelines put forth by the University of Washington. Both the experimental and the recruitment procedures were reviewed and approved by the Institutional Review Board of the University of Washington (IRB Application #52392). The participants were divided into five groups, with each group being a triad of one participant playing the role of the “Receiver” and two playing the roles of “Senders.” To keep their decision to participate free of any external influence, all participants received monetary compensation that was independent of their role and proportional to the total amount of time devoted to the study.

Experimental Task

During each session, a triad of three participants collaborated to play a simplified Tetris-like game. The game consisted of independent trials, each of which involved deciding whether or not to rotate a single block of a particular shape by 180 degrees. At the bottom of the screen, there was a partially filled line whose gaps could be filled by either the top or bottom part of the block at the top of the screen. The goal of the game was to achieve the highest possible score by making the correct decision to rotate or not rotate the current block so that, when dropped at the end of the trial, it would fill the missing parts of the line at the bottom. We designed the task such that the actual player of the game, namely the Receiver, could only see the block at the top of the screen and not the bottom line. The other two subjects, namely the Senders, could see both the block at the top and the line at the bottom (see Fig. 2). Thus, the only way for the Receiver to achieve a high score was by integrating the decisions transmitted by both Senders and making his/her own decision in the game.

Each session consisted of sixteen independent trials; in half of them the falling block had to be rotated, and in the other half it had to be left in its original orientation. The order of rotation and non-rotation trials was randomized, with the constraint that each half of the session had to contain 4 rotation and 4 non-rotation trials.

Each trial comprised two rounds of interactions between the Senders and the Receiver. Each round offered a chance to rotate the block. After the first round, the block was rotated or remained in the same orientation based on the Receiver’s decision. The block then dropped halfway, and the screens shown to all three subjects were updated to show the (possibly rotated) block at the halfway location (see Fig. 2). Note that one decision is sufficient to complete the task of filling the bottom line, but because of our two-step design, the Senders receive feedback on the Receiver’s action in the first round and can send the Receiver new suggestions, allowing the Receiver to potentially correct a mistake made in the first round and still successfully complete a trial.

The three participants in a triad were located in different rooms in the same building on the University of Washington campus and could only communicate with each other through the brain-to-brain interface.

BrainNet: Multi-Person Brain-to-Brain Interface

Figure  1 depicts the architecture of BrainNet. BrainNet relies on two well-known technologies: Electroencephalography (EEG) 26 for non-invasively recording brain signals from the scalp and transcranial magnetic stimulation (TMS) 27 for non-invasively stimulating the visual cortex. The Senders convey their decisions of “rotate” or “do not rotate” by controlling a horizontally moving cursor (Fig.  8 ) using steady-state visually-evoked potentials (SSVEPs) 28 : to convey a “Rotate” decision, Senders focused their attention on a “Yes” LED light flashing at 17 Hz placed on the left side of their computer screen; to convey a “Do Not Rotate” decision, they focused on the “No” LED light flashing at 15 Hz placed on the right side. These LEDs are depicted as circles attached to the screens in Fig.  1 . The cursor position provided real-time visual feedback to the Senders. The direction of movement of the cursor was determined by comparing the EEG power at 17 Hz versus 15 Hz, with a higher power at 17 Hz over that at 15 Hz moving the cursor towards the left side near the “Yes” LED, and vice-versa for the “No” LED. A “Rotate” (“Do Not Rotate”) decision was made when the cursor hit the side of the screen appropriately marked “YES” (“NO”) (see Fig.  8 ). In trials where the cursor did not reach either side of the screen due to trial time elapsing, the decision closest to the last location of the cursor was chosen as the subject’s decision.

Figure 8. SSVEP-Based EEG Brain-Computer Interface used by the Senders and the Receiver. Participants conveyed their decisions regarding whether or not to rotate the current block by controlling a cursor (white filled circle) using EEG-based steady-state visually evoked potentials (SSVEPs). Participants focused on a flashing LED to the left of the screen (depicted as a circle attached to the screen in Fig. 1) to move the cursor leftwards towards the “Yes” side. Focusing on the LED to the right of the screen (flashing at a different frequency) caused the cursor to move rightwards towards the “No” side. If the cursor reached the green “Yes” bar, the interface interpreted the participant’s decision to be rotation of the block (by 180 degrees). If the cursor reached the “No” bar, the interface took the participant’s decision to be to keep the block’s current orientation.

The decisions of the two Senders were sent to the Receiver’s computer through a TCP/IP network and were further translated into two pulses of transcranial magnetic stimulation (TMS) delivered sequentially to the occipital cortex of the Receiver. Each TMS pulse lasted 1 ms. An eight-second delay was enforced between the two pulses to remain within the strictest safety guidelines of TMS stimulation 29 . The intensity of the stimulation was set above or below the threshold at which the Receiver could perceive a flash of light known as a phosphene: a “Yes” response was translated to an intensity above the threshold, and “No” was translated to an intensity below the threshold. During each round of trials, the Receiver always received the decision from one Sender first, then the other. The screen the Receiver saw also had visual prompts to remind them whose decision the current TMS stimulation was conveying. Receivers made their decision based on whether a phosphene was perceived and conveyed their decision (rotate or do not rotate) to the game using the same SSVEP-based procedure used by both Senders. After the game state was updated, the trial moved into the second round and the above process was repeated. At the end of each trial, all three subjects received feedback on the result of the trial (Fig. 2, bottom row).
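A minimal sketch of this decision-to-stimulation mapping follows. The `fire_pulse` callback and both intensity parameters are hypothetical stand-ins for the stimulator's actual control interface.

```python
import time

def deliver_decisions(decisions, intensity_yes, intensity_no, fire_pulse):
    """Deliver each Sender's binary decision as one TMS pulse, 8 s apart."""
    for i, rotate in enumerate(decisions):
        # Above-threshold intensity elicits a phosphene ("Rotate");
        # below-threshold intensity elicits none ("Do Not Rotate").
        fire_pulse(intensity_yes if rotate else intensity_no)
        if i < len(decisions) - 1:
            time.sleep(8)              # eight-second safety gap between pulses
```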

Differential Reliability of Senders

When the decisions from the two Senders do not agree with each other, the Receiver must decide which Sender to trust. To investigate whether the Receiver can learn the reliability of each Sender and choose the more reliable Sender for making decisions, we designed the system to deliberately make one of the Senders less accurate than the other. Specifically, for each session, one Sender was randomly chosen as the “Bad” Sender; in ten of the sixteen trials, this Sender’s decision was forced to be incorrect when sent to the Receiver, in both the first and second rounds of each trial.

EEG Procedure for Senders

Each Sender performed the task in a dedicated room in front of a 21” LCD monitor, with two Arduino-controlled LED lights attached to the left and right outer frames of the monitor for eliciting SSVEPs. EEG signals were recorded through an 8-channel OpenBCI Cyton system (OpenBCI: Brooklyn, NY) with a sampling rate of 250 Hz at a resolution of 16 bits. Signals were acquired from gold-plated electrodes and a layer of electro-conductive paste was applied between each electrode and the participant’s scalp. For the experimental session, three electrodes were set up along the midline in a custom montage with the signal recorded from one occipital electrode (location Oz in the 10–10 placement system) and two frontal electrodes (locations AFz and FCz in the 10–10 system) used as the ground and reference, respectively.

Incoming EEG data was passed through a 4th-order Butterworth filter 30 between 0 and 30 Hz to remove signal drifting and line noise. The time-series EEG data was then divided into 1-second epochs and transformed to the frequency domain using Welch’s method 31 . The intention to rotate the falling block or not was decoded by comparing the power at 17 Hz and 15 Hz obtained from Welch’s method. The final decision was made by tallying up the number of epochs in which the greatest power was recorded at either 17 Hz or 15 Hz over a 10-second period. Signal processing and data storage were managed through a custom software library developed by two of the authors (LJ and DL).
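Putting these steps together, the sketch below mirrors the described pipeline on synthetic data: 4th-order Butterworth filtering below 30 Hz (modelled here as a low-pass, since a band starting at 0 Hz has no lower edge), one-second epochs, Welch spectra, and a tally of 17 Hz vs. 15 Hz wins over 10 seconds. The tie-break is a simplification of the cursor-based rule described earlier.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 250                                       # OpenBCI sampling rate (Hz)
rng = np.random.default_rng(0)
raw = rng.standard_normal(10 * fs)             # 10 s of placeholder Oz EEG

b, a = butter(4, 30 / (fs / 2), btype="low")   # 4th-order low-pass at 30 Hz
filtered = filtfilt(b, a, raw)

votes = 0
for epoch in filtered.reshape(10, fs):         # ten one-second epochs
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    p17 = psd[np.argmin(np.abs(freqs - 17))]
    p15 = psd[np.argmin(np.abs(freqs - 15))]
    votes += 1 if p17 > p15 else -1            # tally the stronger frequency

print("Rotate" if votes > 0 else "Do Not Rotate")
```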

No prior training was required to control the cursor using SSVEPs. During the experiment, the Sender’s monitor displayed either the cursor-control interface or a gray background with a text prompt indicating that the Receiver was making a decision.

TMS Procedure for the Receiver

Participants playing the role of the Receiver came in for two consecutive sessions. During the first session, as part of informed consent, they were asked to complete a TMS safety screening questionnaire aimed at identifying conditions (such as a family history of seizures or frequent migraines) that might represent risk factors for adverse side effects of TMS. No participant was rejected for failing the safety questionnaire. In addition to the safety screening, all Receivers underwent a procedure to determine their absolute phosphene threshold, that is, the minimum amount of stimulation necessary to elicit the perception of an induced phosphene 50% of the time. The absolute threshold was assessed using the PEST method 32 . The absolute threshold was then used as the starting point to identify the stimulation levels associated with the binary “Rotate” and “Do Not Rotate” decisions. Starting from the absolute threshold, the stimulation intensity was first adjusted upwards in increments of 5% until phosphenes could be elicited for 10 consecutive pulses; this value was then used for conveying a “Rotate” decision from a Sender. Then, starting from the absolute threshold value, the stimulation intensity was lowered in 5% increments until no phosphene was elicited for 10 consecutive pulses. This value was then used to convey a “Do Not Rotate” decision from a Sender. During both the testing session and the experimental session, TMS was delivered through a 70-mm Figure-8 Alpha coil (Magstim, UK) positioned over the left occipital lobe in a location corresponding to site O1 in the 10–20 system. The coil was positioned flush against the head, with the handle parallel to the ground and extending towards the left. The coil was attached to a SuperRapid2 magnetic stimulator (Magstim, UK). The maximum intensity of the electric field for our TMS equipment is 530 V/m, and with our coil, the maximum intensity of the induced magnetic field is 2.0 T.
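The calibration loop can be summarized by the following sketch, where `phosphene_reported` is a hypothetical callback standing in for the participant's report after each pulse, and intensities are in stimulator output units stepped in 5% increments as in the text.

```python
def calibrate(threshold, phosphene_reported, upward=True):
    """Find the 'Rotate' (upward) or 'Do Not Rotate' (downward) level."""
    intensity = threshold
    step = 0.05 * threshold * (1 if upward else -1)
    while True:
        # Accept this intensity once 10 consecutive pulses match the
        # target outcome (phosphene for upward, none for downward).
        if all(phosphene_reported(intensity) == upward for _ in range(10)):
            return intensity
        intensity += step
```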

EEG Procedure for the Receiver

The EEG procedure for the Receiver was identical to that used for the Senders, except that the signal was acquired from a BrainAmp system (BrainVision, Berlin, Germany) with a sampling rate of 5000 Hz and a resolution of 20 bits. The system was equipped with a DC voltage amplifier to reduce signal distortions due to the TMS pulses. Participants wore a standard 32-channel headcap, using AFz and FCz as the ground and reference, respectively. As in the case of the Senders, only data from the Oz channel was recorded. After downsampling to 500 Hz, the incoming data underwent the same preprocessing steps described above for the Senders.

Data Availability

Experiment data and code are available upon request.

References

1. Rao, R. P. N. et al. A direct brain-to-brain interface in humans. PLOS ONE 9, 1–12, https://doi.org/10.1371/journal.pone.0111332 (2014).
2. Rao, R. P. N. et al. Direct brain-to-brain communication in humans: A pilot study. http://homes.cs.washington.edu/~rao/brain2brain, Accessed 27 January 2019 (2013).
3. Min, B.-K., Marzelli, M. J. & Yoo, S.-S. Neuroimaging-based approaches in brain-computer interface. Trends in Biotechnology 28, 552–60 (2010).
4. Grau, C. et al. Conscious brain-to-brain communication in humans using non-invasive technologies. PLOS ONE 9, 1–6, https://doi.org/10.1371/journal.pone.0105225 (2014).
5. Stocco, A. et al. Playing 20 questions with the mind: Collaborative problem solving by humans using a brain-to-brain interface. PLOS ONE 10, 1–15, https://doi.org/10.1371/journal.pone.0137303 (2015).
6. Rao, R. P. & Stocco, A. When two brains connect. Scientific American Mind 25, 36–39 (2014).
7. Dingemanse, M. Brain-to-brain interfaces and the role of language in distributing agency. Distributed Agency 59 (2017).
8. Kyriazis, M. Systems neuroscience in focus: from the human brain to the global brain? Frontiers in Systems Neuroscience 9, 7 (2015).
9. Hongladarom, S. Brain-brain integration in 2035: metaphysical and ethical implications. Journal of Information, Communication and Ethics in Society 13, 205–217 (2015).
10. Montague, P. R. Hyperscanning: Simultaneous fMRI during linked social interactions. NeuroImage 16, 1159–1164 (2002).
11. Nicolelis, M. A. L. Beyond Boundaries (Macmillan, 2011).
12. Ramakrishnan, A. et al. Computing arm movements with a monkey brainet. Scientific Reports 5, https://doi.org/10.1038/srep10767 (2015).
13. Bakshy, E., Rosenn, I., Marlow, C. & Adamic, L. The role of social networks in information diffusion. In Proceedings of the 21st International Conference on World Wide Web, 519–528 (ACM, 2012).
14. Losey, D. M., Stocco, A., Abernethy, J. A. & Rao, R. P. Navigating a 2D virtual world using direct brain stimulation. Frontiers in Robotics and AI 3, 72 (2016).
15. Fawcett, T. An introduction to ROC analysis. Pattern Recognition Letters 27, 861–874, https://doi.org/10.1016/j.patrec.2005.10.010 (2006).
16. Rao, P. V. et al. Statistical Research Methods in the Life Sciences (Duxbury Press, 1998).
17. Cover, T. M. & Thomas, J. A. Elements of Information Theory (Wiley, 2006).
18. Panzeri, S., Senatore, R., Montemurro, M. A. & Petersen, R. S. Correcting for the sampling bias problem in spike train information measures. Journal of Neurophysiology (2007).
19. Kenney, J. & Keeping, E. Mathematics of Statistics (Part I) (Van Nostrand, 1947).
20. Paternoster, R., Brame, R., Mazerolle, P. & Piquero, A. Using the correct statistical test for the equality of regression coefficients. Criminology 36, 859–866 (1998).
21. Kelly, S., Lalor, E., Reilly, R. & Foxe, J. Visual spatial attention tracking using high-density SSVEP data for independent brain–computer communication. IEEE Transactions on Neural Systems and Rehabilitation Engineering 13, 172–178, https://doi.org/10.1109/tnsre.2005.847369 (2005).
22. Allison, B. Z. et al. Towards an independent brain–computer interface using steady state visual evoked potentials. Clinical Neurophysiology 119, 399–408, https://doi.org/10.1016/j.clinph.2007.09.121 (2008).
23. Wang, Y. & Jung, T.-P. A collaborative brain-computer interface for improving human performance. PLOS ONE 6, 1–11, https://doi.org/10.1371/journal.pone.0020422 (2011).
24. Pais-Vieira, M., Chiuffa, G., Lebedev, M., Yadav, A. & Nicolelis, M. A. L. Building an organic computing device with multiple interconnected brains. Scientific Reports 5, 11869 (2015).
25. Huster, R. J., Debener, S., Eichele, T. & Herrmann, C. S. Methods for simultaneous EEG-fMRI: An introductory review. Journal of Neuroscience 32, 6053–6060, https://doi.org/10.1523/jneurosci.0447-12.2012 (2012).
26. Nunez, P. L. & Srinivasan, R. Electric Fields of the Brain (Oxford University Press, 2005).
27. O’Shea, J. & Walsh, V. Transcranial magnetic stimulation. Current Biology 17, R196–R199, https://doi.org/10.1016/j.cub.2007.01.030 (2007).
28. Vialatte, F.-B., Maurice, M., Dauwels, J. & Cichocki, A. Steady-state visually evoked potentials: Focus on essential paradigms and future perspectives. Progress in Neurobiology 90, 418–438, https://doi.org/10.1016/j.pneurobio.2009.11.005 (2010).
29. Rossi, S. et al. Safety, ethical considerations, and application guidelines for the use of transcranial magnetic stimulation in clinical practice and research. Clinical Neurophysiology 120, 2008–2039 (2009).
30. Sorrentino, R. & Bianchi, G. Electronic Filter Simulation & Design (McGraw-Hill Education, 2007).
31. Welch, P. The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms. IEEE Transactions on Audio and Electroacoustics 15, 70–73, https://doi.org/10.1109/tau.1967.1161901 (1967).
32. Taylor, M. M. & Creelman, C. D. PEST: Efficient estimates on probability functions. The Journal of the Acoustical Society of America 41, 782–787, https://doi.org/10.1121/1.1910407 (1967).


Acknowledgements

This work is made possible by a W.M. Keck Foundation Award to AS, CP, and RPNR, and a Levinson Emerging Scholars Award to LJ. RPNR was also supported by NSF grant no. EEC-1028725 and a CJ and Elizabeth Hwang Endowed Professorship. We thank Nolan Strait for software testing.

Author information

Authors and affiliations

University of Washington, Paul G. Allen School of Computer Science & Engineering, Seattle, WA, 98195, USA

Linxing Jiang & Rajesh P. N. Rao

University of Washington, Department of Psychology, Seattle, WA, 98195, USA

Andrea Stocco, Justin A. Abernethy & Chantel S. Prat

University of Washington, Institute for Learning and Brain Sciences, Seattle, WA, 98195, USA

Carnegie Mellon University, Department of Machine Learning, Pittsburgh, PA, 15213, USA

Darby M. Losey

Carnegie Mellon University, Center for the Neural Basis of Cognition, Pittsburgh, PA, 15213, USA

University of Washington Institute for Neuroengineering, Seattle, WA, 98195, USA

Andrea Stocco, Chantel S. Prat & Rajesh P. N. Rao

University of Washington, Center for Neurotechnology, Seattle, WA, 98195, USA


Contributions

R.P.N.R., A.S. and C.P. conceived the experiment; L.J. and J.A. conducted the experiment; L.J. and D.L. implemented the software; L.J. and A.S. analyzed the results. All authors were involved in writing and reviewing the manuscript.

Corresponding author

Correspondence to Rajesh P. N. Rao .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Jiang, L., Stocco, A., Losey, D.M. et al. BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains. Sci Rep 9, 6115 (2019). https://doi.org/10.1038/s41598-019-41895-7


Received : 15 October 2018

Accepted : 20 March 2019

Published : 16 April 2019

DOI : https://doi.org/10.1038/s41598-019-41895-7




The effects of online simulation-based collaborative problem-solving on students’ problem-solving, communication and collaboration attitudes

  • Open access
  • Published: 18 March 2024


  • Meng-Jun Chen 1,
  • Hsiao-Ching She (ORCID: 0000-0002-5316-4426) 1 &
  • Pei-Yi Tsai 1


Abstract

Although national curricula and instructional reforms call for collaborative problem-solving (CPS) skills, there is still no theory-laden model showing how to construct effective CPS for science learning. We therefore developed and validated a simulation-based CPS model that specifies its constructs, sequences, and causal relationships, and we evaluated its effectiveness on students’ problem-solving. Over the span of a two-week science course, 57 ninth-grade students were recruited from two intact middle school classes to engage in an online simulation-based CPS program consisting of nine electrochemistry problem-solving lessons spread across four 45-minute class sessions. Results indicated that the simulation-based CPS model was validated and shown to contribute to effective problem-solving by linking the proposing of problem solutions, peer communication, the implementation of solutions with simulation, and the provision of evidence-based explanations. The model improved the performance of both high- and low-achieving students. With the support and presence of high-achievers, low-achievers’ collaboration attitudes were boosted, which led them to achieve similar learning success.


1 Introduction

Collaborative problem-solving (CPS) has become increasingly recognized as a powerful tool for helping students solve complex scientific problems together, and national curricula and instructional reforms across many nations have accordingly incorporated CPS skills (Binkley et al., 2012; Darling-Hammond & McLaughlin, 2011). The OECD defines CPS as an individual’s ability to share and integrate their existing knowledge and perspectives with others when solving problems together (OECD, 2013). Collaborative learning offers students the opportunity to construct a shared understanding of knowledge and to make meaning of the content (Fischer et al., 2013). Chemistry concepts can be challenging to understand because they require grasping three representational levels: macroscopic, microscopic, and symbolic (Johnstone, 1993). Electrochemistry is one of the most complex topics in the study of chemistry (Supasorn et al., 2014). It is considered difficult at both the high school and undergraduate levels primarily because most of its processes occur at the microscopic (molecular) level and cannot be observed directly (Rahayu et al., 2022), and because it involves many interrelated concepts (Akram et al., 2014). Individuals may find it challenging to grasp microscopic concepts and solve complex problems on their own. Including CPS skills in educational and professional settings has the potential to equip individuals with the skills and tools needed to tackle complex problems and thrive in the twenty-first century (Griffin & Care, 2014). Low academic achievement may hinder students’ school learning and future careers (Al-Zoubi & Younes, 2015). Cook et al. (2008) reported that students’ academic achievement and prior knowledge are critical predictors of their knowledge construction and comprehension. Other studies suggest that, with appropriate instruction, low-achieving students can become as proficient at problem-solving as high-achieving students (Ben-David & Zohar, 2009; Grimberg & Hand, 2009). While science instruction and curriculum reforms have been widespread, a theory-laden model of how to build effective simulation-based CPS for science learning is still lacking. The purpose of this study is therefore to develop and validate a simulation-based CPS model and investigate its effectiveness in promoting students’ learning of science and minimizing the achievement gap between low- and high-achievers.

2 Theoretical frameworks

Studies of CPS have found that it improves students' problem-solving competency (Malik et al., 2019), engagement (Unal & Cakir, 2021), and content knowledge (Harskamp & Ding, 2007). Garrison (1991) decomposed the problem-solving process into problem identification, problem description, problem exploration, applicability, and integration. Other studies divide the PS process into problem representation, solution search, and solution implementation (Bransford & Schwartz, 1999; Newell & Simon, 1972), or into meeting the problem; analyzing problems and issues; discovering and reporting; and presenting and evaluating solutions (Chua et al., 2016). Another study delineated physics problem-solving as identifying known conceptions, providing possible solutions, evaluating solutions, implementing solutions, and providing evidence-based explanations (Cheng et al., 2017). Because the literature above shares the PS components of proposing problem solutions, implementing solutions, and providing evidence-based explanations, we incorporated them into our CPS model.

Collaborative learning improves the acquisition and retention of knowledge and helps students solve problems (García-Valcárcel et al., 2014). Science is a process in which knowledge is socially constructed, and discursive activity is central to that process (Driver et al., 2000). Duran (2014) noted that communication helps students obtain information or new ideas that improve their understanding of a problem and helps them work together to develop effective solutions to complex problems. Dialogue and the discussion of ideas encourage students' thinking and learning (Faranda & Clarke, 2004). CPS provides students with a communication platform for reconstructing their knowledge and thinking, filling gaps in their understanding, and formulating strategies to tackle complex issues collaboratively (Fawcett & Garton, 2005). Studies of group work have found a critical relationship between providing explanations and achievement (Howe & Tolmie, 2003; Veenman & Spaans, 2005). Moreover, explaining to others can enhance learning: the explainer reorganizes and clarifies the material, recognizes misconceptions, fills gaps in their understanding, internalizes and acquires new strategies and knowledge, and develops new perspectives (Saxe et al., 2002). Conversely, groups fail to make progress or function ineffectively when no member can answer the question, when members have trouble communicating, or when they work without allowing true dialogue (Johnson & Johnson, 2009). Communication is thus a key component of collaboration that enables students to solve problems together.

Computer simulation has been recognized as a promising tool for supporting CPS activities during science learning (Andrews-Todd & Forsyth, 2020; Ceberio et al., 2016). Simulations give students opportunities to test invisible and abstract phenomena that cannot be observed directly in the real world and to integrate multiple perspectives from their team members, which ultimately aids their understanding of scientific concepts (Akpınar, 2014; Lu & Lin, 2017). Simulations can reveal invisible, abstract, and microscopic phenomena that are difficult to view in the real world (Chou et al., 2022; Sinensis et al., 2019), and thus help students construct knowledge by observing concrete simulated phenomena (Saab et al., 2012). They also offer a unique opportunity to engage students in interactive, hands-on learning experiences that support science learning (Rutten et al., 2012). Providing learners with simulations can therefore help them gain a deeper understanding of complex concepts and microscopic phenomena.

3 Hypotheses development and research model for simulation-based CPS

As identified in the literature, communicating with one's partner, proposing solutions to a problem, implementing those solutions with simulation, and developing evidence-based explanations are essential to CPS. However, their constructs, sequences, and causal relationships remain unclear. Based on the theoretical frameworks above, we propose the constructs and causal relationships that govern our research hypotheses in Fig. 1. We hypothesize that communication among group members leads to the development of PS solutions, which in turn influences the implementation of PS solutions with simulations and the making of evidence-based explanations, thereby contributing to problem-solving performance. The following hypotheses were proposed, and their validity was examined using partial least squares structural equation modeling (PLS-SEM); a schematic encoding of the hypothesized paths follows Fig. 1.

H1. Communication dialogues between students have a significant positive effect on their PS solution generation.

H2. PS solutions proposed by students have a significant positive effect on their implementation of PS solutions with simulations.

H3. PS solutions proposed by students have a significant positive effect on their ability to make evidence-based explanations of the results.

H4. Implementing PS solutions with simulation has a significant effect on their ability to provide evidence-based explanations.

H5. Evidence-based explanations provided by students have a significant impact on their problem-solving performance.

figure 1

Proposed model construct for simulation-based CPS learning
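To make the hypothesized structure concrete, the sketch below (a schematic illustration, not the authors' code; construct names are paraphrased from Fig. 1) encodes H1–H5 as the edges of a directed graph and traces which constructs lie downstream of communication.

```python
# Hypothesized causal structure of the simulation-based CPS model (Fig. 1).
# Each hypothesis H1-H5 corresponds to one directed edge; names are paraphrased.
HYPOTHESIZED_PATHS = {
    "H1": ("communication", "proposed_ps_solutions"),
    "H2": ("proposed_ps_solutions", "implementing_solutions_with_simulation"),
    "H3": ("proposed_ps_solutions", "evidence_based_explanations"),
    "H4": ("implementing_solutions_with_simulation", "evidence_based_explanations"),
    "H5": ("evidence_based_explanations", "problem_solving_performance"),
}

def downstream(construct: str) -> set[str]:
    """Return every construct reachable from `construct` along hypothesized paths."""
    frontier, seen = [construct], set()
    while frontier:
        node = frontier.pop()
        for src, dst in HYPOTHESIZED_PATHS.values():
            if src == node and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

# Communication is hypothesized to influence, directly or indirectly,
# every other construct in the model:
print(downstream("communication"))
```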

4 Research questions

This study aims to determine whether our validated simulation-based CPS model can enhance students' electrochemistry problem-solving abilities and benefit students of varying achievement levels through online collaboration. The following four research questions serve as guidelines: (1) whether high- and low-achievers significantly improve their performance on the electrochemical problem-solving test (ECPST) after learning; (2) whether high- and low-achievers significantly improve their performance in proposing problem-solving (PS) solutions after peer communication; (3) whether high- and low-achievers engage in different amounts of supportive dialogue, including giving support, requesting support, and reminding; and (4) whether high- and low-achievers differ in their attitudes toward collaboration after completing the online electrochemistry CPS learning.

5 Method

5.1 Subjects and procedures

Over the span of a two-week physical science course, a total of 57 ninth-grade students from two intact classes at a middle school were recruited to participate in this online simulation-based collaborative problem-solving (CPS) program. To validate the effectiveness of the simulation-based CPS program, we designed an entire electrochemistry unit comprising nine electrochemistry problem-solving lessons spread across four class sessions, each lasting 45 min. The nine lessons comprised five on galvanic cells and four on electrolytic cells (Fig. 2). Each simulation-based CPS lesson was designed with four components: communicating with partners, proposing PS solutions, implementing PS solutions with simulations, and making evidence-based explanations. During the four class sessions over two weeks, high- and low-achievers were paired heterogeneously and anonymously, without knowing the identities of their partners. This was to ensure that social status did not negatively affect their communication dialogues, problem-solving, and collaboration.

figure 2

The design of online simulation-based CPS learning

Students were classified as high- or low-achievers based on their school science achievement. We used the median school science achievement score, 80 points, as the threshold: students scoring ≥ 80 points were classified as high achievers, and those scoring < 80 points as low achievers. Heterogeneous groups were formed, each comprising one high- and one low-achiever. One week before and one week after the online electrochemical collaborative problem-solving (CPS) program, all students were administered the electrochemical problem-solving test (ECPST). During the online learning, students' problem-solving processes were collected and recorded in a MySQL database, including their problem-solving (PS) solutions, implementation of PS solutions with simulations, evidence-based explanations, and communication dialogues.
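The classification and pairing rules above can be summarized in a few lines. The sketch below is illustrative only; the pairing helper is a hypothetical stand-in, since the authors do not describe how specific dyads were matched beyond one high- with one low-achiever.

```python
# Median-split classification: school science score >= 80 -> high achiever.
def classify_achiever(score: float, threshold: float = 80.0) -> str:
    return "high" if score >= threshold else "low"

# Heterogeneous pairing: each dyad gets one high- and one low-achiever.
def pair_heterogeneously(scores: dict[str, float]) -> list[tuple[str, str]]:
    highs = [s for s, v in scores.items() if classify_achiever(v) == "high"]
    lows = [s for s, v in scores.items() if classify_achiever(v) == "low"]
    return list(zip(highs, lows))  # any leftover students would need manual handling

print(pair_heterogeneously({"s1": 92, "s2": 63, "s3": 85, "s4": 71}))
# -> [('s1', 's2'), ('s3', 's4')]
```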

5.2 The development of online electrochemistry collaborative problem-solving (CPS) learning activities

The electrochemistry CPS project was developed based on the national standards for the 9th-grade chemistry curriculum. A panel of three designed the electrochemistry problem-solving content: a science education professor, a Ph.D. candidate in science education with three years of middle school science teaching experience, and an experienced middle school science teacher. To create the online electrochemistry CPS program, Unity 3D technologies were used to develop the simulations and experiments, the Photon network was used to support multi-person collaboration, and a MySQL database was used to collect data.

Nine problem-solving lessons were designed: five on galvanic cells and four on electrolytic cells. Each CPS lesson required the students to communicate with their partners, propose PS solutions, implement PS solutions with simulation, and provide evidence-based explanations (Fig. 2). The five lessons on galvanic cells covered identifying electrode pairs that generate electric currents, finding electrolyte solutions that produce electric currents, finding salt-bridge solutions that generate a current, identifying the electron flow between electrodes, and identifying the movement of ions in the electrolyte solutions. The four lessons on electrolytic cells covered identifying electrolyte solutions, identifying how the electron flow affects the anode and cathode during electrolysis, finding electrolyte solutions that produce gases at particular electrodes during electrolysis, and finding electrode pairs for copper sulfate electrolysis without changing the solution's color.

During the CPS process, each student had to propose at least two PS solutions (Fig. 3A). Upon submitting their proposed PS solutions, they were required to communicate with their partners and revise or modify their proposals as needed (Fig. 3B). Once their PS solutions were finalized, the teammates implemented them by running simulations in rotation, with their simulation screens shared automatically. During the simulation, they could test their proposed PS solutions and observe changes in macroscopic phenomena (color changes, electrochemical reaction products, etc.) and microscopic phenomena (ions, electrons, etc.) (Fig. 3C). By implementing their PS solutions with the 3D simulation, they could verify whether their PS solutions were feasible and workable, and they had to record the simulation results. After completing these problem-solving steps, students were also required to provide evidence-based explanations to assess their science understanding (Fig. 3D & E). Feedback with the correct answer was given after they completed the evidence-based explanations (Fig. 3F). A schematic sketch of this lesson sequence appears after Fig. 3.

figure 3

Screenshots of the online simulation-based CPS learning platform
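The lesson sequence can be summarized as a simple loop over the four components. The sketch below is schematic and runnable; all behavior is stubbed with placeholder strings, and the function and field names are hypothetical, not the platform's actual implementation.

```python
# Schematic flow of one simulation-based CPS lesson (cf. Fig. 3A-F).
def run_cps_lesson(pair: list[str], minimum_solutions: int = 2) -> dict:
    log = {"solutions": {}, "dialogue": [], "simulation": {}, "explanations": {}}
    # Step 1: each student proposes at least two PS solutions (Fig. 3A).
    for student in pair:
        log["solutions"][student] = [
            f"{student}-solution-{i + 1}" for i in range(minimum_solutions)
        ]
    # Step 2: partners communicate and may revise their proposals (Fig. 3B).
    log["dialogue"].append("partners exchange, revise, and finalize proposals")
    # Step 3: simulations run in rotation with shared screens; results are recorded (Fig. 3C).
    for student in pair:
        log["simulation"][student] = f"macro/micro observations recorded for {student}"
    # Step 4: evidence-based explanations, then feedback with the correct answer (Fig. 3D-F).
    for student in pair:
        log["explanations"][student] = f"{student}: claim backed by simulation evidence"
    return log

print(run_cps_lesson(["high_achiever", "low_achiever"]))
```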

5.3 Electrochemical problem-solving test (ECPST)

The ECPST is an open-ended diagnostic instrument designed to measure students' electrochemical problem-solving performance before and after the intervention. The same panel of three developed the ECPST to ensure the questions were properly constructed and relevant to the online electrochemical problem-solving program. It consists of three galvanic-cell and three electrolytic-cell questions, each requiring students to propose three viable solutions and explain the reasons for them. Each correct solution was worth 2–4 points, depending on how many subcomponents it required: students were awarded two points for a correct response, one point for a partially correct response, and zero points for an incorrect response, for a maximum cumulative score of 64 points. Two raters scored students' ECPST results using the coding system, and the inter-rater reliability was 0.916. A minimal scoring sketch follows.
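The sketch below assumes responses are coded with the three labels shown; how subcomponents map onto the 2–4 points per solution is not fully specified in the text, so the aggregation here is illustrative.

```python
# ECPST rubric: 2 points correct, 1 point partially correct, 0 points incorrect.
POINTS = {"correct": 2, "partial": 1, "incorrect": 0}

def score_ecpst(responses: list[str]) -> int:
    """Sum rubric points over all scored responses (maximum total: 64)."""
    return sum(POINTS[r] for r in responses)

print(score_ecpst(["correct", "partial", "incorrect", "correct"]))  # -> 5
```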

5.4 Attitudes toward collaboration

The attitudes-toward-collaboration questionnaire comprised the eight items designed for PISA 2015, with two indices of cooperation reflecting students' valuing of relationships and of teamwork (OECD, 2013). The four statements comprising the index of valuing relationships concern altruistic interactions, in which the student engages in collaborative activities not for their own benefit: “I am a good listener”; “I enjoy seeing my classmates be successful”; “I take into account what others are interested in”; and “I enjoy considering different perspectives.” By contrast, three of the four statements comprising the index of valuing teamwork concern what teamwork produces as opposed to working alone: “I prefer working as part of a team to working alone”; “I find that teams make better decisions than individuals”; and “I find that teamwork raises my own efficiency.”

5.5 Analyses of the online problem-solving processes and communication dialogues

Four aspects of students' online problem-solving processes were analyzed: communicating with partners, proposing PS solutions, implementing PS solutions with simulations, and making evidence-based explanations. For PS solutions, a student earned one point for each correct solution proposed. For evidence-based explanations, students received two points for a correct response, one point for a partially correct response, and zero points for an incorrect response. The coding system for implementing PS solutions with simulations assigned one point for correctly reporting the simulation results and one point for running an accurate simulation. The inter-rater reliabilities of the rubrics for PS solutions, implementing PS solutions with simulations, and making evidence-based explanations were 0.963, 0.966, and 0.927, respectively. Students' online discussion dialogues were analyzed with a coding system comprising three categories: giving support, requesting support, and reminding partners; the inter-rater reliability was 0.913.
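The paper reports inter-rater reliabilities (0.913–0.966) without naming the coefficient. A common choice for categorical codes is Cohen's kappa; the sketch below computes both percent agreement and kappa on hypothetical dialogue codes from two raters.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical dialogue codes assigned by two independent raters.
rater_a = ["give", "request", "remind", "give", "give", "remind"]
rater_b = ["give", "request", "remind", "give", "request", "remind"]

agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"percent agreement = {agreement:.3f}, Cohen's kappa = {kappa:.3f}")
```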

5.6 PLS-SEM model

Hair et al. (2022) advocated that partial least squares structural equation modeling (PLS-SEM) is appropriate for analyzing small sample sizes and validating theoretical frameworks, and PLS is increasingly used in education for developing exploratory models (Barclay et al., 1995). PLS-SEM comprises two components: the measurement model and the structural model (Henseler et al., 2009).

The measurement model assesses indicator reliability using outer loadings, which should exceed 0.50. Cronbach's α and composite reliability (CR) measure internal consistency, and both should be greater than 0.60. The average variance extracted (AVE) assesses convergent validity and should be greater than 0.50 (Fornell & Larcker, 1981; Hair et al., 2022). Two commonly used criteria assess the discriminant validity of a PLS-SEM model: the Fornell-Larcker criterion and the heterotrait-monotrait ratio (HTMT). According to the Fornell-Larcker criterion, the square root of each construct's AVE must be greater than that construct's correlations with all other constructs. The HTMT criterion requires the ratio for each pair of constructs to be less than 0.90 (Henseler et al., 2015). Accordingly, we used the PLS-SEM methodology to examine hypotheses 1 through 5, as stated above; a sketch of the key measurement-model computations follows.
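The sketch below (hypothetical loadings, not the study's data) shows how the main reflective measurement-model quantities are computed from standardized outer loadings, together with a two-construct Fornell-Larcker check.

```python
import numpy as np

# Standardized outer loadings of one construct's indicators (hypothetical).
loadings = np.array([0.85, 0.90, 0.82])

ave = np.mean(loadings ** 2)  # convergent validity: AVE should exceed 0.50
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + np.sum(1 - loadings ** 2))  # CR > 0.60
print(f"AVE = {ave:.2f}, CR = {cr:.2f}")

# Fornell-Larcker: sqrt(AVE) of each construct must exceed its correlation
# with every other construct (illustrative two-construct case).
ave_by_construct = {"communication": 0.81, "proposed_ps_solutions": 0.77}
inter_construct_r = 0.55
print("Fornell-Larcker satisfied:",
      all(np.sqrt(v) > inter_construct_r for v in ave_by_construct.values()))
```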

The structural model yields coefficients for evaluating the formulated research hypotheses (Henseler & Chin, 2010). To assess the goodness of fit of the structural model in PLS-SEM, the standardized root mean square residual (SRMR) was used; an SRMR value below 0.10 (or, more conservatively, 0.08) indicates a good fit (Ringle et al., 2015). However, model-fit criteria for PLS-SEM are still in their early stages and may not always be applicable, so they should be reported with caution. In addition, the path coefficients and coefficients of determination (R²) were reported, and all statistical analyses were performed using SmartPLS 4.

6 Results

6.1 PLS-SEM model

6.1.1 Measurement model

Table 1 presents the convergent validity and reliability of the model's proposed constructs. All indicators showed satisfactory reliability, with loadings ranging from 0.82 to 0.97. The Cronbach's alpha coefficients were all above 0.71, indicating adequate reliability, and the CR indices were above 0.87, confirming each construct's internal reliability. Regarding convergent validity, the average variance extracted (AVE) ranged from 0.77 to 0.88, meaning the indicators account for more than 77% of the variance of each construct.

Discriminant validity was also assessed using the Fornell-Larcker criterion. The square root of each construct's AVE was greater than its correlations with all other constructs, and the HTMT values of the correlations were below 0.90, confirming discriminant validity (Table 2).

6.1.2 Structural model

Evaluation of the structural model involved assessing the significance of the relationships between constructs and the prediction quality of each construct. The path coefficients of the structural model estimated with PLS-SEM appear in Fig. 4. The SRMR value of the structural model was 0.095, below the 0.10 threshold, indicating a good fit based on Ringle et al.'s (2015) recommendation; note, however, that model-fit criteria for PLS-SEM are still at an early stage of research and may not always be applicable (Ringle et al., 2015). Table 3 summarizes the five hypotheses supported by the proposed structural model, showing the direct effects between constructs. Across proposing PS solutions, communication, implementing PS solutions with simulation, evidence-based explanations, and ECPST, the R² values ranged between 0.18 and 0.48, indicating small to moderate predictability. The f² values for communication → proposed PS solutions, proposed PS solutions → evidence-based explanations, proposed PS solutions → implementing PS solutions with simulation, implementing PS solutions with simulation → evidence-based explanations, and evidence-based explanations → ECPST were 0.26, 0.14, 0.21, 0.15, and 0.92, respectively.
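The f² effect sizes reported above are not defined in the text; for reference, in PLS-SEM the f² of a path is conventionally computed from the change in the endogenous construct's R² when the predictor is omitted, with values near 0.02, 0.15, and 0.35 customarily read as small, medium, and large effects:

$$ f^2 = \frac{R^2_{\mathrm{included}} - R^2_{\mathrm{excluded}}}{1 - R^2_{\mathrm{included}}} $$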

figure 4

Path coefficient of the model ( *** p  < 0.001, ** p  < 0.01, * p  < 0.05)

Furthermore, the indirect effects are presented in Table 4, where three paths demonstrated statistical significance: communication → proposed PS solutions → evidence-based explanations, communication → proposed PS solutions → implementing PS solutions with simulation, and proposed PS solutions → evidence-based explanations → ECPST. However, since communication also directly predicted evidence-based explanations (β = 0.217, p < 0.001) and implementing PS solutions with simulation (β = 0.189, p < 0.005), and proposed PS solutions directly predicted ECPST (β = 0.333, p < 0.001), these paths were partially mediated.

6.2 The effectiveness of simulation-based CPS model on low- and high-achievers’ problem-solving

6.2.1 Electrochemical problem-solving test (ECPST)

To answer the first research question, a one-factor repeated-measures ANOVA examined whether high- and low-achievers significantly improved their performance on the electrochemical problem-solving test (ECPST) after learning (Table 5). ECPST performance improved significantly from pretest to posttest (F = 172.94, p < 0.001), and achievement level also significantly affected performance (F = 21.94, p < 0.001). Simple main effect analyses showed that both high-achievers (F = 63.77, p < 0.001) and low-achievers (F = 136.66, p < 0.001) made significant progress from pretest to posttest (Table 6). Regarding achievement levels, high-achievers scored significantly higher than low-achievers on both the pretest (F = 16.16, p < 0.001) and the posttest (F = 13.38, p < 0.01).
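As an illustration, a one-factor repeated-measures ANOVA of this kind can be run with statsmodels. The data below are hypothetical; the sketch covers only the within-subject factor (time), whereas the study's design also includes achievement level as a between-subjects factor, which would require a mixed ANOVA.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical pre/post ECPST scores for four students.
data = pd.DataFrame({
    "student": [1, 1, 2, 2, 3, 3, 4, 4],
    "time": ["pre", "post"] * 4,
    "ecpst": [20, 41, 15, 38, 30, 52, 12, 35],
})

res = AnovaRM(data, depvar="ecpst", subject="student", within=["time"]).fit()
print(res.anova_table)
```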

6.2.2 Online PS solutions

To answer the second research question, a one-factor repeated-measures ANOVA examined whether high- and low-achievers significantly improved their performance in proposing problem-solving (PS) solutions after peer communication (Table 7). Students' PS solution performance improved significantly from before to after peer communication (F = 12.30, p < 0.01), and achievement level also had a significant effect (F = 9.73, p < 0.01). A significant interaction was found between achievement level and PS solution performance before versus after peer communication (F = 4.65, p < 0.05), so simple main effect analyses were conducted (Table 8). These showed that only low-achievers (F = 16.36, p < 0.001) made significant progress in PS solution performance from before to after peer communication. High-achievers significantly outperformed low-achievers only before peer communication (F = 15.52, p < 0.001); after peer communication, high- and low-achievers no longer differed significantly in providing PS solutions (F = 3.97, p = 0.051).

6.2.3 Online communication dialogues

To answer the third research question, a one-factor multivariate analysis of variance (MANOVA) determined whether high- and low-achievers engaged in different amounts of supportive dialogue, including giving support, requesting support, and reminding (Table 9). High-achievers allocated significantly more dialogue to giving support than low-achievers (F = 5.97, p < 0.05), but the two groups allocated similar amounts of requesting-support and reminding dialogue (F = 0.01, p = 0.941 and F = 0.042, p = 0.839, respectively).
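For illustration, a one-factor MANOVA with the three dialogue counts as dependent variables can be specified as follows (hypothetical data, not the authors' analysis script).

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical counts of supportive dialogue moves per student.
data = pd.DataFrame({
    "level": ["high", "high", "high", "low", "low", "low"],
    "giving": [12, 9, 14, 5, 7, 6],
    "requesting": [4, 3, 5, 4, 5, 3],
    "reminding": [2, 3, 2, 2, 1, 3],
})

fit = MANOVA.from_formula("giving + requesting + reminding ~ level", data=data)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc., for the level effect
```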

6.3 Attitudes toward collaboration and their association with online PS solutions

To answer the last research question, a one-factor analysis of covariance (ANCOVA) examined whether high- and low-achievers differed in their attitudes toward collaboration after completing the online electrochemistry CPS learning (Table 10). With pre-learning collaboration attitudes controlled, low-achievers' post-learning attitudes toward collaboration were significantly higher than those of high-achievers (F = 4.05, p < 0.05), and low-achievers also valued teamwork significantly more than high-achievers did (F = 7.12, p < 0.05).
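An ANCOVA of this form reduces to a linear model with the pre-learning attitude as a covariate. A minimal sketch with hypothetical data:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical collaboration-attitude scores before and after learning.
data = pd.DataFrame({
    "level": ["high"] * 4 + ["low"] * 4,
    "pre_attitude": [3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 2.7, 3.1],
    "post_attitude": [3.0, 2.9, 3.3, 3.1, 3.6, 3.8, 3.4, 3.7],
})

# Post attitudes compared across achievement levels, controlling for pre attitudes.
model = smf.ols("post_attitude ~ pre_attitude + C(level)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```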

The scatter plot illustrates that, with the effects of collaboration attitudes controlled, most low-achievers' PS solution performance moved upward from the lower-right part of the plot after peer communication (Fig. 5), whereas most high-achievers' PS solution performance remained largely unchanged. In other words, low-achievers scored much higher on PS solutions after peer communication when their collaboration attitudes were controlled, while high-achievers' PS solution performance changed little. Similar patterns appeared for teamwork value and PS solution performance when the effects of teamwork value were controlled (Fig. 6). After peer communication, the association between high-achievers' post-collaboration attitudes and their PS solution performance became more negative, while low-achievers' collaboration attitudes did not change much; the same pattern held for teamwork value and PS solutions.

figure 5

Scatter plot and marginal distributions displaying the relationship between high- and low-achievers' attitudes toward collaboration and their online PS solution performance before and after collaboration

figure 6

Scatter plot and marginal distributions displaying the relationship between high- and low-achievers' value of teamwork and their online PS solution performance before and after collaboration

7 Discussion

In the present study, we established an empirically validated, theory-laden model of online simulation-based CPS for learning science effectively. Using PLS-SEM, we examined the causal relationships between proposing PS solutions, peer communication, implementing solutions with simulation, making evidence-based explanations, and overall problem-solving performance. Our proposed research model achieved a strong predictive level and supported all five hypotheses. According to previous studies, people who can communicate effectively with others in addition to solving problems are more competitive in the real world (Bender, 2012; Erozkan, 2013), and students require communication skills to explain a valid conclusion based on scientific evidence when problem-solving (Yusuf & Adeoye, 2012). These studies support our finding that communication directly influences proposing PS solutions and indirectly influences implementing PS solutions with simulation and making evidence-based explanations. Previous studies have also reported that computer simulations are an effective tool for supporting CPS in science learning (Andrews-Todd & Forsyth, 2020; Ceberio et al., 2016), and that integrating simulations with CPS instruction improves students' understanding of abstract concepts (Sinensis et al., 2019) and their CPS skills (Lin et al., 2018). Our findings indicated that simulations directly influence students' evidence-based explanations and contribute to their effective problem-solving, which supports the literature above.

The present study demonstrated that simulation-based CPS effectively helps both low- and high-achievers succeed in their problem-solving performance. Regarding the online CPS learning process, only low-achievers made significant improvements in their PS solution scores after peer communication, whereas high-achievers did not. High- and low-achievers' online PS solution scores differed significantly before peer communication, but not after: through peer communication, the low-achievers advanced to the same PS solution score levels as the high-achievers. Despite an extensive review of the literature, we found no study reporting similar findings. A deeper investigation revealed an interesting explanation. According to the communication dialogues, high-achievers gave significantly more support than low-achievers. Andrews-Todd and Forsyth (2020) suggested that collaborative problem-solving groups with at least one member with high cognitive skills show enhanced learning performance. This helps explain why, when high-achievers offer more support to low-achievers, the latter are more likely to significantly improve their PS solution performance; the presence and support of high-achievers play a significant role in improving low-achievers' problem-solving. Our results also indicated that communication directly influences students' proposal of PS solutions, which explains why low-achievers significantly improved their PS solution scores. This highlights the unique contribution of our simulation-based CPS model in enhancing low-achievers' online PS solution performance through communication and collaboration.

Regarding attitudes toward collaboration, low-achievers reported significantly more positive collaboration attitudes, including on the teamwork-value subscale, after learning than high-achievers did. A striking pattern in the scatter plots and marginal distributions showed that most low-achievers scored much higher on PS solution performance after peer communication when the effects of collaboration attitudes were controlled, whereas high-achievers' PS solution scores changed little; the teamwork-value subscale followed a similar pattern. Earlier studies reported that students who experienced online collaborative learning learned more than they would have individually (Hernández-Sellés et al., 2019; Ku et al., 2013), and the OECD reported that disadvantaged students in most countries and economies value teamwork more than advantaged students do (OECD, 2013), similar to our case. These studies lead us to conclude that the theory-laden simulation-based CPS model effectively enhances low-achievers' collaboration attitudes and teamwork value, which contributes to their PS solution performance.

This study has shown that an online simulation-based CPS model featuring communication, PS solutions, simulation implementation, and evidence-based explanations effectively enhances students' problem-solving performance. Some implications and practical applications follow.

Firstly, future applications of CPS in classroom or online learning should include these four components. They are crucial not only for giving students the opportunity to communicate and collaborate but also for enhancing their generation of problem-solving solutions, which in turn shapes their implementation of PS solutions and evidence-based explanations and ultimately leads to greater problem-solving success.

Secondly, future applications of CPS in classroom or online learning should group students heterogeneously to minimize the gaps between high- and low-achievers. Both groups showed statistically significant improvements in electrochemistry problem-solving using this simulation-based CPS, and after peer communication and collaboration in which high-achievers offered more support, the low-achievers improved to the same PS solution scores as the high-achievers and developed more positive attitudes toward collaboration and teamwork. It is therefore important to include members of varying cognitive abilities and achievement levels when forming CPS groups; this practice can reduce disparities among group members and improve low-achievers' learning performance.

Thirdly, students should be given opportunities to visualize microscopic-level phenomena through simulation or animation when solving science problems, because many scientific concepts are inherent in micro-level phenomena. Visual tools such as images, animations, and simulations are vital in this process; simulations in particular provide a microscopic view of phenomena and allow users to actively manipulate variables and interact with them. Ultimately, we hope that our study provides insight into the future of simulation-based CPS in all aspects of science learning and problem-solving.

Availability of data and materials

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Akpınar, E. (2014). The use of interactive computer animations based on POE as a presentation tool in primary science teaching. Journal of Science Education and Technology, 23 (4), 527–537.

Akram, M., Surif, J., & Ali, M. (2014). Conceptual difficulties of secondary school students in electrochemistry. Asian Social Science, 10 , 276–281.

Al-Zoubi, S. M., & Younes, M. B. (2015). Low academic achievement: Causes and results. Theory and Practice in Language Studies, 5 , 2262–2268.

Andrews-Todd, J., & Forsyth, C. M. (2020). Exploring social and cognitive dimensions of collaborative problem solving in an open online simulation-based task. Computers in Human Behavior, 104 , 105759.

Barclay, D., Thompson, R., & Higgins, C. (1995). The partial least squares (PLS) approach to causal modeling: Personal computer use as an illustration. Technology Studies, 2 , 285–309.

Ben-David, A., & Zohar, A. (2009). Contribution of meta-strategic knowledge to scientific inquiry learning. International Journal of Science Education, 31 (12), 1657–1682.

Bender, T. (2012). Discussion-based online teaching to enhance student learning: Theory, practice and assessment . Stylus Publishing, LLC.

Binkley, M., Erstad, O., Herman, J., Raizen, S., Ripley, M., Miller-Ricci, M., & Rumble, M. (2012). Defining Twenty-First Century Skills. In P. Griffin, B. McGaw, & E. Care (Eds.), Assessment and Teaching of 21st Century Skills (pp. 17–66). Springer.

Bransford, J. D., & Schwartz, D. L. (1999). Rethinking transfer: A simple proposal with multiple implications. Review of Research in Education, 24 , 61–100.

Ceberio, M., Almudí, J. M., & Franco, Á. (2016). Design and application of interactive simulations in problem-solving in University-Level Physics Education. Journal of Science Education and Technology, 25 (4), 590–609.

Cheng, S.-C., She, H.-C., & Huang, L.-Y. (2017). The impact of problem-solving instruction on middle school students’ physical science learning: Interplays of knowledge, reasoning, and problem solving. Eurasia Journal of Mathematics, Science and Technology Education , 14 (3), 731–743.  https://doi.org/10.12973/ejmste/80902

Chou, R.-J., Liang, C.-P., Huang, L.-y., & She, H.-C. (2022). The impacts of online skeuomorphic physics inquiry–based learning with and without simulation on 8th graders’ scientific inquiry performance. Journal of Science Education and Technology , 31 , 357–371. https://doi.org/10.1007/s10956-022-09960-5

Chua, B. L., Tan, O. S., & Liu, W. C. (2016). Journey into the problem-solving process: Cognitive functions in a PBL environment. Innovations in Education and Teaching International, 53 (2), 191–202.

Cook, M., Wiebe, E. N., & Carter, G. (2008). The influence of prior knowledge on viewing and interpreting graphics with macroscopic and molecular representations. Science Education, 92 (5), 848–867.

Darling-Hammond, L., & McLaughlin, M. W. (2011). Policies that support professional development in an Era of Reform. Phi Delta Kappan, 92 (6), 81–92.

Driver, R., Newton, P., & Osborne, J. (2000). Establishing the norms of scientific argumentation in classrooms. Science Education, 84 (3), 287–312.

Duran, M. (2014). A study on 7th Grade Students’ Inquiry and Communication Competencies. Procedia - Social and Behavioral Sciences, 116 , 4511–4516.

Erozkan, A. (2013). The effect of communication skills and interpersonal problem solving skills on social self-efficacy. Kuram Ve Uygulamada Egitim Bilimleri, 13 , 739–745.

Faranda, W. T., & Clarke, I. (2004). Student observations of outstanding teaching: Implications for marketing educators. Journal of Marketing Education, 26 (3), 271–281.

Fawcett, L. M., & Garton, A. F. (2005). The effect of peer collaboration on children’s problem-solving ability. British Journal of Educational Psychology, 75 (2), 157–169.

Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48 (1), 56–66.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18 (1), 39–50.

García-Valcárcel, A., Basilotta Gómez-Pablos, V., & López García, C. (2014). ICT in collaborative learning in the classrooms of primary and secondary education. Comunicar, 21 , 65–74.

Garrison, D. (1991). Critical thinking and adult education: A conceptual model for developing critical thinking in adult learners. International Journal of Lifelong Education, 10 , 287–303.

Griffin, P., & Care, E. (2014). Assessment and teaching of 21st century skills: Methods and approach . Springer.

Grimberg, B. I., & Hand, B. M. (2009). Cognitive pathways: Analysis of students’ written texts for science understanding. International Journal of Science Education, 31 , 503–521.

Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2022). A primer on partial least squares structural equation modeling (PLS-SEM) (3rd ed.). Sage publications.

Harskamp, E., & Ding, N. (2007). Structured collaboration versus individual learning in solving physics problems. International Journal of Science Education, 28 (14), 1669–1688.

Henseler, J., & Chin, W. W. (2010). A comparison of approaches for the analysis of interaction effects between latent variables using partial least squares path modeling. Structural Equation Modeling, 17 , 82–109.

Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43 (1), 115–135.

Henseler, J., Ringle, C. M., & Sinkovics, R. R. (2009). The use of partial least squares path modeling in international marketing. In R. R. Sinkovics & P. N. Ghauri (Eds.), New Challenges to International Marketing (Vol. 20, pp. 277–319). Emerald Group Publishing Limited.

Hernández-Sellés, N., Muñoz-Carril, P. C., & González-Sanmamed, M. (2019). Computer-supported collaborative learning: An analysis of the relationship between interaction, emotional support and online collaborative tools. Computers & Education, 138, 1–12. https://doi.org/10.1016/j.compedu.2019.04.012

Howe, C., & Tolmie, A. (2003). Group work in primary school science: Discussion, consensus and guidance from experts. International Journal of Educational Research, 39 (1), 51–72.

Johnson, D. W., & Johnson, R. T. (2009). An educational psychology success story: Social interdependence theory and cooperative learning. Educational Researcher, 38 (5), 365–379.

Johnstone, A. H. (1993). The development of chemistry teaching: A changing response to changing demand. Journal of Chemical Education, 70 (9), 701.

Ku, H.-Y., Tseng, H., & Akarasriworn, C. (2013). Collaboration factors, teamwork satisfaction, and student attitudes toward online collaborative learning. Computers in Human Behavior, 29 , 922–929.

Lin, K.-Y., Yu, K.-C., Hsiao, H. S., Chang, Y.-S., & Chien, Y.-H. (2018). Effects of web-based versus classroom-based STEM learning environments on the development of collaborative problem-solving skills in junior high school students. International Journal of Technology and Design Education, 30 (1), 21–34.

Lu, H.-K., & Lin, P.-C. (2017). A study of the impact of collaborative problem-solving strategies on students’ performance of simulation-based learning — A case of network basic concepts course. International Journal of Information and Education Technology, 7 (5), 361–366.

Malik, A., Minan Chusni, M., & Yanti. (2019). Enhancing student’s problem-solving ability through Collaborative Problem Solving (CPS) on simple harmonic motion concept. Journal of Physics: Conference Series, 1175 , 012179.

Newell, A., & Simon, H. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Organisation for Economic Co-operation and Development (OECD) (2013). PISA 2015 collaborative problem solving frameworks. Paris, France: PISA, OECD Publishing. Retrieved from http://www.oecd.org/pisa/pisaproducts/pisa2015draftframeworks.htm

Rahayu, J., Solihatin, E., & Rusmono, R. (2022). The development of online module to improve chemistry learning outcomes in high schools. International Journal of Education, Information Technology, and Others, 5 (3), 31–46.

Ringle, C. M., Da Silva, D., & Bido, D. (2015). Structural equation modeling with the SmartPLS. Brazilian Journal of Marketing, 13(2).

Rutten, N., van Joolingen, W. R., & van der Veen, J. T. (2012). The learning effects of computer simulations in science education. Computers & Education, 58 (1), 136–153.

Saab, N., van Joolingen, W., & Van Hout-Wolters, B. (2012). Support of the collaborative inquiry learning process: Influence of support on task and team regulation. Metacognition and Learning, 7 , 7–23.

Saxe, R., Guberman, S. R., & Gearheart, B. (2002). Peer interaction and the development of mathematical understandings: A new framework for research and educational practice . In H. Daniels (Ed.), Charting the Agenda (pp. 137–174). Routledge.

Sinensis, A. R., Firman, H., Hamidah, I., & Muslim, M. (2019). Reconstruction of collaborative problem solving based learning in thermodynamics with the aid of interactive simulation and derivative games. Journal of Physics: Conference Series, 1157 , 032042.

Supasorn, S., Khattiyavong, P., Jarujamrus, P., & Promarak, V. (2014). Small-scale inquiry-based experiments to enhance high school students' conceptual understanding of electrochemistry. International Proceedings of Economics Development and Research, 81, 85–91.

Unal, E., & Cakir, H. (2021). The effect of technology-supported collaborative problem solving method on students’ achievement and engagement. Education and Information Technologies, 26 (4), 4127–4150.

Veenman, M. V. J., & Spaans, M. A. (2005). Relation between intellectual and metacognitive skills: Age and task differences. Learning and Individual Differences, 15 (2), 159–176.

Yusuf, F. A., & Adeoye, E. A. (2012). Developing critical thinking and communication skills in students: Implications for practice in education. African Research Review, 6(1), 311–324.

Acknowledgements

We acknowledge the financial support we received from the Ministry of Science and Technology (MOST), grant number MOST 107-2511-H-009-003-MY3.

Open Access funding enabled and organized by National Yang Ming Chiao Tung University.

Author information

Authors and Affiliations

Institute of Education, National Yang Ming Chiao Tung University, 1001, University Road, Hsinchu City, 30010, Taiwan, Republic of China

Meng-Jun Chen, Hsiao-Ching She & Pei-Yi Tsai

Corresponding author

Correspondence to Hsiao-Ching She.

Ethics declarations

Conflict of interest.

The authors have no conflict of interest to disclose.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Chen, MJ., She, HC. & Tsai, PY. The effects of online simulation-based collaborative problem-solving on students’ problem-solving, communication and collaboration attitudes. Educ Inf Technol (2024). https://doi.org/10.1007/s10639-024-12609-y

Received : 30 August 2023

Accepted : 28 February 2024

Published : 18 March 2024

DOI : https://doi.org/10.1007/s10639-024-12609-y

Keywords

  • Computer simulation
  • Collaborative problem-solving
  • Peer communication
  • Collaboration attitudes
  • High- vs. low-achievers

Collective Choice, Collaboration, and Communication

Affiliations

  • Department of Psychology, Miami University, Oxford, Ohio 45056, USA
  • Department of Applied Health Promotion, Bundeswehr Institute for Preventive Medicine, 56070 Koblenz, Germany
  • PMID: 31518524
  • DOI: 10.1146/annurev-psych-010418-103211

This article reviews recent empirical research on collective choice and collaborative problem solving. Much of the collective choice research focuses on hidden profiles. A hidden profile exists when group members individually have information favoring suboptimal choices but the group collectively has information favoring an optimal choice. Groups are notoriously bad at discovering optimal choices when information is distributed to create a hidden profile. Reviewed work identifies informational structures, individual processing biases, and social motivations that inhibit and facilitate the discovery of hidden profiles. The review of collaborative problem-solving research is framed by Larson's concept of synergy. Synergy refers to performance gains that are attributable to collaboration. Recent research has addressed factors that result in groups performing as well as their best member (weak synergy) and better than their best member (strong synergy). Communication dynamics underlying both collective choice and collaborative problem solving are discussed.

Keywords: collaboration; collective choice; group communication; group performance; group process; hidden profiles.

