Leading in Context

Unleash the Positive Power of Ethical Leadership

How Is Critical Thinking Different From Ethical Thinking?


By Linda Fisher Thornton

Ethical thinking and critical thinking are both important, and it helps to understand how to use them together when making decisions.

  • Critical thinking helps us narrow our choices.  Ethical thinking includes values as a filter to guide us to a choice that is ethical.
  • Using critical thinking, we may discover an opportunity to exploit a situation for personal gain.  It’s ethical thinking that helps us realize that taking advantage of that opportunity would be unethical.

Develop an Ethical Mindset, Not Just Critical Thinking

Critical thinking can be applied without considering how others will be impacted. This kind of critical thinking is self-interested and myopic.

“Critical thinking varies according to the motivation underlying it. When grounded in selfish motives, it is often manifested in the skillful manipulation of ideas in service of one’s own, or one’s groups’, vested interest.” (Defining Critical Thinking, The Foundation for Critical Thinking)

Critical thinking informed by ethical values is a powerful leadership tool. Critical thinking that sidesteps ethical values is sometimes used as a weapon. 

When we develop leaders, the burden is on us to be sure the mindsets we teach align with ethical thinking. Otherwise we may be helping people use critical thinking to stray beyond the boundaries of ethical business. 


© 2019-2024 Leading in Context LLC


Humanities LibreTexts

10.1: Ethics vs. Morality


There’s no standard distinction between the ‘ethical’ and the ‘moral.’ Which are ethical questions? Which are moral questions? Who knows?

I like to think about them the following way:

The ethical (from Greek ethos) is a really broad category encompassing questions about everything we do. The ethical is about your relationship with yourself (and, if you’re a theist, about your relationship with God).

The moral (from Latin mores, or customs) is a narrower category encompassing only questions about our relations with one another. Moral questions concern things like the morality of abortion, murder, theft, and lying. They’re about how we interact with other agents/actors.

A subset of moral questions is political: how should we govern our society? What taxation schemes are fair/just/moral? What is a moral policing strategy? And so on.

On this conception, the ethical encompasses the moral and political because ethical questions are questions about the good life and what we ought to do, whereas moral questions are about what we ought to do to and with one another.

It’s important to note, though, that this isn’t an authoritative way to draw the distinction. There are other ways to do so. In this class, I tend to just use ‘moral’ and ‘ethical’ interchangeably.

Stanford Encyclopedia of Philosophy

Moral Reasoning

While moral reasoning can be undertaken on another’s behalf, it is paradigmatically an agent’s first-personal (individual or collective) practical reasoning about what, morally, they ought to do. Philosophical examination of moral reasoning faces both distinctive puzzles – about how we recognize moral considerations and cope with conflicts among them and about how they move us to act – and distinctive opportunities for gleaning insight about what we ought to do from how we reason about what we ought to do.

Part I of this article characterizes moral reasoning more fully, situates it in relation both to first-order accounts of what morality requires of us and to philosophical accounts of the metaphysics of morality, and explains the interest of the topic. Part II then takes up a series of philosophical questions about moral reasoning, so understood and so situated.

Contents

  • 1. The Philosophical Importance of Moral Reasoning
  • 1.1 Defining “Moral Reasoning”
  • 1.2 Empirical Challenges to Moral Reasoning
  • 1.3 Situating Moral Reasoning
  • 1.4 Gaining Moral Insight from Studying Moral Reasoning
  • 1.5 How Distinct Is Moral Reasoning from Practical Reasoning in General?
  • 2. General Philosophical Questions about Moral Reasoning
  • 2.1 Moral Uptake
  • 2.2 Moral Principles
  • 2.3 Sorting Out Which Considerations Are Most Relevant
  • 2.4 Moral Reasoning and Moral Psychology
  • 2.5 Modeling Conflicting Moral Considerations
  • 2.6 Moral Learning and the Revision of Moral Views
  • 2.7 How Can We Reason, Morally, with One Another?

1. The Philosophical Importance of Moral Reasoning

1.1 Defining “Moral Reasoning”

This article takes up moral reasoning as a species of practical reasoning – that is, as a type of reasoning directed towards deciding what to do and, when successful, issuing in an intention (see entry on practical reason ). Of course, we also reason theoretically about what morality requires of us; but the nature of purely theoretical reasoning about ethics is adequately addressed in the various articles on ethics . It is also true that, on some understandings, moral reasoning directed towards deciding what to do involves forming judgments about what one ought, morally, to do. On these understandings, asking what one ought (morally) to do can be a practical question, a certain way of asking about what to do. (See section 1.5 on the question of whether this is a distinctive practical question.) In order to do justice to the full range of philosophical views about moral reasoning, we will need to have a capacious understanding of what counts as a moral question. For instance, since a prominent position about moral reasoning is that the relevant considerations are not codifiable, we would beg a central question if we here defined “ morality ” as involving codifiable principles or rules. For present purposes, we may understand issues about what is right or wrong, or virtuous or vicious, as raising moral questions.

Even when moral questions explicitly arise in daily life, just as when we are faced with child-rearing, agricultural, and business questions, sometimes we act impulsively or instinctively rather than pausing to reason, not just about what to do, but about what we ought to do. Jean-Paul Sartre described a case of one of his students who came to him in occupied Paris during World War II, asking advice about whether to stay by his mother, who otherwise would have been left alone, or rather to go join the forces of the Free French, then massing in England (Sartre 1975). In the capacious sense just described, this is probably a moral question; and the young man paused long enough to ask Sartre’s advice. Does that mean that this young man was reasoning about his practical question? Not necessarily. Indeed, Sartre used the case to expound his skepticism about the possibility of addressing such a practical question by reasoning. But what is reasoning?

Reasoning, of the sort discussed here, is active or explicit thinking, in which the reasoner, responsibly guided by her assessments of her reasons (Kolodny 2005) and of any applicable requirements of rationality (Broome 2009, 2013), attempts to reach a well-supported answer to a well-defined question (Hieronymi 2013). For Sartre’s student, at least such a question had arisen. Indeed, the question was relatively definite, implying that the student had already engaged in some reflection about the various alternatives available to him – a process that has well been described as an important phase of practical reasoning, one that aptly precedes the effort to make up one’s mind (Harman 1986, 2).

Characterizing reasoning as responsibly conducted thinking of course does not suffice to analyze the notion. For one thing, it fails to address the fraught question of reasoning’s relation to inference (Harman 1986, Broome 2009). In addition, it does not settle whether formulating an intention about what to do suffices to conclude practical reasoning or whether such intentions cannot be adequately worked out except by starting to act. Perhaps one cannot adequately reason about how to repair a stone wall or how to make an omelet with the available ingredients without actually starting to repair or to cook (cf. Fernandez 2016). Still, it will do for present purposes. It suffices to make clear that the idea of reasoning involves norms of thinking. These norms of aptness or correctness in practical thinking surely do not require us to think along a single prescribed pathway, but rather permit only certain pathways and not others (Broome 2013, 219). Even so, we doubtless often fail to live up to them.

Our thinking, including our moral thinking, is often not explicit. We could say that we also reason tacitly, thinking in much the same way as during explicit reasoning, but without any explicit attempt to reach well-supported answers. In some situations, even moral ones, we might be ill-advised to attempt to answer our practical questions by explicit reasoning. In others, it might even be a mistake to reason tacitly – because, say, we face a pressing emergency. “Sometimes we should not deliberate about what to do, and just drive” (Arpaly and Schroeder 2014, 50). Yet even if we are not called upon to think through our options in all situations, and even if sometimes it would be positively better if we did not, still, if we are called upon to do so, then we should conduct our thinking responsibly: we should reason.

Recent work in empirical ethics has indicated that even when we are called upon to reason morally, we often do so badly. When asked to give reasons for our moral intuitions, we are often “dumbfounded,” finding nothing to say in their defense (Haidt 2001). Our thinking about hypothetical moral scenarios has been shown to be highly sensitive to arbitrary variations, such as in the order of presentation. Even professional philosophers have been found to be prone to such lapses of clear thinking (e.g., Schwitzgebel & Cushman 2012). Some of our dumbfounding and confusion has been laid at the feet of our having both a fast, more emotional way of processing moral stimuli and a slow, more cognitive way (e.g., Greene 2014). An alternative explanation of moral dumbfounding looks to social norms of moral reasoning (Sneddon 2007). And a more optimistic reaction to our confusion sees our established patterns of “moral consistency reasoning” as being well-suited to cope with the clashing input generated by our fast and slow systems (Campbell & Kumar 2012) or as constituting “a flexible learning system that generates and updates a multidimensional evaluative landscape to guide decision and action” (Railton, 2014, 813).

Eventually, such empirical work on our moral reasoning may yield revisions in our norms of moral reasoning. This has not yet happened. This article is principally concerned with philosophical issues posed by our current norms of moral reasoning. For example, given those norms and assuming that they are more or less followed, how do moral considerations enter into moral reasoning, get sorted out by it when they clash, and lead to action? And what do those norms indicate about what we ought to do?

The topic of moral reasoning lies in between two other commonly addressed topics in moral philosophy. On the one side, there is the first-order question of what moral truths there are, if any. For instance, are there any true general principles of morality, and if so, what are they? At this level utilitarianism competes with Kantianism, for instance, and both compete with anti-theorists of various stripes, who recognize only particular truths about morality (Clarke & Simpson 1989). On the other side, a quite different sort of question arises from seeking to give a metaphysical grounding for moral truths or for the claim that there are none. Supposing there are some moral truths, what makes them true? What account can be given of the truth-conditions of moral statements? Here arise familiar questions of moral skepticism and moral relativism ; here, the idea of “a reason” is wielded by many hoping to defend a non-skeptical moral metaphysics (e.g., Smith 2013). The topic of moral reasoning lies in between these two other familiar topics in the following simple sense: moral reasoners operate with what they take to be morally true but, instead of asking what makes their moral beliefs true, they proceed responsibly to attempt to figure out what to do in light of those considerations. The philosophical study of moral reasoning concerns itself with the nature of these attempts.

These three topics clearly interrelate. Conceivably, the relations between them would be so tight as to rule out any independent interest in the topic of moral reasoning. For instance, if all that could usefully be said about moral reasoning were that it is a matter of attending to the moral facts, then all interest would devolve upon the question of what those facts are – with some residual focus on the idea of moral attention (McNaughton 1988). Alternatively, it might be thought that moral reasoning is simply a matter of applying the correct moral theory via ordinary modes of deductive and empirical reasoning. Again, if that were true, one’s sufficient goal would be to find that theory and get the non-moral facts right. Neither of these reductive extremes seems plausible, however. Take the potential reduction to getting the facts right, first.

Contemporary advocates of the importance of correctly perceiving the morally relevant facts tend to focus on facts that we can perceive using our ordinary sense faculties and our ordinary capacities of recognition, such as that this person has an infection or that this person needs my medical help . On such a footing, it is possible to launch powerful arguments against the claim that moral principles undergird every moral truth (Dancy 1993) and for the claim that we can sometimes perfectly well decide what to do by acting on the reasons we perceive instinctively – or as we have been trained – without engaging in any moral reasoning. Yet this is not a sound footing for arguing that moral reasoning, beyond simply attending to the moral facts, is always unnecessary. On the contrary, we often find ourselves facing novel perplexities and moral conflicts in which our moral perception is an inadequate guide. In addressing the moral questions surrounding whether society ought to enforce surrogate-motherhood contracts, for instance, the scientific and technological novelties involved make our moral perceptions unreliable and shaky guides. When a medical researcher who has noted an individual’s illness also notes the fact that diverting resources to caring, clinically, for this individual would inhibit the progress of my research, thus harming the long-term health chances of future sufferers of this illness , he or she comes face to face with conflicting moral considerations. At this juncture, it is far less plausible or satisfying simply to say that, employing one’s ordinary sensory and recognitional capacities, one sees what is to be done, both things considered. To posit a special faculty of moral intuition that generates such overall judgments in the face of conflicting considerations is to wheel in a deus ex machina . It cuts inquiry short in a way that serves the purposes of fiction better than it serves the purposes of understanding. It is plausible instead to suppose that moral reasoning comes in at this point (Campbell & Kumar 2012).

For present purposes, it is worth noting, David Hume and the moral sense theorists do not count as short-circuiting our understanding of moral reasoning in this way. It is true that Hume presents himself, especially in the Treatise of Human Nature , as a disbeliever in any specifically practical or moral reasoning. In doing so, however, he employs an exceedingly narrow definition of “reasoning” (Hume 2000, Book I, Part iii, sect. ii). For present purposes, by contrast, we are using a broader working gloss of “reasoning,” one not controlled by an ambition to parse out the relative contributions of (the faculty of) reason and of the passions. And about moral reasoning in this broader sense, as responsible thinking about what one ought to do, Hume has many interesting things to say, starting with the thought that moral reasoning must involve a double correction of perspective (see section 2.4 ) adequately to account for the claims of other people and of the farther future, a double correction that is accomplished with the aid of the so-called “calm passions.”

If we turn from the possibility that perceiving the facts aright will displace moral reasoning to the possibility that applying the correct moral theory will displace – or exhaust – moral reasoning, there are again reasons to be skeptical. One reason is that moral theories do not arise in a vacuum; instead, they develop against a broad backdrop of moral convictions. Insofar as the first potentially reductive strand, emphasizing the importance of perceiving moral facts, has force – and it does have some – it also tends to show that moral theories need to gain support by systematizing or accounting for a wide range of moral facts (Sidgwick 1981). As in most other arenas in which theoretical explanation is called for, the degree of explanatory success will remain partial and open to improvement via revisions in the theory (see section 2.6 ). Unlike the natural sciences, however, moral theory is an endeavor that, as John Rawls once put it, is “Socratic” in that it is a subject pertaining to actions “shaped by self-examination” (Rawls 1971, 48f.). If this observation is correct, it suggests that the moral questions we set out to answer arise from our reflections about what matters. By the same token – and this is the present point – a moral theory is subject to being overturned because it generates concrete implications that do not sit well with us on due reflection. This being so, and granting the great complexity of the moral terrain, it seems highly unlikely that we will ever generate a moral theory on the basis of which we can serenely and confidently proceed in a deductive way to generate answers to what we ought to do in all concrete cases. This conclusion is reinforced by a second consideration, namely that insofar as a moral theory is faithful to the complexity of the moral phenomena, it will contain within it many possibilities for conflicts among its own elements. Even if it does deploy some priority rules, these are unlikely to be able to cover all contingencies. Hence, some moral reasoning that goes beyond the deductive application of the correct theory is bound to be needed.

In short, a sound understanding of moral reasoning will not take the form of reducing it to one of the other two levels of moral philosophy identified above. Neither the demand to attend to the moral facts nor the directive to apply the correct moral theory exhausts or sufficiently describes moral reasoning.

In addition to posing philosophical problems in its own right, moral reasoning is of interest on account of its implications for moral facts and moral theories. Accordingly, attending to moral reasoning will often be useful to those whose real interest is in determining the right answer to some concrete moral problem or in arguing for or against some moral theory. The characteristic ways we attempt to work through a given sort of moral quandary can be just as revealing about our considered approaches to these matters as are any bottom-line judgments we may characteristically come to. Further, we may have firm, reflective convictions about how a given class of problems is best tackled, deliberatively, even when we remain in doubt about what should be done. In such cases, attending to the modes of moral reasoning that we characteristically accept can usefully expand the set of moral information from which we start, suggesting ways to structure the competing considerations.

Facts about the nature of moral inference and moral reasoning may have important direct implications for moral theory. For instance, it might be taken to be a condition of adequacy of any moral theory that it play a practically useful role in our efforts at self-understanding and deliberation. It should be deliberation-guiding (Richardson 2018, §1.2). If this condition is accepted, then any moral theory that would require agents to engage in abstruse or difficult reasoning may be inadequate for that reason, as would be any theory that assumes that ordinary individuals are generally unable to reason in the ways that the theory calls for. J.S. Mill (1979) conceded that we are generally unable to do the calculations called for by utilitarianism, as he understood it, and argued that we should be consoled by the fact that, over the course of history, experience has generated secondary principles that guide us well enough. Rather more dramatically, R. M. Hare defended utilitarianism as well capturing the reasoning of ideally informed and rational “archangels” (1981). Taking seriously a deliberation-guidance desideratum for moral theory would favor, instead, theories that more directly inform efforts at moral reasoning by us “proletarians,” to use Hare’s contrasting term.

Accordingly, the close relations between moral reasoning, the moral facts, and moral theory do not eliminate moral reasoning as a topic of interest. To the contrary, because moral reasoning has important implications about moral facts and moral theories, these close relations lend additional interest to the topic of moral reasoning.

The final threshold question is whether moral reasoning is truly distinct from practical reasoning more generally understood. (The question of whether moral reasoning, even if practical, is structurally distinct from theoretical reasoning that simply proceeds from a proper recognition of the moral facts has already been implicitly addressed and answered, for the purposes of the present discussion, in the affirmative.) In addressing this final question, it is difficult to overlook the way different moral theories project quite different models of moral reasoning – again a link that might be pursued by the moral philosopher seeking leverage in either direction. For instance, Aristotle’s views might be as follows: a quite general account can be given of practical reasoning, which includes selecting means to ends and determining the constituents of a desired activity. The reasoning of a vicious person differs from that of a virtuous person not at all in its structure, but only in its content, for the virtuous person pursues true goods, whereas the vicious person simply gets side-tracked by apparent ones. To be sure, the virtuous person may be able to achieve a greater integration of his or her ends via practical reasoning (because of the way the various virtues cohere), but this is a difference in the result of practical reasoning and not in its structure. At an opposite extreme, Kant’s categorical imperative has been taken to generate an approach to practical reasoning (via a “typic of practical judgment”) that is distinctive from other practical reasoning both in the range of considerations it addresses and its structure (Nell 1975). Whereas prudential practical reasoning, on Kant’s view, aims to maximize one’s happiness, moral reasoning addresses the potential universalizability of the maxims – roughly, the intentions – on which one acts. Views intermediate between Aristotle’s and Kant’s in this respect include Hare’s utilitarian view and Aquinas’ natural-law view. On Hare’s view, just as an ideal prudential agent applies maximizing rationality to his or her own preferences, an ideal moral agent’s reasoning applies maximizing rationality to the set of everyone’s preferences that its archangelic capacity for sympathy has enabled it to internalize (Hare 1981). Thomistic, natural-law views share the Aristotelian view about the general unity of practical reasoning in pursuit of the good, rightly or wrongly conceived, but add that practical reason, in addition to demanding that we pursue the fundamental human goods, also, and distinctly, demands that we not attack these goods. In this way, natural-law views incorporate some distinctively moral structuring – such as the distinctions between doing and allowing and the so-called doctrine of double effect’s distinction between intending as a means and accepting as a by-product – within a unified account of practical reasoning (see entry on the natural law tradition in ethics). In light of this diversity of views about the relation between moral reasoning and practical or prudential reasoning, a general account of moral reasoning that does not want to presume the correctness of a definite moral theory will do well to remain agnostic on the question of how moral reasoning relates to non-moral practical reasoning.

2. General Philosophical Questions about Moral Reasoning

To be sure, most great philosophers who have addressed the nature of moral reasoning were far from agnostic about the content of the correct moral theory, and developed their reflections about moral reasoning in support of or in derivation from their moral theory. Nonetheless, contemporary discussions that are somewhat agnostic about the content of moral theory have arisen around important and controversial aspects of moral reasoning. We may group these around the following seven questions:

  • How do relevant considerations get taken up in moral reasoning?
  • Is it essential to moral reasoning for the considerations it takes up to be crystallized into, or ranged under, principles?
  • How do we sort out which moral considerations are most relevant?
  • In what ways do motivational elements shape moral reasoning?
  • What is the best way to model the kinds of conflicts among considerations that arise in moral reasoning?
  • Does moral reasoning include learning from experience and changing one’s mind?
  • How can we reason, morally, with one another?

The remainder of this article takes up these seven questions in turn.

One advantage to defining “reasoning” capaciously, as here, is that it helps one recognize that the processes whereby we come to be concretely aware of moral issues are integral to moral reasoning as it might more narrowly be understood. Recognizing moral issues when they arise requires a highly trained set of capacities and a broad range of emotional attunements. Philosophers of the moral sense school of the 17th and 18th centuries stressed innate emotional propensities, such as sympathy with other humans. Classically influenced virtue theorists, by contrast, give more importance to the training of perception and the emotional growth that must accompany it. Among contemporary philosophers working in empirical ethics there is a similar divide, with some arguing that we process situations using an innate moral grammar (Mikhail 2011) and some emphasizing the role of emotions in that processing (Haidt 2001, Prinz 2007, Greene 2014). For the moral reasoner, a crucial task for our capacities of moral recognition is to mark out certain features of a situation as being morally salient. Sartre’s student, for instance, focused on the competing claims of his mother and the Free French, giving them each an importance to his situation that he did not give to eating French cheese or wearing a uniform. To say that certain features are marked out as morally salient is not to imply that the features thus singled out answer to the terms of some general principle or other: we will come to the question of particularism, below. Rather, it is simply to say that recognitional attention must have a selective focus.

What will be counted as a moral issue or difficulty, in the sense requiring moral agents’ recognition, will again vary by moral theory. Not all moral theories would count filial loyalty and patriotism as moral duties. It is only at great cost, however, that any moral theory could claim to do without a layer of moral thinking involving situation-recognition. A calculative sort of utilitarianism, perhaps, might be imagined according to which there is no need to spot a moral issue or difficulty, as every choice node in life presents the agent with the same, utility-maximizing task. Perhaps Jeremy Bentham held a utilitarianism of this sort. For the more plausible utilitarianisms mentioned above, however, such as Mill’s and Hare’s, agents need not always calculate afresh, but must instead be alive to the possibility that because the ordinary “landmarks and direction posts” lead one astray in the situation at hand, they must make recourse to a more direct and critical mode of moral reasoning. Recognizing whether one is in one of those situations thus becomes the principal recognitional task for the utilitarian agent. (Whether this task can be suitably confined, of course, has long been one of the crucial questions about whether such indirect forms of utilitarianism, attractive on other grounds, can prevent themselves from collapsing into a more Benthamite, direct form: cf. Brandt 1979.)

Note that, as we have been describing moral uptake, we have not implied that what is perceived is ever a moral fact. Rather, it might be that what is perceived is some ordinary, descriptive feature of a situation that is, for whatever reason, morally relevant. An account of moral uptake will interestingly impinge upon the metaphysics of moral facts, however, if it holds that moral facts can be perceived. Importantly intermediate, in this respect, is the set of judgments involving so-called “thick” evaluative concepts – for example, that someone is callous, boorish, just, or brave (see the entry on thick ethical concepts ). These do not invoke the supposedly “thinner” terms of overall moral assessment, “good,” or “right.” Yet they are not innocent of normative content, either. Plainly, we do recognize callousness when we see clear cases of it. Plainly, too – whatever the metaphysical implications of the last fact – our ability to describe our situations in these thick normative terms is crucial to our ability to reason morally.

It is debated how closely our abilities of moral discernment are tied to our moral motivations. For Aristotle and many of his ancient successors, the two are closely linked, in that someone not brought up into virtuous motivations will not see things correctly. For instance, cowards will overestimate dangers, the rash will underestimate them, and the virtuous will perceive them correctly ( Eudemian Ethics 1229b23–27). By the Stoics, too, having the right motivations was regarded as intimately tied to perceiving the world correctly; but whereas Aristotle saw the emotions as allies to enlist in support of sound moral discernment, the Stoics saw them as inimical to clear perception of the truth (cf. Nussbaum 2001).

That one discerns features and qualities of some situation that are relevant to sizing it up morally does not yet imply that one explicitly or even implicitly employs any general claims in describing it. Perhaps all that one perceives are particularly embedded features and qualities, without saliently perceiving them as instantiations of any types. Sartre’s student may be focused on his mother and on the particular plights of several of his fellow Frenchmen under Nazi occupation, rather than on any purported requirements of filial duty or patriotism. Having become aware of some moral issue in such relatively particular terms, he might proceed directly to sorting out the conflict between them. Another possibility, however, and one that we frequently seem to exploit, is to formulate the issue in general terms: “An only child should stick by an otherwise isolated parent,” for instance, or “one should help those in dire need if one can do so without significant personal sacrifice.” Such general statements would be examples of “moral principles,” in a broad sense. (We do not here distinguish between principles and rules. Those who do include Dworkin 1978 and Gert 1998.)

We must be careful, here, to distinguish the issue of whether principles commonly play an implicit or explicit role in moral reasoning, including well-conducted moral reasoning, from the issue of whether principles necessarily figure as part of the basis of moral truth. The latter issue is best understood as a metaphysical question about the nature and basis of moral facts. What is currently known as moral particularism is the view that there are no defensible moral principles and that moral reasons, or well-grounded moral facts, can exist independently of any basis in a general principle. A contrary view holds that moral reasons are necessarily general, whether because the sources of their justification are all general or because a moral claim is ill-formed if it contains particularities. But whether principles play a useful role in moral reasoning is certainly a different question from whether principles play a necessary role in accounting for the ultimate truth-conditions of moral statements. Moral particularism, as just defined, denies their latter role. Some moral particularists seem also to believe that moral particularism implies that moral principles cannot soundly play a useful role in reasoning. This claim is disputable, as it seems a contingent matter whether the relevant particular facts arrange themselves in ways susceptible to general summary and whether our cognitive apparatus can cope with them at all without employing general principles. Although the metaphysical controversy about moral particularism lies largely outside our topic, we will revisit it in section 2.5 , in connection with the weighing of conflicting reasons.

With regard to moral reasoning, while there are some self-styled “anti-theorists” who deny that abstract structures of linked generalities are important to moral reasoning (Clarke et al. 1989), it is more common to find philosophers who recognize both some role for particular judgment and some role for moral principles. Thus, neo-Aristotelians like Nussbaum who emphasize the importance of “finely tuned and richly aware” particular discernment also regard that discernment as being guided by a set of generally describable virtues whose general descriptions will come into play in at least some kinds of cases (Nussbaum 1990). “Situation ethicists” of an earlier generation (e.g. Fletcher 1997) emphasized the importance of taking into account a wide range of circumstantial differentiae, but against the background of some general principles whose application the differentiae help sort out. Feminist ethicists influenced by Carol Gilligan’s pathbreaking work on moral development have stressed the moral centrality of the kind of care and discernment that are salient and well-developed by people immersed in particular relationships (Held 1995); but this emphasis is consistent with such general principles as “one ought to be sensitive to the wishes of one’s friends” (see the entry on feminist moral psychology). Again, if we distinguish the question of whether principles are useful in responsibly-conducted moral thinking from the question of whether moral reasons ultimately all derive from general principles, and concentrate our attention solely on the former, we will see that some of the opposition to general moral principles melts away.

It should be noted that we have been using a weak notion of generality, here. It is contrasted only with the kind of strict particularity that comes with indexicals and proper names. General statements or claims – ones that contain no such particular references – are not necessarily universal generalizations, making an assertion about all cases of the mentioned type. Thus, “one should normally help those in dire need” is a general principle, in this weak sense. Possibly, such logically loose principles would be obfuscatory in the context of an attempt to reconstruct the ultimate truth-conditions of moral statements. Such logically loose principles would clearly be useless in any attempt to generate a deductively tight “practical syllogism.” In our day-to-day, non-deductive reasoning, however, such logically loose principles appear to be quite useful. (Recall that we are understanding “reasoning” quite broadly, as responsibly conducted thinking: nothing in this understanding of reasoning suggests any uniquely privileged place for deductive inference: cf. Harman 1986. For more on defeasible or “default” principles, see section 2.5 .)

In this terminology, establishing that general principles are essential to moral reasoning leaves open the further question whether logically tight, or exceptionless, principles are also essential to moral reasoning. Certainly, much of our actual moral reasoning seems to be driven by attempts to recast or reinterpret principles so that they can be taken to be exceptionless. Adherents and inheritors of the natural-law tradition in ethics (e.g. Donagan 1977) are particularly supple defenders of exceptionless moral principles, as they are able to avail themselves not only of a refined tradition of casuistry but also of a wide array of subtle – some would say overly subtle – distinctions, such as those mentioned above between doing and allowing and between intending as a means and accepting as a byproduct.

A related role for a strong form of generality in moral reasoning comes from the Kantian thought that one’s moral reasoning must counter one’s tendency to make exceptions for oneself. Accordingly, Kant holds, as we have noted, that we must ask whether the maxims of our actions can serve as universal laws. As most contemporary readers understand this demand, it requires that we engage in a kind of hypothetical generalization across agents, and ask about the implications of everybody acting that way in those circumstances. The grounds for developing Kant’s thought in this direction have been well explored (e.g., Nell 1975, Korsgaard 1996, Engstrom 2009). The importance and the difficulties of such a hypothetical generalization test in ethics were discussed in the influential works Gibbard 1965 and Goldman 1974.

Whether or not moral considerations need the backing of general principles, we must expect situations of action to present us with multiple moral considerations. In addition, of course, these situations will also present us with a lot of information that is not morally relevant. On any realistic account, a central task of moral reasoning is to sort out relevant considerations from irrelevant ones, as well as to determine which are especially relevant and which only slightly so. That a certain woman is Sartre’s student’s mother seems arguably to be a morally relevant fact; what about the fact (supposing it is one) that she has no other children to take care of her? Addressing the task of sorting what is morally relevant from what is not, some philosophers have offered general accounts of morally relevant features. Others have given accounts of how we sort out which of the relevant features are most relevant, a process of thinking that sometimes goes by the name of “casuistry.”

Before we look at ways of sorting out which features are morally relevant or most morally relevant, it may be useful to note a prior step taken by some casuists, which was to attempt to set out a schema that would capture all of the features of an action or proposed action. The Roman Catholic casuists of the middle ages did so by drawing on Aristotle’s categories. Accordingly, they asked, where, when, why, how, by what means, to whom, or by whom the action in question is to be done or avoided (see Jonsen and Toulmin 1988). The idea was that complete answers to these questions would contain all of the features of the action, of which the morally relevant ones would be a subset. Although metaphysically uninteresting, the idea of attempting to list all of an action’s features in this way represents a distinctive – and extreme – heuristic for moral reasoning.

Turning to the morally relevant features, one of the most developed accounts is Bernard Gert’s. He develops a list of features relevant to whether the violation of a moral rule should be generally allowed. Given the designed function of Gert’s list, it is natural that most of his morally relevant features make reference to the set of moral rules he defended. Accordingly, some of Gert’s distinctions between dimensions of relevant features reflect controversial stances in moral theory. For example, one of the dimensions is whether “the violation [is] done intentionally or only knowingly” (Gert 1998, 234) – a distinction that those who reject the doctrine of double effect would not find relevant.

In deliberating about what we ought, morally, to do, we also often attempt to figure out which considerations are most relevant. To take an issue mentioned above: Are surrogate motherhood contracts more akin to agreements with babysitters (clearly acceptable) or to agreements with prostitutes (not clearly so)? That is, which feature of surrogate motherhood is more relevant: that it involves a contract for child-care services or that it involves payment for the intimate use of the body? Both in such relatively novel cases and in more familiar ones, reasoning by analogy plays a large role in ordinary moral thinking. When this reasoning by analogy starts to become systematic – a social achievement that requires some historical stability and reflectiveness about what are taken to be moral norms – it begins to exploit comparison to cases that are “paradigmatic,” in the sense of being taken as settled. Within such a stable background, a system of casuistry can develop that lends some order to the appeal to analogous cases. To use an analogy: the availability of a widely accepted and systematic set of analogies and the availability of what are taken to be moral norms may stand to one another as chicken does to egg: each may be an indispensable moment in the genesis of the other.

Casuistry, thus understood, is an indispensable aid to moral reasoning. At least, that it is would follow from conjoining two features of the human moral situation mentioned above: the multifariousness of moral considerations that arise in particular cases and the need and possibility for employing moral principles in sound moral reasoning. We require moral judgment, not simply a deductive application of principles or a particularist bottom-line intuition about what we should do. This judgment must be responsible to moral principles yet cannot be straightforwardly derived from them. Accordingly, our moral judgment is greatly aided if it is able to rest on the sort of heuristic support that casuistry offers. Thinking through which of two analogous cases provides a better key to understanding the case at hand is a useful way of organizing our moral reasoning, and one on which we must continue to depend. If we lack the kind of broad consensus on a set of paradigm cases on which the Renaissance Catholic or Talmudic casuists could draw, our casuistic efforts will necessarily be more controversial and tentative than theirs; but we are not wholly without settled cases from which to work. Indeed, as Jonsen and Toulmin suggest at the outset of their thorough explanation and defense of casuistry, the depth of disagreement about moral theories that characterizes a pluralist society may leave us having to rest comparatively more weight on the cases about which we can find agreement than did the classic casuists (Jonsen and Toulmin 1988).

Despite the long history of casuistry, there is little that can usefully be said about how one ought to reason about competing analogies. In the law, where previous cases have precedential importance, more can be said. As Sunstein notes (Sunstein 1996, chap. 3), the law deals with particular cases, which are always “potentially distinguishable” (72); yet the law also imposes “a requirement of practical consistency” (67). This combination of features makes reasoning by analogy particularly influential in the law, for one must decide whether a given case is more like one set of precedents or more like another. Since the law must proceed even within a pluralist society such as ours, Sunstein argues, we see that analogical reasoning can go forward on the basis of “incompletely theorized judgments” or of what Rawls calls an “overlapping consensus” (Rawls 1996). That is, although a robust use of analogous cases depends, as we have noted, on some shared background agreement, this agreement need not extend to all matters or all levels of individuals’ moral thinking. Accordingly, although in a pluralist society we may lack the kind of comprehensive normative agreement that made the high casuistry of Renaissance Christianity possible, the path of the law suggests that normatively forceful, case-based, analogical reasoning can still go on. A modern, competing approach to case-based or precedent-respecting reasoning has been developed by John F. Horty (2016). On Horty’s approach, which builds on the default logic developed in (Horty 2012), the body of precedent systematically shifts the weights of the reasons arising in a new case.

Reasoning by appeal to cases is also a favorite mode of some recent moral philosophers. Since our focus here is not on the methods of moral theory, we do not need to go into any detail in comparing different ways in which philosophers wield cases for and against alternative moral theories. There is, however, an important and broadly applicable point worth making about ordinary reasoning by reference to cases that emerges most clearly from the philosophical use of such reasoning. Philosophers often feel free to imagine cases, often quite unlikely ones, in order to attempt to isolate relevant differences. An infamous example is a pair of cases offered by James Rachels to cast doubt on the moral significance of the distinction between killing and letting die, here slightly redescribed. In both cases, there is at the outset a boy in a bathtub and a greedy older cousin downstairs who will inherit the family manse if and only if the boy predeceases him (Rachels 1975). In Case A, the cousin hears a thump, runs up to find the boy unconscious in the bath, and reaches out to turn on the tap so that the water will rise up to drown the boy. In Case B, the cousin hears a thump, runs up to find the boy unconscious in the bath with the water running, and decides to sit back and do nothing until the boy drowns. Since there is surely no moral difference between these cases, Rachels argued, the general distinction between killing and letting die is undercut. “Not so fast!” is the well-justified reaction (cf. Beauchamp 1979). Just because a factor is morally relevant in a certain way in comparing one pair of cases does not mean that it either is or must be relevant in the same way or to the same degree when comparing other cases. Shelly Kagan has dubbed the failure to take account of this fact of contextual interaction when wielding comparison cases the “additive fallacy” (1988). Kagan concludes from this that the reasoning of moral theorists must depend upon some theory that helps us anticipate and account for ways in which factors will interact in various contexts. A parallel lesson, reinforcing what we have already observed in connection with casuistry proper, would apply for moral reasoning in general: reasoning from cases must at least implicitly rely upon a set of organizing judgments or beliefs, of a kind that would, on some understandings, count as a moral “theory.” If this is correct, it provides another kind of reason to think that moral considerations could be crystallized into principles that make manifest the organizing structure involved.

We are concerned here with moral reasoning as a species of practical reasoning – reasoning directed to deciding what to do and, if successful, issuing in an intention. But how can such practical reasoning succeed? How can moral reasoning hook up with motivationally effective psychological states so as to have this kind of causal effect? “Moral psychology” – the traditional name for the philosophical study of intention and action – has a lot to say to such questions, both in its traditional, a priori form and its newly popular empirical form. In addition, the conclusions of moral psychology can have substantive moral implications, for it may be reasonable to assume that if there are deep reasons that a given type of moral reasoning cannot be practical, then any principles that demand such reasoning are unsound. In this spirit, Samuel Scheffler has explored “the importance for moral philosophy of some tolerably realistic understanding of human motivational psychology” (Scheffler 1992, 8) and Peter Railton has developed the idea that certain moral principles might generate a kind of “alienation” (Railton 1984). In short, we may be interested in what makes practical reasoning of a certain sort psychologically possible both for its own sake and as a way of working out some of the content of moral theory.

The issue of psychological possibility is an important one for all kinds of practical reasoning (cf. Audi 1989). In morality, it is especially pressing, as morality often asks individuals to depart from satisfying their own interests. As a result, it may appear that moral reasoning’s practical effect could not be explained by a simple appeal to the initial motivations that shape or constitute someone’s interests, in combination with a requirement, like that mentioned above, to will the necessary means to one’s ends. Morality, it may seem, instead requires individuals to act on ends that may not be part of their “motivational set,” in the terminology of Williams 1981. How can moral reasoning lead people to do that? The question is a traditional one. Plato’s Republic answered that the appearances are deceiving, and that acting morally is, in fact, in the enlightened self-interest of the agent. Kant, in stark contrast, held that our transcendent capacity to act on our conception of a practical law enables us to set ends and to follow morality even when doing so sharply conflicts with our interests. Many other answers have been given. In recent times, philosophers have defended what has been called “internalism” about morality, which claims that there is a necessary conceptual link between agents’ moral judgment and their motivation. Michael Smith, for instance, puts the claim as follows (Smith 1994, 61):

If an agent judges that it is right for her to Φ in circumstances C , then either she is motivated to Φ in C or she is practically irrational.

Even this defeasible version of moral judgment internalism may be too strong; but instead of pursuing this issue further, let us turn to a question more internal to moral reasoning. (For more on the issue of moral judgment internalism, see moral motivation .)

The traditional question we were just glancing at picks up when moral reasoning is done. Supposing that we have some moral conclusion, it asks how agents can be motivated to go along with it. A different question about the intersection of moral reasoning and moral psychology, one more immanent to the former, concerns how motivational elements shape the reasoning process itself.

A powerful philosophical picture of human psychology, stemming from Hume, insists that beliefs and desires are distinct existences (Hume 2000, Book II, part iii, sect. iii; cf. Smith 1994, 7). This means that there is always a potential problem about how reasoning, which seems to work by concatenating beliefs, links up to the motivations that desire provides. The paradigmatic link is that of instrumental action: the desire to Ψ links with the belief that by Φing in circumstances C one will Ψ. Accordingly, philosophers who have examined moral reasoning within an essentially Humean, belief-desire psychology have sometimes accepted a constrained account of moral reasoning. Hume’s own account exemplifies the sort of constraint that is involved. As Hume has it, the calm passions support the dual correction of perspective constitutive of morality, alluded to above. Since these calm passions are seen as competing with our other passions in essentially the same motivational coinage, as it were, our passions limit the reach of moral reasoning.

An important step away from a narrow understanding of Humean moral psychology is taken if one recognizes the existence of what Rawls has called “principle-dependent desires” (Rawls 1996, 82–83; Rawls 2000, 46–47). These are desires whose objects cannot be characterized without reference to some rational or moral principle. An important special case of these is that of “conception-dependent desires,” in which the principle-dependent desire in question is seen by the agent as belonging to a broader conception, and as important on that account (Rawls 1996, 83–84; Rawls 2000, 148–152). For instance, conceiving of oneself as a citizen, one may desire to bear one’s fair share of society’s burdens. Although it may look like any content, including this, may substitute for Ψ in the Humean conception of desire, and although Hume set out to show how moral sentiments such as pride could be explained in terms of simple psychological mechanisms, his influential empiricism actually tends to restrict the possible content of desires. Introducing principle-dependent desires thus seems to mark a departure from a Humean psychology. As Rawls remarks, if “we may find ourselves drawn to the conceptions and ideals that both the right and the good express … , [h]ow is one to fix limits on what people might be moved by in thought and deliberation and hence may act from?” (1996, 85). While Rawls developed this point by contrasting Hume’s moral psychology with Kant’s, the same basic point is also made by neo-Aristotelians (e.g., McDowell 1998).

The introduction of principle-dependent desires bursts any would-be naturalist limit on their content; nonetheless, some philosophers hold that this notion remains too beholden to an essentially Humean picture to be able to capture the idea of a moral commitment. Desires, it may seem, remain motivational items that compete on the basis of strength. Saying that one’s desire to be just may be outweighed by one’s desire for advancement may seem to fail to capture the thought that one has a commitment – even a non-absolute one – to justice. Sartre designed his example of the student torn between staying with his mother and going to fight with the Free French so as to make it seem implausible that he ought to decide simply by determining which he more strongly wanted to do.

One way to get at the idea of commitment is to emphasize our capacity to reflect about what we want. By this route, one might distinguish, in the fashion of Harry Frankfurt, between the strength of our desires and “the importance of what we care about” (Frankfurt 1988). Although this idea is evocative, it provides relatively little insight into how it is that we thus reflect. Another way to model commitment is to take it that our intentions operate at a level distinct from our desires, structuring what we are willing to reconsider at any point in our deliberations (e.g. Bratman 1999). While this two-level approach offers some advantages, it is limited by its concession of a kind of normative primacy to the unreconstructed desires at the unreflective level. A more integrated approach might model the psychology of commitment in a way that reconceives the nature of desire from the ground up. One attractive possibility is to return to the Aristotelian conception of desire as being for the sake of some good or apparent good (cf. Richardson 2004). On this conception, the end for the sake of which an action is done plays an important regulating role, indicating, in part, what one will not do (Richardson 2018, §§8.3–8.4). Reasoning about final ends accordingly has a distinctive character (see Richardson 1994, Schmidtz 1995). Whatever the best philosophical account of the notion of a commitment – for another alternative, see (Tiberius 2000) – much of our moral reasoning does seem to involve expressions of and challenges to our commitments (Anderson and Pildes 2000).

Recent experimental work, employing both survey instruments and brain imaging technologies, has allowed philosophers to approach questions about the psychological basis of moral reasoning from novel angles. The initial brain data seems to show that individuals with damage to the pre-frontal lobes tend to reason in more straightforwardly consequentialist fashion than those without such damage (Koenigs et al. 2007). Some theorists take this finding as tending to confirm that fully competent human moral reasoning goes beyond a simple weighing of pros and cons to include assessment of moral constraints (e.g., Wellman & Miller 2008, Young & Saxe 2008). Others, however, have argued that the emotional responses of the prefrontal lobes interfere with the more sober and sound, consequentialist-style reasoning of the other parts of the brain (e.g. Greene 2014). The survey data reveals or confirms, among other things, interesting, normatively loaded asymmetries in our attribution of such concepts as responsibility and causality (Knobe 2006). It also reveals that many of moral theory’s most subtle distinctions, such as the distinction between an intended means and a foreseen side-effect, are deeply built into our psychologies, being present cross-culturally and in young children, in a way that suggests to some the possibility of an innate “moral grammar” (Mikhail 2011).

A final question about the connection between moral motivation and moral reasoning is whether someone without the right motivational commitments can reason well, morally. On Hume’s official, narrow conception of reasoning, which essentially limits it to tracing empirical and logical connections, the answer would be yes. The vicious person could trace the causal and logical implications of acting in a certain way just as a virtuous person could. The only difference would be practical, not rational: the two would not act in the same way. Note, however, that the Humean’s affirmative answer depends on departing from the working definition of “moral reasoning” used in this article, which casts it as a species of practical reasoning. Interestingly, Kant can answer “yes” while still casting moral reasoning as practical. On his view in the Groundwork and the Critique of Practical Reason , reasoning well, morally, does not depend on any prior motivational commitment, yet remains practical reasoning. That is because he thinks the moral law can itself generate motivation. (Kant’s Metaphysics of Morals and Religion offer a more complex psychology.) For Aristotle, by contrast, an agent whose motivations are not virtuously constituted will systematically misperceive what is good and what is bad, and hence will be unable to reason excellently. The best reasoning that a vicious person is capable of, according to Aristotle, is a defective simulacrum of practical wisdom that he calls “cleverness” ( Nicomachean Ethics 1144a25).

Moral considerations often conflict with one another. So do moral principles and moral commitments. Assuming that filial loyalty and patriotism are moral considerations, then Sartre’s student faces a moral conflict. Recall that it is one thing to model the metaphysics of morality or the truth conditions of moral statements and another to give an account of moral reasoning. In now looking at conflicting considerations, our interest here remains with the latter and not the former. Our principal interest is in ways that we need to structure or think about conflicting considerations in order to negotiate well our reasoning involving them.

One influential building-block for thinking about moral conflicts is W. D. Ross’s notion of a “ prima facie duty”. Although this term misleadingly suggests mere appearance – the way things seem at first glance – it has stuck. Some moral philosophers prefer the term “ pro tanto duty” (e.g., Hurley 1989). Ross explained that his term provides “a brief way of referring to the characteristic (quite distinct from that of being a duty proper) which an act has, in virtue of being of a certain kind (e.g., the keeping of a promise), of being an act which would be a duty proper if it were not at the same time of another kind which is morally significant.” Illustrating the point, he noted that a prima facie duty to keep a promise can be overridden by a prima facie duty to avert a serious accident, resulting in a proper, or unqualified, duty to do the latter (Ross 1988, 18–19). Ross described each prima facie duty as a “parti-resultant” attribute, grounded or explained by one aspect of an act, whereas “being one’s [actual] duty” is a “toti-resultant” attribute resulting from all such aspects of an act, taken together (28; see Pietroski 1993). This suggests that in each case there is, in principle, some function that generally maps from the partial contributions of each prima facie duty to some actual duty. What might that function be? To Ross’s credit, he writes that “for the estimation of the comparative stringency of these prima facie obligations no general rules can, so far as I can see, be laid down” (41). Accordingly, a second strand in Ross simply emphasizes, following Aristotle, the need for practical judgment by those who have been brought up into virtue (42).

How might considerations of the sort constituted by prima facie duties enter our moral reasoning? They might do so explicitly, or only implicitly. There is also a third, still weaker possibility (Scheffler 1992, 32): it might simply be the case that if the agent had recognized a prima facie duty, he would have acted on it unless he considered it to be overridden. This is a fact about how he would have reasoned.

Despite Ross’s denial that there is any general method for estimating the comparative stringency of prima facie duties, there is a further strand in his exposition that many find irresistible and that tends to undercut this denial. In the very same paragraph in which he states that he sees no general rules for dealing with conflicts, he speaks in terms of “the greatest balance of prima facie rightness.” This language, together with the idea of “comparative stringency,” ineluctably suggests the idea that the mapping function might be the same in each case of conflict and that it might be a quantitative one. On this conception, if there is a conflict between two prima facie duties, the one that is strongest in the circumstances should be taken to win. Duly cautioned about the additive fallacy (see section 2.3 ), we might recognize that the strength of a moral consideration in one set of circumstances cannot be inferred from its strength in other circumstances. Hence, this approach will need still to rely on intuitive judgments in many cases. But this intuitive judgment will be about which prima facie consideration is stronger in the circumstances, not simply about what ought to be done.
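Purely to fix ideas, the quantitative reading of Ross’s “greatest balance” language can be given a schematic form. This is an illustrative gloss rather than anything Ross himself offers: write $s_i(a, c)$ for the strength contributed by the $i$-th prima facie duty to act $a$ in circumstances $c$, and let the net prima facie rightness of $a$ be

\[
R(a, c) \;=\; \sum_i s_i(a, c),
\]

so that, on this conception, the actual duty in $c$ is whichever available act maximizes $R(a, c)$. The caution about the additive fallacy then amounts to denying that $s_i(a, c)$ may be treated as a fixed, context-free weight $w_i$: the contribution of a given prima facie duty can vary with, and interact with, the other morally relevant features of the case.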

The thought that our moral reasoning either requires or is benefited by a virtual quantitative crutch of this kind has a long pedigree. Can we really reason well morally in a way that boils down to assessing the weights of the competing considerations? Addressing this question will require an excursus on the nature of moral reasons. Philosophical support for this possibility involves an idea of practical commensurability. We need to distinguish, here, two kinds of practical commensurability or incommensurability, one defined in metaphysical terms and one in deliberative terms. Each of these forms might be stated evaluatively or deontically. The first, metaphysical sort of value incommensurability is defined directly in terms of what is the case. Thus, to state an evaluative version: two values are metaphysically incommensurable just in case neither is better than the other nor are they equally good (see Chang 1998). Now, the metaphysical incommensurability of values, or its absence, is only loosely linked to how it would be reasonable to deliberate. If all values or moral considerations are metaphysically (that is, in fact) commensurable, still it might well be the case that our access to the ultimate commensurating function is so limited that we would fare ill by proceeding in our deliberations to try to think about which outcomes are “better” or which considerations are “stronger.” We might have no clue about how to measure the relevant “strength.” Conversely, even if metaphysical value incommensurability is common, we might do well, deliberatively, to proceed as if this were not the case, just as we proceed in thermodynamics as if the gas laws obtained in their idealized form. Hence, in thinking about the deliberative implications of incommensurable values , we would do well to think in terms of a definition tailored to the deliberative context. Start with a local, pairwise form. We may say that two options, A and B, are deliberatively commensurable just in case there is some one dimension of value in terms of which, prior to – or logically independently of – choosing between them, it is possible adequately to represent the force of the considerations bearing on the choice.
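The evaluative version of metaphysical incommensurability just stated can be put compactly. The symbols here are supplied for convenience and are not the entry’s own notation: write $A \succ B$ for “A is better than B” and $A \sim B$ for “A and B are equally good”. Then

\[
A \text{ and } B \text{ are metaphysically incommensurable} \;\iff\; \neg(A \succ B)\,\wedge\,\neg(B \succ A)\,\wedge\,\neg(A \sim B).
\]

Deliberative commensurability, by contrast, is defined relative to what the deliberator can actually do with the considerations prior to choosing, which is why the two notions are, as the text notes, only loosely linked.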

Philosophers as diverse as Immanuel Kant and John Stuart Mill have argued that unless two options are deliberatively commensurable, in this sense, it is impossible to choose rationally between them. Interestingly, Kant limited this claim to the domain of prudential considerations, recognizing moral reasoning as invoking considerations incommensurable with those of prudence. For Mill, this claim formed an important part of his argument that there must be some one, ultimate “umpire” principle – namely, on his view, the principle of utility. Henry Sidgwick elaborated Mill’s argument and helpfully made explicit its crucial assumption, which he called the “principle of superior validity” (Sidgwick 1981; cf. Schneewind 1977). This is the principle that conflict between distinct moral or practical considerations can be rationally resolved only on the basis of some third principle or consideration that is both more general and more firmly warranted than the two initial competitors. From this assumption, one can readily build an argument for the rational necessity not merely of local deliberative commensurability, but of a global deliberative commensurability that, like Mill and Sidgwick, accepts just one ultimate umpire principle (cf. Richardson 1994, chap. 6).

Sidgwick’s explicitness, here, is valuable also in helping one see how to resist the demand for deliberative commensurability. Deliberative commensurability is not necessary for proceeding rationally if conflicting considerations can be rationally dealt with in a holistic way that does not involve the appeal to a principle of “superior validity.” That our moral reasoning can proceed holistically is strongly affirmed by Rawls. Rawls’s characterizations of the influential ideal of reflective equilibrium and his related ideas about the nature of justification imply that we can deal with conflicting considerations in less hierarchical ways than imagined by Mill or Sidgwick. Instead of proceeding up a ladder of appeal to some highest court or supreme umpire, Rawls suggests, when we face conflicting considerations “we work from both ends” (Rawls 1999, 18). Sometimes indeed we revise our more particular judgments in light of some general principle to which we adhere; but we are also free to revise more general principles in light of some relatively concrete considered judgment. On this picture, there is no necessary correlation between degree of generality and strength of authority or warrant. That this holistic way of proceeding (whether in building moral theory or in deliberating: cf. Hurley 1989) can be rational is confirmed by the possibility of a form of justification that is similarly holistic: “justification is a matter of the mutual support of many considerations, of everything fitting together into one coherent view” (Rawls 1999, 19, 507). (Note that this statement, which expresses a necessary aspect of moral or practical justification, should not be taken as a definition or analysis thereof.) So there is an alternative to depending, deliberatively, on finding a dimension in terms of which considerations can be ranked as “stronger” or “better” or “more stringent”: one can instead “prune and adjust” with an eye to building more mutual support among the considerations that one endorses on due reflection. If even the desideratum of practical coherence is subject to such re-specification, then this holistic possibility really does represent an alternative to commensuration, as the deliberator, and not some coherence standard, retains reflective sovereignty (Richardson 1994, sec. 26). The result can be one in which the originally competing considerations are not so much compared as transformed (Richardson 2018, chap. 1)

Suppose that we start with a set of first-order moral considerations that are all commensurable as a matter of ultimate, metaphysical fact, but that our grasp of the actual strength of these considerations is quite poor and subject to systematic distortions. Perhaps some people are much better placed than others to appreciate certain considerations, and perhaps our strategic interactions would cause us to reach suboptimal outcomes if we each pursued our own unfettered judgment of how the overall set of considerations plays out. In such circumstances, there is a strong case for departing from maximizing reasoning without swinging all the way to the holist alternative. This case has been influentially articulated by Joseph Raz, who develops the notion of an “exclusionary reason” to occupy this middle position (Raz 1990).

“An exclusionary reason,” in Raz’s terminology, “is a second order reason to refrain from acting for some reason” (39). A simple example is that of Ann, who is tired after a long and stressful day, and hence has reason not to act on her best assessment of the reasons bearing on a particularly important investment decision that she immediately faces (37). This notion of an exclusionary reason allowed Raz to capture many of the complexities of our moral reasoning, especially as it involves principled commitments, while conceding that, at the first order, all practical reasons might be commensurable. Raz’s early strategy for reconciling commensurability with complexity of structure was to limit the claim that reasons are comparable with regard to strength to reasons of a given order. First-order reasons compete on the basis of strength; but conflicts between first- and second-order reasons “are resolved not by the strength of the competing reasons but by a general principle of practical reasoning which determines that exclusionary reasons always prevail” (40).

If we take for granted this “general principle of practical reasoning,” why should we recognize the existence of any exclusionary reasons, which by definition prevail independently of any contest of strength? Raz’s principal answer to this question shifts from the metaphysical domain of the strengths that various reasons “have” to the epistemically limited viewpoint of the deliberator. As in Ann’s case, we can see in certain contexts that a deliberator is likely to get things wrong if he or she acts on his or her perception of the first-order reasons. Second-order reasons indicate, with respect to a certain range of first-order reasons, that the agent “must not act for those reasons” (185). The broader justification of an exclusionary reason, then, can consistently be put in terms of the commensurable first-order reasons. Such a justification can have the following form: “Given this agent’s deliberative limitations, the balance of first-order reasons will likely be better conformed with if he or she refrains from acting for certain of those reasons.”

Raz’s account of exclusionary reasons might be used to reconcile ultimate commensurability with the structured complexity of our moral reasoning. Whether such an attempt could succeed would depend, in part, on the extent to which we have an actual grasp of first-order reasons, conflict among which can be settled solely on the basis of their comparative strength. Our consideration, above, of casuistry, the additive fallacy, and deliberative incommensurability may combine to make it seem that only in rare pockets of our practice do we have a good grasp of first-order reasons, if these are defined, à la Raz, as competing only in terms of strength. If that is right, then we will almost always have good exclusionary reasons to reason on some other basis than in terms of the relative strength of first-order reasons. Under those assumptions, the middle way that Raz’s idea of exclusionary reasons seems to open up would more closely approach the holist’s.

The notion of a moral consideration’s “strength,” whether put forward as part of a metaphysical picture of how first-order considerations interact in fact or as a suggestion about how to go about resolving a moral conflict, should not be confused with the bottom-line determination of whether one consideration, and specifically one duty, overrides another. In Ross’s example of conflicting prima facie duties, someone must choose between averting a serious accident and keeping a promise to meet someone. (Ross chose the case to illustrate that an “imperfect” duty, or a duty of commission, can override a strict, prohibitive duty.) Ross’s assumption is that all well brought-up people would agree, in this case, that the duty to avert serious harm to someone overrides the duty to keep such a promise. We may take it, if we like, that this judgment implies that we consider the duty to avert the accident, here, to be stronger than the duty to keep the promise; but in fact this claim about relative strength adds nothing to our understanding of the situation. We do not reach our practical conclusion in this case by determining that the duty to avert the accident is stronger. The statement that this duty is here stronger is simply a way to embellish the conclusion that, of the two prima facie duties that here conflict, it is the one that states the all-things-considered duty. To be “overridden” is just to be a prima facie duty that fails to generate an actual duty because another prima facie duty that conflicts with it – or several of them that do – does generate an actual duty. Hence, the judgment that some duties override others can be understood just in terms of their deontic upshots and without reference to considerations of strength. To confirm this, note that we can say, “As a matter of fidelity, we ought to keep the promise; as a matter of beneficence, we ought to avert the accident; we cannot do both; and both categories considered we ought to avert the accident.”

Understanding the notion of one duty overriding another in this way puts us in a position to take up the topic of moral dilemmas. Since this topic is covered in a separate article, here we may simply take up one attractive definition of a moral dilemma. Sinnott-Armstrong (1988) suggested that a moral dilemma is a situation in which the following are true of a single agent:

  (1) He ought to do A.
  (2) He ought to do B.
  (3) He cannot do both A and B.
  (4) (1) does not override (2) and (2) does not override (1).

This way of defining moral dilemmas distinguishes them from the kind of moral conflict, such as Ross’s promise-keeping/accident-prevention case, in which one of the duties is overridden by the other. Arguably, Sartre’s student faces a moral dilemma. Making sense of a situation in which neither of two duties overrides the other is easier if deliberative commensurability is denied. Whether moral dilemmas are possible will depend crucially on whether “ought” implies “can” and whether any pair of duties such as those comprised by (1) and (2) implies a single, “agglomerated” duty that the agent do both A and B . If either of these purported principles of the logic of duties is false, then moral dilemmas are possible.
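To make the role of these two principles explicit, the definition can be rendered in a simple deontic notation. This is an illustrative reconstruction, not Sinnott-Armstrong’s own formalism: write $O\varphi$ for “the agent ought to do $\varphi$” and $\Diamond\varphi$ for “the agent can do $\varphi$”. Conditions (1)–(3) become

\[
(1)\; O(A), \qquad (2)\; O(B), \qquad (3)\; \neg\Diamond(A \wedge B),
\]

while the two contested principles are agglomeration, $O(A) \wedge O(B) \rightarrow O(A \wedge B)$, and “ought” implies “can”, $O(A \wedge B) \rightarrow \Diamond(A \wedge B)$. If both principles held, (1) and (2) would yield $O(A \wedge B)$ and hence $\Diamond(A \wedge B)$, contradicting (3); so situations satisfying the whole definition are possible only if at least one of the two principles fails.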

Jonathan Dancy has well highlighted a kind of contextual variability in moral reasons that has come to be known as “reasons holism”: “a feature that is a reason in one case may be no reason at all, or an opposite reason, in another” (Dancy 2004). To adapt one of his examples: while there is often moral reason not to lie, when playing liar’s poker one generally ought to lie; otherwise, one will spoil the game (cf. Dancy 1993, 61). Dancy argues that reasons holism supports moral particularism of the kind discussed in section 2.2 , according to which there are no defensible moral principles. Taking this conclusion seriously would radically affect how we conducted our moral reasoning. The argument’s premise of holism has been challenged (e.g., Audi 2004, McKeever & Ridge 2006). Philosophers have also challenged the inference from reasons holism to particularism in various ways. Mark Lance and Margaret Olivia Little (2007) have done so by exhibiting how defeasible generalizations, in ethics and elsewhere, depend systematically on context. We can work with them, they suggest, by utilizing a skill that is similar to the skill of discerning morally salient considerations, namely the skill of discerning relevant similarities among possible worlds. More generally, John F. Horty has developed a logical and semantic account according to which reasons are defaults and so behave holistically, but there are nonetheless general principles that explain how they behave (Horty 2012). And Mark Schroeder has argued that our holistic views about reasons are actually better explained by supposing that there are general principles (Schroeder 2011).

This excursus on moral reasons suggests that there are a number of good reasons why reasoning about moral matters might not simply reduce to assessing the weights of competing considerations.

If we have any moral knowledge, whether concerning general moral principles or concrete moral conclusions, it is surely very imperfect. What moral knowledge we are capable of will depend, in part, on what sorts of moral reasoning we are capable of. Although some moral learning may result from the theoretical work of moral philosophers and theorists, much of what we learn with regard to morality surely arises in the practical context of deliberation about new and difficult cases. This deliberation might be merely instrumental, concerned only with settling on means to moral ends, or it might be concerned with settling those ends. There is no special problem about learning what conduces to morally obligatory ends: that is an ordinary matter of empirical learning. But by what sorts of process can we learn which ends are morally obligatory, or which norms morally required? And, more specifically, is strictly moral learning possible via moral reasoning?

Much of what was said above with regard to moral uptake applies again in this context, with approximately the same degree of dubiousness or persuasiveness. If there is a role for moral perception or for emotions in agents’ becoming aware of moral considerations, these may function also to guide agents to new conclusions. For instance, it is conceivable that our capacity for outrage is a relatively reliable detector of wrong actions, even novel ones, or that our capacity for pleasure is a reliable detector of actions worth doing, even novel ones. (For a thorough defense of the latter possibility, which intriguingly interprets pleasure as a judgment of value, see Millgram 1997.) Perhaps these capacities for emotional judgment enable strictly moral learning in roughly the same way that chess-players’ trained sensibilities enable them to recognize the threat in a previously unencountered situation on the chessboard (Lance and Tanesini 2004). That is to say, perhaps our moral emotions play a crucial role in the exercise of a skill whereby we come to be able to articulate moral insights that we have never before attained. Perhaps competing moral considerations interact in contextually specific and complex ways much as competing chess considerations do. If so, it would make sense to rely on our emotionally-guided capacities of judgment to cope with complexities that we cannot model explicitly, but also to hope that, once having been so guided, we might in retrospect be able to articulate something about the lesson of a well-navigated situation.

A different model of strictly moral learning puts the emphasis on our after-the-fact reactions rather than on any prior, tacit emotional or judgmental guidance: the model of “experiments in living,” to use John Stuart Mill’s phrase (see Anderson 1991). Here, the basic thought is that we can try something and see if “it works.” For this to be an alternative to empirical learning about what causally conduces to what, it must be the case that we remain open as to what we mean by things “working.” In Mill’s terminology, for instance, we need to remain open as to what are the important “parts” of happiness. If we are, then perhaps we can learn by experience what some of them are – that is, what are some of the constitutive means of happiness. These paired thoughts, that our practical life is experimental and that we have no firmly fixed conception of what it is for something to “work,” come to the fore in Dewey’s pragmatist ethics (see esp. Dewey 1967 [1922]). This experimentalist conception of strictly moral learning is brought to bear on moral reasoning in Dewey’s eloquent characterizations of “practical intelligence” as involving a creative and flexible approach to figuring out “what works” in a way that is thoroughly open to rethinking our ultimate aims.

Once we recognize that moral learning is a possibility for us, we can recognize a broader range of ways of coping with moral conflicts than was canvassed in the last section. There, moral conflicts were described in a way that assumed that the set of moral considerations, among which conflicts were arising, was to be taken as fixed. If we can learn, morally, however, then we probably can and should revise the set of moral considerations that we recognize. Often, we do this by re-interpreting some moral principle that we had started with, whether by making it more specific, making it more abstract, or in some other way (cf. Richardson 2000 and 2018).

So far, we have mainly been discussing moral reasoning as if it were a solitary endeavor. This is, at best, a convenient simplification. At worst, it is, as Jürgen Habermas has long argued, deeply distorting of reasoning’s essentially dialogical or conversational character (e.g., Habermas 1984; cf. Laden 2012). In any case, it is clear that we often do need to reason morally with one another.

Here, we are interested in how people may actually reason with one another – not in how imagined participants in an original position or ideal speech situation may be said to reason with one another, which is a concern for moral theory, proper. There are two salient and distinct ways of thinking about people morally reasoning with one another: as members of an organized or corporate body that is capable of reaching practical decisions of its own; and as autonomous individuals working outside any such structure to figure out with each other what they ought, morally, to do.

The nature and possibility of collective reasoning within an organized collective body has recently been the subject of some discussion. Collectives can reason if they are structured as an agent. This structure might or might not be institutionalized. In line with the gloss of reasoning offered above, which presupposes being guided by an assessment of one’s reasons, it is plausible to hold that a group agent “counts as reasoning, not just rational, only if it is able to form not only beliefs in propositions – that is, object-language beliefs – but also belief about propositions” (List and Pettit 2011, 63). As List and Pettit have shown (2011, 109–113), participants in a collective agent will unavoidably have incentives to misrepresent their own preferences in conditions involving ideologically structured disagreements where the contending parties are oriented to achieving or avoiding certain outcomes – as is sometimes the case where serious moral disagreements arise. In contexts where what ultimately matters is how well the relevant group or collective ends up faring, “team reasoning” that takes advantage of orientation towards the collective flourishing of the group can help it reach a collectively optimal outcome (Sugden 1993, Bacharach 2006; see entry on collective intentionality). Where the group in question is smaller than the set of all persons, however, such a collectively prudential focus is distinct from a moral focus and seems at odds with the kind of impartiality typically thought distinctive of the moral point of view. Thinking about what a “team-orientation” to the set of all persons might look like might bring us back to thoughts of Kantian universalizability; but recall that here we are focused on actual reasoning, not hypothetical reasoning. With regard to actual reasoning, even if individuals can take up such an orientation towards the “team” of all persons, there is serious reason, highlighted by another strand of the Kantian tradition, for doubting that any individual can aptly surrender their moral judgment to any group’s verdict (Wolff 1998).

This does not mean that people cannot reason together, morally. It suggests, however, that such joint reasoning is best pursued as a matter of working out together, as independent moral agents, what they ought to do with regard to an issue on which they have some need to cooperate. Even if deferring to another agent’s verdict as to how one morally ought to act is off the cards, it is still possible that one may licitly take account of the moral testimony of others (for differing views, see McGrath 2009, Enoch 2014).

In the case of independent individuals reasoning morally with one another, we may expect that moral disagreement provides the occasion rather than an obstacle. To be sure, if individuals’ moral disagreement is very deep, they may not be able to get this reasoning off the ground; but as Kant’s example of Charles V and his brother each wanting Milan reminds us, intractable disagreement can arise also from disagreements that, while conceptually shallow, are circumstantially sharp. If it were true that clear-headed justification of one’s moral beliefs required seeing them as being ultimately grounded in a priori principles, as G.A. Cohen argued (Cohen 2008, chap. 6), then room for individuals to work out their moral disagreements by reasoning with one another would seem to be relatively restricted; but whether the nature of (clearheaded) moral grounding is really so restricted is seriously doubtful (Richardson 2018, §9.2). In contrast to what such a picture suggests, individuals’ moral commitments seem sufficiently open to being re-thought that people seem able to engage in principled – that is, not simply loss-minimizing – compromise (Richardson 2018, §8.5).

What about the possibility that the moral community as a whole – roughly, the community of all persons – can reason? This possibility does not raise the kind of threat to impartiality that is raised by the team reasoning of a smaller group of people; but it is hard to see it working in a way that does not run afoul of the concern about whether any person can aptly defer, in a strong sense, to the moral judgments of another agent. Even so, a residual possibility remains, which is that the moral community can reason in just one way, namely by accepting or ratifying a moral conclusion that has already become shared in a sufficiently inclusive and broad way (Richardson 2018, chap. 7).

  • Anderson, E. S., 1991. “John Stuart Mill and experiments in living,” Ethics , 102: 4–26.
  • Anderson, E. S. and Pildes, R. H., 2000. “Expressive theories of law: A general restatement,” University of Pennsylvania Law Review , 148: 1503–1575.
  • Arpaly, N. and Schroeder, T., 2014. In praise of desire , Oxford: Oxford University Press.
  • Audi, R., 1989. Practical reasoning , London: Routledge.
  • –––, 2004. The good in the right: A theory of intuition and intrinsic value , Princeton: Princeton University Press.
  • Bacharach, M., 2006. Beyond individual choice: Teams and frames in game theory , Princeton: Princeton University Press.
  • Beauchamp, T. L., 1979. “A reply to Rachels on active and passive euthanasia,” in Medical responsibility , ed. W. L. Robinson, Clifton, N.J.: Humana Press, 182–95.
  • Brandt, R. B., 1979. A theory of the good and the right , Oxford: Oxford University Press.
  • Bratman, M., 1999. Faces of intention: Selected essays on intention and agency , Cambridge, England: Cambridge University Press.
  • Broome, J., 2009. “The unity of reasoning?” in Spheres of reason , ed. S. Robertson, Oxford: Oxford University Press.
  • –––, 2013. Rationality through Reasoning , Chichester, West Sussex: Wiley Blackwell.
  • Campbell, R. and Kumar, V., 2012. “Moral reasoning on the ground,” Ethics , 122: 273–312.
  • Chang, R. (ed.), 1998. Incommensurability, incomparability, and practical reason , Cambridge, Mass.: Harvard University Press.
  • Clarke, S. G., and E. Simpson, 1989. Anti-theory in ethics and moral conservatism , Albany: SUNY Press.
  • Dancy, J., 1993. Moral reasons , Oxford: Blackwell.
  • –––, 2004. Ethics without principles , Oxford: Oxford University Press.
  • Dewey, J., 1967. The middle works, 1899–1924 , Vol. 14, Human nature and conduct , ed. J. A. Boydston, Carbondale: Southern Illinois University Press.
  • Donagan, A., 1977. The theory of morality , Chicago: University of Chicago Press.
  • Dworkin, R., 1978. Taking rights seriously , Cambridge: Harvard University Press.
  • Engstrom, S., 2009. The form of practical knowledge: A study of the categorical imperative , Cambridge, Mass.: Harvard University Press.
  • Enoch, D., 2014. “In defense of moral deference,” Journal of philosophy , 111: 229–58.
  • Fernandez, P. A., 2016. “Practical reasoning: Where the action is,” Ethics , 126: 869–900.
  • Fletcher, J., 1997. Situation ethics: The new morality , Louisville: Westminster John Knox Press.
  • Frankfurt, H. G., 1988. The importance of what we care about: Philosophical essays , Cambridge: Cambridge University Press.
  • Gert, B., 1998. Morality: Its nature and justification , New York: Oxford University Press.
  • Gibbard, A., 1965. “Rule-utilitarianism: Merely an illusory alternative?,” Australasian Journal of Philosophy , 43: 211–220.
  • Goldman, H. S., 1974. “David Lyons on utilitarian generalization,” Philosophical Studies , 26: 77–95.
  • Greene, J. D., 2014. “Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics,” Ethics , 124: 695–726.
  • Habermas, J., 1984. The theory of communicative action: Vol. I, Reason and the rationalization of society , Boston: Beacon Press.
  • Haidt, J., 2001. “The emotional dog and its rational tail: A social intuitionist approach to moral judgment,” Psychological Review , 108: 814–34.
  • Hare, R. M., 1981. Moral thinking: Its levels, method, and point , Oxford: Oxford University Press.
  • Harman, G., 1986. Change in view: Principles of reasoning , Cambridge, Mass.: MIT Press.
  • Held, V., 1995. Justice and care: Essential readings in feminist ethics , Boulder, Colo.: Westview Press.
  • Hieronymi, P., 2013. “The use of reasons in thought (and the use of earmarks in arguments),” Ethics , 124: 124–27.
  • Horty, J. F., 2012. Reasons as defaults , Oxford: Oxford University Press.
  • –––, 2016. “Reasoning with precedents as constrained natural reasoning,” in E. Lord and B. McGuire (eds.), Weighing Reasons , Oxford: Oxford University Press: 193–212.
  • Hume, D., 2000 [1739–40]. A treatise of human nature , ed. D. F. Norton and M. J. Norton, Oxford: Oxford University Press.
  • Hurley, S. L., 1989. Natural reasons: Personality and polity , New York: Oxford University Press.
  • Jonsen, A. R., and S. Toulmin, 1988. The abuse of casuistry: A history of moral reasoning , Berkeley: University of California Press.
  • Kagan, S., 1988. “The additive fallacy,” Ethics , 99: 5–31.
  • Knobe, J., 2006. “The concept of intentional action: A case study in the uses of folk psychology,” Philosophical Studies , 130: 203–231.
  • Koenigs, M., et al., 2007. “Damage to the prefrontal cortex increases utilitarian moral judgments,” Nature , 446: 908–911.
  • Kolodny, N., 2005. “Why be rational?” Mind , 114: 509–63.
  • Korsgaard, C. M., 1996. Creating the kingdom of ends , Cambridge: Cambridge University Press.
  • Laden, A. S., 2012. Reasoning: A social picture , Oxford: Oxford University Press.
  • Lance, M. and Little, M., 2007. “Where the Laws Are,” in R. Shafer-Landau (ed.), Oxford Studies in Metaethics (Volume 2), Oxford: Oxford University Press.
  • List, C. and Pettit, P., 2011. Group agency: The possibility, design, and status of corporate agents , Oxford: Oxford University Press.
  • McDowell, J., 1998. Mind, value, and reality , Cambridge, Mass.: Harvard University Press.
  • McGrath, S., 2009. “The puzzle of moral deference,” Philosophical Perspectives , 23: 321–44.
  • McKeever, S. and Ridge, M., 2006. Principled ethics: Generalism as a regulative ideal , Oxford: Oxford University Press.
  • McNaughton, D., 1988. Moral vision: An introduction to ethics , Oxford: Blackwell.
  • Mill, J. S., 1979 [1861]. Utilitarianism , Indianapolis: Hackett Publishing.
  • Millgram, E., 1997. Practical induction , Cambridge, Mass.: Harvard University Press.
  • Mikhail, J., 2011. Elements of moral cognition: Rawls’s linguistic analogy and the cognitive science of moral and legal judgment , Cambridge: Cambridge University Press.
  • Nell, O., 1975. Acting on principle: An essay on Kantian ethics , New York: Columbia University Press.
  • Nussbaum, M. C., 1990. Love’s knowledge: Essays on philosophy and literature , New York: Oxford University Press.
  • –––, 2001. Upheavals of thought: The intelligence of emotions , Cambridge, England: Cambridge University Press.
  • Pietroski, P. J., 1993. “Prima facie obligations, ceteris paribus laws in moral theory,” Ethics , 103: 489–515.
  • Prinz, J., 2007. The emotional construction of morals , Oxford: Oxford University Press.
  • Rachels, J., 1975. “Active and passive euthanasia,” New England Journal of Medicine , 292: 78–80.
  • Railton, P., 1984. “Alienation, consequentialism, and the demands of morality,” Philosophy and Public Affairs , 13: 134–71.
  • –––, 2014. “The affective dog and its rational tale: Intuition and attunement,” Ethics , 124: 813–59.
  • Rawls, J., 1971. A theory of justice , Cambridge, Mass.: Harvard University Press.
  • –––, 1996. Political liberalism , New York: Columbia University Press.
  • –––, 1999. A theory of justice , revised edition, Cambridge, Mass.: Harvard University Press.
  • –––, 2000. Lectures on the history of moral philosophy , Cambridge, Mass.: Harvard University Press.
  • Raz, J., 1990. Practical reason and norms , Princeton: Princeton University Press.
  • Richardson, H. S., 1994. Practical reasoning about final ends , Cambridge: Cambridge University Press.
  • –––, 2000. “Specifying, balancing, and interpreting bioethical principles,” Journal of Medicine and Philosophy , 25: 285–307.
  • –––, 2002. Democratic autonomy: Public reasoning about the ends of policy , New York: Oxford University Press.
  • –––, 2004. “Thinking about conflicts of desires,” in Practical conflicts: New philosophical essays , eds. P. Baumann and M. Betzler, Cambridge: Cambridge University Press, 96–117.
  • –––, 2018. Articulating the moral community: Toward a constructive ethical pragmatism , New York: Oxford University Press.
  • Ross, W. D., 1988. The right and the good , Indianapolis: Hackett.
  • Sandel, M., 1998. Liberalism and the limits of justice , Cambridge: Cambridge University Press.
  • Sartre, J. P., 1975. “Existentialism is a Humanism,” in Existentialism from Dostoyevsky to Sartre , ed. W. Kaufmann, New York: Meridian-New American, 345–69.
  • Scheffler, S., 1992. Human morality , New York: Oxford University Press.
  • Schmidtz, D., 1995. Rational choice and moral agency , Princeton: Princeton University Press.
  • Schneewind, J.B., 1977. Sidgwick’s ethics and Victorian moral philosophy , Oxford: Oxford University Press.
  • Schroeder, M., 2011. “Holism, weight, and undercutting.” Noûs , 45: 328–44.
  • Schwitzgebel, E. and Cushman, F., 2012. “Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non-philosophers,” Mind and Language , 27: 135–53.
  • Sidgwick, H., 1981. The methods of ethics , reprinted, 7th edition, Indianapolis: Hackett.
  • Sinnott-Armstrong, W., 1988. Moral dilemmas , Oxford: Basil Blackwell.
  • Smith, M., 1994. The moral problem , Oxford: Blackwell.
  • –––, 2013. “A constitutivist theory of reasons: Its promise and parts,” Law, Ethics and Philosophy , 1: 9–30.
  • Sneddon, A., 2007. “A social model of moral dumbfounding: Implications for studying moral reasoning and moral judgment,” Philosophical Psychology , 20: 731–48.
  • Sugden, R., 1993. “Thinking as a team: Towards an explanation of nonselfish behavior,” Social Philosophy and Policy , 10: 69–89.
  • Sunstein, C. R., 1996. Legal reasoning and political conflict , New York: Oxford University Press.
  • Tiberius, V., 2000. “Humean heroism: Value commitments and the source of normativity,” Pacific Philosophical Quarterly , 81: 426–446.
  • Vogler, C., 1998. “Sex and talk,” Critical Inquiry , 24: 328–65.
  • Wellman, H. and Miller, J., 2008. “Including deontic reasoning as fundamental to theory of mind,” Human Development , 51: 105–35.
  • Williams, B., 1981. Moral luck: Philosophical papers 1973–1980 , Cambridge: Cambridge University Press.
  • Wolff, R. P., 1998. In defense of anarchism , Berkeley and Los Angeles: University of California Press.
  • Young, L. and Saxe, R., 2008. “The neural basis of belief encoding and integration in moral judgment,” NeuroImage , 40: 1912–20.


Acknowledgments

The author is grateful for help received from Gopal Sreenivasan and the students in a seminar on moral reasoning taught jointly with him, to the students in a more recent seminar in moral reasoning, and, for criticisms received, to David Brink, Margaret Olivia Little and Mark Murphy. He welcomes further criticisms and suggestions for improvement.

Copyright © 2018 by Henry S. Richardson <richardh@georgetown.edu>



4 Philosophy, Ethics and Thinking


Mark Dimmock and Andrew Fisher, Ethics for A-Level. Cambridge, UK: Open Book Publishers, 2017, https://doi.org/10.11647/OBP.0125

Thinking

Philosophy is hard. Part of the reason it can feel so annoying is that it seems like it should not be hard. After all, philosophy just involves thinking, and we all think — thinking is easy! We do it without…well, thinking. Yet philosophy involves not just thinking, but thinking well. Of course it is true that we all think. But thinking, like football, math, baking and singing, is something we can get better at. Unfortunately, people rarely ask how. If you do not believe us, then just open your eyes. Society might be a whole lot better off if we thought well, more often. Philosophy will not give you the ability to solve the problems of the world; we are not that naive! But if you engage with philosophy, then you will be developing yourself as a thinker who thinks well. Philosophy is useful not merely to would-be philosophers, but also to any would-be thinkers, perhaps heading off to make decisions in law, medicine, structural engineering — just about anything that requires you to think effectively and clearly.

However, if Philosophy is hard, then Ethics is really hard. This might seem unlikely at first glance. After all, Ethics deals with issues of right and wrong, and we have been discussing “what is right” and “what is wrong” since we were children. Philosophy of Mind, on the other hand, deals with topics like the nature of consciousness, while Metaphysics deals with the nature of existence itself. Indeed, compared to understanding a lecture in the Philosophy of Physics, arguing about the ethics of killing in video games might seem something of a walk in the park. This is misleading, not because other areas of philosophy are easy, but because the complexity of ethics is well camouflaged.

When you study Ethics, and you evaluate what is right and wrong, it can be tempting and comforting to spend time simply defending your initial views; few people would come to a debate about vegetarianism, or abortion, without some pre-existing belief. If you are open-minded in your ethical approach then you need not reject everything you currently believe, but you should see these beliefs as starting points, or base camps, from which your inquiry commences. For example, why do you think that eating animals is OK, or that abortion is wrong? If you think that giving to charity is good, what does “good” mean? For true success, ethics requires intellectual respect. If you think that a particular position is obviously false, perhaps take this reaction as a red flag, as it may suggest that you have missed some important step of an argument — ask yourself why someone, presumably just as intellectually proficient as yourself, might have once accepted that position. If you are thinking well as an ethicist, then you are likely to have good reasons for your views, and be prepared to rethink those views where you cannot find such good reasons. In virtue of this, you are providing justification for the beliefs you have. It is the philosopher’s job, whatever beliefs you have, to ask why you hold those beliefs. What reasons might you have for those beliefs? For example, imagine the reason that you believe it is OK to eat meat is that it tastes nice. As philosophers we can say that this is not a particularly good reason. Presumably it might taste nice to eat your pet cat, or your neighbor, or your dead aunt; but in these cases the “taste justification” seems totally unimportant! The details of this debate are not relevant here. The point is that there are good and bad reasons for our beliefs and it is the philosopher’s job to reveal and analyze them (Hospers, 1997).

Philosophy is more than just fact-learning, or a “history of ideas”. It is different from chemistry, mathematics, languages, theology, etc. It is unique. Sure, it is important to learn some facts, and learn what others believed, but a successful student needs to do more than simply regurgitate information in order both to maneuver past the exam hurdles and to become a better ethicist.

Philosophy, and in particular Ethics, is a live and evolving subject. When you study philosophy you are entering a dialogue with those who have gone before you. Learning about what various philosophers think will enable you to become clearer about what you think and add to that evolving dialogue.

In order to understand philosophy you need to be authentic with yourself and to ask what you think, using this as a guide to critically analyze the ideas learned and lead yourself to your own justifiable conclusion. Philosophy is a living and dynamic subject that we cannot reduce to a few key facts, or a simplistic noting of what other people have said. Some people distinguish between “ethics” and “morality,” but we will use these words interchangeably. Moral questions are distinct from legal questions, although, of course, moral issues might have some implications for the law. That child labor is morally unacceptable might mean that we have a law against it. But it is unhelpful to answer whether something is morally right or wrong by looking to the laws of the land. It is quite easy to see why: imagine a country in which certain actions are legally acceptable but morally unacceptable, or vice versa — the well-worn example of Nazi Germany brings this distinction to mind. Therefore, in discussions about ethics, do be wary of talking about legal issues. Much more often than not, such points will be irrelevant.

Another thing to keep separate is the distinction between moral reasons and prudential reasons. Prudential reasons relate to our personal reasons for doing things. Consider some examples. When defending slavery, people used to cite the fact that it supported the economy as a reason to keep it. It is true, of course, that this is a reason; it is a prudential reason, particularly for those who benefited from slavery such as traders or plantation owners. Yet such a reason does not help us with the moral question of slavery. We would say: “OK, but so what if it helps the economy! Is it right or wrong?”

Another important distinction is between descriptive and prescriptive claims. This is sometimes referred to as the “is/ought” gap. Consider some examples. Imagine the headline: “Scientists discover a gene explaining why we want to punch people wearing red trousers”. The article includes lots of science showing the genes and the statistical proof. Yet none of this will tell us whether acting violently towards people wearing red trousers is morally acceptable. The explanation of why people feel and act in certain ways leaves it open how people morally ought to act. Consider a more serious example, relating to the ethics of eating meat. Supporters of meat-eating often point to our incisor teeth. This, they suggest, shows that it is natural for us to eat meat, a fact then used as a reason for thinking that it is morally acceptable to do so. But this is a bad argument. Just because we have incisors does not tell us how we morally ought to behave. It might explain why we find it easy to eat meat, and it might even explain why we like eating meat. But this is not relevant to the moral question. Don’t you believe us? Imagine that dentists discover that our teeth are “designed” to eat other humans alive. What does this tell us about whether it is right or wrong to eat humans alive? Nothing.

You will also be aware of the philosophical device known as a “thought experiment”. These are hypothetical, sometimes fanciful, examples that are designed to aid our thinking about an issue. For example, imagine that you could travel back in time. You are pointing a gun at your grandfather when he is still a child. Would it be possible for you to pull the trigger? Or imagine that there is a tram running down a track. You could stop it, thereby saving five people, by throwing a fat man onto the track in front of it. Is this the morally right thing to do?

The details here are unimportant. What is important is that it is inadequate to respond: “yes, but that could never happen!” Thought experiments are devices to help us to think about certain issues. Whether they are possible in real life does not stop us doing that thinking. Indeed, it is not just philosophy that uses thought experiments. When Einstein asked what would happen if he looked at his watch near a black hole, this was a thought experiment. In fact, most other subjects use thought experiments. It is just that philosophy uses them more frequently, and they are often a bit more bizarre.

Finally, we want to draw your attention to a common bad argument, so that you are aware of the mistake it leads to. Imagine that a group of friends are arguing about which country has won the most Olympic gold medals. Max says China, Alastair says the US, Dinh says the UK. There is general ignorance and disagreement; but does this mean that there is no answer to the question of which country has won the most Olympic gold medals? No! We cannot move from the fact that people disagree to the conclusion that there is no answer. Now consider a parallel argument that we hear far too often.

Imagine that you and your friends are discussing whether euthanasia is morally acceptable. Some say yes, others say no. Each of you cites the fact that different cultures have different views on euthanasia. Does this fact — that there is disagreement — mean that there is no answer to the question of whether euthanasia is morally acceptable? Again, the answer is no. That answer did not follow in the Olympic case, and it does not follow in the moral one either. So just because different cultures have different moral views, this does not show, by itself, that there is no moral truth and no answer to the question.

Hospers, J. (1997). An introduction to philosophical analysis (4th edition). New York and London: Routledge. https://doi.org/10.4324/9780203714454

Philosophy, Ethics and Thinking Copyright © 2020 by Mark Dimmock and Andrew Fisher, Ethics for A-Level. Cambridge, UK: Open Book Publishers, 2017, https://doi.org/10.11647/OBP.0125 is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.


Critical Thinking and Epistemic Injustice, pp. 37–104

Ethics, Education, and Reasoning

  • Alessia Marabini
  • First Online: 23 March 2022


Part of the book series: Contemporary Philosophies and Theories in Education (COPT, volume 20)

Methods based on ‘technical’ skills assessable on objective standards do not suffice to form critical abilities. Following contemporary virtue epistemologists, I will contend that what crucially matters is the acquisition of appropriate sensitivity, mental attitudes and character traits that inform and help students’ epistemic evaluations. I will suggest that this view, when applied to general education, corresponds to a generalisation of Matthew Lipman’s thesis that education in ethics can only take place as an activity that allows for the forming of a sensitivity. This is so because the problem is often not individuating the relevant values or principles, already generally shared, but understanding how these principles and values should be applied in judgements made in specific situations. I will defend, with others, the view that teaching how to think critically comes together with the formation of the character of the fair-minded sophisticated thinker, rather than the mere skillful sophisticated thinker. Nevertheless, my conception of critical thinking also presupposes a genuine transmission of contents, where these contents are thought of as patterns of material inferences shared by a cultural tradition.

  • Critical thinking
  • Education in ethics
  • Epistemic value
  • Virtue and competence
  • Social recognition
  • Rational imagination


The positions of Megan Laverty (2004, quoted in Marabini, 2006, pp. 6–8) and Standish and Thoilliez (2018) are also along similar lines. According to Laverty, one of the contexts that favour ethics education by promoting understanding of the common art of living well—even more than social behaviours instilled from the outside—is philosophical dialogue. An education in ethics through philosophical dialogue means, for Laverty, an education in the art of living a good life that even goes beyond the very concept of ethics as a virtue. Laverty’s article takes its cue from a reflection on the relationship between ethics and philosophical dialogue in relation to three fundamental aspects: research, virtue and love. The sphere of ethics is in turn understood as articulated into further aspects. First, it consists in the possibility of a reasonableness in ethics achieved through rational analysis or discourse; second, in the practice and formation of habits of ethical behaviour through the development of ethical virtue, as an education in dispositions to action; third, in a dimension of the ineffable, in the sense of a transcendence of the egoistic self and an overcoming of the self. It is this third dimension that represents the most interesting aspect of the use of philosophical dialogue in education. It is a question of ethically transforming human subjectivity through what Laverty, in one word, calls love. More than through reasonableness or a predisposition to virtue, philosophical dialogue is an activity that allows an encounter with otherness, and this makes it similar to those activities that have the dual function of resisting individual self-absorption and revealing to the individual how reality appears when contemplation is not mediated by the ego. These activities include, for example, the pleasure of studying, the concentration achieved when reading literature or studying a foreign language, and the aesthetic experience had when observing a work of art or performing manual work. They allow the speaker to go beyond herself. The reference to philosophy in this third sense, as an activity endowed in itself with value within an ethical approach, and not only as a means or tool for achieving goals external to it, gives it a particular formative value (Laverty, 2004; Marabini, 2006). Even aesthetic experience becomes, in this context, an ethical experience that predisposes one to the formation of virtues. What appears particularly important in Laverty’s reflection, however, is that the interest of theories centred on virtues, ethical or aesthetic, lies not only in their capacity as reflections on ethics and human conduct, but also in their providing a foundation for the development of important epistemic theories. Compared to a widespread epistemological tradition, they present the novelty of focusing on aspects of the agent’s character and on the social dimension within an explanation of what knowledge consists in. Not far from these positions are some reflections by Standish and Thoilliez, which start from a partially critical conception of the standardised and formal version of critical thinking. Standish and Thoilliez confer value on judgement in the aesthetic field, understood not as a tool but as a constitutive value in the development of a good life, through the formation of a critical identity within a research community.

For example, in Plato's well-known dialogue Meno , knowledge is compared to the statues of Daedalus, which must be tied down in order to remain firmly in place. Likewise, in knowledge, opinions or beliefs that are true but potentially elusive, because they are subject to objections, are made firm through reasons, which thus constitute a justification .

This paragraph takes up some arguments developed in presentations delivered by Alessia Marabini and Annalisa Cattani ( 2010 ) at the APA Eastern Division Annual Conference, Boston, and by Alessia Marabini ( 2015 ) at the Plato Annual Conference, Seattle, as well as in Sean Moran's doctoral thesis (2011).

Modus tollens is the rule that licenses the inference from 'If p , then q ' and 'not-q ' to 'not-p '; for the material conditional it corresponds to contraposition, that is, the equivalence of 'If p , then q ' and 'If not-q , then not-p '. Safety and sensitivity are not equivalent because subjunctive conditionals do not contrapose: 'If p happened, then q would happen' is not logically equivalent to 'If q didn't happen, then p wouldn't happen'.
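
A compact formal sketch may make the contrast clearer. This is only an illustrative rendering: the box-arrow notation for the subjunctive (counterfactual) conditional and the belief operator B are assumptions of this sketch, since the footnote states the two conditions only in prose.

% minimal LaTeX sketch; requires amsmath and amssymb
\providecommand{\cf}{\mathrel{\Box\!\!\rightarrow}} % Lewis-style subjunctive conditional
\begin{align*}
\text{Sensitivity:}\quad & \neg p \cf \neg Bp && \text{(if $p$ were false, $S$ would not believe $p$)}\\
\text{Safety:}\quad & Bp \cf p && \text{(if $S$ believed $p$, $p$ would be true)}\\
\text{Material conditional:}\quad & (p \to q) \equiv (\neg q \to \neg p) && \text{(contraposition valid)}\\
\text{Subjunctive conditional:}\quad & (p \cf q) \not\equiv (\neg q \cf \neg p) && \text{(contraposition fails)}
\end{align*}

Because the last equivalence fails, Safety is not simply the contrapositive of Sensitivity, and the two conditions can come apart.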

Imposing safety as a necessary condition for knowledge allows us to classify the cases introduced by Gettier, correctly, as situations of non-knowledge . If p means 'It is twelve o'clock' and the method used by S to form the belief that p is to look at the stopped clock, this method does not satisfy Safety. If we consider circumstances very similar to the actual ones, in which S simply looks at the clock a few seconds before noon, we see that in that case the antecedent of Safety is true but the consequent is false.
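
Sketched in the same assumed notation, with w the actual case (S looks at the stopped clock exactly at noon) and w' a very similar case (S looks a few seconds before noon), the failure of Safety can be displayed as follows; the method subscript M is again an assumption of this rendering.

% minimal LaTeX sketch; requires amsmath and amssymb
\providecommand{\cf}{\mathrel{\Box\!\!\rightarrow}} % subjunctive conditional, as in the previous sketch
\begin{align*}
\text{Safety (method-relative):}\quad & B_{M}p \cf p\\
\text{At } w:\quad & B_{M}p \ \text{and}\ p && \text{(the belief is true, but only by luck)}\\
\text{At nearby } w':\quad & B_{M}p \ \text{and}\ \neg p && \text{(the consequent of Safety fails)}
\end{align*}

So the belief formed by looking at the stopped clock is not safe, and with Safety as a necessary condition the case does not count as knowledge.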

“Critical thinking educational models are a diverse lot. Some combine a focus on critical thinking skills with a focus on ‘critical spirit’ or good intellectual ‘dispositions’ which are very much like (if not identical to) intellectual character virtues. For our purposes, it will be helpful to consider an approach that focuses strictly on the development of critical thinking skills or abilities . Let us stipulate that the approach in question is rigorous, demanding competence in complex forms of reasoning across a wide range of different content areas. While satisfying the desideratum of intellectual rigour, there is no guarantee that this approach will be sufficiently personal. The primary concern of a teacher on this model will be whether her students are developing the ability to reason in the relevant ways. She might be unconcerned with whether they are developing a motivation or inclination to think in these ways outside of class. And even if she does have this concern, it will not (as such) be situated within a broader commitment to nurturing the intellectual character of her students, that is, to their becoming more curious, open–minded, fair–minded, intellectually courageous, persevering, and so on. In trying to impart the relevant skills, she might even be oblivious to such considerations” (Baehr, 2013 , p. 252).

After that first approach to CBE in 1992, some new considerations were taken into account, such as the fact that criteria of economic utility might have changed to the point of requiring new kinds of skills, like 'soft' skills. A new set of key competencies was then introduced, which was meant to foster in learners, first, adaptability , like problem solving (which concerns purely cognitive questions); second, a better understanding of emotions (intended as non-cognitive questions); and third, ethics and morality for citizenship education, along with other aspects connected to the individual good life . Nonetheless, according to some critics, rather than trying to resolve the conflict between traditional indicators of competencies and the new demands previously excluded, this operation was carried out without putting our idea of society into question.

According to Rosa, social acceleration is a form of totalitarianism in and of modern society. For Rosa, this kind of totalitarianism should not be understood as the rule of a political dictator, group, class or party; rather, in late-modern society the totalitarian power rests in an abstract principle that nevertheless subjects all who live under its rule (Rosa, 2010 , p. 61).

Paul and Elder (2007a, pp. 6–8) report that the qualities that the National Society of Professional Engineers ( 2003 ) introduces into its Code of Ethics for Engineers characterise the professionalism of the engineer who consciously restricts his professional judgements to those domains in which he is professionally qualified.

In their Critical Thinking handbook (1997), Paul, Binker, Jensen and Kreklau introduce at least seven interdependent traits of mind that teachers should cultivate if they want students to become critical thinkers in the strong sense ( 1997 , p. 381). Among these are Intellectual Humility, Intellectual Courage, Intellectual Empathy, Intellectual Good Faith (Integrity), Intellectual Perseverance, Faith in Reason and Intellectual Sense of Justice . According to the authors, these intellectual traits are interdependent. Consider, for example, Intellectual Humility, described as "Awareness of the limits of one's knowledge, including sensitivity to circumstances in which one's native egocentrism is likely to function self-deceptively; sensitivity to bias and prejudice and limitations of one's viewpoint" (Paul et al., 1997 , p. 381), that is, the trait of mind necessary to become aware of the limits of our knowledge. To exercise it we need the courage to face our own prejudices and ignorance, but we will not make that effort unless we have faith in reason, trusting that we will not be deceived by whatever is false or misleading in the opposing viewpoint, and an intellectual sense of justice. Moreover, we must recognise an intellectual responsibility to be fair to views we oppose: we must feel obliged to hear them in their strongest form to ensure that we are not condemning them out of ignorance or bias on our part. In the end we come back to where we began, the need for Intellectual Humility (Paul et al., 1997 , p. 382). I think that these traits, which are applicable in all domains of knowledge, fit perfectly what should represent, in my view, the only way of giving substance to the aim of fostering a key competence such as the capacity to learn, or learning to learn . Unfortunately, making clear the difference between selfish and fair-minded thought still does not seem to be a priority in schooling today, as the authors had already observed when this work was published in 1997. Though many students develop critical thinking competences, that reasoning is often used to advance selfish aims and a merely prudential ethics.

To this purpose, Byrne shows a case of rational imagination through counterfactual reasoning in the speech made by Martin Luther King a decade after being attacked in 1958: “Martin Luther King Jr. almost died when he was stabbed in 1958. A decade later he made the following remarks during a speech: ‘The tip of the blade was on the edge of my aorta… It came out in The New York Times the next morning that if I had merely sneezed I would have died… And I want to say tonight, I want to say tonight that I too am happy that I didn’t sneeze. Because if I had sneezed, I wouldn’t have been around here in 1960 when students all over the South started sitting in at lunch counters… If I had sneezed, I wouldn’t have been here in 1963 when the black people of Birmingham, Alabama, aroused the conscience of this nation and brought into being the Civil Rights bill… If I had sneezed, I wouldn’t have had the chance later that year in August to try to tell America about a dream that I had had… I’m so happy that I didn’t sneeze’” (Byrne, 2005 ).

In a presentation delivered in December 2010 at the APA Eastern Division Annual Conference (Marabini & Cattani, 2010 ), as part of the ICPIC panel, I argued that the topic of Lisa's dilemma could be understood by taking a cue from some reflections by Williamson on counterfactuals and the problem of knowledge extension (a problem that at the time was given little consideration in the analytic tradition) and from some research by the psychologist Byrne on rational imagination and counterfactual reasoning. On that occasion, I also emphasized that the position I held differed from that of Lipman and Sharp on one point. The two authors warn that epistemic questions concern the application of universal and universally valid laws, while the same cannot be said for ethical questions, because the latter require consideration of circumstances and situations and therefore call for judgements on the premises of the argument. My position was that ethical questions, for this very reason, are similar to epistemic questions, given that an essential problem for the latter is that of explaining how the process of knowledge extension can take place. As with ethical problems, in the epistemic case too a capacity for judgement is required, as research and investigation into the premises of the arguments in which universal values or universally shared ethical principles appear. Judgement constitutes an act of freedom and of the typically human will to submit to the norm.

Adler, J. (2003). Knowledge, truth and learning. In R. Curren (Ed.), A companion to the philosophy of education . Blackwell.

Baehr, J. (2013). Educating from intellectual virtues: From theory to practice. Journal of Philosophy of Education, 47 (2), 248–262.

Bagnoli, C. (2013). Constructivism in ethics . Cambridge University Press.

Beck, J., & Young, M. (2005). The assault on the professions and the restructuring of academic and professional identities: A Bernsteinian analysis. British Journal of Sociology of Education, 26 (2), 183–197.

Byrne, R. (2005). The rational imagination: How people create alternatives to reality . MIT Press.

Byrne, R. (2017). Counterfactual thinking: From logic to morality. Current Directions in Psychological Science, 26 (4), 314–322.

Canto-Sperber, M., & Dupuy, J. P. (2001). Competencies for the good life and the good society. In D. S. Reychen & L. H. Salganik (Eds.), Defining and selecting key competencies . Hogrefe & Huber.

Carr, D. (2007). Character in teaching. British Journal of Educational Studies, 55 (4), 369–389.

Carter, A. J., & Kotzee, B. (2015). Epistemology of Education (Oxford … Bibliographies on–line). Oxford University Press.

Cassam, Q. (2021). Epistemic vices, ideologies, and false consciousness. In M. Hannon & J. de Ridder (Eds.), The Routledge handbook of political epistemology (pp. 301–311). Routledge.

Dreyfus, H. (1999). The primacy of phenomenology over logical analysis. Philosophical Topics, 27 (2), 3–24.

Elgin, C. (1999). Epistemology’s ends, pedagogy’s prospects. Facta Philosophica, 1 , 39–54.

Ennis, R. (1962). A concept of critical thinking. Harvard Educational Review, 32 , 81–111.

Evans, et al. (1993). The mental model theory of conditional reasoning: Critical appraisal and revision. Cognition, 48 (1), 1–20.

Frank, T. (2016). Listen, liberal, or whatever happened to the party of the people? Scribe.

Freire, P. (1970). Pedagogy of the oppressed (M. Ramos, Trans.). Continuum (Original work published 1968).

Gettier, E. L. (1963). Is justified true belief knowledge? Analysis, 23 , 121–123.

Goldman, A. I. (1999). Knowledge in a social world . Clarendon Press.

Goldman, A. I. (2006). Social epistemology, theory of evidence, and intelligent design: Deciding what to teach. The Southern Journal of Philosophy, 24 , 1–22.

Goodhart, D. (2017). The road to somewhere: The new tribes shaping British politics . Penguin Books.

Habermas, J. (1984). The theory of communicative action , vol. 1 . Reason and the rationalization of society (T. McCarthy, Trans.). Polity Press.

Habermas, J. (1989). The theory of communicative action, vol. 2 lifeworld and system. A critique of functionalist reason (T. McCarthy, Trans.). Polity Press.

Honneth, A. (1994). Kampf um Anerkennung: Zur moralischen Grammatik sozialer Konflikte . Suhrkamp.

Honneth, A. (2003). Umverteilung als Anerkennung. Eine Erwiderung auf Nancy Fraser. In N. Fraser & A. Honneth (Eds.), Umverteilung oder Anerkennung? Eine Politischphilosophische Kontroverse (pp. 129–224). Suhrkamp.

Jeffrey, R. (1981). The logic of decision defended. Synthese, 48 , 473–492.

Johnson-Laird, P. N., & Byrne, R. (1991). Deduction . Lawrence Erlbaum Associates.

Kahan, D. (2016). The politically motivated reasoning paradigm, part 1: What politically motivated reasoning is and how to measure it. In R. Scott & S. Kosslyn (Eds.), Emerging trends in the social and Behavioral sciences . John Wiley & Sons.

Kohlberg, L. (1970). Education for justice: A modern statement of the platonic view. In N. F. Sizer & T. R. Sizer (Eds.), Moral education (pp. 71–72). Cambridge University Press.

Kotzee, B. (2012). Expertise, fluency and social realism about professional knowledge. Journal of Education and Work, 27 (2), 161–178.

Kotzee, B. (2013). Introduction: Education, social epistemology and virtue epistemology. Journal of Philosophy of Education, 47 (2).

Kotzee, B. (2014). Educational justice, epistemic justice, and leveling down. Educational Theory, 63 (4), 331–334.

Kotzee, B. (2017). Education and epistemic injustice. In I. J. Kidd, J. Medina, & G. Poilhaus Jr. (Eds.), The Routledge handbook of epistemic injustice . Routledge.

Kvanvig, J. (2003). The value of knowledge and the pursuit of understanding . Cambridge University Press.

Laverty, M. (2004). Philosophical dialogue and ethics. International Journal of Applied Philosophy, 18 (2), 189–201.

Lewis, D. (1981). Causal decision theory. Australasian Journal of Philosophy, 59 (1), 5–30.

Lind, M. (2020). The new class war: Saving democracy from the metropolitan elite . Atlantic Books.

Lipman, M. (1985a). Lisa . IAPC Montclair State University.

Lipman, M. (1985b). Ethical inquiry. An Instructional Manual to Accompany Lisa . IAPC Montclair State University.

Lipman, M. (1991). Thinking in education . Cambridge University Press.

Marabini A. (2006). Arte e significato: un’applicazione della Philosophy for children all’esperienza estetica . Dissertation, Corso di perfezionamento in P4C. University of Padova.

Marabini, A. (2015). Counterfactual thinking and moral judgment in ethical and equity inquiry . Paper presented at Plato Conference 2015 (poster-session) on Ethics and Inquiry, University of Washington, Seattle, 30 June 2015.

Marabini, A., & Cattani, A. (2010). Counterfactual thinking in ethical education . Paper presented at the American Philosophical Association (APA), Eastern Division Annual Conference, IAPC Group session, Boston, USA, 30 December 2010.

McKenna, R. (2021). Asymmetrical irrationality: Are only other people stupid? In M. Hannon & J. de Ridder (Eds.), The Routledge handbook of political epistemology (pp. 285–296). Routledge.

McPeck, J. E. (1984). Critical thinking and education . St Martin's Press.

Medina, J. (2013). The epistemology of resistance: Gender and racial oppression, epistemic injustice, and resistant imaginations . Oxford University Press.

Moran, S. (2011). Virtue epistemology: Some implications for education. Dissertation . Dublin City University.

National Society of Professional Engineers. (2003). Code of ethics for engineers . www.nspe.org/ethics/codeofethics2003.pdf . Reprinted in R. Paul, R. Niewoehner, & L. Elder (2013), Engineering reasoning (2nd ed.). The Foundation for Critical Thinking.

Nozick, R. (1981). Philosophical explanations . Oxford University Press.

OECD. (2019). PISA 2018 assessment and analytical framework, PISA . OECD Publishing. https://doi.org/10.1787/b25efab8-en

Paul, R. (1990). Critical thinking: What every person needs to survive in a rapidly changing world . Rohnert Park, CA.

Paul, R., & Elder, L. (2007a). Critical thinking competency standards . The Foundation for Critical Thinking.

Paul, R., & Elder, L. (2007b). Critical and creative thinking . Washington.

Paul, R., & Elder, L. (2007c). Educational fads. For parents, educators, and concerned citizens. How to get beyond glitz and glitter . The Foundation for Critical Thinking.

Paul, R., & Elder, L. (2008). The thinker’s guide for conscious citizens on how to detect media bias & propaganda in national and world news (4th ed.). The Foundation for Critical Thinking.

Paul, R., & Elder, L. (2014). The miniature guide to critical thinking concepts and tools (7th ed.; 1st ed. 1999). The Foundation for Critical Thinking.

Paul, R., Binker, A. J. A., Jensen, K., & Kreklau, H. (1997). Critical thinking handbook: 4th–6th grades. A guide for remodelling lesson plans in language arts, social studies & science . The Foundation for Critical Thinking.

Paul, R., Niewoehner, R., & Elder, L. (2013). Engineering reasoning (2nd ed.). The Foundation for Critical Thinking. (1st ed. 2007).

Pritchard, D. (2013). Epistemic virtue and the epistemology of education. Journal of Philosophy of Education, 47 (2).

Rosa, H. (2010). Alienation and acceleration: Towards a critical theory of late-modern temporality . NSU Press.

Rovira Kaltwasser, C., et al. (2017). Populism: An overview of the state of the art. In C. Rovira Kaltwasser et al. (Eds.), The Oxford handbook of populism . Oxford University Press.

Rychen, D. S. (2003a). A frame of reference for defining and selecting key competencies in an international context. In D. S. Rychen, L. H. Salganik, & M. E. McLaughlin (Eds.), Definition and selection of key competencies. Contribution to the second DeSeCo symposium (pp. 109–116). Swiss Federal Statistical Office.

Rychen, D. S. (2003b). Key competencies: Meeting important challenges in life. In D. S. Rychen & L. H. Salganik (Eds.), Key competences for a successful life and a well-functioning society (pp. 63–108). Hogrefe & Huber.

Rychen, D. S., & Salganik, L. H. (2003). A holistic model of competence. In D. S. Rychen & L. H. Salganik (Eds.), Key competences for a successful life and a well-functioning society (pp. 41–62). Hogrefe & Huber.

Sandel, M. J. (2020). The tyranny of merit: What's become of the common good? Penguin Books.

Schmitt, F. (2005). What are the aims of education? Episteme , 223–223.

Schroeder, S. (2011). What readers have and do: Effects of students’ verbal ability and reading time components on comprehension with and without text availability. Journal of Educational Psychology, 103 (4), 877–896.

Selinger, E., & Crease, R. (2006). The philosophy of expertise . Columbia University Press.

Siegel, H. (1988). Educating reason. Rationality, critical thinking, and Education . Routledge.

Siegel, H. (2007). Multiculturalism and rationality. Theory and Research in Education, 5 (2), 203–223.

Sosa, E. (1980). The raft and the pyramid: Coherence versus foundations in the theory of knowledge. Midwest Studies in Philosophy, 5 , 3–25. Reprinted in E. Sosa & J. Kim (Eds.) (2008), Epistemology: An anthology (2nd ed., pp. 145–164). Blackwell Publishing.

Sosa, E. (1999). How to defeat opposition to Moore. Philosophical Perspectives, 13 , 141–154.

Sosa, E. (2007). Apt belief and reflective knowledge (A Virtue Epistemology) (Vol. 1). Oxford University Press.

Sosa, E. (2010). How competence matters in epistemology. Philosophical Perspectives, 24 , 465–475.

Stalnaker, R. (1968). A theory of conditionals . Springer.

Standish, P. (2006). The nature and purposes of education. In R. Curren (Ed.), A companion to the philosophy of education . Blackwell Publishing.

Standish, P. (2012). Stanley Cavell in conversation with Paul Standish. Journal of Philosophy of Education, 46 (2), 155–176.

Standish, P. (2016). The disenchantment of education and the re–enchantment of the world. Journal of Philosophy of Education, 50 (1), 98–116.

Standish, P. (2018). Culture, Heritage, and the Humanities . Paper presented at Education and Cultural Heritage conference, University of Padova, Palazzo Bo, 22 June 2018.

Standish, P., & Thoilliez, B. (2018). El pensamiento crítico en crisis: Una reconsideración pedagógica en tres movimientos. Teoría de la Educación: Revista Interuniversitaria, 30 (2), 7–22.

Stanovich, K. (2021). The irrational attempt to impute irrationality to one’s political opponents. In M. Hannon & J. de Ridder (Eds.), The Routledge handbook of political epistemology (pp. 274–284). Routledge.

Stoyanov, K. (2018). Education, self-consciousness and social action: Bildung as a neo-Hegelian concept . Routledge.

Tanesini, A. (2019, July 19). Review of Vices of the Mind. Mind (online version), pp. 1–9. https://()-academic-oup-com.pugwash.lib.warwick.ac.uk/mind/article/doi/10.1093/mind/fzz()44

Taylor, C. (1985). Legitimation crisis? In C. Taylor, Philosophy and the human sciences: Philosophical papers 2 (pp. 248–288). Cambridge University Press.

Taylor, C. (1989). Sources of the self. The making of modern identity . Harvard University Press.

Taylor, C. (2007). A secular age . The Belknap Press of Harvard University Press.

Ventura, R. (2020). Radical choc. Ascesa e caduta dei competenti . Einaudi.

Wajcman, J. (2015). Pressed for time. The acceleration of life in digital capitalism . The University of Chicago Press.

Watson, L. (2016). The epistemology of education. Philosophy Compass, 11 , 146–159.

Wheelahan, L. (2007). How competency-based training locks the working class out of powerful knowledge: A modified Bernsteinian analysis. British Journal of Sociology of Education, 28 (5), 637–651.

Weinert, F. E. (1999). Concepts of competence . Munich, Germany.

Weinert, F. E. (2001). Concepts of competence. A conceptual clarification. In D. S. Rychen & L. H. Salganik (Eds.), Defining and selecting key competencies (pp. 45–65). Hogrefe & Huber Publishers.

Williamson, T. (2007). The philosophy of philosophy . Oxford University Press.

Williamson, T. (2016). Abductive philosophy. The Philosophical Forum, 47 (3–4), 263–280.

Winch, C. (2010). Dimensions of expertise . Continuum.

Young, M., & Muller, J. (2010). Three educational scenarios for the future: Lessons from the sociology of knowledge. European Journal of Education, 45 (1, Part I), 11–27.

Zagzebski, L. (1996). Virtues of the mind . Cambridge University Press.

Author information

Authors and Affiliations

Centre for Knowledge and Society, University of Aberdeen (associate member), Aberdeen, UK

Alessia Marabini


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Marabini, A. (2022). Ethics, Education, and Reasoning. In: Critical Thinking and Epistemic Injustice. Contemporary Philosophies and Theories in Education, vol 20. Springer, Cham. https://doi.org/10.1007/978-3-030-95714-8_3

DOI : https://doi.org/10.1007/978-3-030-95714-8_3

Published : 23 March 2022

Publisher Name : Springer, Cham

Print ISBN : 978-3-030-95713-1

Online ISBN : 978-3-030-95714-8

What Is Critical Thinking? | Definition & Examples

Published on May 30, 2022 by Eoghan Ryan . Revised on May 31, 2023.

Critical thinking is the ability to effectively analyze information and form a judgment .

To think critically, you must be aware of your own biases and assumptions when encountering information, and apply consistent standards when evaluating sources .

Critical thinking skills help you to:

  • Identify credible sources
  • Evaluate and respond to arguments
  • Assess alternative viewpoints
  • Test hypotheses against relevant criteria

Table of contents

  • Why is critical thinking important
  • Critical thinking examples
  • How to think critically
  • Other interesting articles
  • Frequently asked questions about critical thinking

Critical thinking is important for making judgments about sources of information and forming your own arguments. It emphasizes a rational, objective, and self-aware approach that can help you to identify credible sources and strengthen your conclusions.

Critical thinking is important in all disciplines and throughout all stages of the research process . The types of evidence used in the sciences and in the humanities may differ, but critical thinking skills are relevant to both.

In academic writing , critical thinking can help you to determine whether a source:

  • Is free from research bias
  • Provides evidence to support its research findings
  • Considers alternative viewpoints

Outside of academia, critical thinking goes hand in hand with information literacy to help you form opinions rationally and engage independently and critically with popular media.

Critical thinking can help you to identify reliable sources of information that you can cite in your research paper . It can also guide your own research methods and inform your own arguments.

Outside of academia, critical thinking can help you to be aware of both your own and others’ biases and assumptions.

Academic examples

Example: Good critical thinking in an academic context
You read a study that reports positive results for a new treatment. However, when you compare the findings of the study with other current research, you determine that the results seem improbable. You analyze the paper again, consulting the sources it cites.

You notice that the research was funded by the pharmaceutical company that created the treatment. Because of this, you view its results skeptically and determine that more independent research is necessary to confirm or refute them.

Example: Poor critical thinking in an academic context
You're researching a paper on the impact wireless technology has had on developing countries that previously did not have large-scale communications infrastructure. You read an article that seems to confirm your hypothesis: the impact is mainly positive. Rather than evaluating the research methodology, you accept the findings uncritically.

Nonacademic examples

Example: Good critical thinking in a nonacademic context
You read a positive review of a home alarm system. However, you decide to compare this review article with consumer reviews on a different site. You find that these reviews are not as positive. Some customers have had problems installing the alarm, and some have noted that it activates for no apparent reason.

You revisit the original review article. You notice that the words “sponsored content” appear in small print under the article title. Based on this, you conclude that the review is advertising and is therefore not an unbiased source.

Example: Poor critical thinking in a nonacademic context
You support a candidate in an upcoming election. You visit an online news site affiliated with their political party and read an article that criticizes their opponent. The article claims that the opponent is inexperienced in politics. You accept this without evidence, because it fits your preconceptions about the opponent.

There is no single way to think critically. How you engage with information will depend on the type of source you’re using and the information you need.

However, you can engage with sources in a systematic and critical way by asking certain questions when you encounter information. Like the CRAAP test , these questions focus on the currency , relevance , authority , accuracy , and purpose of a source of information.

When encountering information, ask:

  • Who is the author? Are they an expert in their field?
  • What do they say? Is their argument clear? Can you summarize it?
  • When did they say this? Is the source current?
  • Where is the information published? Is it an academic article? Is it peer-reviewed ?
  • Why did the author publish it? What is their motivation?
  • How do they make their argument? Is it backed up by evidence? Does it rely on opinion, speculation, or appeals to emotion ? Do they address alternative arguments?

Critical thinking also involves being aware of your own biases, not only those of others. When you make an argument or draw your own conclusions, you can ask similar questions about your own writing:

  • Am I only considering evidence that supports my preconceptions?
  • Is my argument expressed clearly and backed up with credible sources?
  • Would I be convinced by this argument coming from someone else?

If you want to know more about ChatGPT, AI tools , citation , and plagiarism , make sure to check out some of our other articles with explanations and examples.

  • ChatGPT vs human editor
  • ChatGPT citations
  • Is ChatGPT trustworthy?
  • Using ChatGPT for your studies
  • What is ChatGPT?
  • Chicago style
  • Paraphrasing

 Plagiarism

  • Types of plagiarism
  • Self-plagiarism
  • Avoiding plagiarism
  • Academic integrity
  • Consequences of plagiarism
  • Common knowledge

Critical thinking refers to the ability to evaluate information and to be aware of biases or assumptions, including your own.

Like information literacy , it involves evaluating arguments, identifying and solving problems in an objective and systematic way, and clearly communicating your ideas.

Critical thinking skills include the ability to:

  • Identify credible sources
  • Evaluate and respond to arguments
  • Assess alternative viewpoints
  • Test hypotheses against relevant criteria

You can assess information and arguments critically by asking certain questions about the source. You can use the CRAAP test , focusing on the currency , relevance , authority , accuracy , and purpose of a source of information.

Ask questions such as:

  • Who is the author? Are they an expert?
  • How do they make their argument? Is it backed up by evidence?

A credible source should pass the CRAAP test  and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.

Information literacy refers to a broad range of skills, including the ability to find, evaluate, and use sources of information effectively.

Being information literate means that you:

  • Know how to find credible sources
  • Use relevant sources to inform your research
  • Understand what constitutes plagiarism
  • Know how to cite your sources correctly

Confirmation bias is the tendency to search, interpret, and recall information in a way that aligns with our pre-existing values, opinions, or beliefs. It refers to the ability to recollect information best when it amplifies what we already believe. Relatedly, we tend to forget information that contradicts our opinions.

Although selective recall is a component of confirmation bias, it should not be confused with recall bias.

On the other hand, recall bias refers to the differences in the ability between study participants to recall past events when self-reporting is used. This difference in accuracy or completeness of recollection is not related to beliefs or opinions. Rather, recall bias relates to other factors, such as the length of the recall period, age, and the characteristics of the disease under investigation.

Cite this Scribbr article

Ryan, E. (2023, May 31). What Is Critical Thinking? | Definition & Examples. Scribbr. Retrieved April 3, 2024, from https://www.scribbr.com/working-with-sources/critical-thinking/


The Connection between Critical Thinking and Ethics: Unraveling the Link

The connection between critical thinking and ethics is a significant one, as both concepts play crucial roles in decision-making and problem-solving. Critical thinking is the process of evaluating and analyzing information to reach well-founded conclusions, while ethics involves the principles and standards that guide our behavior.

Both critical thinking and ethics are closely related, as the former enables individuals to discern between right and wrong, fact and fiction, and develop a deeper understanding of complex issues. By using critical thinking skills, individuals can approach ethical dilemmas from diverse perspectives and make informed decisions based on logic and reason. This relationship is essential in helping us navigate the world around us and make morally responsible choices.

Developing a strong sense of critical thinking and ethical awareness is crucial for individuals to become responsible citizens and decision-makers. When applied together, these skills allow people to engage in a balanced and rational examination of various ethical issues, thereby promoting fair judgment and responsible action within their personal and professional lives.

The Concepts of Critical Thinking and Ethics

Defining Critical Thinking

Critical thinking is a widely accepted educational goal, but its definition is contested. According to the Stanford Encyclopedia of Philosophy , critical thinking can be understood as careful thinking directed toward a goal. It involves the ability to analyze information, identify biases, and evaluate the credibility of sources, ultimately leading to better decision-making.

Defining Ethics

Ethics, on the other hand, deals with moral principles that govern an individual’s or group’s behavior. It is the study of what is right or wrong and how we should act in various situations. Ethical reasoning involves the application of moral values to guide actions and decisions in various contexts.

When combined, critical thinking and ethics create a powerful framework for evaluating multiple perspectives and making informed choices. The development of these skills is crucial for both personal and professional growth. The interconnected nature of these concepts is crucial in understanding their relevance in various aspects of life.

The Importance of Critical Thinking in Ethical Decision-Making

Critical thinking plays a vital role in ethical decision-making by providing the tools needed to carefully evaluate situations, examine various perspectives, and make informed choices that align with personal and professional values.

Recognizing Ethical Issues

Utilizing critical thinking skills enables individuals to examine situations from multiple perspectives , identify potential issues and risks, and recognize ethical dilemmas by reducing the impact of cognitive biases. By analyzing situations and questioning assumptions, critical thinking allows people to recognize ethical problems that may otherwise go unnoticed.

Evaluating Ethical Choices

In the process of ethical decision-making, critical thinking plays a key role in evaluating the strengths and weaknesses of each available option. It helps individuals to determine whether or not something is right or wrong , taking into account both facts and values. This evaluation helps in understanding the basis of one’s beliefs and decisions and in considering alternative solutions before making choices.

Implementing Ethical Solutions

Once ethical issues have been recognized and evaluated, critical thinking aids in the implementation of ethical solutions. It allows individuals to weigh the consequences of each action, taking into account the possible impacts on stakeholders and the broader society. Critical thinking promotes conscious, well-informed decisions that are in line with personal and professional beliefs, ensuring that the chosen solutions consider all possible outcomes and adhere to the principles of ethical decision-making.

Developing Critical Thinking and Ethical Reasoning Skills

The connection between critical thinking and ethics is crucial in fostering a questioning mindset and promoting reasoned decision-making. To develop these skills, we will discuss the role of education and curricula, as well as exercises and practice in cultivating ethical critical thinkers.

Education and Curricula

Integrating ethical reasoning into education is an essential step for promoting fairminded critical thinking. Teachers must be aware of the risk of inadvertently fostering sophistic critical thinking if ethics are not addressed in the curriculum (source) . To develop students’ ethical understanding, educators should use inquiry-based learning methods, which encourage a questioning mindset (source) .

By incorporating ethical reasoning into educational programs, students learn to operationalize their reflective questioning skills as the basis for ethical decision-making. Understanding the various ethical frameworks and perspectives can help students think critically and make well-reasoned decisions in complex situations (source) .

Exercises and Practice

Regular practice in dealing with ethical dilemmas and ambiguities allows students to strengthen their critical thinking and ethical reasoning skills. Exercises that ask students to define problems, examine evidence, analyze assumptions and biases, and consider other interpretations are effective in fostering critical thinking (source) .

Using real-life ethical dilemmas in activities not only helps students to engage with the material but also promotes reasoning and justification skills vital for informed citizenship (source) . Some exercises that can be utilized include:

  • Debates on controversial ethical issues
  • Case studies and role-playing exercises
  • Reflection papers and group discussions
  • Analysis of ethical dilemmas in various scenarios

Frequent practice of these activities will help students develop a strong foundation in critical thinking and ethical reasoning, enabling them to make well-informed decisions in their personal and professional lives.

Challenges and Limitations

Cognitive Biases

One of the challenges of integrating critical thinking with ethics is addressing cognitive biases. Cognitive biases can cloud our judgment and affect our ability to make ethical decisions. Developing critical thinking skills can help reveal these biases and enable us to make more objective decisions.

Emotional Influence

Another challenge in connecting critical thinking and ethics is emotional influence. Our emotions can significantly impact our ability to think critically and ethically. Emotional experiences may lead to hasty decisions without considering ethical implications. To overcome this challenge, individuals must learn to balance emotion and reason, allowing them to maintain a neutral perspective during decision-making processes.

Cultural Differences

Lastly, cultural differences can create a barrier when trying to foster the relationship between critical thinking and ethics. Different cultures often have unique ethical values and practices, making it difficult to establish universal ethical principles. Understanding and respecting these differences is crucial in mitigating the potential for miscommunication and ethical conflicts.

By addressing challenges such as cognitive biases, emotional influence, and cultural differences, individuals can further strengthen the connection between critical thinking and ethics. This integration is essential for making informed, ethical decisions within diverse global communities.

In conclusion, the relationship between critical thinking and ethics is a fundamental aspect of how we make decisions in our daily lives. Critical thinking allows us to see the world from different perspectives and to make ethical decisions based on our understanding and analysis of facts.

As we develop our critical thinking skills, we become better at distinguishing right from wrong , which in turn helps us navigate complex ethical situations. This process involves analyzing and observing our own biases and beliefs, as well as evaluating the facts at hand.

The importance of this relationship cannot be overstated, as it influences the choices we make and their impact on ourselves and others. Developing a strong foundation in critical thinking not only allows us to make informed and ethical decisions but also contributes to a deeper understanding of the world and the various perspectives within it.

EthicsBowl.org

International Hub for all things Ethics Bowl

Critical Thinking and Ethics

I’d like to thank Matt Deaton for introducing me to Ethics Bowl at this year’s American Philosophical Association (APA) Eastern Division conference. 

Given my own mission to help students (of any age) develop their critical-thinking skills (through books like Critical Thinking from MIT Press and my LogicCheck site that uses the news of the day to teach critical-thinking techniques) I’m drawn to situations where facts alone cannot provide answers on what to do.

In situations when we have to decide what to do in the future, we can’t fact-check things that haven’t happened yet, but we can argue over which choice to make. We can also never know with certainty what is going on inside other people’s heads, which requires us to argue over motives and motivations, rather than claim to know them without doubt.  

Similarly, only the most trivial ethical dilemmas can be resolved by appealing to facts of the matter.  For the kind of complex dilemmas we face in the real world, such as those students grapple with when they participate in Ethics Bowl, we need to argue things out.  And arguing well is what you learn by studying critical thinking.

With that in mind, I was inspired to start a series over at LogicCheck that applies different critical-thinking principles to specific cases in this year's Ethics Bowl national case set . The first looks at how the ability to peer through persuasive language (commonly referred to as rhetoric) can expose wording that might presuppose an answer to a problem. A second piece shows how hidden premises , statements implied but not stated in arguments, often contain the most important points we need to discuss.

I hope to continue this series by looking at other cases in light of the critical-thinker’s toolkit that involves skills such as controlling for bias and media and information literacy.  In each of these postings, I will endeavor to introduce students to productive ways of thinking about ethical issues and avoid telling them what to think about them.

So thanks again to Matt for letting me post here at his Ethics Bowl site.  Thanks as well to everyone involved with this fantastic program, and to all the students and teachers participating in it.

Happy deliberating!

~Jonathan Haber~

Critical thinking definition

Critical thinking, as described by Oxford Languages, is the objective analysis and evaluation of an issue in order to form a judgement.

The active and skillful evaluation, assessment and synthesis of information obtained from, or generated by, observation, knowledge, reflection, acumen or conversation, as a guide to belief and action, requires the critical thinking process, which is why it is often used in education and academia.

Some even may view it as a backbone of modern thought.

However, it's a skill, and skills must be trained and encouraged to be used to their full potential.

People turn to various approaches to improve their critical thinking, such as:

  • Developing technical and problem-solving skills
  • Engaging in more active listening
  • Actively questioning their assumptions and beliefs
  • Seeking out more diversity of thought
  • Opening up their intellectual curiosity, etc.

Is critical thinking useful in writing?

Critical thinking can help in planning your paper and making it more concise, but its role is not obvious at first. We have carefully pinpointed some of the questions you should ask yourself when boosting critical thinking in your writing:

  • What information should be included?
  • Which information resources should the author look to?
  • What degree of technical knowledge should the report assume its audience has?
  • What is the most effective way to show information?
  • How should the report be organized?
  • How should it be designed?
  • What tone and level of language difficulty should the document have?

Using critical thinking is not only about outlining your paper; it also raises the question: how can we use critical thinking to solve problems related to our writing's topic?

Let's say you have a PowerPoint presentation on how critical thinking can reduce poverty in the United States. You'll first have to define critical thinking for the viewers, and then use plenty of critical thinking questions and synonyms so that they become familiar with your methods and the thinking process behind them.

Are there any services that can help me use more critical thinking?

We understand that it's difficult to learn how to use critical thinking more effectively in just one article, but our service is here to help.

We are a team specializing in writing essays and other assignments for college students and all other types of customers who need a helping hand in its making. We cover a great range of topics, offer perfect quality work, always deliver on time and aim to leave our customers completely satisfied with what they ordered.

The ordering process is fully online, and it goes as follows:

  • Select the topic and the deadline of your essay.
  • Provide us with any details, requirements, statements that should be emphasized or particular parts of the essay writing process you struggle with.
  • Leave the email address, where your completed order will be sent to.
  • Select your preferred payment type, sit back and relax!

With lots of experience on the market, professionally degreed essay writers , online 24/7 customer support and incredibly low prices, you won't find a service offering a better deal than ours.

Difference Between Thinking and Critical Thinking

Thinking vs. Critical Thinking

The Two Think Tanks: Thinking and Critical Thinking

Every human being is capable of thinking, but some say that few are able to practice critical thinking. What’s the difference?

Thinking is the mental process, the act and the ability to produce thoughts. People think about almost everything and anything. They often think of people, things, places, and anything without a reason or as a result of a trigger of a stimulus. Meanwhile, critical thinking often means “thinking about thinking.” In a sense, it is a deeper form of thinking about a particular issue or situation before actually deciding and acting.

In any given situation, thinking is an action that requires the person to form a thought about that situation. Any thought can be formed, even without facts or evidence. When critical thinking is applied, the mind is open to all considerations, assumptions, and details before actually forming a thought or an opinion. A person who is a critical thinker regards the subject itself and all its aspects, like the methods of collecting facts or the motivation behind said facts. A person who employs critical thinking often adds the question “why” to “who, what, where, and when” in a particular situation.

To illustrate, imagine a person at a bookstore. This person can pick out a book and think that the book is good upon first impression. A critical thinker, by contrast, would open the book, read some passages, and read about the author before actually deciding whether to buy it. The customer might also wonder about the title or why the author chose to write this particular piece of literature.

A thinker may accept facts or realities based on faith alone and without examination and analysis of the issue. These facts or realities are often perceived as “truth” and cannot be criticized or modified. In this situation, there is no need for evidence or the effort to produce it and its examination.

Critical thinking is the opposite of all of this. It often requires a lot of time, questions, and considerations. It also involves a longer process before arriving at a conclusion or decision.

Individuals who apply critical thinking are often open-minded and mindful of alternatives. They try to be well informed and do not jump to conclusions. Critical thinkers know and identify conclusions, reasons, and assumptions. They use clarifying and probing questions in order to formulate their reasonable situations and arguments. They often try to integrate all items in the situation and then draw conclusions with reason and caution. They also have good judgment on the credibility of sources and the quality of an argument, aside from developing and defending their stand. If asked, these people can clearly articulate their argument with all its strengths and weaknesses.

Critical thinking is an on-going process and activity. This skill is learned through active practice and constant use. Exposure to controversial issues and thought-provoking situations stimulates the mind to utilize this skill, which is then applied upon careful examination of an issue or situation. Meanwhile, thinking can be done in an instant without any given proof and/or justification.

Critical thinking requires logic and accuracy, while thinking sometimes occurs in the form of faith and personal opinion. The former requires evidence and further actions of examination and analysis, while the latter does not. It’s up to you to think and decide.

  • Both thinking and critical thinking are mental processes.
  • Thinking can be classified as an action, while critical thinking can be said to be a skill.
  • Critical thinking is used with caution, while thinking can be spontaneous.
  • A critical thinker is able to identify the main contention in an issue, look for evidence that supports or opposes that contention, and assess the strength of the reasoning, while a thinker may base their belief solely on faith or personal opinion.

Cite:
APA 7: Franscisco. (2017, June 30). Difference Between Thinking and Critical Thinking. Difference Between Similar Terms and Objects. http://www.differencebetween.net/science/nature/difference-between-thinking-and-critical-thinking/
MLA 8: Franscisco. "Difference Between Thinking and Critical Thinking." Difference Between Similar Terms and Objects, 30 June, 2017, http://www.differencebetween.net/science/nature/difference-between-thinking-and-critical-thinking/.

Thank you very much, this was a discussion question and the information was too closely related to find a significant difference.

As I was reading this article I kind of think I’m a critical thinker. When my boyfriend tells me thing about his day I’m not going to lie I try and ask why did that happen. Or I say strange that happened in order to get him to tell me more things. Just the other day we were out with our friends and Jose one of our friends was telling us how one of there friend is different ever since he got his promotion at work and Jose was like that foo needs to chill I’m not going talk about our wild nights and I was like oh yeah like which ones. I was trying to get him to talk but then our other friend pointed it out and was like umm look at Brenda thinking we really do have wild nights. I tend to always ask why is it done that way or could it have ever crossed there mind that they can do it this way.

Thx for the article, it's very easy to understand

COMMENTS

  1. How Is Critical Thinking Different From Ethical Thinking?

    Ethical thinking and critical thinking are both important and it helps to understand how we need to use them together to make decisions. Critical thinking helps us narrow our choices. Ethical thinking includes values as a filter to guide us to a choice that is ethical. Using critical thinking, we may discover an opportunity to exploit a ...

  2. Critical Thinking

    Critical thinking is a widely accepted educational goal. Its definition is contested, but the competing definitions can be understood as differing conceptions of the same basic concept: careful thinking directed to a goal. Conceptions differ with respect to the scope of such thinking, the type of goal, the criteria and norms for thinking ...

  3. Bridging critical thinking and transformative learning: The role of

    In recent decades, approaches to critical thinking have generally taken a practical turn, pivoting away from more abstract accounts - such as emphasizing the logical relations that hold between statements (Ennis, 1964) - and moving toward an emphasis on belief and action. According to the definition that Robert Ennis (2018) has been advocating for the last few decades, critical thinking is ...

  4. Critical Thinking

    Critical Theory refers to a way of doing philosophy that involves a moral critique of culture. A "critical" theory, in this sense, is a theory that attempts to disprove or discredit a widely held or influential idea or way of thinking in society. Thus, critical race theorists and critical gender theorists offer critiques of traditional ...

  5. Critical Thinking, Creativity, Ethical Reasoning: A Unity of ...

    8.5 Thinking Beyond the Opposites: Toward a Better and More Humane World. Critical, creative, and ethical thinking working together are intellectually more powerful than any one of these forms in isolation. This is especially obvious if one contemplates the opposites of any of the three combined with the other two.

  6. PDF Critical Thinking: Ethical Reasoning and Fairminded Thinking, Part I

    cal capacities; and integrate ethical understandings with critical thinking skills, abilities, and traits. There are many reasons why students lack ethical reasoning abilities. For example, most students (and indeed most people) confuse ethics with behaving in accordance with social conventions, religious beliefs, and the law.

  7. 10.1: Ethics vs. Morality

    Etc. On this conception, the ethical encompasses the moral and political because ethical questions are questions about the good life and what we ought to do, whereas moral questions are about what we ought to do to and with one another. It's important to note, though, that this isn't an authoritative way to draw the distinction.

  8. Ethical Thinking

    To avoid such false conclusions in thinking and arguing is one reason for critical thinkers to concern themselves with ethics. Especially because the difference between these kinds of validity claims often remains unspoken in the actual practice of discussion. ... reflection categories of applied ethics can promote critical reflection and expand ...

  9. Critical thinking

    Critical thinking is the analysis of available facts, evidence, observations, and arguments in order to form a judgement by the application of rational, skeptical, and unbiased analyses and evaluation. The application of critical thinking includes self-directed, self-disciplined, self-monitored, and self-corrective habits of the mind, thus a critical thinker is a person who practices the ...

  10. Teaching Ethics and Critical Thinking in Contemporary Schools

    The relationship between Ethics and Critical Thinking can also be established if we think that the construction of ethical thinking involves the cultivation of rational thinking since the ...

  11. Moral Reasoning

    This article takes up moral reasoning as a species of practical reasoning - that is, as a type of reasoning directed towards deciding what to do and, when successful, issuing in an intention (see entry on practical reason). Of course, we also reason theoretically about what morality requires of us; but the ...

  12. PDF Ethics and Critical Thinking

    In addition to prices and markets, this definition encompasses duty and commitment in economic life (Sen, 1977) and the study of cognitive mechanisms that are not consciously controlled, yet likely play a role in economic activity.

  13. Philosophy, Ethics and Thinking

    This might seem unlikely at first glance. After all, Ethics deals with issues of right and wrong, and we have been discussing "what is right" and "what is wrong" since we were children. Philosophy of Mind, on the other hand, deals with topics like the nature of consciousness, while Metaphysics deals with the nature of existence itself.

  14. Ethics, Education, and Reasoning

    3.4.3.1 Fair-Minded Critical Thinking vs Selfish Critical Thinking in Education to Ethics. Unfortunately—as remarked by Paul, Binker, Jensen and Kreklau—the mere conscious will to do good does not remove the prejudices that affect our perceptions. ... it is possible to clarify and reach conclusions. What makes the difference, however, is ...

  15. Developing critical thinking and ethical global engagement in ...

    Self-awareness and metacognitive skills are necessary for ethical global engagement. Self-awareness and the ability to think about our own thinking are key to developing critical thinking and ethical reasoning in students. Specifically, the ability to recognise and separate one's personal biases or self-interests is important for making ...

  16. What Is Critical Thinking?

    Critical thinking is the ability to effectively analyze information and form a judgment. To think critically, you must be aware of your own biases and assumptions when encountering information, and apply consistent standards when evaluating sources. Critical thinking skills help you to: Identify credible sources. Evaluate and respond to arguments.

  17. Ethics

    The term ethics may refer to the philosophical study of the concepts of moral right and wrong and moral good and bad, to any philosophical theory of what is morally right and wrong or morally good and bad, and to any system or code of moral rules, principles, or values. The last may be associated with particular religions, cultures, professions, or virtually any other group that is at least ...

  18. Critical Thinking and Ethics-Critical Thinking Secrets

    Critical thinking is the process of evaluating and analyzing information to reach well-founded conclusions, while ethics involves the principles and standards that guide our behavior. Both critical thinking and ethics are closely related, as the former enables individuals to discern between right and wrong, fact and fiction, and develop a ...

  19. Critical Thinking and Ethics

    And arguing well is what you learn by studying critical thinking. With that in mind, I was inspired to start a series over at LogicCheck that applies different critical-thinking principles to specific cases in this year's Ethics Bowl national case set . The first looks at how the ability to peer through persuasive language (commonly referred ...

  20. Full article: Critical Thinking Activities and the Enhancement of

    This article explores how critical thinking activities and assignments can function to enhance students' ethical awareness and sense of civic responsibility. Employing Levinas's Other-centered theory of ethics, Burke's notion of 'the paradox of substance', and Murray's concept of 'a rhetoric of disruption', this article explores the ...

  21. Adapting 'Ethics Bowl' Strategies for Teaching Introductory Ethics

    Teaching introductory ethics course or courses with strong ethics content to first- and second-year undergraduate students presents numerous challenges. Most students register for these courses to meet a general education requirement or believe they do not need education in ethics because they have received cultural, social, and religious ...

  22. Using Critical Thinking in Essays and other Assignments

    Critical thinking, as described by Oxford Languages, is the objective analysis and evaluation of an issue in order to form a judgement. Active and skillful approach, evaluation, assessment, synthesis, and/or evaluation of information obtained from, or made by, observation, knowledge, reflection, acumen or conversation, as a guide to belief and action, requires the critical thinking process ...

  23. Difference Between Thinking and Critical Thinking

    Thinking can be classified as an action, while critical thinking can be said to be a skill. Critical thinking is used with caution, while thinking can be spontaneous. A critical thinker is able to identify the main contention in an issue, look for evidence that supports or opposes that contention, and assess the strength of the reasoning, while ...

  24. Difference Between Critical Thinking And Ethics

    Critical Thinking and Ethics. Critical thinking is being able to analyze and evaluate what you learn or read using the critical thinking process. The critical thinking process consists of six steps; they are remembering, understanding, applying, analyzing, evaluating, and creating. The first step is remembering, with remembering you ...