Ethical Issues in Research: Perceptions of Researchers, Research Ethics Board Members and Research Ethics Experts

Marie-Josée Drolet

1 Department of Occupational Therapy (OT), Université du Québec à Trois-Rivières (UQTR), Trois-Rivières (Québec), Canada

Eugénie Rose-Derouin

2 Bachelor OT program, Université du Québec à Trois-Rivières (UQTR), Trois-Rivières (Québec), Canada

Julie-Claude Leblanc

Mélanie Ruest, Bryn Williams-Jones

3 Department of Social and Preventive Medicine, School of Public Health, Université de Montréal, Montréal (Québec), Canada

Abstract

In the context of academic research, a diversity of ethical issues arises, conditioned by the different roles of members within academic institutions. Previous studies on this topic have mainly addressed the perceptions of researchers. To our knowledge, however, no study has explored these transversal ethical issues from a wider spectrum that includes other members of academic institutions, such as research ethics board (REB) members and research ethics experts. The present study used a descriptive phenomenological approach to document the ethical issues experienced by a heterogeneous group of Canadian researchers, REB members, and research ethics experts. Data collection involved socio-demographic questionnaires and individual semi-structured interviews. Following the triangulation of these different perspectives (researchers, REB members and research ethics experts), the emerging ethical issues were synthesized into ten units of meaning: (1) research integrity, (2) conflicts of interest, (3) respect for research participants, (4) lack of supervision and power imbalances, (5) individualism and performance, (6) inadequate ethical guidance, (7) social injustices, (8) distributive injustices, (9) epistemic injustices, and (10) ethical distress. This study highlights several problematic elements that can support the identification of future solutions to transversal ethical issues in research that affect the heterogeneous members of the academic community.

Introduction

Research includes a set of activities in which researchers use various structured methods to contribute to the development of knowledge, whether this knowledge is theoretical, fundamental, or applied (Drolet & Ruest, accepted). University research is carried out in a highly competitive environment characterized by ever-increasing demands on time and productivity, insufficient access to research funds, and a market economy that values productivity and speed, often to the detriment of quality or rigour. This context creates a perfect recipe for breaches in research ethics, such as research misbehaviour or misconduct (i.e., conduct that is ethically questionable or unacceptable because it contravenes the accepted norms of responsible conduct of research or compromises core ethical values that are widely held by the research community) (Drolet & Girard, 2020; Sieber, 2004). Problematic ethics and integrity issues – e.g., conflicts of interest, falsification of data, non-respect of participants’ rights, and plagiarism, to name but a few – have the potential both to undermine the credibility of research and to lead to negative consequences for many stakeholders, including researchers, research assistants and personnel, research participants, academic institutions, and society as a whole (Drolet & Girard, 2020). It is thus evident that the academic community should be able to identify these different ethical issues in order to evaluate the nature of the risks that they pose (and for whom), and then work towards their prevention or management (i.e., through education, enhanced policies and procedures, and risk mitigation strategies).

In this article, we define an “ethical issue” as any situation that may compromise, in whole or in part, the respect of at least one moral value (Swisher et al., 2005) that is considered socially legitimate and should thus be respected. In general, ethical issues occur at three key moments or stages of the research process: (1) research design (i.e., conception, project planning), (2) research conduct (i.e., data collection, data analysis) and (3) knowledge translation or communication (e.g., publication of results, conferences, press releases) (Drolet & Ruest, accepted). According to Sieber (2004), ethical issues in research can be classified into five categories, related to: (a) communication with participants and the community, (b) acquisition and use of research data, (c) external influence on research, (d) risks and benefits of the research, and (e) selection and use of research theories and methods. Many of these issues are related to breaches of research ethics norms, misbehaviour or research misconduct. Bruhn et al. (2002) developed a typology of misbehaviour and misconduct in academia that can be used to judge the seriousness of different cases. This typology takes into consideration two axes of reflection: (a) the origin of the situation (i.e., is it the researcher’s own fault or due to the organizational context?), and (b) the scope and severity (i.e., is this the first instance or a recurrent behaviour? What is the nature of the situation? What are the consequences, for whom, for how many people, and for which organizations?).

A previous detailed review of the international literature on ethical issues in research revealed several interesting findings (Beauchemin et al., 2021). Indeed, the current literature is dominated by descriptive ethics, i.e., the sharing by researchers from various disciplines of the ethical issues they have personally experienced. While such anecdotal documentation is relevant, it is insufficient because it does not provide a global view of the situation. Among the reviewed literature, empirical studies were in the minority (Table 1) – only about one fifth of the sample (n = 19) presented empirical research findings on ethical issues in research. The first of these studies was conducted almost four decades ago (Hunt et al., 1984), with the remainder conducted since the 1990s. Eight studies were conducted in the United States (n = 8), five in Canada (n = 5), three in England (n = 3), two in Sweden (n = 2) and one in Ghana (n = 1).

Table 1. Summary of Empirical Studies on Ethical Issues in Research, by Year of Publication

Further, the majority of studies in our sample (n = 12) collected the perceptions of a homogeneous group of participants, usually researchers (n = 14) and sometimes health professionals (n = 6). A minority of studies (n = 7) triangulated the perceptions of diverse research stakeholders (i.e., researchers and research participants, or students). To our knowledge, only one study has examined the perceptions of research ethics board (REB) members (the equivalent of Institutional Review Boards [IRBs] in the USA), and none to date have documented the perceptions of research ethics experts. Finally, nine studies (n = 9) adopted a qualitative design, seven (n = 7) a quantitative design, and three (n = 3) a mixed-methods design.

More studies using empirical research methods are needed to better identify broader trends, to enrich discussions on the values that should govern responsible conduct of research in the academic community, and to evaluate the means by which these values can be supported in practice (Bahn, 2012; Beauchemin et al., 2021; Bruhn et al., 2002; Henderson et al., 2013; Resnik & Elliot, 2016; Sieber, 2004). To this end, we conducted an empirical qualitative study to document the perceptions and experiences of a heterogeneous group of Canadian researchers, REB members, and research ethics experts, and to answer the following broad question: What are the ethical issues in research?

Research Methods

Research Design

A qualitative research approach involving individual semi-structured interviews was used to systematically document ethical issues (DePoy & Gitlin, 2010; Hammell et al., 2000). Specifically, a descriptive phenomenological approach inspired by the philosophy of Husserl was used (Husserl, 1970, 1999), as it is recommended for documenting the perceptions of ethical issues raised by various practices (Hunt & Carnevale, 2011).

Ethical considerations

The principal investigator obtained ethics approval for this project from the Research Ethics Board of the Université du Québec à Trois-Rivières (UQTR). All members of the research team signed a confidentiality agreement, and research participants signed the consent form after reading an information letter explaining the nature of the research project.

Sampling and recruitment

As indicated above, three types of participants were sought: (1) researchers from different academic disciplines conducting research (i.e., theoretical, fundamental or empirical) in Canadian universities; (2) REB members working in Canadian organizations responsible for the ethical review, oversight or regulation of research; and (3) research ethics experts, i.e., academics or ethicists who teach research ethics, conduct research in research ethics, or have acquired a specialization in research ethics. To be included in the study, participants had to work in Canada, speak and understand English or French, and be willing to participate. Thomas and Pollio (2002) recommend recruiting between six and twelve participants to ensure data saturation in a homogeneous sample; given our heterogeneous sample, we aimed to recruit approximately twelve participants. In our experience with this method in related projects in professional ethics, data saturation is usually achieved with 10 to 15 participants (Drolet & Goulet, 2018; Drolet & Girard, 2020; Drolet et al., 2020), and larger samples mainly serve to increase the degree of data saturation, especially in heterogeneous samples (Drolet et al., 2017, 2019; Drolet & Maclure, 2016).

Purposive sampling facilitated the identification of participants relevant to documenting the phenomenon in question (Fortin, 2010). To ensure a rich and complete representation of perceptions, we sought participants with varied and complementary characteristics with regard to the social roles they occupy in research practice (Drolet & Girard, 2020). A triangulation of sources was used for recruitment (Bogdan & Biklen, 2006). The websites of Canadian universities and Canadian health institution REBs, as well as those of the major Canadian granting agencies (i.e., the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, and the Fonds de recherche du Québec), were searched to identify individuals who might be interested in participating in the study. Further, people known by the research team for their knowledge of and sensitivity to ethical issues in research were asked to participate. Research participants were also asked to suggest other individuals who met the study criteria.

Data Collection

Two tools were used for data collection: (a) a socio-demographic questionnaire, and (b) a semi-structured individual interview guide. English and French versions of both documents were made available, depending on participant preferences. Although the interview guides contained the same questions, these were adapted to participants’ specific roles (i.e., researcher, REB member, research ethics expert). When contacted by email by the research assistant, participants were asked to confirm the role under which they wished to participate (because some participants had multiple, overlapping responsibilities), and they were then sent the appropriate interview guide.

The interview guides each had two parts: an introduction and a section on ethical issues. The introduction consisted of general questions to put the participant at ease (e.g., “Tell me what a typical day at work is like for you”). The section on ethical issues was designed to capture the participant’s perceptions through questions such as: “Tell me three stories you have experienced at work that involve an ethical issue” and “Do you feel that your organization is doing enough to address, manage, and resolve ethical issues in your work?”. Although some interviews were conducted in person, the majority were conducted by videoconference, both to promote accessibility and because of the COVID-19 pandemic. Interviews were digitally recorded so that they could be transcribed in full; they varied between 40 and 120 min in duration, with an average of 90 min. Research assistants conducted the interviews and transcribed the recordings.

Data Analysis

The socio-demographic questionnaires were subjected to simple descriptive statistical analyses (i.e., means and totals), and the semi-structured interviews were subjected to qualitative analysis. The steps proposed by Giorgi (1997) for a Husserlian phenomenological reduction of the data were followed. After the interviews were collected, recorded, and transcribed, all transcripts were analyzed by at least two analysts: a research assistant (2nd author of this article) and either the principal investigator (1st author) or a postdoctoral fellow (3rd author). Repeated reading of the transcripts allowed the first analyst to write a synopsis, i.e., an initial extraction of units of meaning. The second analyst then read the synopses, which were commented on and improved as necessary. Agreement between analysts allowed the final drafting of the interview synopses, which were then analyzed by three analysts to generate and organize the units of meaning that emerged from the qualitative data.
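As an illustration of the kind of simple descriptive analysis applied to the socio-demographic questionnaires, the following minimal Python sketch computes means and role counts over toy records. The field names and values are hypothetical and do not come from the study’s data or analysis scripts; the sketch only shows the form such a computation could take.

```python
# Illustrative sketch only: hypothetical field names and toy records,
# not the study's actual dataset or analysis scripts.
from collections import Counter
from statistics import mean

# Each record mirrors one socio-demographic questionnaire (toy values).
participants = [
    {"age": 47, "years_experience": 14, "roles": ["researcher", "REB member"]},
    {"age": 35, "years_experience": 6, "roles": ["research assistant"]},
    {"age": 61, "years_experience": 26, "roles": ["research ethics expert"]},
]

# Simple descriptive statistics (means and totals).
print(f"n = {len(participants)}")
print(f"Mean age: {mean(p['age'] for p in participants):.1f} years")
print(f"Mean experience: {mean(p['years_experience'] for p in participants):.1f} years")

# Role totals can exceed n because some participants hold several
# overlapping roles, as in the study (e.g., REB member and researcher).
role_counts = Counter(role for p in participants for role in p["roles"])
for role, count in role_counts.most_common():
    print(f"{role}: n = {count}")
```

Counting roles separately from participants reflects how the study reports them: sixteen participants, but role counts (REB member, researcher, expert, assistant) that sum to more than sixteen because of overlapping responsibilities.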

Participants

Sixteen individuals (n = 16) participated in the study, of whom nine (9) identified as female and seven (7) as male (Table 2). Participants ranged in age from 22 to 72 years, with a mean age of 47.5 years. Participants had between one (1) and 26 years of experience in the research setting, with an average of 14.3 years of experience. Participants held a variety of roles, including: REB member (n = 11), researcher (n = 10), research ethics expert (n = 4), and research assistant (n = 1). As mentioned previously, seven (7) participants held more than one role, e.g., REB member, research ethics expert, and researcher. The majority of participants (87.5%) were working in Quebec, with the remainder working in other Canadian provinces. Although all participants considered themselves to be francophone, one quarter (n = 4) identified as belonging to a cultural minority group.

Table 2. Description of Participants

With respect to their academic background, most participants (n = 9) had a PhD, three (3) had a post-doctorate, two (2) had a master’s degree, and two (2) had a bachelor’s degree. Participants came from a variety of disciplines: nine (9) had a specialty in the humanities or social sciences, four (4) in the health sciences and three (3) in the natural sciences. In terms of their knowledge of ethics, five (5) participants reported having taken one university course entirely dedicated to ethics, four (4) reported having taken several university courses entirely dedicated to ethics, three (3) had a university degree dedicated to ethics, while two (2) only had a few hours or days of training in ethics and two (2) reported having no knowledge of ethics.

Ethical issues

As Fig. 1 illustrates, ten units of meaning emerged from the data analysis, namely: (1) research integrity, (2) conflicts of interest, (3) respect for research participants, (4) lack of supervision and power imbalances, (5) individualism and performance, (6) inadequate ethical guidance, (7) social injustices, (8) distributive injustices, (9) epistemic injustices, and (10) ethical distress. To illustrate the results, excerpts from the interviews are presented in the following sub-sections. Most of the excerpts have been translated into English, as the majority of interviews were conducted with French-speaking participants.

Fig. 1. Ethical issues in research according to the participants

Research Integrity

The research environment is highly competitive and performance-based. Several participants, in particular researchers and research ethics experts, felt that this environment can lead both researchers and research teams to engage in unethical behaviour that reflects a lack of research integrity. For example, as some participants indicated, competition for grants and scientific publications is sometimes so intense that researchers falsify research results or plagiarize from colleagues to achieve their goals.

Some people will lie or exaggerate their research findings in order to get funding. Then, you see it afterwards, you realize: “ah well, it didn’t work, but they exaggerated what they found and what they did” (participant 14).

Another problem in research is the identification of authors when there is a publication. Very often, there are authors who don’t even know what the publication is about and that their name is on it. (…) The time that it surprised me the most was just a few months ago when I saw someone I knew who applied for a teaching position. He got it, and I was super happy for him. Then I looked at his publications and … there was one that caught my attention much more than the others, because I was in it and I didn’t know what that publication was. I was the second author of a publication that I had never read (participant 14).

I saw a colleague who had plagiarized another colleague. [When the colleague] found out about it, he complained. So, plagiarism is a serious [ethical breach]. I would also say that there is a certain amount of competition in the university faculties, especially for grants (…). There are people who want to win at all costs or get as much as possible. They are not necessarily going to consider their colleagues. They don’t have much of a collegial spirit (participant 10).

These examples of research misbehaviour or misconduct are sometimes due to or associated with situations of conflicts of interest, which may be poorly managed by certain researchers or research teams, as noted by many participants.

Conflict of interest

The actors and institutions involved in research, like all individuals and institutions, have diverse interests. As noted in Chap. 7 of the Canadian Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS2, 2018):

“researchers and research students hold trust relationships, either directly or indirectly, with participants, research sponsors, institutions, their professional bodies and society. These trust relationships can be put at risk by conflicts of interest that may compromise independence, objectivity or ethical duties of loyalty. Although the potential for such conflicts has always existed, pressures on researchers (i.e., to delay or withhold dissemination of research outcomes or to use inappropriate recruitment strategies) heighten concerns that conflicts of interest may affect ethical behaviour” (p. 92).

The sources of these conflicts are varied and can include interpersonal conflicts, financial partnerships, third-party pressures, academic or economic interests, a researcher holding multiple roles within an institution, or any other incentive that may compromise a researcher’s independence, integrity, and neutrality (TCPS2, 2018). While it is not possible to eliminate all conflicts of interest, it is important to manage them properly and to avoid temptations to behave unethically.

Ethical temptations correspond to situations in which people are tempted to prioritize their own interests to the detriment of the ethical goods that should, in their own context, govern their actions (Swisher et al., 2005). In the case of researchers, this refers to situations that undermine independence, integrity, neutrality, or even the set of principles that govern research ethics (TCPS2, 2018) or the responsible conduct of research. According to study participants, these types of ethical issues frequently occur in research. Many participants, especially researchers and REB members, reported that conflicts of interest can arise when members of an organization make decisions to obtain large financial rewards or to increase their academic profile, often at the expense of the interests of members of their research team, research participants, or even the populations affected by their research.

A company that puts money into making its drug work wants its drug to work. So, homeopathy is a good example, because there are not really any consequences of homeopathy, there are not very many side effects, because there are no effects at all. So, it’s not dangerous, but it’s not a good treatment either. But some people will want to make it work. And that’s a big issue when you’re sitting at a table and there are eight researchers, and there are two or three who are like that, and then there are four others who are neutral, and I say to myself, this is not science. I think that this is a very big ethical issue (participant 14).

There are also times in some research where there will be more links with pharmaceutical companies. Obviously, there are then large amounts of money that will be very interesting for the health-care institutions because they still receive money for clinical trials. They’re still getting some compensation because it’s time consuming for the people involved and all that. The pharmaceutical companies have money, so they will compensate, and that is sometimes interesting for the institutions, and since we are a bit caught up in this, in the sense that we have no choice but to accept it. (…) It may not be the best research in the world, there may be a lot of side effects due to the drugs, but it’s good to accept it, we’re going to be part of the clinical trial (participant 3).

It is integrity, what we believe should be done or said. Often by the pressure of the environment, integrity is in tension with the pressures of the environment, so it takes resistance, it takes courage in research. (…) There were all the debates there about the problems of research that was funded and then the companies kept control over what was written. That was really troubling for a lot of researchers (participant 5).

Further, these situations sometimes have negative consequences for research participants, as reported by some study participants.

Respect for research participants

Many research projects, whether they are psychosocial or biomedical in nature, involve human participants. Relationships between the members of research teams and their research participants raise ethical issues that can be complex. Research projects must always be designed to respect the rights and interests of research participants, and not just those of researchers. However, participants in our study – i.e., REB members, researchers, and research ethics experts – noted that some research teams seem to put their own interests ahead of those of research participants. They also emphasized the importance of ensuring the respect, well-being, and safety of research participants. The ethical issues related to this unit of meaning are: respect for free, informed and ongoing consent of research participants; respect for and the well-being of participants; data protection and confidentiality; over-solicitation of participants; ownership of the data collected on participants; the sometimes high cost of scientific innovations and their accessibility; balance between the social benefits of research and the risks to participants (particularly in terms of safety); balance between collective well-being (development of knowledge) and the individual rights of participants; exploitation of participants; paternalism when working with populations in vulnerable situations; and the social acceptability of certain types of research. The following excerpts present some of these issues.

Where it disturbs me ethically is in the medical field – because it’s more in the medical field that we’re going to see this – when consent forms are presented to patients to solicit them as participants, and then [these forms] have an average of 40 pages. That annoys me. When they say that it has to be easy to understand and all that, adapted to the language, and then the hyper-technical language plus there are 40 pages to read, I don’t understand how you’re going to get informed consent after reading 40 pages. (…) For me, it doesn’t work. I read them to evaluate them and I have a certain level of education and experience in ethics, and there are times when I don’t understand anything (participant 2).

There is a lot of pressure from researchers who want to recruit research participants (…). The idea that when you enter a health care institution, you become a potential research participant, when you say “yes to a research, you check yes to all research”, then everyone can ask you. I think that researchers really have this fantasy of saying to themselves: “as soon as people walk through the door of our institution, they become potential participants with whom we can communicate and get them involved in all projects”. There’s a kind of idea that, yes, it can be done, but it has to be somewhat supervised to avoid over-solicitation (…). Researchers are very interested in facilitating recruitment and making it more fluid, but perhaps to the detriment of confidentiality, privacy, and respect; sometimes that’s what it is, to think about what type of data you’re going to have in your bank of potential participants? Is it just name and phone number or are you getting into more sensitive information? (participant 9).

In addition, one participant reported that their university does not provide the resources required to respect the confidentiality of research participants.

The issue is as follows: researchers, of course, commit to protecting data with passwords and all that, but we realize that in practice, it is more difficult. It is not always as protected as one might think, because professor-researchers will run out of space. Will the universities make rooms available to researchers, places where they can store these things, especially when they have paper documentation, and is there indeed a guarantee of confidentiality? Some researchers have told me: “Listen; there are even filing cabinets in the corridors”. So, that certainly poses a concrete challenge. How do we go about challenging the administrative authorities? Tell them it’s all very well to have an ethics committee, but you have to help us, you also have to make sure that the necessary infrastructures are in place so that what we are proposing is really put into practice (participant 4).

Just as relationships with research participants can raise ethical issues, so too can relationships with students, notably research assistants. On this topic, several participants discussed the lack of supervision or recognition offered to research assistants by researchers, as well as the power imbalances between members of the research team.

Lack of Supervision and Power Imbalances

Many research teams are composed not only of researchers, but also of students who work as research assistants. The relationship between research assistants and other members of research teams can sometimes be problematic and raise ethical issues, particularly because of the inevitable power asymmetries. In the context of this study, several participants – including a research assistant, REB members, and researchers – discussed the lack of supervision or recognition of the work carried out by students, psychological pressure, and the more or less well-founded promises that are sometimes made to students. Participants also mentioned the exploitation of students by certain research teams, which manifests when students are inadequately paid, i.e., paid an amount that does not reflect the number of hours actually worked, paid an unfair wage, or not paid at all.

[As a research assistant], it was more of a feeling of distress that I felt then because I didn’t know what to do. (…) I was supposed to get coaching or be supported, but I didn’t get anything in the end. It was like, “fix it by yourself”. (…) All research assistants were supposed to be supervised, but in practice they were not (participant 1).

Very often, we have a master’s or doctoral student that we put on a subject and we consider that the project will be well done, while the student is learning. So, it happens that the student will do a lot of work and then we realize that the work is poorly done, and it is not necessarily the student’s fault. He wasn’t necessarily well supervised. There are directors who have 25 students, and they just don’t supervise them (participant 14).

I think it’s really the power relationship. I thought to myself, how I saw my doctorate, the beginning of my research career, I really wanted to be in that laboratory, but they are the ones who are going to accept me or not, so what do I do to be accepted? I finally accept their conditions [which was to work for free]. If these are the conditions that are required to enter this lab, I want to go there. So, what do I do, well I accepted. It doesn’t make sense, but I tell myself that I’m still privileged, because I don’t have so many financial worries, one more reason to work for free, even though it doesn’t make sense (participant 1).

In research, we have research assistants. (…). The fact of using people… so that’s it, you have to take into account where they are, respect them, but at the same time they have to show that they are there for the research. In English, we say “carry” or take care of people. With research assistants, this is often a problem that I have observed: for grant machines, the person is the last to be found there. Researchers, who will take, use student data, without giving them the recognition for it (participant 5).

The problem at our university is that they reserve funding for Canadian students. The doctoral clientele in my field is mostly foreign students. So, our students are poorly funded. I saw one student end up in the shelter, in a situation of poverty. It ended very badly for him because he lacked financial resources. Once you get into that dynamic, it’s very hard to get out. I was made aware of it because the director at the time had taken him under her wing and wanted to try to find a way to get him out of it. So, most of my students didn’t get funded (participant 16).

There I wrote “manipulation”, but it’s kind of all promises all the time. I, for example, was promised a lot of advancement, like when I got into the lab as a graduate student, it was said that I had an interest in [this particular area of research]. I think there are a lot of graduate students who must have gone through that, but it is like, “Well, your CV has to be really good, if you want to do a lot of things and big things. If you do this, if you do this research contract, the next year you could be the coordinator of this part of the lab and supervise this person, get more contracts, be paid more. Let’s say: you’ll be invited to go to this conference, this big event”. They were always dangling something, but you have to do that first to get there. But now, when you’ve done that, you have to do this business. It’s like a bit of manipulation, I think. That was very hard to know who is telling the truth and who is not (participant 1).

These ethical issues have significant negative consequences for students. Indeed, they sometimes find themselves at the mercy of researchers, for whom they work, struggling to be recognized and included as authors of an article, for example, or to receive the salary that they are due. For their part, researchers also sometimes find themselves trapped in research structures that can negatively affect their well-being. As many participants reported, researchers work in organizations that set very high productivity standards and in highly competitive contexts, all within a general culture characterized by individualism.

Individualism and performance

Participants, especially researchers, discussed the culture of individualism and performance that characterizes the academic environment. In glorifying excellence, some universities value performance and productivity, often at the expense of psychological well-being and work-life balance (i.e., work overload and burnout). Participants noted that there are ethical silences in their organizations on this issue, and that the culture of individualism and performance is not challenged for fear of retribution or simply to survive, i.e., to perform as expected. Participants felt that this culture can have a significant negative impact on the quality of the research conducted, as research teams try to maximize the quantity of their work (instead of quality) in a highly competitive context, which is then exacerbated by a lack of resources and support, and where everything must be done too quickly.

The work-life balance with the professional ethics related to work in a context where you have too much and you have to do a lot, it is difficult to balance all that and there is a lot of pressure to perform. If you don’t produce enough, that’s it; after that, you can’t get any more funds, so that puts pressure on you to do more and more and more (participant 3).

There is a culture, I don’t know where it comes from, and that is extremely bureaucratic. If you dare to raise something, you’re going to have many, many problems. They’re going to make you understand it. So, I don’t talk. It is better: your life will be easier. I think there are times when you have to talk (…) because there are going to be irreparable consequences. (…) I’m not talking about a climate of terror, because that’s exaggerated, it’s not true, people are not afraid. But people close their office door and say nothing because it’s going to make their work impossible and they’re not going to lose their job, they’re not going to lose money, but researchers need time to be focused, so they close their office door and say nothing (participant 16).

Researchers must produce more and more, yet they feel little support regarding how to achieve such production ethically, or regarding how much exactly they are expected to produce. As this participant reports, the expectation is an unspoken rule: more is always better.

It’s sometimes the lack of a clear line on what the expectations are as a researcher, like, “ah, we don’t have any specific expectations, but produce, produce, produce, produce.” So, in that context, it’s hard to be able to put the line precisely: “have I done enough for my work?” (participant 3).

Inadequate Ethical Guidance

While the productivity expectation is not clear, some participants – including researchers, research ethics experts, and REB members – also felt that the ethical expectations of some REBs were unclear. The issue of the inadequate ethical guidance of research includes the administrative mechanisms to ensure that research projects respect the principles of research ethics. According to those participants, the forms required for both researchers and REB members are increasingly long and numerous, and one participant noted that the standards to be met are sometimes outdated and disconnected from the reality of the field. Multicentre ethics review (by several REBs) was also critiqued by a participant as an inefficient method that encumbers the processes for reviewing research projects. Bureaucratization imposes an ever-increasing number of forms and ethics guidelines that actually hinder researchers’ ethical reflection on the issues at stake, leading the ethics review process to be perceived as purely bureaucratic in nature.

The ethical dimension and the ethical review of projects have become increasingly bureaucratized. (…) When I first started working (…) it was less bureaucratic, less strict then. I would say [there are now] tons of forms to fill out. Of course, we can’t do without it, it’s one of the ways of marking out ethics and ensuring that there are ethical considerations in research, but I wonder if it hasn’t become too bureaucratized, so that it’s become a kind of technical reflex to fill out these forms, and I don’t know if people really do ethical reflection as such anymore (participant 10).

The fundamental structural issue, I would say, is the mismatch between the normative requirements and the real risks posed by the research, i.e., we have many, many requirements to meet; we have very long forms to fill out but the research projects we evaluate often pose few risks (participant 8).

People [in vulnerable situations] were previously unable to participate because of overly strict research ethics rules that were to protect them, but in the end [these rules] did not protect them. There was a perverse effect, because in the end there was very little research done with these people and that’s why we have very few results, very little evidence [to support practices with these populations] so it didn’t improve the quality of services. (…) We all understand that we have to be careful with that, but when the research is not too risky, we say to ourselves that it would be good because for once a researcher who is interested in that population, because it is not a very popular population, it would be interesting to have results, but often we are blocked by the norms, and then we can’t accept [the project] (participant 2).

Moreover, as one participant noted, accessing ethics training can be a challenge.

There is no course on research ethics. […] Then, I find that it’s boring because you go through university and you come to do your research and you know how to do quantitative and qualitative research, but all the research ethics, where do you get this? I don’t really know (participant 13).

Yet, such training could provide relevant tools to resolve, to some extent, the ethical issues that commonly arise in research. That said, and as noted by many participants, many ethical issues in research are related to social injustices over which research actors have little influence.

Social Injustices

For many participants, notably researchers, the issues that concern social injustices are those related to power asymmetries, stigma, or issues of equity, diversity, and inclusion, i.e., social injustices related to people’s identities (Blais & Drolet, 2022). Participants reported experiencing or witnessing discrimination from peers, administration, or lab managers. Such oppression is sometimes intersectional, relating to a person’s age, cultural background, gender or social status.

I have my African colleague who was quite successful when he arrived but had a backlash from colleagues in the department. I think it’s unconscious, nobody is overtly racist. But I have a young person right now who is the same, who has the same success, who got exactly the same early career award and I don’t see the same backlash. He’s just as happy with what he’s doing. It’s normal, they’re young and they have a lot of success starting out. So, I think there is discrimination. Is it because he is African? Is it because he is black? I think it’s on a subconscious level (participant 16).

Social injustices were experienced or reported by many participants, and included issues related to difficulties in obtaining grants or disseminating research results in one’s native language (i.e., even where there is official bilingualism), or to being considered credible and fundable when the researcher is a woman.

If you do international research, there are things you can’t talk about (…). It is really a barrier to research to not be able to (…) address this question [i.e. the question of inequalities between men and women]. Women’s inequality is going to be addressed [but not within the country where the research takes place as if this inequality exists elsewhere but not here]. There are a lot of women working on inequality issues, doing work and it’s funny because I was talking to a young woman who works at Cairo University and she said to me: “Listen, I saw what you had written, you’re right. I’m willing to work on this but guarantee me a position at your university with a ticket to go”. So yes, there are still many barriers [for women in research] (participant 16).

Because of the varied contextual factors involved in their occurrence, these social injustices are also related to distributive injustices, as discussed by many participants.

Distributive Injustices

Although there are several views of distributive justice, a classical definition such as that of Aristotle (2012) describes distributive justice as consisting in distributing honours, wealth, and other social resources or benefits among the members of a community in proportion to their alleged merit. Justice, then, is about determining an equitable distribution of common goods. Contemporary theories of distributive justice are numerous and varied. Indeed, many authors (e.g., Fraser, 2011; Mills, 2017; Sen, 2011; Young, 2011) have, since Rawls (1971), proposed different visions of how social burdens and benefits should be shared within a community to ensure equal respect, fairness, and distribution. In our study, what emerges from participants’ narratives is a definite concern for this type of justice. Women researchers, francophone researchers, early career researchers and researchers belonging to racialized groups all discussed inequities in the distribution of research grants and awards, and the extra work they need to do to prove their worth. These inequities are related to how granting agencies determine which projects will be funded.

These situations make me work 2–3 times harder to prove myself and to show people in power that I have a place as a woman in research (participant 12).

Number one: it’s conservative thinking. The older ones control what comes in. So, the younger people have to adapt or they don’t get funded (participant 14).

Whether it is discrimination against stigmatized or marginalized populations or interest in certain hot topics, granting agencies judge research projects according to criteria that are sometimes questionable, according to these participants. Faced with difficulties in obtaining funding for their projects, researchers use several strategies – some of which are unethical – to cope with these situations.

Sometimes there are subjects that everyone goes to, such as nanotechnology (…), artificial intelligence or (…) the therapeutic use of cannabis, which are very fashionable, and this is sometimes to the detriment of other research that is just as relevant, but which is (…), less sexy, less in the spirit of the time. (…) Sometimes this can lead to inequities in the funding of certain research sectors (participant 9).

When we use our funds, we get them given to us, we pretty much say what we think we’re going to do with them, but things change… So, when these things change, sometimes it’s an ethical decision, but by force of circumstances I’m obliged to change the project a little bit (…). Is it ethical to make these changes or should I just let the money go because I couldn’t use it the way I said I would? (participant 3).

Moreover, these distributive injustices are linked not only to social injustices, but also to epistemic injustices. Indeed, the way in which research honours and grants are distributed within the academic community depends on the epistemic authority of researchers, which seems to vary notably according to the language they work in, their age and their gender, but also according to the research design used (inductive versus deductive), their decision to use (or not use) animals in research, or their choice to conduct activist research.

Epistemic injustices

The philosopher Fricker (2007) conceptualized the notions of epistemic justice and injustice. Epistemic injustice refers to a form of social inequality that manifests itself in the access to, recognition of, and production of knowledge, as well as in the various forms of ignorance that arise (Godrie & Dos Santos, 2017). Addressing epistemic injustice necessitates acknowledging the iniquitous wrongs suffered by certain groups of socially stigmatized individuals who have been excluded from knowledge, thus limiting their abilities to interpret, understand, or be heard and account for their experiences. In this study, epistemic injustices were experienced or reported by some participants, notably those related to difficulties in obtaining grants or disseminating research results in one’s native language (i.e., even where there is official bilingualism), or to being considered credible and fundable when the researcher is a woman or an early career researcher.

I have never sent a grant application to the federal government in English. I have always done it in French, even though I know that when you receive the review, you can see that reviewers didn’t understand anything because they are English-speaking. I didn’t want to get in the boat. It’s not my job to translate, because let’s be honest, I’m not as good in English as I am in French. So, I do them in my first language, which is the language I’m most used to. Then, technically at the administrative level, they are supposed to be able to do it, but they are not good in French. (…) Then, it’s a very big Canadian ethical issue, because basically there are technically two official languages, but Canada is not a bilingual country, it’s a country with two languages, either one or the other. (…) So I was not funded (participant 14).

Researchers who use inductive (or qualitative) methods observed that their projects are sometimes less well reviewed or understood, while research that adopts a hypothetico-deductive (or quantitative) or mixed-methods design is better perceived, considered more credible, and therefore more easily funded. Of course, regardless of whether a research project adopts an inductive, deductive or mixed-methods scientific design, and whether it deals with qualitative or quantitative data, it must respect a set of scientific criteria. A research project should achieve its objectives by using proven methods that, in the case of inductive research, are credible, reliable, and transferable or, in the case of deductive research, generalizable, objective, representative, and valid (Drolet & Ruest, accepted). Participants discussing these issues noted that researchers who adopt a qualitative design, who question the relevance of animal experimentation, or who are not militant have sometimes been unfairly devalued in their epistemic authority.

There is a mini war between quantitative versus qualitative methods, which I think is silly because science is a method. If you apply the method well, it doesn’t matter what the field is, it’s done well and it’s perfect (participant 14).

There is also the issue of the place of animals in our lives, because for me, ethics is human ethics, but also animal ethics. Then, there is a great evolution in society on the role of the animal… with the new law that came out in Quebec on the fact that animals are sensitive beings. Then, with the rise of the vegan movement, [we must ask ourselves]: “Do animals still have a place in research?” That’s a big question and it also means that there are practices that need to evolve, but sometimes there’s a disconnection between what’s expected by research ethics boards versus what’s expected in the field (participant 15).

In research today, we have more and more research that is militant from an ideological point of view. And so, we have researchers, because they defend values that seem important to them, we’ll talk for example about the fight for equality and social justice. They have pressure to defend a form of moral truth and have the impression that everyone thinks like them or should do so, because they are defending a moral truth. This is something that we see more and more, namely the lack of distance between ideology and science (participant 8).

The combination or intersectionality of these inequities, compounded by a lack of ethical support and guidance, is experienced in the highly competitive and individualistic context of research; it therefore provides the perfect recipe for researchers to experience ethical distress.

Ethical distress

The concept of “ethical distress” refers to situations in which people know what they should do to act ethically, but encounter barriers, generally of an organizational or systemic nature, that limit their power to act according to their moral or ethical values (Drolet & Ruest, 2021; Jameton, 1984; Swisher et al., 2005). People then run the risk of not acting as their ethical conscience dictates, which in the long term carries the potential for exhaustion and distress. The examples reported by participants in this study point to the fact that researchers in particular may be experiencing significant ethical distress. This distress takes place in a context of extreme competition and constant injunctions to perform, where administrative demands are increasingly numerous and complex, while paradoxically researchers lack the time to accomplish all their tasks and responsibilities. Added to these demands are a lack of resources (human, ethical, and financial), a lack of support and recognition, and interpersonal conflicts.

We are in an environment, an elite one, you are part of it, you know what it is: “publish or perish” is the motto. Grants, there is a high level of performance required, to do a lot, to publish, to supervise students, to supervise them well, so yes, it is clear that we are in an environment that is conducive to distress. (…). Overwork, definitely, can lead to distress and eventually to exhaustion. When you know that you should take the time to read the projects before sharing them, but you don’t have the time to do that because you have eight that came in the same day, and then you have others waiting… Then someone rings a bell and says: “ah but there, the protocol is a bit incomplete”. Oh yes, look at that, you’re right. You make up for it, but at the same time it’s a bit because we’re in a hurry, we don’t necessarily have the resources or are able to take the time to do things well from the start, we have to make up for it later. So yes, it can cause distress (participant 9).

My organization wanted me to apply in English, and I said no, and everyone in the administration wanted me to apply in English, and I always said no. Some people said: “Listen, I give you the choice”, then some people said: “Listen, I agree with you, but if you’re not [submitting] in English, you won’t be funded”. Then the fact that I am young too, because very often they will look at the CV, they will not look at the project: “ah, his CV is not impressive, we will not finance him”. This is complete nonsense. The person is capable of doing the project, the project is fabulous: we fund the project. So, that happened, organizational barriers: that happened a lot. I was not eligible for Quebec research funds (…). I had big organizational barriers unfortunately (participant 14).

At the time of my promotion, some colleagues were not happy with the type of research I was conducting. I learned – you learn this over time when you become friends with people after you enter the university – that someone was against me. He had another candidate in mind, and he was angry about the selection. I was under pressure for the first three years until my contract was renewed. I almost quit at one point, but another colleague told me, “No, stay, nothing will happen”. Nothing happened, but these issues kept me awake at night (participant 16).

This difficult context for many researchers affects not only the conduct of their own research, but also their participation in research. We faced this problem in our study, despite the use of multiple recruitment methods, including more than 200 emails – of which 191 were individual solicitations – sent to potential participants by the two research assistants. REB members and organizations overseeing or supporting research (n = 17) were also approached to see if some of their employees would consider participating. While it was relatively easy to recruit REB members and research ethics experts, our team received a high number of non-responses to emails (n = 175) and some refusals (n = 5), especially by researchers. The reasons given by those who replied were threefold: (a) fear of being easily identified should they take part in the research, (b) being overloaded and lacking time, and (c) the intrusive aspect of certain questions (i.e., “Have you experienced a burnout episode? If so, have you been followed up medically or psychologically?”). In light of these difficulties and concerns, some questions in the socio-demographic questionnaire were removed or modified. Talking about burnout in research remains a taboo for many researchers, which paradoxically can only contribute to the unresolved problem of unhealthy research environments.

Returning to the research question and objective

The question that prompted this research was: What are the ethical issues in research? The purpose of the study was to describe these issues from the perspective of researchers (from different disciplines), research ethics board (REB) members, and research ethics experts. The previous section provided a detailed portrait of the ethical issues experienced by different research stakeholders: these issues are numerous, diverse and were recounted by a range of stakeholders.

The results of the study are generally consistent with the literature. For example, as in our study, the literature discusses the lack of research integrity on the part of some researchers (Al-Hidabi et al., 2018; Swazey et al., 1993), the numerous conflicts of interest experienced in research (Williams-Jones et al., 2013), the issues of recruiting and obtaining the free and informed consent of research participants (Provencher et al., 2014; Keogh & Daly, 2009), the sometimes difficult relations between researchers and REBs (Drolet & Girard, 2020), the epistemological issues experienced in research (Drolet & Ruest, accepted; Sieber, 2004), as well as the harmful academic context in which researchers evolve, insofar as this is linked to a culture of performance and an overload of work in a context of accountability (Berg & Seeber, 2016; FQPPU, 2019) that is conducive to ethical distress and even burnout.

While the results of the study are generally in line with those of previous publications on the subject, our findings also bring new elements to the discussion and complement those already documented. In particular, our results highlight the role of systemic injustices – be they social, distributive or epistemic – within the environments in which research is carried out, at least in Canada. To summarize, the results of our study point to the fact that relationships between researchers and research participants are still likely to raise worrying ethical issues, despite widely accepted research ethics norms and institutionalized review processes. Further, the context in which research is carried out is not only conducive to breaches of ethical norms and instances of misbehaviour or misconduct, but is also likely to be significantly detrimental to the health and well-being of researchers and research assistants. Another element that our research highlighted is the instrumentalization and even exploitation of students and research assistants, which is another important and worrying social injustice, given the inevitable power imbalances between students and researchers.

Moreover, in a context in which ethical issues are often discussed from a micro perspective, our study helps shed light on both the micro- and macro-level ethical dimensions of research (Bronfenbrenner, 1979; Glaser, 1994). However, given that ethical issues in research are not only diverse, but also and above all complex, a broader perspective that encompasses the interplay between the micro and macro dimensions can enable a better understanding of these issues and thereby support the identification of the multiple factors that may be at their origin. Triangulating the perspectives of researchers with those of REB members and research ethics experts enabled us to bring these elements to light, and thus to step back from and critique the way that research is currently conducted. To this end, attention to socio-political elements, such as the performance culture in academia or how research funds are distributed and according to what explicit and implicit criteria, can contribute to identifying the sources of the ethical issues described above.

Contemporary culture characterized by social acceleration

The German sociologist and philosopher Rosa (2010) argues that late modernity – that is, the period between the 1980s and today – is characterized by a phenomenon of social acceleration that causes various forms of alienation in our relationship to time, space, actions, things, others and ourselves. Rosa distinguishes three types of acceleration: technical acceleration, the acceleration of social changes, and the acceleration of the rhythm of life. According to Rosa, social acceleration is the main problem of late modernity, in that the invisible social norm of doing more and faster to supposedly save time operates unchallenged at all levels of individual, collective, organizational and social life. Although we all, researchers and non-researchers alike, perceive this unspoken pressure to be ever more productive, the process of social acceleration as a new invisible social norm is our blind spot, a kind of tyrant over which we have little control. This conceptualization of contemporary culture can help us understand the context in which research is conducted (like other professional practices). To this end, Berg & Seeber (2016) invite faculty researchers to slow down in order to better reflect and, in the process, take care of their health and their relationships with their colleagues and students. Many women professors encourage their fellow researchers, especially young women researchers, to learn to “say No” in order to protect their mental and physical health and to remain in their academic careers (Allaire & Deschenaux, 2022). These authors also remind us of the relevance of Kahneman’s (2012) work, which demonstrates that it takes time to think analytically, thoroughly, and logically. Conversely, thinking quickly exposes humans to cognitive and implicit biases that lead to errors in thinking (e.g., in the analysis of one’s own research data or in the evaluation of grant applications or student curriculum vitae). The phenomenon of social acceleration, which pushes researchers to think ever faster, is thus likely to lead to unethical, poor-quality science that can ultimately harm humankind. In sum, Rosa’s invitation to contemporary critical theorists to take the problem of social acceleration seriously is particularly insightful for understanding the ethical issues of research. It provides a lens through which to view the toxic context in which research is conducted today, one that was shared by the participants in our study.

As Clark & Sousa (2022) note, it is important that criteria other than the volume of researchers’ contributions be valued in research, notably quality. Ultimately, it is the value of the knowledge produced and its influence on the concrete lives of humans and other living beings that matters, not the quantity of publications. An interesting articulation of this view in research governance is a change in practice by Australia’s national health research funder: researchers are now restricted to listing on their curriculum vitae only their top ten publications from the past ten years (rather than all of their publications), so that the quality of contributions is evaluated rather than their quantity. To create environments conducive to the development of quality research, it is important to challenge the phenomenon of social acceleration, which insidiously imposes a quantitative normativity that is both alienating and detrimental to the quality and ethical conduct of research. Based on our experience, we observe that the social norm of acceleration actively disfavours the conduct of empirical research on ethics in research: researchers are so busy that it is almost impossible for them to find time to participate in such studies. Further, operating in highly competitive environments while trying to respect the values and ethical principles of research creates ethical paradoxes for members of the research community. According to Malherbe (1999), an ethical paradox is a situation where an individual is confronted with contradictory injunctions (i.e., do more, faster, and better). Eventually, ethical paradoxes lead individuals to distress and burnout, or even to ethical failures (i.e., misbehaviour or misconduct), in the face of the impossibility of responding to contradictory injunctions.

Strengths and Limitations of the Study

The triangulation of the perceptions and experiences of different actors involved in research is a strength of our study. While there are many studies on the experiences of researchers, members of REBs and experts in research ethics are rarely given the space to discuss their views on the ethical issues in research. Giving each of these stakeholders a voice and comparing their different points of view helped shed a different and complementary light on the ethical issues that occur in research. That said, it would have been helpful to also give more space to the issues experienced by students and research assistants: the relationships between researchers and research assistants are at times very worrying, as noted by a participant, and much work remains to be done to eliminate the exploitative situations that seem to prevail in certain research settings. In addition, no Indigenous or gender-diverse researchers participated in the study. Given the ethical issues and systemic injustices that many people from these groups face in Canada (Drolet & Goulet, 2018; Nicole & Drolet, in press), research that gives voice to these researchers would be relevant and would contribute to knowledge development, and hopefully also to change in research culture.

Further, although most of the ethical issues discussed in this article may be transferable to the realities experienced by researchers in other countries, the epistemic injustice reported by Francophone researchers who persist in doing research in French in Canada – which is an officially bilingual country but in practice is predominantly English – is likely specific to the Canadian reality. In addition, and as mentioned above, recruitment proved exceedingly difficult, particularly amongst researchers. Despite this difficulty, we obtained data saturation for all but two themes – i.e., exploitation of students and ethical issues of research that uses animals. It follows that further empirical research is needed to improve our understanding of these specific issues, as they may diverge to some extent from those documented here and will likely vary across countries and academic research contexts.

Conclusions

This study, which gave voice to researchers, REB members, and ethics experts, reveals that the ethical issues in research are related to several problematic elements, such as power imbalances and authority relations. Researchers and research assistants are subject to external pressures that give rise to integrity issues, among other ethical issues. Moreover, the current context of social acceleration shapes the performance indicators valued in academic institutions and has led their members to face several ethical issues, including social, distributive, and epistemic injustices, at different steps of the research process. In this study, ten categories of ethical issues were identified, described and illustrated: (1) research integrity, (2) conflicts of interest, (3) respect for research participants, (4) lack of supervision and power imbalances, (5) individualism and performance, (6) inadequate ethical guidance, (7) social injustices, (8) distributive injustices, (9) epistemic injustices, and (10) ethical distress. The triangulation of the perspectives of the different actors involved in the research process (i.e., researchers from different disciplines, REB members, research ethics experts, and one research assistant) made it possible to lift the veil on some of these ethical issues. Further, it enabled the identification of additional ethical issues, especially the systemic injustices experienced in research. To our knowledge, this is the first time that these social, distributive, and epistemic injustices have been so clearly identified in the research context.

Finally, this study brought to the fore several problematic elements that are important to address if the research community is to develop and implement the solutions needed to resolve the diverse and transversal ethical issues that arise in research institutions. A good starting point is the rejection of the twin norms of “publish or perish” and “do more, faster, and better” and their replacement with “publish quality instead of quantity”, which necessarily entails “do less, slower, and better”. It is also important to pay more attention to the systemic injustices within which researchers work, because these have the potential to significantly harm the academic careers of many researchers – including women researchers, early career researchers, and those belonging to racialized groups – as well as the health, well-being, and respect of students and research participants.

Acknowledgements

The team warmly thanks the participants who took part in the research and who made this study possible. Marie-Josée Drolet thanks the five research assistants who participated in the data collection and analysis: Julie-Claude Leblanc, Élie Beauchemin, Pénéloppe Bernier, Louis-Pierre Côté, and Eugénie Rose-Derouin, all students at the Université du Québec à Trois-Rivières (UQTR), two of whom were active in the writing of this article. MJ Drolet and Bryn Williams-Jones also acknowledge the financial contribution of the Social Sciences and Humanities Research Council of Canada (SSHRC), which supported this research through a grant. We would also like to thank the reviewers of this article who helped us improve it, especially by clarifying and refining our ideas.

Competing Interests and Funding

As noted in the Acknowledgements, this research was supported financially by the Social Sciences and Humanities Research Council of Canada (SSHRC).

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Al-Hidabi, A. M. D., & Teh, P. L. (2018). Multiple publications: The main reason for the retraction of papers in computer science. In K. Arai, S. Kapoor, & R. Bhatia (Eds.), Future of Information and Communication Conference (FICC): Advances in Information and Communication (Advances in Intelligent Systems and Computing, Vol. 886, pp. 511–526). Springer.
  • Allaire, S., & Deschenaux, F. (2022). Récits de professeurs d’université à mi-carrière. Si c’était à refaire… Presses de l’Université du Québec.
  • Aristotle (2012). Aristotle’s Nicomachean Ethics. The University of Chicago Press.
  • Bahn, S. (2012). Keeping academic field researchers safe: Ethical safeguards. Journal of Academic Ethics, 10, 83–91. doi: 10.1007/s10805-012-9159-2
  • Balk, D. E. (1995). Bereavement research using control groups: Ethical obligations and questions. Death Studies, 19, 123–138. doi: 10.1080/07481189508252720
  • Beauchemin, É., Côté, L. P., Drolet, M. J., & Williams-Jones, B. (2021). Conceptualizing ethical issues in the conduct of research: Results from a critical and systematic literature review. Journal of Academic Ethics, early online. doi: 10.1007/s10805-021-09411-7
  • Berg, M., & Seeber, B. K. (2016). The Slow Professor. University of Toronto Press.
  • Birchley, G., Huxtable, R., Murtagh, M., Meulen, R. T., Flach, P., & Gooberman-Hill, R. (2017). Smart homes, private homes? An empirical study of technology researchers’ perceptions of ethical issues in developing smart-home health technologies. BMC Medical Ethics, 18(23), 1–13. doi: 10.1186/s12910-017-0183-z
  • Blais, J., & Drolet, M. J. (2022). Les injustices sociales vécues en camp de réfugiés : les comprendre pour mieux intervenir auprès de personnes ayant séjourné dans un camp de réfugiés. Recueil annuel belge d’ergothérapie, 14, 37–48.
  • Bogdan, R. C., & Biklen, S. K. (2006). Qualitative Research in Education: An Introduction to Theory and Methods. Allyn & Bacon.
  • Bouffard, C. (2000). Le développement des pratiques de la génétique médicale et la construction des normes bioéthiques. Anthropologie et Sociétés, 24(2), 73–90. doi: 10.7202/015650ar
  • Bronfenbrenner, U. (1979). The Ecology of Human Development: Experiments by Nature and Design. Harvard University Press.
  • Bruhn, J. G., Zajac, G., Al-Kazemi, A. A., & Prescott, L. D. (2002). Moral positions and academic conduct: Parameters of tolerance for ethics failure. Journal of Higher Education, 73(4), 461–493. doi: 10.1353/jhe.2002.0033
  • Clark, A., & Sousa, B. (2022). It’s time to end Canada’s obsession with research quantity. University Affairs/Affaires universitaires, February 14. https://www.universityaffairs.ca/career-advice/effective-successfull-happy-academic/its-time-to-end-canadas-obsession-with-research-quantity/
  • Colnerud, G. (2015). Ethical dilemmas in research in relation to ethical review: An empirical study. Research Ethics, 10(4), 238–253. doi: 10.1177/1747016114552339
  • Davison, J. (2004). Dilemmas in research: Issues of vulnerability and disempowerment for the social worker/researcher. Journal of Social Work Practice, 18(3), 379–393. doi: 10.1080/0265053042000314447
  • DePoy, E., & Gitlin, L. N. (2010). Introduction to Research. Elsevier Mosby.
  • Drolet, M. J., & Goulet, M. (2018). Travailler avec des patients autochtones du Canada ? Perceptions d’ergothérapeutes du Québec des enjeux éthiques de cette pratique. Recueil annuel belge francophone d’ergothérapie, 10, 25–56.
  • Drolet, M. J., & Girard, K. (2020). Les enjeux éthiques de la recherche en ergothérapie : un portrait préoccupant. Revue canadienne de bioéthique, 3(3), 21–40. doi: 10.7202/1073779ar
  • Drolet, M. J., Girard, K., & Gaudet, R. (2020). Les enjeux éthiques de l’enseignement en ergothérapie : des injustices au sein des départements universitaires. Revue canadienne de bioéthique, 3(1), 22–36.
  • Drolet, M. J., & Maclure, J. (2016). Les enjeux éthiques de la pratique de l’ergothérapie : perceptions d’ergothérapeutes. Approches inductives, 3(2), 166–196. doi: 10.7202/1037918ar
  • Drolet, M. J., Pinard, C., & Gaudet, R. (2017). Les enjeux éthiques de la pratique privée : des ergothérapeutes du Québec lancent un cri d’alarme. Ethica – Revue interdisciplinaire de recherche en éthique, 21(2), 173–209.
  • Drolet, M. J., & Ruest, M. (2021). De l’éthique à l’ergothérapie : un cadre théorique et une méthode pour soutenir la pratique professionnelle. Presses de l’Université du Québec.
  • Drolet, M. J., & Ruest, M. (accepted). Quels sont les enjeux éthiques soulevés par la recherche scientifique ? In M. Lalancette & J. Luckerhoff (Eds.), Initiation au travail intellectuel et à la recherche. Presses de l’Université du Québec.
  • Drolet, M. J., Sauvageau, A., Baril, N., & Gaudet, R. (2019). Les enjeux éthiques de la formation clinique en ergothérapie. Approches inductives, 6(1), 148–179. doi: 10.7202/1060048ar
  • Fédération québécoise des professeures et des professeurs d’université (FQPPU) (2019). Enquête nationale sur la surcharge administrative du corps professoral universitaire québécois. Principaux résultats et pistes d’action. FQPPU.
  • Fortin, M. H. (2010). Fondements et étapes du processus de recherche. Méthodes quantitatives et qualitatives. Chenelière éducation.
  • Fraser, D. M. (1997). Ethical dilemmas and practical problems for the practitioner researcher. Educational Action Research, 5(1), 161–171. doi: 10.1080/09650799700200014
  • Fraser, N. (2011). Qu’est-ce que la justice sociale ? Reconnaissance et redistribution. La Découverte.
  • Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.
  • Giorgi, A. (1997). De la méthode phénoménologique utilisée comme mode de recherche qualitative en sciences humaines : théories, pratique et évaluation. In J. Poupart, L. H. Groulx, J. P. Deslauriers, et al. (Eds.), La recherche qualitative : enjeux épistémologiques et méthodologiques (pp. 341–364). Gaëtan Morin.
  • Giorgini, V., Mecca, J. T., Gibson, C., Medeiros, K., Mumford, M. D., Connelly, S., & Devenport, L. D. (2016). Researcher perceptions of ethical guidelines and codes of conduct. Accountability in Research, 22(3), 123–138. doi: 10.1080/08989621.2014.955607
  • Glaser, J. W. (1994). Three Realms of Ethics: Individual, Institutional, Societal. Theoretical Model and Case Studies. Sheed & Ward.
  • Godrie, B., & Dos Santos, M. (2017). Présentation : inégalités sociales, production des savoirs et de l’ignorance. Sociologie et sociétés, 49(1), 7. doi: 10.7202/1042804ar
  • Hammell, K. W., Carpenter, C., & Dyck, I. (2000). Using Qualitative Research: A Practical Introduction for Occupational and Physical Therapists. Churchill Livingstone.
  • Henderson, M., Johnson, N. F., & Auld, G. (2013). Silences of ethical practice: Dilemmas for researchers using social media. Educational Research and Evaluation, 19(6), 546–560. doi: 10.1080/13803611.2013.805656
  • Husserl, E. (1970). The Crisis of European Sciences and Transcendental Phenomenology. Northwestern University Press.
  • Husserl, E. (1999). The train of thoughts in the lectures. In E. C. Polifroni & M. Welch (Eds.), Perspectives on Philosophy of Science in Nursing. Lippincott.
  • Hunt, S. D., Chonko, L. B., & Wilcox, J. B. (1984). Ethical problems of marketing researchers. Journal of Marketing Research, 21, 309–324. doi: 10.1177/002224378402100308
  • Hunt, M. R., & Carnevale, F. A. (2011). Moral experience: A framework for bioethics research. Journal of Medical Ethics, 37(11), 658–662. doi: 10.1136/jme.2010.039008
  • Jameton, A. (1984). Nursing Practice: The Ethical Issues. Prentice-Hall.
  • Jarvis, K. (2017). Dilemmas in international research and the value of practical wisdom. Developing World Bioethics, 17(1), 50–58. doi: 10.1111/dewb.12121
  • Kahneman, D. (2012). Système 1, système 2 : les deux vitesses de la pensée. Flammarion.
  • Keogh, B., & Daly, L. (2009). The ethics of conducting research with mental health service users. British Journal of Nursing, 18(5), 277–281. doi: 10.12968/bjon.2009.18.5.40539
  • Lierville, A. L., Grou, C., & Pelletier, J. F. (2015). Enjeux éthiques potentiels liés aux partenariats patients en psychiatrie : état de situation à l’Institut universitaire en santé mentale de Montréal. Santé mentale au Québec, 40(1), 119–134. doi: 10.7202/1032386ar
  • Lynöe, N., Sandlund, M., & Jacobsson, L. (1999). Research ethics committees: A comparative study of assessment of ethical dilemmas. Scandinavian Journal of Public Health, 27(2), 152–159. doi: 10.1177/14034948990270020401
  • Malherbe, J. F. (1999). Compromis, dilemmes et paradoxes en éthique clinique. Éditions Fides.
  • McGinn, R. (2013). Discernment and denial: Nanotechnology researchers’ recognition of ethical responsibilities related to their work. NanoEthics, 7, 93–105. doi: 10.1007/s11569-013-0174-6
  • Mills, C. W. (2017). Black Rights/White Wrongs: The Critique of Racial Liberalism. Oxford University Press.
  • Miyazaki, A. D., & Taylor, K. A. (2008). Researcher interaction biases and business ethics research: Respondent reactions to researcher characteristics. Journal of Business Ethics, 81(4), 779–795. doi: 10.1007/s10551-007-9547-5
  • Mondain, N., & Bologo, E. (2009). L’intentionnalité du chercheur dans ses pratiques de production des connaissances : les enjeux soulevés par la construction des données en démographie et santé en Afrique. Cahiers de recherche sociologique, 48, 175–204. doi: 10.7202/039772ar
  • Nicole, M., & Drolet, M. J. (in press). Fitting transphobia and cisgenderism in occupational therapy. Occupational Therapy Now.
  • Pope, K. S., & Vetter, V. A. (1992). Ethical dilemmas encountered by members of the American Psychological Association: A national survey. American Psychologist, 47(3), 397–411. doi: 10.1037/0003-066X.47.3.397
  • Provencher, V., Mortenson, W. B., Tanguay-Garneau, L., Bélanger, K., & Dagenais, M. (2014). Challenges and strategies pertaining to recruitment and retention of frail elderly in research studies: A systematic review. Archives of Gerontology and Geriatrics, 59(1), 18–24. doi: 10.1016/j.archger.2014.03.006
  • Rawls, J. (1971). A Theory of Justice. Harvard University Press.
  • Resnik, D. B., & Elliott, K. C. (2016). The ethical challenges of socially responsible science. Accountability in Research, 23(1), 31–46. doi: 10.1080/08989621.2014.1002608
  • Rosa, H. (2010). Accélération et aliénation. Vers une théorie critique de la modernité tardive. La Découverte.
  • Sen, A. K. (2011). The Idea of Justice. The Belknap Press of Harvard University Press.
  • Sen, A. K. (1995). Inequality Reexamined. Oxford University Press.
  • Sieber, J. E. (2004). Empirical research on research ethics. Ethics & Behavior, 14(4), 397–412. doi: 10.1207/s15327019eb1404_9
  • Sigmon, S. T. (1995). Ethical practices and beliefs of psychopathology researchers. Ethics & Behavior, 5(4), 295–309. doi: 10.1207/s15327019eb0504_1
  • Swazey, J. P., Anderson, M. S., & Lewis, K. S. (1993). Ethical problems in academic research. American Scientist, 81(6), 542–553.
  • Swisher, L. L., Arsalanian, L. E., & Davis, C. M. (2005). The realm-individual-process-situation (RIPS) model of ethical decision-making. HPA Resource, 5(3), 3–8.
  • Tri-Council Policy Statement (TCPS2) (2018). Ethical Conduct for Research Involving Humans. Government of Canada, Secretariat on Responsible Conduct of Research. https://ethics.gc.ca/eng/documents/tcps2-2018-en-interactive-final.pdf
  • Thomas, S. P., & Pollio, H. R. (2002). Listening to Patients: A Phenomenological Approach to Nursing Research and Practice. Springer Publishing Company.
  • Wiegand, D. L., & Funk, M. (2012). Consequences of clinical situations that cause critical care nurses to experience moral distress. Nursing Ethics, 19(4), 479–487. doi: 10.1177/0969733011429342
  • Williams-Jones, B., Potvin, M. J., Mathieu, G., & Smith, E. (2013). Barriers to research on research ethics review and conflicts of interest. IRB: Ethics & Human Research, 35(5), 14–20.
  • Young, I. M. (2011). Justice and the Politics of Difference. Princeton University Press.

  • Open access
  • Published: 17 June 2020

Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward

  • Samuele Lo Piano – ORCID: orcid.org/0000-0002-2625-483X

Humanities and Social Sciences Communications, volume 7, Article number: 9 (2020)


Subjects: Science, technology and society

Decision-making on numerous aspects of our daily lives is being outsourced to machine-learning (ML) algorithms and artificial intelligence (AI), motivated by speed and efficiency in the decision process. ML approaches—one of the typologies of algorithms underpinning artificial intelligence—are typically developed as black boxes. The implication is that ML code scripts are rarely scrutinised; interpretability is usually sacrificed in favour of usability and effectiveness. Room for improvement in practices associated with programme development has also been flagged along other dimensions, including inter alia fairness, accuracy, accountability, and transparency. In this contribution, the production of guidelines and dedicated documents around these themes is discussed. The following applications of AI-driven decision-making are outlined: (a) risk assessment in the criminal justice system, and (b) autonomous vehicles, highlighting points of friction across ethical principles. Possible ways forward towards the implementation of governance on AI are finally examined.


Introduction

Artificial intelligence (AI) is the branch of computer science that deals with the simulation of intelligent behaviour in computers, as regards their capacity to mimic, and ideally improve, human behaviour. To achieve this, the simulation of human cognition and functions, including learning and problem-solving, is required (Russell, 2010). This simulation may be limited to some simple, predictable features, thus limiting the human complexity captured (Cowls, 2019).

AI became a self-standing discipline in 1955 (McCarthy et al., 2006) and has undergone significant development over the last decades. AI resorts to ML to implement predictive functions based on data acquired from a given context. The strength of ML resides in its capacity to learn from data without needing to be explicitly programmed (Samuel, 1959); ML algorithms are autonomous and self-sufficient when performing their learning function, which is why they are ubiquitous in AI developments. Further, ML implementations in data science and other applied fields are conceptualised in the context of a final decision-making application, hence their prominence.
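To make the learning step concrete, consider a minimal sketch in Python with the scikit-learn library (the data are synthetic and purely illustrative): no decision rule is written by the programmer; the classifier infers one from labelled examples.

```python
# A minimal sketch of "learning from data without being explicitly programmed":
# the decision rule is inferred from labelled examples, not hand-coded.
# The dataset is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)  # a typical "black box" learner
model.fit(X_train, y_train)                     # the learning step
print(accuracy_score(y_test, model.predict(X_test)))
```

The fitted model is autonomous in exactly the sense discussed above: its internal decision rule is a by-product of the data rather than of explicit programming, which is also why it is opaque.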

Applications in our daily lives encompass fields such as (precision) agriculture (Sennaar, 2019), air combat and military training (Gallagher, 2016; Wong, 2020), education (Sears, 2018), finance (Bahrammirzaee, 2010), health care (Beam and Kohane, 2018), human resources and recruiting (Hmoud and Laszlo, 2019), music composition (Cheng, 2009), customer service (Kongthon et al., 2009), reliability engineering and maintenance (Dragicevic et al., 2019), autonomous vehicles and traffic management (Ye, 2018), social-media news feeds (Rader et al., 2018), work scheduling and optimisation (O’Neil, 2016), and several others.

In all these fields, an increasing number of functions are being ceded to algorithms to the detriment of human control, raising concerns about the loss of fairness and equitability (Sareen et al., 2020). Furthermore, issues of garbage-in-garbage-out (Saltelli and Funtowicz, 2014) are prone to emerge in contexts where external control is entirely removed. This issue may be further exacerbated by the offer of new auto-ML services (Chin, 2019), in which the entire algorithm-development workflow is automated and residual human control practically removed.

In the following sections, we will (i) detail a series of research questions around the ethical principles in AI; (ii) take stock of the production of guidelines elaborated in the field; (iii) showcase their prominence in practical examples; and (iv) discuss actions towards the inclusion of these dimensions in the future of AI ethics.

Research questions on the ethical dimensions of artificial intelligence

Critical aspects of AI deployment have already gained traction in the mainstream literature and media. For instance, according to O’Neil (2016), a main shortcoming of ML approaches is that they resort to proxies for the trends they are meant to capture, such as a person’s ZIP code or language as stand-ins for their capacity to pay back a loan or handle a job, respectively. However, these correlations may be discriminatory, if not illegal.

Potential black swans (Taleb, 2007) in the code should also be considered. These have been documented, for instance, on the Amazon website, where errors such as ordinary items (often books) being priced at up to 10,000 dollars have been reported (Smith, 2018). While mistakes about monetary values may be easy to spot, the situation becomes more complex and less intelligible when incommensurable dimensions come into play. That is why a number of guidelines on the topic of ethics in AI have proliferated over the last few years.

While reflections on the ethical implications of machines and automation were already put forth in the ’50s and ’60s (Samuel, 1959; Wiener, 1988), the increasing use of AI in many fields raises important new questions about its suitability (Yu et al., 2018). This stems from the complexity of the aspects involved and the plurality of views, stakes, and values at play. A fundamental aspect is how, and to what extent, the values and perspectives of the involved stakeholders have been taken into account in the design of the decision-making algorithm (Saltelli, 2020). In addition to this ex-ante evaluation, an ex-post evaluation needs to be put in place to monitor the consequences of AI-driven decisions in making winners and losers.

In sum, it is fundamental to assess whether and how ethical aspects have been included in AI-driven decision-making, by asking questions such as:

What are the most prominent ethical concerns raised by large-scale deployment of AI applications?

How are these multiple dimensions interwoven?

What are the actions the involved stakeholders are carrying out to address these concerns?

What are possible ways forward to improve ML and AI development and use over their full life-cycle?

We will firstly examine the production of relevant guidelines in the fields along with academic secondary literature. These aspects will then be discussed in the context of two applied cases: (i) recidivism-risk assessment in the criminal justice system, and (ii) autonomous vehicles.

Guidelines and secondary literature on AI ethics, its dimensions and stakes

The production of dedicated documents has been skyrocketing since 2016 (Jobin et al., 2019). We report here on the most prominent international initiatives; for a comprehensive list of documents on national and international AI strategies, see Future of Earth Institute (2020).

France’s Digital Republic Act grants a right to an explanation of decisions about an individual made through administrative algorithms (Edwards and Veale, 2018). This law touches upon several aspects, including:

how and to what extent the algorithmic processing contributed to the decision-making;

which data was processed and its source;

how parameters were treated and weighted;

which operations were carried out in the processing.

Sensitive governmental areas, such as national security and defence, as well as the private sector (by far the largest user and producer of ML algorithms), are excluded from the scope of this law.

An international European initiative is the multi-stakeholder European Union High-Level Expert Group on Artificial Intelligence, composed of 52 experts from academia, civil society, and industry. The group produced a deliverable on the criteria required for AI trustworthiness (Daly, 2019). Articles 21 and 22 of the recent European Union General Data Protection Regulation also include passages relevant to AI governance, although further action has recently been demanded by the European Parliament (De Sutter, 2019). In this context, China has also been devoting effort to privacy and data protection (Roberts, 2019).

As regards secondary literature, Floridi and Cowls (2019) examined statements and declarations elaborated since 2016 by multi-stakeholder organisations. They identified a set of 47 principles, which mapped onto five overarching dimensions: beneficence, non-maleficence, autonomy, justice, and explicability. The latter is a new dimension specifically acknowledged in the case of AI, while the others had already been identified in the controversial domain of bioethics.

Jobin et al. (2019) reviewed 84 documents produced by several actors in the field, almost half of them from private companies or governmental agencies. The classification proposed by Jobin et al. (2019) is organised around a slightly different set of values: transparency, justice and fairness, non-maleficence, responsibility, and privacy. Other potentially relevant dimensions, such as accountability and responsibility, were rarely defined in the studies reviewed by these authors.

Seven of the most prominent value statements from the AI/ML fields were examined in Greene et al. (2019): The Partnership on AI to Benefit People and Society; The Montreal Declaration for a Responsible Development of Artificial Intelligence; The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine-learning systems; OpenAI; The Centre for Humane Technology; Fairness, Accountability and Transparency in Machine Learning; and Axon’s AI Ethics Board for Public Safety. Greene et al. (2019) found seven common core elements across these documents: (i) design’s moral background (universal concerns, objectively measured); (ii) expert oversight; (iii) values-driven determinism; (iv) design as locus of ethical scrutiny; (v) better building; (vi) stakeholder-driven legitimacy; and (vii) machine translation.

Mittelstadt (2019) critically analysed the current debate and actions in the field of AI ethics and noted that the dimensions addressed in AI ethics are converging towards those of medical ethics. However, this convergence appears problematic due to four main differences between medicine and medical professionals on one side, and AI and its developers on the other. Firstly, the medical profession rests on common aims and fiduciary duties, which AI developers lack. Secondly, a formal profession with a set of clearly defined and governed good-behaviour practices exists in medicine; this is not the case for AI, which also lacks a full understanding of the consequences of the actions enacted by algorithms (Wallach and Allen, 2008). Thirdly, AI faces the difficulty of translating overarching principles into practices; indeed, its current orientation towards maximum speed, efficiency, and profit clashes with the resource and time requirements of an ethical assessment and/or counselling. Finally, the accountability of professionals or institutions is at this stage mainly theoretical, as the vast majority of these guidelines have been adopted on a merely voluntary basis, with a total lack of sanctions for non-compliance.

Points of friction between ethical dimensions

Higher transparency is a common refrain when discussing ethics of algorithms, in relation to dimensions such as how an algorithmic decision is arrived at, based on what assumptions, and how this could be corrected to incorporate feedback from the involved parties. Rudin ( 2019 ) argued that the community of algorithm developers should go beyond explaining black-box models by developing interpretable models in the first place.

On a larger scale, the use of open-source software in the context of ML applications has been advocated for over a decade (Thimbleby, 2003), with an indirect call for tools that enable more interpretable and reproducible programming, such as Jupyter Notebooks, available from 2015 onwards. However, publishing scripts exposes their developers to the public scrutiny of professional programmers, who may find shortcomings in the code (Sonnenburg, 2007).

Ananny and Crawford (2018) comment that resorting to full algorithmic transparency may not be an adequate means of addressing the ethical dimensions of algorithms; opening up the black box would not suffice to disclose their modus operandi. Moreover, developers of algorithms may not themselves be capable of explaining in plain language how a given tool works and what functional elements it is based on. A more socially relevant understanding would encompass the human/non-human interface (i.e., looking across the system rather than merely inside it). Algorithmic complexity and all its implications unravel at this level, in terms of relationships rather than mere self-standing properties.

Other authors have pointed to possible points of friction between transparency and other relevant ethical dimensions. de Laat (2018) argues that transparency and accountability may even be at odds in the case of algorithms. He argues against full transparency along four main lines of reasoning: (i) the leaking of privacy-sensitive data into the open; (ii) backfiring into an implicit invitation to game the system; (iii) harm to company property rights, with negative consequences for competitiveness (and for developers’ reputations, as discussed above); and (iv) the inherent opacity of algorithms, whose interpretability may be hard even for experts (see the example below about the code adopted in some models of autonomous vehicles). All these arguments suggest limits to the full disclosure of algorithms, although the normative implications behind these objections should be carefully scrutinised.

Raji et al. (2020) suggest that a process of algorithmic auditing within the software-development company could help tackle some of the ethical issues raised. Greater interpretability could in principle be achieved by using simpler algorithms, although this may come at the expense of accuracy. To this end, Watson and Floridi (2019) defined a formal framework for interpretable ML, in which explanatory accuracy can be assessed against algorithmic simplicity and relevance.

Loss in accuracy may be produced by the exclusion of politically critical features (such as gender, race, age, etc.) from the pool of training predictive variables. For instance, Amazon scrapped a gender-biased recruitment algorithm once it realised that despite excluding gender, the algorithm was resorting to surrogate gender variables to implement its decisions (Dastin, 2018 ). This aspect points again to possible political issues of a trade-off between fairness, demanded by society, and algorithmic accuracy, demanded by, e.g., a private actor.
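The surrogate-variable effect lends itself to a simple sketch (all data and variable names below are synthetic and invented for illustration; this is not Amazon's system): even with the protected attribute excluded from the features, a model trained on historically biased outcomes can reconstruct the bias through a correlated proxy.

```python
# Sketch: the protected attribute ("gender") is excluded from the features,
# yet a correlated surrogate ("proxy") lets the model reproduce the bias
# present in the historical labels. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)               # protected attribute (not a feature)
proxy = gender + rng.normal(0, 0.3, n)       # e.g., a keyword score correlated with gender
skill = rng.normal(0, 1, n)                  # a legitimate feature
# Historical (biased) decisions depended partly on gender:
hired = (skill + 1.5 * gender + rng.normal(0, 0.5, n)) > 1.0

model = LogisticRegression().fit(np.c_[skill, proxy], hired)  # gender itself excluded
pred = model.predict(np.c_[skill, proxy])
print("predicted hire rate, group 0:", pred[gender == 0].mean())
print("predicted hire rate, group 1:", pred[gender == 1].mean())  # still far higher
```

Dropping the proxy as well would restore parity in this toy setup, but at a cost in predictive accuracy, which is the fairness-versus-accuracy trade-off mentioned above.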

Fairness may be further hampered by reinforcement effects. This is the case for credit-scoring algorithms, whose reinforcement effect is proportional to people’s wealth and de facto rules out access to credit for people in more socially difficult conditions (O’Neil, 2016).

According to Floridi and Cowls (2019), a prominent role is also played by the autonomy dimension: the possibility of refraining from ceding decision power to AI for overriding reasons (e.g., when the gain in efficacy is not deemed to justify the loss of control over decision-making). In other words, according to this meta-autonomy dimension, machine autonomy could be reduced in favour of human autonomy.

Contrasts in the theoretical framing of the issue also emerged from the review of Jobin et al. (2019), as regards the interpretation of ethical principles, the reasons for their importance, and the ownership of and responsibility for their implementation. The same applies across different ethical principles, resulting in the trade-offs previously discussed and in difficulties in setting prioritisation strategies, operationalisation, and actual compliance with the guidelines. For instance, while private actors demand and try to cultivate trust from their users, this runs counter to the need for society to scrutinise the operation of algorithms in order to maintain developer accountability (Cowls, 2019). Attributing responsibilities in complicated projects in which many parties and developers are involved – an issue known as the problem of many hands (Nissenbaum, 1996) – may indeed be very difficult.

Conflicts may also emerge between the large databases required to overcome potential deficits in algorithmic accuracy and individuals’ rights to privacy and decisional autonomy. Such conflicts may exacerbate tensions, further complicating agreement on standards and practices.

In the following two sections, the issues and points of friction raised are examined in two practical case studies, criminal justice and autonomous vehicles. These examples have been selected due to their prominence in the public debate on the ethical aspects of AI and ML algorithms.

Machine-learning algorithms in the field of criminal justice

ML algorithms have been widely used to assist judicial deliberation in many states of the USA (Angwin and Larson, 2016), a country that has the world’s highest incarcerated population, both in absolute and per-capita terms (World Prison Brief, 2020). The COMPAS algorithm, developed by the private company Northpointe, attributes a two-year recidivism-risk score to arrested people; it also scores the risk of violent recidivism.

The fairness of the algorithm was questioned in an investigative report that examined a pool of cases in which a recidivism score was attributed to more than 18,000 criminal defendants in Broward County, Florida, and flagged a potential racial bias in the application of the algorithm (Angwin and Larson, 2016). According to the authors of the report, the recidivism risk was systematically overestimated for black people: the decile distribution of scores for white defendants was skewed towards the lower end, whereas the decile distribution for black defendants decreased only slightly towards the higher end. The risk of violent recidivism within two years followed a similar trend. The company disputed this analysis but refused to disclose the full details of its proprietary code. While the total number of variables amounts to about 140, only the core variables have been disclosed (Northpointe, 2012); the race of the subject is not one of them.
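As a sketch, the distributional pattern reported by ProPublica can be checked in a few lines on their released data (the file name and column names below follow ProPublica's public dataset and should be treated as assumptions to verify):

```python
# Sketch of the ProPublica-style check: compare the distribution of COMPAS
# decile scores across groups. File and column names assume ProPublica's
# released dataset (compas-scores-two-years.csv).
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")
subset = df[df["race"].isin(["African-American", "Caucasian"])]

# Share of defendants in each risk decile, per group:
dist = (subset.groupby("race")["decile_score"]
              .value_counts(normalize=True)
              .unstack(fill_value=0)
              .round(3))
print(dist)  # the reported pattern: a low-decile skew for one group only
```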

Here, a crucial point is how this fairness is to be attained: whether fair treatment across groups of individuals matters more than fair treatment within the same group. Take the case of gender, for instance, where men are overrepresented in prison compared with women. To account for this aspect, the algorithm might discount violent priors for men in order to reduce their recidivism-risk score. However, attaining this sort of algorithmic fairness would imply inequality of treatment across genders (Berk et al., 2018).

Fairness could be further hampered by the combined use of this algorithm with others that drive decisions on neighbourhood police patrolling. The fact that such algorithms may drive further patrolling towards poor neighbourhoods may result from a training bias, as crimes occurring in public tend to be more frequently reported (Karppi, 2018). One can easily see how these algorithms may jointly produce a vicious cycle: more patrolling leads to more arrests, which worsen the neighbourhood’s average recidivism-risk score, which in turn triggers more patrolling. All this would exacerbate inequalities, as in the case of the credit scores previously discussed (O’Neil, 2016).

A potential point of friction may also emerge between the algorithmic dimensions of fairness and accuracy. The latter may be theoretically defined as the classification error in terms of the rates of false positives (individuals labelled at risk of recidivism who did not re-offend within two years) and false negatives (individuals labelled at low risk of recidivism who did re-offend within the same timeframe) (Loi and Christen, 2019). Different classification accuracy (the fraction of observed outcomes in disagreement with the predictions) and forecasting accuracy (the fraction of predictions in disagreement with the observed outcomes) may exist across different classes of individuals (e.g., black or white defendants). Seeking equal rates of false positives and false negatives across these two pools would imply different forecasting errors (and accuracy), given the different characteristics of the two training pools available to the algorithm. Conversely, imposing the same forecasting accuracy would come at the expense of different classification errors between the two pools (Corbett-Davies et al., 2016). Hence, a trade-off exists between these two different shades of fairness, one that derives from the very statistical properties of the data population distributions on which the algorithm has been trained. The decision then rests again on the assumptions the algorithm developers have adopted, e.g., on the relative importance of false positives and false negatives (i.e., the weights attributed to the different typologies of error, and the accuracy sought (Berk, 2019)). On this point, an algorithm developer may decide (or be instructed) to train the algorithm to attribute, say, a five, ten, or twenty times higher weight to a false negative (re-offender with a low recidivism-risk score) than to a false positive (non-re-offender with a high recidivism-risk score).
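A small numerical sketch (synthetic scores and labels, invented for illustration) makes the trade-off concrete: applying one and the same threshold to two groups with different base rates of re-offending yields equal classification error rates but unequal forecasting accuracy, here measured as the positive predictive value (PPV) of a "high risk" label.

```python
# Sketch: per-group error rates vs. predictive value under a common threshold.
# Scores and labels are synthetic; base rates differ between the two groups.
import numpy as np

rng = np.random.default_rng(1)

def group_metrics(y_true, y_pred):
    tp = np.sum(y_pred & y_true)
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    tn = np.sum(~y_pred & ~y_true)
    return {"FPR": fp / (fp + tn),   # error rate on non-re-offenders
            "FNR": fn / (fn + tp),   # error rate on re-offenders
            "PPV": tp / (tp + fp)}   # forecasting accuracy of a "high risk" label

for name, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    y_true = rng.random(100000) < base_rate          # who actually re-offends
    score = y_true * 0.4 + rng.random(100000) * 0.8  # a noisy risk score
    y_pred = score > 0.6                             # same threshold for both groups
    print(name, {k: round(v, 2) for k, v in group_metrics(y_true, y_pred).items()})
# FPR and FNR match across groups (~0.25), but PPV does not (~0.75 vs. ~0.56):
# equalising one notion of fairness breaks the other.
```

The asymmetric weighting of errors mentioned above would enter this sketch as a cost-sensitive choice of the threshold (or of class weights at training time).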

As with all ML, an issue of transparency exists, as no one knows what type of inference is drawn from the variables out of which the recidivism-risk score is estimated. Reverse-engineering exercises have been run to understand the key drivers of the observed scores. Rudin (2019) found that the algorithm seemed to behave differently from the intentions of its creators (Northpointe, 2012), with a non-linear dependence on age and a weak correlation with one’s criminal history. These exercises (Rudin, 2019; Angelino et al., 2018) showed that it is possible to implement interpretable classification algorithms that reach an accuracy similar to that of COMPAS. Dressel and Farid (2018) achieved this result with a linear logistic regressor that made use of only two variables: the age and the total number of previous convictions of the subject.
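A sketch of such an interpretable two-variable baseline, in the spirit of Dressel and Farid (2018), is shown below (file and column names again assume ProPublica's public dataset; the authors' exact pre-processing and validation scheme may differ):

```python
# Sketch of an interpretable baseline: logistic regression on two variables
# (age, priors count), in the spirit of Dressel and Farid (2018).
# File/column names assume ProPublica's dataset; pre-processing is simplified.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("compas-scores-two-years.csv")
X = df[["age", "priors_count"]]
y = df["two_year_recid"]

model = LogisticRegression(max_iter=1000)
print("mean accuracy:", cross_val_score(model, X, y, cv=10).mean().round(3))

model.fit(X, y)
print("coefficients:", dict(zip(X.columns, model.coef_[0].round(3))))
# Unlike a ~140-variable proprietary model, this decision rule can be read off directly.
```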

Machine-learning algorithms in the field of autonomous vehicles

The case of autonomous vehicles, also known as self-driving vehicles, poses different challenges, as a continuous stream of decisions must be enacted while the vehicle is moving. It is not a one-off decision, as in the case of the assessment of recidivism risk.

An exercise that helps one appreciate the value-ladenness of these decisions is the moral-machine experiment (Massachusetts Institute of Technology, 2019) – a serious game in which users are asked to fulfil the function of an autonomous vehicle’s decision-making algorithm in a situation of danger. The experiment entails making choices that prioritise the safety of some categories of road users over others: for instance, choosing between the deaths of car occupants, pedestrians, or occupants of other vehicles. While such extreme situations are a simplification of reality, one cannot exclude that the algorithms driving an autonomous vehicle may find themselves in circumstances where their decisions will harm some of the involved parties (Bonnefon et al., 2019).

In practice, the algorithm would frame the issue as a statistical trolley dilemma, in the words of Bonnefon et al. (2019), whereby the risk of harm for some road users is increased. This is, by all means, a risk-management situation, with a number of nuances and inherent complexity (Goodall, 2016).

Hence, autonomous vehicles are not bound to play the role of silver bullets that solve once and for all the vexing issue of traffic fatalities (Smith, 2018). Furthermore, the way enacted decisions could backfire in complex contexts beyond the algorithms’ extrapolative power is an unpredictable issue one has to deal with (Wallach and Allen, 2008; Yurtsever et al., 2020).

Coding algorithms that assure fairness in autonomous vehicles can be very challenging. Contrasting and incommensurable dimensions are likely to emerge (Goodall, 2014) when designing an algorithm to reduce the harm of a given crash – for instance, material damage against human harm. Conflicts may emerge between the interests of the vehicle’s owner and passengers, on one side, and the collective interest of minimising the overall harm, on the other. Minimising overall physical harm might be achieved by implementing an algorithm that, in the circumstance of an unavoidable collision, targets the vehicles with the highest safety standards; however, one may question the fairness of targeting those who have invested more in their own and others’ safety. The algorithm may also face a dilemma between a low probability of serious harm and a higher probability of mild harm. Unavoidably, normative rules will need to be included in the decision-making algorithms to tackle these types of situations.
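A minimal sketch of where such normative rules end up in code (the maneuvers, probabilities, and harm weights below are all invented for illustration): choosing the maneuver with the lowest expected harm is trivial arithmetic, but someone has to fix the harm weights, and that is precisely the ethical choice.

```python
# Sketch of a "statistical trolley" decision rule: pick the maneuver with the
# lowest expected harm. Maneuvers, probabilities, and harm weights are invented;
# fixing the weights is the normative choice hidden inside the algorithm.
candidate_maneuvers = {
    # maneuver: list of (probability of outcome, harm weight of outcome)
    "brake_straight": [(0.20, 8.0), (0.80, 0.5)],   # risk to pedestrian vs. mild rear collision
    "swerve_left":    [(0.05, 10.0), (0.95, 1.0)],  # small risk of severe harm to occupants
    "swerve_right":   [(0.50, 2.0), (0.50, 0.0)],   # likely material damage only
}

def expected_harm(outcomes):
    return sum(p * harm for p, harm in outcomes)

for m, outcomes in candidate_maneuvers.items():
    print(m, "expected harm:", round(expected_harm(outcomes), 2))
best = min(candidate_maneuvers, key=lambda m: expected_harm(candidate_maneuvers[m]))
print("chosen:", best)  # the ethics lives in the harm weights, not in this arithmetic
```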

Accuracy in the context of autonomous vehicles rests on their capacity to correctly simulate the course of events. While this is based on physics and can be informed by the numerous sensors these vehicles are equipped with, unforeseen events can still play a prominent role and profoundly affect the vehicle’s behaviour and reactions (Yurtsever et al., 2020). For instance, fatalities due to autonomous-vehicle malfunctioning have been caused by the following failures: (i) the incapacity to perceive a pedestrian as such (National Transportation Safety Board, 2018); and (ii) the acceleration of the vehicle in a situation where braking was required, due to contrasting instructions from the different algorithms on which the vehicle relied (Smith, 2018). In this latter case, the complexity of autonomous-vehicle algorithms is witnessed by the millions of lines of code composing their scripts, a universe no one fully understands in the words of The Guardian (Smith, 2018), such that the causality of the decisions made was practically impossible to scrutinise. Hence, no corrective action in the algorithm code may be possible at this stage, leaving no room for improvement in accuracy.

One should also not forget that these algorithms learn by direct experience, and may thus end up conflicting with the initial set of ethical rules around which they were conceived. Learning may occur through interactions between algorithms, taking place at a higher hierarchical level than the one imagined in the first place (Smith, 2018). This represents a further open issue to be taken into account in their development (Markham et al., 2018). It also creates further tension between the accuracy a vehicle manufacturer seeks and the capacity to uphold the fairness standards agreed upstream of the algorithm-development process.

Discussion and conclusions

In this contribution, we have examined the ethical dimensions affected by the application of algorithm-driven decision-making. These are entailed both ex-ante, in terms of the assumptions underpinning the algorithm’s development, and ex-post, as regards the consequences for society and for the social actors on whom the resulting decisions are enforced.

Decision-making algorithms inevitably rest on assumptions, even silent ones, such as the quality of the data the algorithm is trained on (Saltelli and Funtowicz, 2014) or the actual modelling relations adopted (Hoerl, 2019), with all the implied consequences (Saltelli, 2019).

A decision-making algorithm will always be based on a formal system, which is a representation of a real system (Rosen, 2005). As such, it will always be based on a restricted set of relevant relations, causes, and effects. No matter how complicated the algorithm may be (how many relations may be factored in), it will always represent one specific vision of the system being modelled (Laplace, 1902).

Eventually, the set of decision rules underpinning an AI algorithm derives from human-made assumptions, such as where to define the boundary between action and no action, or between different possible choices. This can only take place at the human/non-human interface: the response of the algorithm is driven by these human-made assumptions and selection rules. Even the data on which an algorithm is trained are not an objective truth; they depend on the context in which they were produced (Neff et al., 2017).

Tools for technically scrutinising the potential behaviour of an algorithm and its uncertainty already exist and could be included in the algorithm-development workflow. For instance, global sensitivity analysis (Saltelli, 2008) may help explore how uncertainty in the input parameters and modelling assumptions affects the output. Additionally, a modelling of the modelling process would assist in model transparency and in addressing questions such as: Are the results from a particular model more sensitive to changes in the model and the methods used to estimate its parameters, or to changes in the data? (Majone, 1989).
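As an illustration, a minimal global sensitivity analysis of the Sobol type can be sketched with the SALib Python library (the "model" below is a toy scoring function standing in for the algorithm under scrutiny; the calls follow SALib's documented interface, which may differ across versions):

```python
# Sketch of a global sensitivity analysis (Sobol indices) with SALib.
# The "model" is a toy scoring function; in practice it would be the
# decision-making algorithm (or pipeline) under scrutiny.
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],         # three uncertain inputs/assumptions
    "bounds": [[0, 1], [0, 1], [0, 1]],
}

def score(X):
    # toy model: driven mainly by x1, mildly by x2, with an x1-x3 interaction
    return X[:, 0] + 0.3 * X[:, 1] + 2.0 * X[:, 0] * X[:, 2]

X = saltelli.sample(problem, 1024)       # quasi-random sampling of the input space
Y = score(X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"].round(2))))  # first-order effects
print(dict(zip(problem["names"], Si["ST"].round(2))))  # total effects (with interactions)
```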

Tools of post-normal-science inspiration for knowledge and modelling quality assessment could also be adapted to the analysis of algorithms, such as the NUSAP (Numeral Unit Spread Assessment Pedigree) notation system for the management and communication of uncertainty (Funtowicz and Ravetz, 1990; Van Der Sluijs, 2005) and sensitivity auditing (Saltelli and Funtowicz, 2014). Ultimately, developers should acknowledge the limits of AI, and what its ultimate function should be, in the equivalent of a Hippocratic Oath for ML developers (O’Neil, 2016). An example comes from the field of financial modelling, with a manifesto elaborated in the aftermath of the 2008 financial crisis (Derman and Wilmott, 2009).

To address these dimensions, value statements and guidelines have been elaborated by political and multi-stakeholder organisations. For instance, The Alan Turing Institute released a guide for the responsible design and implementation of AI (Leslie, 2019) that covers the whole life-cycle of design, use, and monitoring. However, the field of AI ethics is still in its infancy, and how AI developments that encompass ethical dimensions could be attained has yet to be conceptualised. Some authors are pessimistic, such as Supiot (2017), who speaks of governance by numbers, in which quantification is replacing the traditional decision-making system and profoundly affecting the pillar of equality of judgement. Trying to reverse the current state of affairs may expose first movers in the AI field to a competitive disadvantage (Morley et al., 2019). One should also not forget that points of friction across ethical dimensions may emerge, e.g., between transparency and accountability, or accuracy and fairness, as highlighted in the case studies. Hence, the development process of the algorithm cannot be perfect in this setting; one has to be open to negotiation and unavoidably work with imperfections and clumsiness (Ravetz, 1987).

The development of decision-making algorithms remains quite obscure in spite of the concerns raised and the intentions manifested to address them. Attempts to expose the algorithms developed to public scrutiny are as yet scant, as are attempts to make the process more inclusive through higher participation from all the stakeholders. Identifying a relevant pool of social actors may require an important effort in stakeholder mapping, so as to assure governance that is complete but also effective in terms of the number of participants and the simplicity of working procedures. The post-normal-science concept of extended peer communities could assist in this endeavour (Funtowicz and Ravetz, 1997). Example-based explanations (Molnar, 2020) may also contribute to an effective engagement of all the parties by helping bridge technical divides across developers, experts in other fields, and lay people.

An overarching meta-framework for the governance of AI in experimental technologies (e.g., robot use) has also been proposed (Rêgo de Almeida et al., 2020). This initiative stems from the attempt to include all the forms of governance put forth, and would rest on an integrated set of feedbacks and interactions across dimensions and actors. An interesting proposal comes from Berk (2019), who calls for the intervention of super-partes authorities to define standards of transparency, accuracy and fairness for algorithm developers, in line with the role of the Food and Drug Administration in the US and other regulatory bodies. A shared regulation could help tackle the potential competitive disadvantage a first mover may suffer. The development pace of new algorithms would necessarily be reduced so as to comply with the standards defined and the required clearance processes. In this setting, seeking algorithm transparency would not be harmful for developers, as scrutiny would be delegated to entrusted intermediate parties and take place behind closed doors (de Laat, 2018).

As noted by a perceptive reviewer, ML systems that keep learning are dangerous and hard to understand because they can change quickly. Could an ML system with real-world consequences therefore be "locked down" to increase transparency? If it were, the algorithm could become defective, since its frozen rules would no longer track a changing environment; if it were not, transparency today may not be helpful in understanding what the system does tomorrow. This issue could be tackled by hard-coding the set of rules governing the behaviour of the algorithm once these are agreed upon among the involved stakeholders. This would prevent the algorithm-learning process from conflicting with the agreed standards. Making it mandatory to deposit these algorithms in a database owned and operated by the entrusted super-partes body could ease the implementation of this overall process.
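
A minimal sketch of what such a "lock-down" could look like in code is given below: the agreed rule set is frozen, fingerprinted, and any later deviation from the deposited fingerprint becomes detectable. The rule set and the registry workflow are hypothetical illustrations, not part of any existing system.

```python
# Minimal sketch of "locking down" an agreed rule set: serialise it, fingerprint
# it with SHA-256, and check later versions against the deposited fingerprint so
# that any drift from the agreed behaviour is detectable. The rule set and the
# registry workflow are hypothetical illustrations.
import hashlib
import json

# Stand-in for a model whose learning has been frozen after stakeholder sign-off.
agreed_rules = {
    "version": "1.0",
    "decision_threshold": 0.7,
    "excluded_attributes": ["race", "gender"],
}

artefact = json.dumps(agreed_rules, sort_keys=True).encode("utf-8")
registered_digest = hashlib.sha256(artefact).hexdigest()
# In practice, registered_digest would be deposited with the entrusted
# super-partes body; here we only verify integrity locally.

def matches_registry(candidate_rules, digest):
    candidate = json.dumps(candidate_rules, sort_keys=True).encode("utf-8")
    return hashlib.sha256(candidate).hexdigest() == digest

print(matches_registry(agreed_rules, registered_digest))  # True: unchanged
agreed_rules["decision_threshold"] = 0.5                  # silent modification
print(matches_registry(agreed_rules, registered_digest))  # False: drift detected
```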

Ananny M, Crawford K (2018) Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20:973–989

Angelino E, Larus-Stone N, Alabi D, Seltzer M, Rudin C (2018) Learning certifiably optimal rule lists for categorical data. http://arxiv.org/abs/1704.01701

Angwin J, Larson J (2016) Machine bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Bahrammirzaee A (2010) A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems. Neural Comput Appl 19:1165–1195

Beam AL, Kohane IS (2018) Big data and machine learning in health care. JAMA 319:1317

Berk R (2019) Machine learning risk assessments in criminal justice settings. Springer International Publishing, Cham

Berk R, Heidari H, Jabbari S, Kearns M, Roth A (2018) Fairness in criminal justice risk assessments: the state of the art. Soc Methods Res 004912411878253

National Transportation Safety Board (2018) Vehicle automation report. Tech. Rep. HWY18MH010, Office of Highway Safety, Washington, D.C.

Bonnefon J-F, Shariff A, Rahwan I (2019) The trolley, the bull bar, and why engineers should care about the ethics of autonomous cars [point of view]. Proc IEEE 107:502–504

World Prison Brief (2020) World Prison Brief: an online database comprising information on prisons and the use of imprisonment around the world. https://www.prisonstudies.org/

Cheng J (2009) Virtual composer makes beautiful music and stirs controversy. https://arstechnica.com/science/news/2009/09/virtual-composer-makes-beautiful-musicand-stirs-controversy.ars

Chin J (2019) The death of data scientists. https://towardsdatascience.com/the-death-of-data-scientists-c243ae167701

Corbett-Davies S, Pierson E, Feller A, Goel S (2016) A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. Washington Post. https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/

Cowls J (2020) Deciding how to decide: six key questions for reducing AI’s democratic deficit. In: Burr C, Milano S (eds) The 2019 Yearbook of the Digital Ethics Lab, Digital ethics lab yearbook. Springer International Publishing, Cham. pp. 101–116. https://doi.org/10.1007/978-3-030-29145-7_7

Daly A et al. (2019) Artificial intelligence, governance and ethics: global perspectives. SSRN Electron J. https://www.ssrn.com/abstract=3414805

Dastin J (2018) Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

De Sutter P (2020) Automated decision-making processes: ensuring consumer protection, and free movement of goods and services. https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/IMCO/DV/2020/01-22/Draft_OQ_Automated_decision-making_EN.pdf

Derman E, Wilmott P (2009) The financial modelers’ manifesto. SSRN Electron J. http://www.ssrn.com/abstract=1324878 .

Dragičević T, Wheeler P, Blaabjerg F (2019) Artificial intelligence aided automated design for reliability of power electronic systems. IEEE Trans Power Electron 34:7161–7171

Dressel J, Farid H (2018) The accuracy, fairness, and limits of predicting recidivism. Sci Adv 4:eaao5580

Edwards L, Veale M (2018) Enslaving the algorithm: from a 'right to an explanation' to a 'right to better decisions'? IEEE Security & Privacy 16:46–54

Floridi L, Cowls J (2019) A unified framework of five principles for AI in society. Harvard Data Science Review. https://hdsr.mitpress.mit.edu/pub/l0jsh9d1

Funtowicz SO, Ravetz JR (1990) Uncertainty and quality in science for policy. Springer Science, Business Media, Berlin, Heidelberg

Funtowicz S, Ravetz J (1997) Environmental problems, post-normal science, and extended peer communities. Études et Recherches sur les Systémes Agraires et le Développement. INRA Editions. pp. 169–175

Future of Earth Institute (2020) National and International AI Strategies. https://futureoflife.org/national-international-ai-strategies/

Gallagher S (2016) AI bests Air Force combat tactics experts in simulated dogfights. https://arstechnica.com/information-technology/2016/06/ai-bests-air-force-combat-tactics-experts-in-simulated-dogfights/

Goodall NJ (2014) Ethical decision making during automated vehicle crashes. Transportation Res Rec: J Transportation Res Board 2424:58–65

Goodall NJ (2016) Away from trolley problems and toward risk management. Appl Artif Intell 30:810–821

Greene D, Hoffmann AL, Stark L (2019) Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences

Hmoud B, Laszlo V (2019) Will artificial intelligence take over human-resources recruitment and selection? Netw Intell Stud VII:21–30

Hoerl RW (2019) The integration of big data analytics into a more holistic approach-JMP. Tech. Rep., SAS Institute. https://www.jmp.com/en_us/whitepapers/jmp/integration-of-big-data-analytics-holistic-approach.html

Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1:389–399

Karppi T (2018) 'The computer said so': on the ethics, effectiveness, and cultural techniques of predictive policing. Soc Media + Soc 4:205630511876829

Kongthon A, Sangkeettrakarn C, Kongyoung S, Haruechaiyasak C (2009) Implementing an online help desk system based on conversational agent. In: Proceedings of the International Conference on Management of Emergent Digital EcoSystems, MEDES '09. ACM, New York, NY, USA, pp. 69:450–69:451. https://doi.org/10.1145/1643823.1643908

de Laat PB (2018) Algorithmic decision-making based on machine learning from big data: can transparency restore accountability? Philos Technol 31:525–541

Laplace PS (1902) A philosophical essay on probabilities. J. Wiley, New York; Chapman & Hall, London. http://archive.org/details/philosophicaless00lapliala

Leslie D (2019) Understanding artificial intelligence ethics and safety. http://arxiv.org/abs/1906.05684

Loi M, Christen M (2019) How to include ethics in machine learning research. https://ercim-news.ercim.eu/en116/r-s/how-to-include-ethics-in-machine-learning-research

Majone G (1989) Evidence, argument, and persuasion in the policy process. Yale University Press, Yale

Markham AN, Tiidenberg K, Herman A (2018) Ethics as methods: doing ethics in the era of big data research-introduction. Soc Media + Soc 4:205630511878450

Massachusetts Institute of Technology (2019) Moral machine. http://moralmachine.mit.edu

McCarthy J, Minsky ML, Rochester N, Shannon CE (2006) A proposal for the dartmouth summer research project on artificial intelligence, August 31, 1955. AI Mag 27:12–12

Mittelstadt B (2019) Principles alone cannot guarantee ethical AI. Nat Mach Intell 1:501–507

Molnar C (2020) Interpretable machine learning (2020). https://christophm.github.io/interpretable-ml-book/

Morley J, Floridi L, Kinsey K, Elhalal A (2019) From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Tech Rep. https://arxiv.org/abs/1905.06876

Neff G, Tanweer A, Fiore-Gartland B, Osburn L (2017) Critique and contribute: a practice-based framework for improving critical data studies and data science. Big Data 5:85–97

Nissenbaum H (1996) Accountability in a computerized society. Sci Eng Ethics 2:25–42

Northpointe (2012) Practitioner’s guide to COMPAS. northpointeinc.com/files/technical_documents/FieldGuide2_081412.pdf

O’Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy, 1st edn. Crown, New York

Rader E, Cotter K, Cho J (2018) Explanations as mechanisms for supporting algorithmic transparency. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’1 8 . ACM Press, Montreal QC, Canada. pp. 1–13. http://dl.acm.org/citation.cfm?doid=3173574.3173677

Raji ID et al. (2020) Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, pp. 33–44. https://doi.org/10.1145/3351095.3372873

Ravetz JR (1987) Usable knowledge, usable ignorance: incomplete science with policy implications. Knowledge 9:87–116

Rêgo de Almeida PG, Denner dos Santos C, Silva Farias J (2020) Artificial intelligence regulation: a meta-framework for formulation and governance. In: Proceedings of the 53rd Hawaii International Conference on System Sciences (2020). http://hdl.handle.net/10125/64389

Roberts H et al. (2019) The Chinese approach to artificial intelligence: an analysis of policy and regulation. SSRN Electron J. https://www.ssrn.com/abstract=3469784

Rosen R (2005) Life itself: a comprehensive inquiry into the nature, origin, and fabrication of life. Columbia University Press, New York

Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. http://arxiv.org/abs/1811.10154

Russell SJ (2010) Artificial intelligence: a modern approach. Prentice Hall, Upper Saddle River, NJ

Saltelli A et al. (2008) Global sensitivity analysis: the primer. Wiley, Hoboken, NJ

Saltelli A (2019) A short comment on statistical versus mathematical modelling. Nat Commun 10:3870

Saltelli A (2020) Ethics of quantification or quantification of ethics? Futures 116:102509

Saltelli A, Funtowicz S (2014) When all models are wrong. Issues Sci Technol 30:79–85

Samuel AL (1959) Some studies in machine learning using the game of checkers. IBM J Res Dev 3:210–229

Sareen S, Saltelli A, Rommetveit K (2020) Ethics of quantification: illumination, obfuscation and performative legitimation. Palgrave Commun 6:1–5

Sears (2018) The role of artificial intelligence in the classroom. https://elearningindustry.com/artificial-intelligence-in-the-classroom-role

Sennaar K (2019) AI in agriculture-present applications and impact. https://emerj.com/ai-sector-overviews/ai-agriculture-present-applications-impact/

Van Der Sluijs JP et al. (2005) Combining quantitative and qualitative measures of uncertainty in model-based environmental assessment: The NUSAP system. Risk Anal 25:481–492

Smith A (2018) Franken-algorithms: the deadly consequences of unpredictable code. The Guardian. https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger

Sonnenburg S et al. (2007) The need for open source software in machine learning. J Mach Learn Res 8:2443–2466

Supiot A (2017) Governance by numbers: the making of a legal model of allegiance. Hart Publishing, Oxford; Portland, Oregon

Taleb NN (2007) The Black Swan: the impact of the highly improbable. Random House Publishing Group, New York, NY

Thimbleby H (2003) Explaining code for publication. Softw: Pract Experience 33:975–1001

Wallach W, Allen C (2008) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford, USA

Watson D, Floridi L (2019) The explanation game: A formal framework for interpretable machine learning. https://papers.ssrn.com/abstract=3509737

Wiener N (1988) The human use of human beings: cybernetics and society. Da Capo Press, New York, N.Y, new edition

Wong YH et al. (2020). Deterrence in the age of thinking machines: product page. RAND Corporation. https://www.rand.org/pubs/research_reports/RR2797.html

Ye H et al. (2018) Machine learning for vehicular networks: recent advances and application examples. IEEE Vehicular Technol Mag 13:94–101

Yu H et al. (2018) Building ethics into artificial intelligence. http://arxiv.org/abs/1812.02953

Yurtsever E, Capito L, Redmill K, Ozguner U (2020) Integrating deep reinforcement learning with model-based path planners for automated driving. http://arxiv.org/abs/2002.00434

Acknowledgements

I would like to thank Kjetil Rommetveit, Andrea Saltelli and Siddharth Sareen for organising the workshop Ethics of Quantification, at which a previous version of this paper was presented, and the Centre for the Study of the Sciences and the Humanities of the University of Bergen for the travel grant. I also thank Thomas Hodgson, Jill Walker Rettberg, Elizabeth Chatterjee, Ragnar Fjelland and Marta Kuc-Czarnecka for their useful comments at that venue, and, finally, Stefán Thor Smith and Andrea Saltelli for their suggestions and constructive criticism on a draft version of the present manuscript.

Author information

Authors and Affiliations

School of the Built Environment, University of Reading, Reading, UK

Samuele Lo Piano

Open Evidence, Universitat Oberta de Catalunya, Barcelona, Catalonia, Spain

Corresponding author

Correspondence to Samuele Lo Piano .

Ethics declarations

Competing interests

The author declares no competing interests.

Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Lo Piano, S. Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward. Humanit Soc Sci Commun 7, 9 (2020). https://doi.org/10.1057/s41599-020-0501-9

Received: 29 January 2020

Accepted: 12 May 2020

Published: 17 June 2020

DOI: https://doi.org/10.1057/s41599-020-0501-9

Ethical Issues in Research

Related Papers

udo schuklenk

ABSTRACT This module will introduce you to the ethical concepts underlying applied ethical decision-making in the area of research involving human participants. We will also learn what issues concern people involved in research on research ethics. Ethics without an understanding of its historical and legal context arguably makes little sense. It is for this reason that this module begins with a brief history of research ethics and ends with a brief overview of the relevant national and international guidelines pertaining to ethical issues in research involving human participants.

Hannah Farrimond

Chapter on the ethics of research with children and young people, in the SAGE/BERA Handbook of Educational Research, eds Dominic Wyse, Emma Smith, Larry E. Suter and Neil Selwyn. London and New York: SAGE Publications Ltd.

Dr. Basanta P D Adhikari

Ethics in research is imperative in the contemporary world for good, high-quality research, because complications and sensitive ethical issues involving human beings are increasing day by day; my research work is not excluded from these possible ethical complications. I learned a good lesson about ethics in research: ethical threats are continuously possible, from the beginning to the end of a research project. It is not possible to identify all potential ethical questions or to adjudicate on what is correct researcher behaviour. This module provides me with a greater awareness and fuller understanding of the ethical dilemmas and moral issues lurking in the interstices of the research process. I learned that codes of ethics cannot anticipate all problems, but they can minimise the threats to our research activities. I reflected that there is a six-fold advantage in fashioning a personal code of ethical practice that can support me in managing my research processes. The first is that a code establishes one as a member of the wider scientific community, with a shared interest in its values and concerns. The second is that a code of ethical practice makes me aware of my obligations to my subjects and alerts me to those problem areas where there is general consensus about what is acceptable and what is not; that is, it has clarificatory value. The third is that when my professional behaviour is guided by a principled code of ethics, it becomes possible to consider that there may be alternative ways of doing the same thing. The fourth is that a balanced code can be an important organising factor in my perception of the research situation, and as such may assist me in my need to anticipate and prepare. The fifth is that a code of practice validated by my own sense of rightness will help me develop an intuitive sensitivity that will be particularly helpful in dealing with the unknown and the unexpected, particularly in my qualitative interviews. The last advantage is that the practice of a code brings discipline to my awareness, supporting better research planning and the implementation of codes of conduct in my ongoing research project (Cohen et al., 2007; Broom, 2006; Houghton et al., 2012).

Ethical Research in Business Ethics

  • Editorial Essay
  • Published: 29 November 2022
  • Volume 182, pages 1–5 (2023)

Cite this article

  • Gazi Islam 1 &
  • Michelle Greenwood 2  

In this editorial essay, we argue that business ethics research should be aware of the ethical implications of its own methodological choices, and that these implications include, but go beyond, mere compliance with standardized ethical norms. Methodological choices should be made specifically with reference to their effects on the world, both within and outside the academy. Awareness of these effects takes researchers beyond assuring ethics in their methods to more fully consider the ethics of their methods as knowledge practices that have broader institutional consequences. Drawing from examples in published research, we examine five ways in which authors can formulate their methodological approaches with purpose, care and reflexivity.

Business ethicists are accustomed to confronting the "hard cases" of ethical choices in organizational life. We believe that business ethics scholarship must be equally sensitive to ethical nuances in the design and implementation of research methods in our own activities. In the complexities of research practice, ethical considerations around method and design exceed the standardized templates of methods textbooks. Where research designs begin and end, whom they implicate as protagonists, who receives voice, protection and authority, and what is rendered visible and invisible within the field of study: these are thorny questions that are not amenable to check-list style compliance guidelines, even where such guidelines also have an important role (cf., Greenwood, 2016).

In our exchanges with authors and within the editorial team, we have confronted a plethora of hard cases that highlight the challenges of research ethics beyond rule compliance. To what extent should the mode of data collection (such as crowdsourced data or social media platforms) answer to ethical quandaries around digital labour and online surveillance? When should organizations or individuals engaging in ethically problematic practices be named, and when must they be anonymized? To what extent should the relationships between researchers and participants be problematized within methods sections, including financial and power relationships between funders, researchers and participants? What are the respective roles of institutional ethics boards and journal editorial teams (along with other actors in the research ecosystem) in validating the ethical permissibility of a design? When should hard ethical questions lead a study to be rejected at the review stage, rather than passed along to the research community to make its own judgment? Such questions (and many, many more) have filled our days with deep reflection, and the current editorial aims to share some of these reflections with the Journal of Business Ethics community, albeit in necessarily schematic form. Specifically, we aim to both expand thinking about research ethics to include elements that are often considered outside of methods, and situate conventional methodological ethics in relation to this broader vision. The result will be a plea for a research ethics based on purpose, care and reflexivity.

Between Prescriptive and Evaluative Research Ethics

In a previous editorial essay (Islam & Greenwood, 2021 ), we borrowed a distinction by Williams ( 1985 ) between prescriptive and evaluative ethics; the former refers to what one should do, while the latter to what the world should look like. Mapped onto methods, this analytical distinction differentiates between specific methodological practices (e.g., one should design measures that fit the core constructs, one should gather informed consent) and the broader social and practical implications of research (e.g., the goals of science to innovate, educate or emancipate). We emphasize that this is an “analytical” distinction because, in practice, these aspects of ethics are deeply intertwined, and we distinguish them primarily to show how they spill into each other. Actions should be prescribed, at least in part, for the worlds they contribute to making, although in the fog of situated practice, we are often unaware of, or unable to, clearly link our actions to those future worlds.

From this distinction, it is easy to differentiate heuristically between ethics in research methods, that is, the ethical norms and practices internal to research design and execution, and the ethics of research methods, that is, whether those methods should be used in the broader evaluative sense. In many cases, these ethical levels align, with ethical practices working toward an evaluatively desirable world. Gathering informed consent is important because it is desirable to promote a world of autonomous choice (e.g., Hansson, 2006 ). Hypothesizing after the results are known is problematic because promoting false positive statistical results reduces replicability and thus scientific certainty about the world (Kerr, 1998 ). To take the previous example, however, some have argued that “HARK”ing is less ethically problematic when research is transparently exploratory (Hollenbeck & Wright, 2017 ); in this case, what is ethically problematic is not the practice per se, but the lack of transparency between a given practice and its exploratory (rather than confirmatory) intent. As for informed consent, in cases where a signed form substitutes for, rather than expresses, true participant autonomy (cf., Dubois et al, 2012 ), it can obscure rather than clarify the ethics of a research project. To begin with, the presentation of a priori formulated protocols for consent presumes that the identified participant is the only stakeholder in the research who is affected by the research in a manner that would require their consent. Moreover, this protocol may preclude collaborative models in which participants actively construct research protocols with researchers (Hansson, 2006 ). In both of these examples, a practice is justified on the basis of a deeper evaluative motive, but the mapping between the two is imperfect and situation-dependent.

Tensions may appear between prescriptive and evaluative dimensions of research methods, giving rise to ethical polemics or dilemmas. To give one example, we have had recent debates around the ethics of online data crowdsourcing from platforms such as Amazon MTurk (e.g., Newman et al., 2021 ). Much discussion has been given to best practice in terms of construct validity and similar “internal” considerations of research design as well as issues such as “bots” or fraudulent respondent activity that affect validity. However, broader considerations in terms of labour exploitation on online platforms (e.g., Shank, 2016 ) bridge internal and external research ethics, given internal norms for participant autonomy and external considerations of the public good. Less discussed are the systematic effects of widespread use of online data collection for disembodying researchers from participant communities, entrenching economies of digital labour and surveillance, and reifying a context-free individual as the object of social scientific study. These, we would argue, are methodological outcomes that may contribute to undesirable worlds, and thus are materially relevant for ethical consideration.

Other examples illustrate the opposite tension between prescriptive and evaluative research ethics. In a provocative article, Roulet et al. (2017) describe the potential of "covert" research, where normally unacceptable practices of researcher concealment are weighed against laudable goals such as revealing workplace abuse or unethical organizational practices. In such cases, practices that are prescriptively problematic (e.g., collecting data without consent, concealing researcher identity) are defended on the grounds that the ethical goods, in terms of creating a better world, legitimate such practices. While the example of online platforms seems more defensible at the level of practice but questionable at the level of broad systemic implications, that of covert research seems more problematic at the level of practices while (possibly) defensible in terms of its ethical purposes.

More than simply a conflict between means and ends, however, such tensions reveal discrepancies between ends that are “localized” as specific practices (e.g., the goal of conducting a valid study according to current norms) and the more broad-based ends of research (e.g., creating a better world through socially reflexive knowledge production). Our challenge at the Journal of Business Ethics as editors, and our counsel to authors, reviewers and editors is to reflexively seek equilibrium between the practical ethics of research design and execution and the broader promotion of the public good that is the ultimate end of science.

Guiding Ethical Research in Business Ethics

Situating research ethics within the relationship between concrete ethical practices and evaluative goals of social improvement adds complexity to ethical decisions, forcing researchers, reviewers and editors to confront real ethical dilemmas that cannot be dissolved in mere compliance practices. We think the recognition of this complexity is salutary. It emphasizes that the review process is one moment in the broader network of evaluative practices that includes—but is not limited to—institutional ethics approval processes prior to submission, ethical and legal considerations of publishing houses and scholarly societies that administer academic production, and reception of research after publication. Each of these moments bring into light different ethical stakes, and we see our editorial role as an important but not exhaustive evaluative moment. From our perspective, our role is not to present a hurdle over which only the most flawless research can pass, but to curate a conversation with the greatest potential for scholarly generativity and progress. This makes our goal a collective one, and we judge research for its ability to promote the field, by being rigorous, by being interesting, by being reflexive, or by some combination of these epistemic virtues. From the research ethics we have outlined we derive certain guiding principles for evaluation.

Showing Links Between Methodological Design and the Broader Purpose of the Study

Business ethics scholarship should clarify its purpose through clearly articulated research questions and hypotheses, while explaining in its methods why specific research practices are important for a broader purpose, and why that purpose is itself ethically relevant. Specifically, the methods discussion should reflect how the ethics-related purpose of the study is consistent with the methodological approach adopted, both in terms of the broad design and specific practices. In short, integration of methods with the wider purpose of the study, and alignment between the two, is a mark of ethically sensitive research.

In their recent study of child labour in Indian cottonseed oil farms, D’Cruz et al. ( 2022 ) demonstrate an exemplary integration of methods and purpose to explore a topic that is notoriously difficult to study methodologically. Drawing on analyses of children’s drawings, together with detailed conversational extracts, the authors paint a powerful picture of the experience of violence in a population of working children. Rather than staying only at the level of lived experiences, however, the authors use those experiences to understand how processes of embedding and disembedding labour within society are manifested at the micro level. Thus, their visual and discursive methods become powerful tools to link everyday suffering with macro processes of economy and society.

Acknowledging the Web of Relationships Within Which Research Methods are Embedded

Each aspect of the research process, from protocol design to data collection to peer review, involves multiple actors who collectively construct the meaning of scholarship (Greenwood, 2016 ). While it may not be possible to make this network entirely visible, the ability to do so increases the transparency and value of a scholarly inquiry.

In his study of the effects of external funding on research freedom, Goduscheit (2022) uses qualitative interviews, program materials and observations to understand how funding bodies shape research outcomes. He shows how expectations from funding bodies can shape the types of topics studied, the ways in which research questions are answered and the forms of research output that are produced. Rather than simply deeming such influences unethical, he analyses the positive and negative features of the evolving relationships between researchers and funding bodies and their implications for developing scholarship.

Similarly acknowledging relationships but on a very different topic, Allen et al. ( 2019 ) describe the role of reflexivity in sustainability research, where ecological responsibility can result from acknowledging the multiple relationships between humans and the environment. Promoting an “ecocentric radical-reflexivity”, they point to how methods such as participatory action research and arts-based methods can help identify organizational actors as embedded in ecological relationships. In this example, as in the previous one, research is recognized as more than simply the execution of accepted standards. Rather, ethical research depends on developing sensibilities towards the complex economic and ecological relationships in which scholarship is situated.

Complementing Compliance with Purpose

Ethics should be explicitly discussed as an aspect of methodology, but this is best done when a focus on compliance with standards is complemented by a consideration of core ethical issues and a transparent discussion of how decisions were made in response to those issues. Doing so reveals those decisions as tailor-made for the case at hand and not imposed upon the case without regard for its specificities (Greenwood, 2016 ). In other words, compliance is not a sufficient criterion for ethical research methods, and a methodological approach focused exclusively on ethical compliance criteria may miss the “bigger picture” of the role of the methods in the broader scientific and social goals of the study.

Nielsen’s ( 2016 ) paper on ethical praxis and action research elaborates on how research involves ethical decision making and situated, pragmatic choices that go beyond simply ticking the correct ethical boxes. Describing these from an Aristotelian perspective, he elaborates how researcher-participant interactions give rise to emergent research concerns that are both knowledge-related problems and problems for practice. The ethics of action research in this context is about facing unique problems that cut across the researcher-practitioner divide and can draw upon but are not limited to pre-existing ethics templates.

Adopting an Explanatory Versus a Justificatory Orientation

Methodological descriptions of ethics often have the tone of justification claims legitimizing authorial choices in terms of sample, data collection or analysis. Such justifications are warranted, and are good practice, but we believe that value is added when authors are more forthright about their ethical difficulties and dilemmas. Specifically, we value their attempts to work out those dilemmas transparently for a scholarly audience, which is thereby given access to the workings of the scientific decision-making process rather than simply being presented with a black box labeled "method". There is more value in showing the path taken to an ethical judgement than in simply defending that the end decision was a good one. This also implies that wrong turns, changes of track, and similar ethical revisions should be described, and that they contribute to the value of a paper.

Litz and Turner's (2013) study of unethical practices in inherited family firms provides an interesting case of how researchers can productively describe the dilemmas they face methodologically. Given the difficulty of gathering data about the unethical practices of family members, they candidly ask "how does one approach a question so laced with shame and stigma?" (p. 303). Rather than presenting their method in terms of templates used to justify their choices, they recruit the readers directly into their dilemma and walk them through their choices, which involved confronting participants with dramatic scenarios that allowed them to disclose intimately held views more safely. The technique was ultimately built into a validation exercise and a quantitative analysis, the latter gaining credibility from its grounding in the initial researcher dilemma that led to the methodological approach.

Transparency and Reflexivity in Writing and Link Between Methods and Results Sections

Because transparent and reflexive description of methods integrates theoretical considerations within the methods itself, such description allows the method to operate more organically within the broader argument of the paper. Doing so allows authors to establish links between the methods and discussion sections, to describe what went right or wrong, what the limitations and possibilities of the method were, and how future research could remedy possible shortcomings or harms of the given method.

For example, Bontempi et al.'s (2021) study of CSR reporting, inspired by the case of the Ethiopian Gibe III dam, is exemplary of how a paper can reflexively and transparently link methods and results. Engaging in a "counter-reporting", the study draws upon conceptual literature, archival and theoretical research, and activist on-the-ground engagement to build an alternative view of reported social engagement around hydroelectric dams. Alternating between inductive and deductive approaches, the authors were particularly reflexive and deeply transparent in their methodological description, including detailed and publicly available information from their codebook in the article's supplementary materials. The result went beyond the standard critique of CSR discourses to actively create a counter-discourse that was both scholarly and activist in orientation. The resulting discursive struggle continued onto the blogosphere, with methodological debate between the authors and the company itself over methods. Footnote 1 We see such interaction and engagement as key to the social relevance of research.

Purpose, Care and Reflexivity

Research ethics have conventionally been concerned with the procedural aspects of scholarship, in particular the methods. The gold standard in this regard has been to treat ethical standards not merely as hurdles but as aspirations. In this sense, an ethical researcher is one who does not only comply but who also cares. We suggest that care requires researchers to actively reflect on and take responsibility for their ethical practices and their research goals, and to situate their practices reflexively within a broader collective process of scholarly inquiry. Thus, we extend the notion of care to embrace the reflexivity of the researcher with regard to their own positionality (and privilege) and with regard to the purpose of research, treating ethics as central to the entire research endeavor. Complementing ethical theorizing that draws data from orthodox empirical methods, we encourage scholars to take up new forms of ethical empirical research in which connections between the conduct of the research and the motivation of the research are deeply and actively formed. The guiding principles we outline in this editorial are aimed at integrating organic, particularized and reflective narratives about the ethical conduct and goals of research in the methods section and throughout the manuscript. Editors, reviewers and authors can all contribute to treating research ethics more centrally in business ethics research.

Footnote 1: https://www.business-humanrights.org/es/%C3%BAltimas-noticias/rejoinder-to-webuilds-response/

Allen, S., Cunliffe, A. L., & Easterby-Smith, M. (2019). Understanding sustainability through the lens of ecocentric radical-reflexivity: Implications for management education. Journal of Business Ethics, 154 (3), 781–795.

Bontempi, A., Del Bene, D., & Di Felice, L. J. (2021). Counter-reporting sustainability from the bottom up: The case of the construction company WeBuild and dam-related conflicts. Journal of Business Ethics, 2021 , 1–26.

D’Cruz, P., Noronha, E., Banday, M. U. L., & Chakraborty, S. (2022). Place matters:(Dis) embeddedness and child labourers’ experiences of depersonalized bullying in Indian Bt cottonseed global production networks. Journal of Business Ethics, 176 (2), 241–263.

DuBois, J. M., Beskow, L., Campbell, J., Dugosh, K., Festinger, D., Hartz, S., & Lidz, C. (2012). Restoring balance: A consensus statement on the protection of vulnerable research participants. American Journal of Public Health, 102 (12), 2220–2225.

Goduscheit, R. C. (2022). No strings attached? Potential effects of external funding on freedom of research. Journal of Business Ethics, 176 (1), 1–15.

Greenwood, M. (2016). Approving or improving research ethics in management journals. Journal of Business Ethics, 137 (3), 507–520.

Islam, G., & Greenwood, M. (2021). Reconnecting to the social in business ethics. Journal of Business Ethics, 170 (1), 1–4.

Hansson, S. O. (2006). Informed consent out of context. Journal of Business Ethics, 63 (2), 149–154.

Hollenbeck, J. R., & Wright, P. M. (2017). Harking, sharking, and tharking: Making the case for post hoc analysis of scientific data. Journal of Management, 43 (1), 5–18.

Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality & Social Psychology Review, 2 , 196.

Litz, R. A., & Turner, N. (2013). Sins of the father’s firm: Exploring responses to inherited ethical dilemmas in family business. Journal of Business Ethics, 113 (2), 297–315.

Newman, A., Bavik, Y. L., Mount, M., & Shao, B. (2021). Data collection via online platforms: Challenges and recommendations for future research. Applied Psychology, 70 (3), 1380–1402.

Nielsen, R. P. (2016). Action research as an ethics praxis method. Journal of Business Ethics, 135 (3), 419–428.

Roulet, T. J., Gill, M. J., Stenger, S., & Gill, D. J. (2017). Reconsidering the value of covert research: The role of ambiguous consent in participant observation. Organizational Research Methods, 20 (3), 487–517.

Shank, D. B. (2016). Using crowdsourcing websites for sociological research: The case of Amazon Mechanical Turk. American Sociologist, 47 (1), 47–55.

Williams, B. (1985). Ethics and the limits of philosophy . Harvard University Press.

Author information

Authors and Affiliations

Grenoble Ecole de Management and IREGE, Grenoble, France

Faculty of Business and Economics, Monash University, Melbourne, VIC, Australia

Michelle Greenwood

Corresponding author

Correspondence to Gazi Islam .

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Islam, G., Greenwood, M. Ethical Research in Business Ethics. J Bus Ethics 182, 1–5 (2023). https://doi.org/10.1007/s10551-022-05301-z

Published: 29 November 2022

Issue Date: January 2023

DOI: https://doi.org/10.1007/s10551-022-05301-z


  • Research ethics
  • Reflexivity
  • Research purpose
  • Methodology
  • Research integrity
  • Social impact
  • Beyond compliance
