Professional Ethics of Software Engineers: An Ethical Framework

  • Original Paper
  • Published: 06 June 2015
  • Volume 22, pages 417–434 (2016)

  • Yotam Lurie 1 &
  • Shlomo Mark 2  

The purpose of this article is to propose an ethical framework for software engineers that connects software developers’ ethical responsibilities directly to their professional standards. Implementing such an ethical framework can overcome the traditional dichotomy between professional skills and ethical skills, which plagues the engineering professions, by approaching the practitioner’s fundamental task, software development, in a way that intrinsically connects professional standards to ethical responsibilities. In so doing, the ethical framework improves the practitioner’s professionalism and ethics. We call this approach Ethical-Driven Software Development (EDSD). EDSD demonstrates the advantages of an ethical framework over the all too familiar approach in professional ethics that advocates “stand-alone codes of ethics”. We believe that one outcome of this synergy between professional and ethical skills is simply better engineers. Moreover, since an engineer can often provide different software solutions to the issue at stake, the ethical framework offers a guiding principle within the software development process that helps the engineer evaluate the advantages and disadvantages of those solutions. The framework does not, and cannot, affect the end product in and of itself. However, it can and should make the software engineer more conscious and aware of the ethical ramifications of certain engineering decisions within the process.

The eight major engineering principles (Hambling 1995) are: plan before building (planning requires knowledge, experience, and the availability of resources, planning tools, and finances/cost; since these constrain planning, the quality of planning is largely predetermined by them); assure compatibility (all producers work according to the same standards); design testing procedures before building; check designs before commitment; configuration management; quality assurance and quality control; learn from mistakes (reuse); and know where you’re going.

In 2004 the IEEE further clarified this definition (IEEE 2004): “Software engineering is about creating high-quality software in a systematic, controlled, and efficient manner. Consequently, there are important emphases on analysis and evaluation, specification, design, and evolution of software. In addition, there are issues related to management and quality, to novelty and creativity, to standards, to individual skills, and to teamwork and professional practice that play a vital role in software engineering”.

Amity, E. (2014). Agile and professional ethics . M.Sc. thesis in software engineering. SCE - Shamoon College of Engineering, Israel.

Basart, J. M. (2013). Engineering ethics beyond engineers’ ethics. Science and Engineering Ethics, 19 (1), 179–187.

Bayles, M. D. (1982). Professional ethics . Belmont, CA: Wadsworth Pub. Co.

Boisjoly, R. P., Curtis, F. E., & Mellican, E. (1989). Roger Boisjoly and the Challenger disaster: The ethical dimensions. Journal of Business Ethics, 8 (4), 217–230.

Braude, E. J. (2011). Software engineering: Modern approaches . New York: Wiley.

Brooks, F. P. (1986). No silver bullet—Essence and accident in software engineering. In Proceedings of the IFIP Tenth world computing conference (pp. 1069–1076).

Davis, M. (1996). Defining ‘engineer’: How to do it and why it matters. Journal of Engineering Education, 85 , 97–101.

Davis, M. (1999). Professional responsibility: Just following the rules? Business and Professional Ethics Journal, 18 , 65–87.

De George, R. T. (1981). Ethical responsibilities of engineers in large organizations: The Pinto case. Business and Professional Ethics Journal, 1 (1), 1–14.

Dodig-Crnkovic, G., & Feldt, R. (2009). Professional and ethical issues of software engineering curriculum applied in Swedish academic context. In HAoSE 2009 first workshop on human aspects of software engineering . Orlando, Florida.

Farrell, B. C. (2002). Codes of ethics: Their evolution, development and other controversies. Journal of Management Development, 21 (2), 152–163.

Foot, P. (1978). Virtues and vices and other essays in moral philosophy . Berkeley and Oxford: University of California Press and Blackwell.

Ford, G., & Gibbs, N. (1996). A mature profession of software engineering . Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University.

Friedman, B. (2012). The envisioning cards: A toolkit for catalyzing humanistic and technical imaginations. In Proceedings of the SIGCHI conference on human factors in computing systems . ACM.

Gaumnitz, B. R. (2004). A classification scheme for codes of business ethics. Journal of Business Ethics, 49 (4), 329–335.

Gotterbarn, D. M. (1999). Computer society and ACM approve software engineering code of ethics. Retrieved from http://www.computer.org/cms/Computer.org/Publications/code-of-ethics.pdf

Greenwood, E. (1957). Attributes of a profession. Social Work, 2 , 44–55.

Hambling, B. (1995). Managing software quality . New York: McGraw Hill.

IEEE. (1993). IEEE standards collection. In IEEE standard glossary of software engineering terminology . IEEE Computer Society.

IEEE. (2004). Software engineering 2004 curriculum guidelines for undergraduate degree programs in software engineering . Washington, DC: IEEE Computer Society.

IEEE. (2009). Curriculum guidelines for undergraduate degree programs in software engineering . IEEE Computer Society. Retrieved from http://sites.computer.org/ccsepp15

Kasher, A. (2005). Professional ethics and collective professional autonomy. Ethical Perspectives, 11 (1), 67–98.

MacIntyre, A. (1981). After virtue: A study in moral theory . Notre Dame, IN: University of Notre Dame Press.

Miller, K. W. (2011). Moral responsibility for computing artifacts: The rules. IT Professional, 13 (3), 57–59.

Mnkandla, E. (2009). About software engineering frameworks and methodologies. In AFRICON, 2009. AFRICON’09 . IEEE.

Mullet, D. (1999). The software crisis. Benchmarks Online—A monthly publication of Academic Computing Services of the University of North Texas Computing Center , 2 (7). https://www.unt.edu/benchmarks/archives/1999/july99/crisis.htm .

Naur, P., & Randell, B. (Eds.). (1968). Software engineering: Report of a conference sponsored by the NATO Science Committee . Garmisch, Germany: The first NATO Software Engineering Conference, 7–11 October 1968.

NIST. (2002). News release of June 28, 2002 . The National Institute of Standards and Technology.

Parsons, T. (1939). The professions and social structure. In T. Parsons (Ed.), Essays in sociological theory (pp. 34–49). New York: The Free Press.

Pressman, R. (2010). Software engineering: A practitioner’s approach (7th ed.). New York: McGraw Hill.

Putnam, H. (2002). The collapse of the fact/value dichotomy and other essays . Cambridge: Harvard University Press.

Schmemann, S. (1997). 2 Die at games in Israel as bridge collapses . Retrieved from http://www.nytimes.com/1997/07/15/world/2-die-at-games-in-israel-as-bridge-collapses.html

Sommerville, I. (2004). Software engineering, international computer science series (7th ed.). Boston: Addison Wesley, Pearson Education.

Standish Group. (n.d.). Retrieved from http://www.standishgroup.com/newsroom/chaos_2009.php

Tilmann, G., & Weinberger, J. (2004). Technology never fails, but projects can. Baseline, 1 (26), 28.

Author information

Authors and Affiliations

Department of Management, Ben-Gurion University of the Negev, P.O. Box 653, Beersheba, 84105, Israel

Yotam Lurie

Department of Software Engineering, SCE- Shamoon College of Engineering, Bialik/Basel Street, Beersheba, 84100, Israel

Shlomo Mark

Corresponding author

Correspondence to Yotam Lurie .

About this article

Lurie, Y., Mark, S. Professional Ethics of Software Engineers: An Ethical Framework. Sci Eng Ethics 22, 417–434 (2016). https://doi.org/10.1007/s11948-015-9665-x

Received: 10 January 2015

Accepted: 29 May 2015

Published: 06 June 2015

Issue Date: April 2016

DOI: https://doi.org/10.1007/s11948-015-9665-x

  • Code of ethics
  • Professional ethics
  • Software engineer

Using the Software Engineering Code of Ethics in Professional Computing Issues

Code of Ethics

Short Version

The short version of the code summarizes aspirations at a high level of abstraction; the clauses that are included in the full version give examples and details of how these aspirations change the way we act as software engineering professionals. Without the aspirations, the details can become legalistic and tedious; without the details, the aspirations can become high-sounding but empty; together, the aspirations and the details form a cohesive code.

Software engineers shall commit themselves to making the analysis, specification, design, development, testing and maintenance of software a beneficial and respected profession. In accordance with their commitment to the health, safety and welfare of the public, software engineers shall adhere to the following Eight Principles:

1. PUBLIC – Software engineers shall act consistently with the public interest.
2. CLIENT AND EMPLOYER – Software engineers shall act in a manner that is in the best interests of their client and employer consistent with the public interest.
3. PRODUCT – Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.
4. JUDGMENT – Software engineers shall maintain integrity and independence in their professional judgment.
5. MANAGEMENT – Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.
6. PROFESSION – Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.
7. COLLEAGUES – Software engineers shall be fair to and supportive of their colleagues.
8. SELF – Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.

Full Version

Computers have a central and growing role in commerce, industry, government, medicine, education, entertainment and society at large. Software engineers are those who contribute by direct participation or by teaching, to the analysis, specification, design, development, certification, maintenance and testing of software systems. Because of their roles in developing software systems, software engineers have significant opportunities to do good or cause harm, to enable others to do good or cause harm, or to influence others to do good or cause harm. To ensure, as much as possible, that their efforts will be used for good, software engineers must commit themselves to making software engineering a beneficial and respected profession. In accordance with that commitment, software engineers shall adhere to the following Code of Ethics and Professional Practice.

The Code contains eight Principles related to the behavior of and decisions made by professional software engineers, including practitioners, educators, managers, supervisors and policy makers, as well as trainees and students of the profession. The Principles identify the ethically responsible relationships in which individuals, groups, and organizations participate and the primary obligations within these relationships. The Clauses of each Principle are illustrations of some of the obligations included in these relationships. These obligations are founded in the software engineer’s humanity, in special care owed to people affected by the work of software engineers, and the unique elements of the practice of software engineering. The Code prescribes these as obligations of anyone claiming to be or aspiring to be a software engineer.

It is not intended that the individual parts of the Code be used in isolation to justify errors of omission or commission. The list of Principles and Clauses is not exhaustive. The Clauses should not be read as separating the acceptable from the unacceptable in professional conduct in all practical situations. The Code is not a simple ethical algorithm that generates ethical decisions. In some situations standards may be in tension with each other or with standards from other sources. These situations require the software engineer to use ethical judgment to act in a manner which is most consistent with the spirit of the Code of Ethics and Professional Practice, given the circumstances.

Ethical tensions can best be addressed by thoughtful consideration of fundamental principles, rather than blind reliance on detailed regulations. These Principles should influence software engineers to consider broadly who is affected by their work; to examine if they and their colleagues are treating other human beings with due respect; to consider how the public, if reasonably well informed, would view their decisions; to analyze how the least empowered will be affected by their decisions; and to consider whether their acts would be judged worthy of the ideal professional working as a software engineer. In all these judgments concern for the health, safety and welfare of the public is primary; that is, the “Public Interest” is central to this Code.

The dynamic and demanding context of software engineering requires a code that is adaptable and relevant to new situations as they occur. However, even in this generality, the Code provides support for software engineers and managers of software engineers who need to take positive action in a specific case by documenting the ethical stance of the profession. The Code provides an ethical foundation to which individuals within teams and the team as a whole can appeal. The Code helps to define those actions that are ethically improper to request of a software engineer or teams of software engineers.

The Code is not simply for adjudicating the nature of questionable acts; it also has an important educational function. As this Code expresses the consensus of the profession on ethical issues, it is a means to educate both the public and aspiring professionals about the ethical obligations of all software engineers.

Principle 1 – PUBLIC

Software engineers shall act consistently with the public interest. In particular, software engineers shall, as appropriate:

1.01. Accept full responsibility for their own work.
1.02. Moderate the interests of the software engineer, the employer, the client and the users with the public good.
1.03. Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good.
1.04. Disclose to appropriate persons or authorities any actual or potential danger to the user, the public, or the environment, that they reasonably believe to be associated with software or related documents.
1.05. Cooperate in efforts to address matters of grave public concern caused by software, its installation, maintenance, support or documentation.
1.06. Be fair and avoid deception in all statements, particularly public ones, concerning software or related documents, methods and tools.
1.07. Consider issues of physical disabilities, allocation of resources, economic disadvantage and other factors that can diminish access to the benefits of software.
1.08. Be encouraged to volunteer professional skills to good causes and contribute to public education concerning the discipline.

Principle 2 – CLIENT AND EMPLOYER

Software engineers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest. In particular, software engineers shall, as appropriate:

2.01. Provide service in their areas of competence, being honest and forthright about any limitations of their experience and education.
2.02. Not knowingly use software that is obtained or retained either illegally or unethically.
2.03. Use the property of a client or employer only in ways properly authorized, and with the client’s or employer’s knowledge and consent.
2.04. Ensure that any document upon which they rely has been approved, when required, by someone authorized to approve it.
2.05. Keep private any confidential information gained in their professional work, where such confidentiality is consistent with the public interest and consistent with the law.
2.06. Identify, document, collect evidence and report to the client or the employer promptly if, in their opinion, a project is likely to fail, to prove too expensive, to violate intellectual property law, or otherwise to be problematic.
2.07. Identify, document, and report significant issues of social concern, of which they are aware, in software or related documents, to the employer or the client.
2.08. Accept no outside work detrimental to the work they perform for their primary employer.
2.09. Promote no interest adverse to their employer or client, unless a higher ethical concern is being compromised; in that case, inform the employer or another appropriate authority of the ethical concern.

Principle 3 – PRODUCT

Software engineers shall ensure that their products and related modifications meet the highest professional standards possible. In particular, software engineers shall, as appropriate:

3.01. Strive for high quality, acceptable cost and a reasonable schedule, ensuring significant tradeoffs are clear to and accepted by the employer and the client, and are available for consideration by the user and the public.
3.02. Ensure proper and achievable goals and objectives for any project on which they work or propose.
3.03. Identify, define and address ethical, economic, cultural, legal and environmental issues related to work projects.
3.04. Ensure that they are qualified for any project on which they work or propose to work by an appropriate combination of education and training, and experience.
3.05. Ensure an appropriate method is used for any project on which they work or propose to work.
3.06. Work to follow professional standards, when available, that are most appropriate for the task at hand, departing from these only when ethically or technically justified.
3.07. Strive to fully understand the specifications for software on which they work.
3.08. Ensure that specifications for software on which they work have been well documented, satisfy the users’ requirements and have the appropriate approvals.
3.09. Ensure realistic quantitative estimates of cost, scheduling, personnel, quality and outcomes on any project on which they work or propose to work and provide an uncertainty assessment of these estimates.
3.10. Ensure adequate testing, debugging, and review of software and related documents on which they work.
3.11. Ensure adequate documentation, including significant problems discovered and solutions adopted, for any project on which they work.
3.12. Work to develop software and related documents that respect the privacy of those who will be affected by that software.
3.13. Be careful to use only accurate data derived by ethical and lawful means, and use it only in ways properly authorized.
3.14. Maintain the integrity of data, being sensitive to outdated or flawed occurrences.
3.15. Treat all forms of software maintenance with the same professionalism as new development.

Principle 4 – JUDGMENT

Software engineers shall maintain integrity and independence in their professional judgment. In particular, software engineers shall, as appropriate:

4.01. Temper all technical judgments by the need to support and maintain human values.
4.02. Only endorse documents either prepared under their supervision or within their areas of competence and with which they are in agreement.
4.03. Maintain professional objectivity with respect to any software or related documents they are asked to evaluate.
4.04. Not engage in deceptive financial practices such as bribery, double billing, or other improper financial practices.
4.05. Disclose to all concerned parties those conflicts of interest that cannot reasonably be avoided or escaped.
4.06. Refuse to participate, as members or advisors, in a private, governmental or professional body concerned with software related issues, in which they, their employers or their clients have undisclosed potential conflicts of interest.

Principle 5 – MANAGEMENT

Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance. In particular, those managing or leading software engineers shall, as appropriate:

5.01. Ensure good management for any project on which they work, including effective procedures for promotion of quality and reduction of risk.
5.02. Ensure that software engineers are informed of standards before being held to them.
5.03. Ensure that software engineers know the employer’s policies and procedures for protecting passwords, files and information that is confidential to the employer or confidential to others.
5.04. Assign work only after taking into account appropriate contributions of education and experience tempered with a desire to further that education and experience.
5.05. Ensure realistic quantitative estimates of cost, scheduling, personnel, quality and outcomes on any project on which they work or propose to work, and provide an uncertainty assessment of these estimates.
5.06. Attract potential software engineers only by full and accurate description of the conditions of employment.
5.07. Offer fair and just remuneration.
5.08. Not unjustly prevent someone from taking a position for which that person is suitably qualified.
5.09. Ensure that there is a fair agreement concerning ownership of any software, processes, research, writing, or other intellectual property to which a software engineer has contributed.
5.10. Provide for due process in hearing charges of violation of an employer’s policy or of this Code.
5.11. Not ask a software engineer to do anything inconsistent with this Code.
5.12. Not punish anyone for expressing ethical concerns about a project.

Principle 6 – PROFESSION

Software engineers shall advance the integrity and reputation of the profession consistent with the public interest. In particular, software engineers shall, as appropriate:

6.01. Help develop an organizational environment favorable to acting ethically.
6.02. Promote public knowledge of software engineering.
6.03. Extend software engineering knowledge by appropriate participation in professional organizations, meetings and publications.
6.04. Support, as members of a profession, other software engineers striving to follow this Code.
6.05. Not promote their own interest at the expense of the profession, client or employer.
6.06. Obey all laws governing their work, unless, in exceptional circumstances, such compliance is inconsistent with the public interest.
6.07. Be accurate in stating the characteristics of software on which they work, avoiding not only false claims but also claims that might reasonably be supposed to be speculative, vacuous, deceptive, misleading, or doubtful.
6.08. Take responsibility for detecting, correcting, and reporting errors in software and associated documents on which they work.
6.09. Ensure that clients, employers, and supervisors know of the software engineer’s commitment to this Code of ethics, and the subsequent ramifications of such commitment.
6.10. Avoid associations with businesses and organizations which are in conflict with this code.
6.11. Recognize that violations of this Code are inconsistent with being a professional software engineer.
6.12. Express concerns to the people involved when significant violations of this Code are detected unless this is impossible, counter-productive, or dangerous.
6.13. Report significant violations of this Code to appropriate authorities when it is clear that consultation with people involved in these significant violations is impossible, counter-productive or dangerous.

Principle 7 – COLLEAGUES

Software engineers shall be fair to and supportive of their colleagues. In particular, software engineers shall, as appropriate:

7.01. Encourage colleagues to adhere to this Code.
7.02. Assist colleagues in professional development.
7.03. Credit fully the work of others and refrain from taking undue credit.
7.04. Review the work of others in an objective, candid, and properly-documented way.
7.05. Give a fair hearing to the opinions, concerns, or complaints of a colleague.
7.06. Assist colleagues in being fully aware of current standard work practices including policies and procedures for protecting passwords, files and other confidential information, and security measures in general.
7.07. Not unfairly intervene in the career of any colleague; however, concern for the employer, the client or public interest may compel software engineers, in good faith, to question the competence of a colleague.
7.08. In situations outside of their own areas of competence, call upon the opinions of other professionals who have competence in that area.

Principle 8 – SELF

Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession. In particular, software engineers shall continually endeavor to:

8.01. Further their knowledge of developments in the analysis, specification, design, development, maintenance and testing of software and related documents, together with the management of the development process.
8.02. Improve their ability to create safe, reliable, and useful quality software at reasonable cost and within a reasonable time.
8.03. Improve their ability to produce accurate, informative, and well-written documentation.
8.04. Improve their understanding of the software and related documents on which they work and of the environment in which they will be used.
8.05. Improve their knowledge of relevant standards and the law governing the software and related documents on which they work.
8.06. Improve their knowledge of this Code, its interpretation, and its application to their work.
8.07. Not give unfair treatment to anyone because of any irrelevant prejudices.
8.08. Not influence others to undertake any action that involves a breach of this Code.
8.09. Recognize that personal violations of this Code are inconsistent with being a professional software engineer.

This Code was developed by the IEEE-CS/ACM joint task force on Software Engineering Ethics and Professional Practices (SEEPP):

Executive Committee: Donald Gotterbarn (Chair), Keith Miller and Simon Rogerson; Members: Steve Barber, Peter Barnes, Ilene Burnstein, Michael Davis, Amr El-Kadi, N. Ben Fairweather, Milton Fulghum, N. Jayaram, Tom Jewett, Mark Kanko, Ernie Kallman, Duncan Langford, Joyce Currie Little, Ed Mechler, Manuel J. Norman, Douglas Phillips, Peter Ron Prinzivalli, Patrick Sullivan, John Weckert, Vivian Weil, S. Weisband, and Laurie Honour Werth.

This Code may be published without permission as long as it is not changed in any way and it carries the copyright notice.

Copyright © 1999 by the Institute of Electrical and Electronics Engineers, Inc. and the Association for Computing Machinery, Inc.

  • Published: 14 December 2022

Advancing ethics review practices in AI research

  • Madhulika Srikumar   ORCID: orcid.org/0000-0002-6776-4684 1 ,
  • Rebecca Finlay 1 ,
  • Grace Abuhamad 2 ,
  • Carolyn Ashurst 3 ,
  • Rosie Campbell 4 ,
  • Emily Campbell-Ratcliffe 5 ,
  • Hudson Hongo 1 ,
  • Sara R. Jordan 6 ,
  • Joseph Lindley   ORCID: orcid.org/0000-0002-5527-3028 7 ,
  • Aviv Ovadya   ORCID: orcid.org/0000-0002-8766-0137 8 &
  • Joelle Pineau   ORCID: orcid.org/0000-0003-0747-7250 9 , 10  

Nature Machine Intelligence volume 4, pages 1061–1064 (2022)

A Publisher Correction to this article was published on 11 January 2023

This article has been updated

The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI research. Its long-term success, however, requires a coordinated community effort, to support experimentation with different ethics review processes, to study their effect, and to provide opportunities for diverse voices from the community to share insights and foster norms.

As artificial intelligence (AI) and machine learning (ML) technologies continue to advance, awareness has grown of the potential negative consequences of AI or ML research for society. Anticipating and mitigating these consequences can only be accomplished with the help of the leading experts on this work: researchers themselves.

Several leading AI and ML organizations, conferences and journals have therefore started to implement governance mechanisms that require researchers to directly confront risks related to their work that can range from malicious use to unintended harms. Some have initiated new ethics review processes, integrated within peer review, which primarily facilitate a reflection on the potential risks and effects on society after the research is conducted (Box 1 ). This is distinct from other responsibilities that researchers undertake earlier in the research process, such as the protection of the welfare of human participants, which are governed by bodies such as institutional review boards (IRBs).

Box 1 Current ethics review practices

Current ethics review practices can be thought of as a sliding scale that varies according to how submitting authors must conduct an ethical analysis and document it in their contributions. Most conferences and journals are yet to initiate ethics review.

Key examples of different types of ethics review process are outlined below.

Impact statement

NeurIPS 2020 broader impact statements - all authors were required to include a statement of the potential broader impact of their work, such as its ethical aspects and future societal consequences of the research, including positive and negative effects. Organizers also specified additional evaluation criteria for paper reviewers to flag submissions with potential ethical issues.

Other examples include the NAACL 2021 and the EMNLP 2021 ethical considerations sections, which encourage authors and reviewers to consider ethical questions in their submitted papers.

Nature Machine Intelligence asks authors for ethical and societal impact statements in papers that involve the identification or detection of humans or groups of humans, including behavioural and socio-economic data.

Checklist

NeurIPS 2021 paper checklist - a checklist to prompt authors to reflect on potential negative societal effects of their work during the paper writing process (as well as other criteria). Authors of accepted papers were encouraged to include the checklist as an appendix. Reviewers could flag papers that required additional ethics review by the appointed ethics committee.

Other examples include the ACL Rolling Review (ARR) Responsible NLP Research checklist, which is designed to encourage best practices for responsible research.

Code of ethics or guidelines

International Conference on Learning Representations (ICLR) code of ethics - ICLR required authors to review and acknowledge the conference’s code of ethics during the submission process. Authors were not expected to include discussion on ethical aspects in their submissions unless necessary. Reviewers were encouraged to flag papers that may violate the code of ethics.

Other examples include the ACM Code of Ethics and Professional Conduct, which considers ethical principles but through the wider lens of professional conduct.

Although these initiatives are commendable, they have yet to be widely adopted. They are being pursued largely without the benefit of community alignment. As researchers and practitioners from academia, industry and non-profit organizations in the field of AI and its governance, we believe that community coordination is needed to ensure that critical reflection is meaningfully integrated within AI research to mitigate its harmful downstream consequences. The pace of AI and ML research and its growing potential for misuse necessitates that this coordination happen today.

Writing in Nature Machine Intelligence , Prunkl et al. 1 argue that the AI research community needs to encourage public deliberation on the merits and future of impact statements and other self-governance mechanisms in conference submissions. We agree. Here, we build on this suggestion, and provide three recommendations to enable this effective community coordination, as more ethics review approaches begin to emerge across conferences and journals. We believe that a coordinated community effort will require: (1) more research on the effects of ethics review processes; (2) more experimentation with such processes themselves; and (3) the creation of venues in which diverse voices both within and beyond the AI or ML community can share insights and foster norms. Although many of the challenges we address have been previously highlighted 1 , 2 , 3 , 4 , 5 , 6 , this Comment takes a wider view, calling for collaboration between different conferences and journals by contextualizing this conversation against more recent studies 7 , 8 , 9 , 10 , 11 and developments.

Developments in AI research ethics

In the past, many applied scientific communities have contended with the potential harmful societal effects of their research. The infamous anthrax attacks in 2001, for example, catalysed the creation of the National Science Advisory Board for Biosecurity to prevent the misuse of biomedical research. Virology, in particular, has had long-running debates about the responsibility of individual researchers conducting gain-of-function research. Today, the field of AI research finds itself at a similar juncture 12 . Algorithmic systems are now being deployed for high-stakes applications such as law enforcement and automated decision-making, in which the tools have the potential to increase bias, injustice, misuse and other harms at scale. The recent adoption of ethics and impact statements and checklists at some AI conferences and journals signals a much-needed willingness to deal with these issues. However, these ethics review practices are still evolving and are experimental in nature. The developments acknowledge gaps in existing, well-established governance mechanisms, such as IRBs, which focus on risks to human participants rather than risks to society as a whole. This limited focus leaves ethical issues such as the welfare of data workers and non-participants, and the implications of data generated by or about people outside of their scope 6 . We acknowledge that such ethical reflection, beyond IRB mechanisms, may also be relevant to other academic disciplines, particularly those for whom large datasets created by or about people are increasingly common, but such a discussion is beyond the scope of this piece. The need to reflect on ethical concerns seems particularly pertinent within AI, because of its relative infancy as a field, the rapid development of its capabilities and outputs, and its increasing effects on society.

In 2020, the NeurIPS ML conference required all papers to carry a ‘broader impact’ statement examining the ethical and societal effects of the research. The conference updated its approach in 2021, asking authors to complete a checklist and to document potential downstream consequences of their work. In the same year, the Partnership on AI released a white paper calling for the field to expand peer review criteria to consider the potential effects of AI research on society, including accidents, unintended consequences, inappropriate applications and malicious uses 3 . In an editorial citing the white paper, Nature Machine Intelligence announced that it would ask submissions to carry an ethical statement when the research involves the identification of individuals and related sensitive data 13 , recognizing that mitigating downstream consequences of AI research cannot be completely disentangled from how the research itself is conducted. In another recent development, Stanford University’s Ethics and Society Review (ESR) requires AI researchers who apply for funding to identify if their research poses any risks to society and also explain how those risks will be mitigated through research design 14 .

Other developments include the rising popularity of interdisciplinary conferences examining the effects of AI, such as the ACM Conference on Fairness, Accountability, and Transparency (FAccT), and the emergence of ethical codes of conduct for professional associations in computer science, such as the Association for Computing Machinery (ACM). Other actors have focused on upstream initiatives such as the integration of ethics reflection into all levels of the computer science curriculum.

Reactions from the AI research community to the introduction of ethics review practices include fears that these processes could restrict open scientific inquiry 3 . Scholars also note the inherent difficulty of anticipating the consequences of research 1 , with some AI researchers expressing concern that they do not have the expertise to perform such evaluations 7 . Other challenges include concerns about the lack of transparency in review practices at corporate research labs (which increasingly contribute to the most highly cited papers at premier AI conferences such as NeurIPS and ICML 9 ) as well as academic research culture and incentives supporting the ‘publish or perish’ mentality that may not allow time for ethical reflection.

With the emergence of these new attempts to acknowledge and articulate unique ethical considerations in AI research and the resulting concerns from some researchers, the need for the AI research community to come together to experiment, share knowledge and establish shared best practices is all the more urgent. We recommend the following three steps.

Study community behaviour and share learnings

So far, there are limited studies that have explored the responses of ML researchers to the launch of experimental ethics review practices. To understand how behaviour is changing and how to align practice with intended effect, we need to study what is happening and share learnings iteratively to advance innovation. For example, in response to the NeurIPS 2020 requirement for broader impact statements, a paper found that most researchers surveyed spent fewer than two hours working on this process 7 , perhaps retroactively towards the end of their research, making it difficult to know whether this reflection influenced or shifted research directions or not. Surveyed researchers also expressed scepticism about the mandated reflection on societal impacts 7 . An analysis of preprints found that researchers assessed impact through the narrow lens of technical contributions (that is, describing their work in the context of how it contributes to the research space and not how it may affect society), thereby overlooking potential effects on vulnerable stakeholders 8 . A qualitative analysis of a larger sample 10 and a quantitative analysis of all submitted papers 11 found that engagement was highly variable, and that researchers tended to favour the discussion of positive effects over negative effects.

We need to understand what works. These findings, all drawn from studies examining the implementation of ethics review at NeurIPS 2020, point to a pressing need to review actual versus intended community behaviour more thoroughly and consistently to evaluate the effectiveness of ethics review practices. We recognize that other fields have considered ethics in research in different ways. To get started, we propose the following approach, building on and expanding the analysis of Prunkl et al. 1 .

First, clear articulation of the purposes behind impact statements and other ethics review requirements is needed to evaluate efficacy and motivate future iterations by the community. Publication venues that organize ethics review must communicate expectations of this process comprehensively both at the level of individual contribution and for the community at large. At the individual level, goals could include encouraging researchers to reflect on the anticipated effects on society. At the community level, goals could include creating a culture of shared responsibility among researchers and (in the longer run) identifying and mitigating harms.

Second, because the exercise of anticipating downstream effects can be abstract and risks being reduced to a box-ticking endeavour, we need more data to ascertain whether they effectively promote reflection. Similar to the studies above, conference organizers and journal editors must monitor community behaviour through surveys with researchers and reviewers, partner with information scientists to analyse the responses 15 , and share their findings with the larger community. Reviewing community attitudes more systematically can provide data both on the process and effect of reflecting on harms for individual researchers, the quality of exploration encountered by reviewers, and uncover systemic challenges to practicing thoughtful ethical reflection. Work to better understand how AI researchers view their responsibility about the effects of their work in light of changing social contexts is also crucial.

Evaluating whether AI or ML researchers are more explicit about the downsides of their research in their papers is a preliminary metric for measuring change in community behaviour at large 2 . An analysis of the potential negative consequences of AI research can consider the types of application the research can make possible, the potential uses of those applications, and the societal effects they can cause 4 .

Building on the efforts at NeurIPS 16 and NAACL 17 , we can openly share our learnings as conference organizers and ethics committee members to gain a better understanding of what does and does not work.

Community behaviour in response to ethics review at the publication stage must also be studied to evaluate how structural and cultural forces throughout the research process can be reshaped towards more responsible research. The inclusion of diverse researchers and ethics reviewers, as well as people who face existing and potential harm, is a prerequisite to conduct research responsibly and improve our ability to anticipate harms.

Expand experimentation of ethical review

The low uptake of ethics review practices, and the lack of experimentation with such processes, limits our ability to evaluate the effectiveness of different approaches. Experimentation cannot be limited to a few conferences that focus on some subdomains of ML and computing research — especially for subdomains that envision real-world applications such as in employment, policing and healthcare settings. For instance, NeurIPS, which is largely considered a methods and theoretical conference, began an ethics review process in 2020, whereas conferences closer to applications, such as top-tier conferences in computer vision, have yet to implement such practices.

Sustained experimentation across subfields of AI can help us to study actual community behaviour, including differences in researcher attitudes and the unique opportunities and challenges that come with each domain. In the absence of accepted best practices, implementing ethics review processes will require conference organizers and journal editors to act under uncertainty. For that reason, we recognize that it may be easier for publication venues to begin their ethics review process by making it voluntary for authors. This can provide researchers and reviewers with the opportunity to become familiar with ethical and societal reflection, remove incentives for researchers to ‘game’ the process, and help the organizers and wider community to get closer to identifying how they can best facilitate the reflection process.

Create venues for debate, alignment and collective action

This work requires considerable cultural and institutional change that goes beyond the submission of ethical statements or checklists at conferences.

Ethical codes in scientific research have proven to be insufficient in the absence of community-wide norms and discussion 1 . Venues for open exchange can provide opportunities for researchers to share their experiences and challenges with ethical reflection. Such venues can be conducive to reflection on values as they evolve in AI or ML research, such as topics chosen for research, how research is conducted, and what values best reflect societal needs.

The establishment of venues for dialogue where conference organizers and journal editors can regularly share experiences, monitor trends in attitudes, and exchange insights on actual community behaviour across domains, while considering the evolving research landscape and range of opinions, is crucial. These venues would bring together an international group of actors involved throughout the research process, from funders, research leaders, and publishers to interdisciplinary experts adopting a critical lens on AI impact, including social scientists, legal scholars, public interest advocates, and policymakers.

In addition, reflection and dialogue can have a powerful role in influencing the future trajectory of a technology. Historically, gatherings convened by scientists have had far-reaching effects — setting the norms that guide research, and also creating practices and institutions to anticipate risks and inform downstream innovation. The Asilomar Conference on Recombinant DNA in 1975 and the Bermuda Meetings on genomic data sharing in the 1990s are instructive examples of scientists and funders, respectively, creating spaces for consensus-building 18 , 19 .

Proposing a global forum for gene-editing, scholars Jasanoff and Hurlbut argued that such a venue should promote reflection on “what questions should be asked, whose views must be heard, what imbalances of power should be made visible, and what diversity of views exist globally” 20 . A forum for global deliberation on ethical approaches to AI or ML research will also need to do this.

By focusing on building the AI research field’s capacity to measure behavioural change, exchange insights, and act together, we can amplify emerging ethical review and oversight efforts. Doing this will require coordination across the entire research community and, accordingly, will come with challenges that need to be considered by conference organizers and others in their funding strategies. That said, we believe that there are important incremental steps that can be taken today towards realizing this change. For example, hosting an annual workshop on ethics review at pre-eminent AI conferences, or holding public panels on this subject 21 , hosting a workshop to review ethics statements 22 , and bringing conference organizers together 23 . Recent initiatives undertaken by AI research teams at companies to implement ethics review processes 24 , better understand societal impacts 25 and share learnings 26 , 27 also show how industry practitioners can have a positive effect. The AI community recognizes that more needs to be done to mitigate this technology’s potential harms. Recent developments in ethics review in AI research demonstrate that we must take action together.

Change history

11 January 2023

A Correction to this paper has been published: https://doi.org/10.1038/s42256-023-00608-6

Prunkl, C. E. A. et al. Nat. Mach. Intell. 3 , 104–110 (2021).

Hecht, B. et al. Preprint at https://doi.org/10.48550/arXiv.2112.09544 (2021).

Partnership on AI. https://go.nature.com/3UUX0p3 (2021).

Ashurst, C. et al. https://go.nature.com/3gsQfvp (2020).

Hecht, B. https://go.nature.com/3AASZhf (2020).

Ashurst, C., Barocas, S., Campbell, R., Raji, D. in FAccT ‘22: 2022 ACM Conf. on Fairness, Accountability, and Transparency 2057–2068 (2022).

Abuhamad, G. et al. Preprint at https://arxiv.org/abs/2011.13032 (2020).

Boyarskaya, M. et al. Preprint at https://arxiv.org/abs/2011.13416 (2020).

Birhane, A. et al. in FAccT ’22: 2022 ACM Conference on Fairness, Accountability, and Transparency 173–184 (2022).

Nanayakkara, P. et al. in AIES ’21: Proc. 2021 AAAI/ACM Conference on AI, Ethics, and Society 795–806 (2021).

Ashurst, C., Hine, E., Sedille, P. & Carlier, A. in FAccT ‘22: 2022 ACM Conf. on Fairness, Accountability, and Transparency 2047–2056 (2022).

National Academies of Sciences, Engineering, and Medicine. https://go.nature.com/3UTKOEJ (date accessed 16 September 2022).

Nat. Mach. Intell. 3, 367 (2021).

Bernstein, M. S. et al. Proc. Natl Acad. Sci. USA 118 , e2117261118 (2021).

Pineau, J. et al. J. Mach. Learn. Res. 22 , 7459–7478 (2021).

Bengio, S. et al. Neural Information Processing Systems. https://go.nature.com/3tQxGEO (2021).

Bender, E. M. & Fort, K. https://go.nature.com/3TWnbua (2021).

Gregorowius, D., Biller-Andorno, N. & Deplazes-Zemp, A. EMBO Rep. 18 , 355–358 (2017).

Jones, K. M., Ankeny, R. A. & Cook-Deegan, R. J. Hist. Biol. 51 , 693–805 (2018).

Jasanoff, S. & Hurlbut, J. B. Nature 555 , 435–437 (2018).

Partnership on AI. https://go.nature.com/3EpQwY4 (2021).

Sturdee, M. et al. in CHI Conf. Human Factors in Computing Systems Extended Abstracts (CHI ’21 Extended Abstracts); https://doi.org/10.1145/3411763.3441330 (2021).

Partnership on AI. https://go.nature.com/3AzdNFW (2022).

DeepMind. https://go.nature.com/3EQyUWT (2022).

Meta AI. https://go.nature.com/3i3PBVX (2022).

Munoz Ferrandis, C. OpenRAIL; https://huggingface.co/blog/open_rail (2022).

OpenAI. https://go.nature.com/3GyZPYk (2022).

Author information

Authors and Affiliations

Partnership on AI, San Francisco, CA, USA

Madhulika Srikumar, Rebecca Finlay & Hudson Hongo

ServiceNow, Santa Clara, CA, USA

Grace Abuhamad

The Alan Turing Institute, London, UK

Carolyn Ashurst

OpenAI, San Francisco, CA, USA

Rosie Campbell

Centre for Data Ethics and Innovation, London, UK

Emily Campbell-Ratcliffe

Future of Privacy Forum, Washington, DC, USA

Sara R. Jordan

Design Research Works, Lancaster University, Lancaster, UK

Joseph Lindley

Belfer Center for Science and International Affairs, Harvard Kennedy School, Cambridge, MA, USA

Aviv Ovadya

Meta AI, Menlo Park, CA, USA

Joelle Pineau

McGill University, Montreal, Canada

Corresponding author

Correspondence to Madhulika Srikumar .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Carina Prunkl and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

About this article

Cite this article.

Srikumar, M., Finlay, R., Abuhamad, G. et al. Advancing ethics review practices in AI research. Nat Mach Intell 4 , 1061–1064 (2022). https://doi.org/10.1038/s42256-022-00585-2


Published : 14 December 2022

Issue Date : December 2022

DOI : https://doi.org/10.1038/s42256-022-00585-2





  • Open access
  • Published: 23 January 2024

Integrating ethics in AI development: a qualitative study

  • Laura Arbelaez Ossa   ORCID: orcid.org/0000-0002-8303-8789 1 ,
  • Giorgia Lorenzini   ORCID: orcid.org/0000-0002-9155-4724 1 ,
  • Stephen R. Milford   ORCID: orcid.org/0000-0002-7325-9940 1 ,
  • David Shaw   ORCID: orcid.org/0000-0001-8180-6927 1 , 2 ,
  • Bernice S. Elger   ORCID: orcid.org/0000-0002-4249-7399 1 , 3 &
  • Michael Rost   ORCID: orcid.org/0000-0001-6537-9793 1  

BMC Medical Ethics volume 25, Article number: 10 (2024)


Background

While the theoretical benefits and harms of Artificial Intelligence (AI) have been widely discussed in academic literature, empirical evidence remains elusive regarding the practical ethical challenges of developing AI for healthcare. Bridging the gap between theory and practice is an essential step in understanding how to ethically align AI for healthcare. Therefore, this research examines the concerns and challenges perceived by experts in developing ethical AI that addresses the healthcare context and needs.

Methods

We conducted semi-structured interviews with 41 AI experts and analyzed the data using reflexive thematic analysis.

Results

We developed three themes that expressed the considerations perceived by experts as essential for ensuring AI aligns with ethical practices within healthcare. The first theme explores the ethical significance of introducing AI with a clear and purposeful objective. The second theme focuses on experts' concerns about the tension between economic incentives and the importance of prioritizing the interests of doctors and patients. The third theme illustrates the need to develop context-sensitive AI for healthcare that is informed by healthcare's underlying theoretical foundations.

Conclusions

The three themes collectively emphasized that beyond being innovative, AI must genuinely benefit healthcare and its stakeholders, meaning that it must also align with intricate and context-specific healthcare practices. Our findings signal that instead of narrow product-specific AI guidance, ethical AI development may need a systemic, proactive perspective that includes ethical considerations (objectives, actors, and context) and focuses on healthcare applications. Ethically developing AI involves a complex interplay between AI, ethics, healthcare, and multiple stakeholders.


Introduction

The application of Artificial Intelligence (AI) in medicine has become a focus of academic discussion, given its (potentially) disruptive effects on healthcare processes, expectations, and relationships. While many see AI's potential to use vast amounts of data to improve healthcare and support better clinical decisions, there are also increasing concerns and challenges in aligning AI with ethical practices [1]. To set the right process and ethical goals, governmental and private institutions have developed many recommendations to guide the development of AI [2, 3]. These documents share common themes: AI should be designed to be robust, safe, fair, and trustworthy, drawing on complementary bioethical principles [4]. Although these recommendations have served as building blocks of a common ethical framework for AI, less guidance exists on how these ideals should be translated into practical considerations [3, 4, 5, 6]. Beyond such considerations, there is also an ethical imperative that AI practically complies with minimum performance, testing, and therapeutic-value requirements, as is expected of other medical products [7].

While many ethical considerations are not unique to AI, such as respecting patients' autonomy or ensuring healthcare remains fair for all, modern AI techniques (especially opaque Machine Learning (ML) programs) make ethical compliance harder to verify. In contrast to traditional expert systems, which rely on visible and understandable sets of if-then statements, many ML techniques distribute decisions across numerous learned connections and hidden layers, which makes them far less conducive to direct and fruitful human oversight [1]. In that sense, ensuring AI systems fulfill healthcare's ethical ideals cannot rely solely on oversight or supervision of their behavior. Ethical ideals and concepts must often be embedded in every step of AI's lifecycle, from ideation to development and implementation. Epley and Tannenbaum wrote in "Treating Ethics as a Design Problem" that to develop interventions and policies that encourage ethical AI, the focus must be on the process and context of its development, making ethical considerations part of the practicalities of day-to-day routines to the point that they become ingrained habits in practice [8].
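To make this contrast concrete, consider the minimal sketch below (our illustration, not from the study; the alert task, thresholds, and synthetic data are entirely hypothetical). The rule-based version can be audited line by line, whereas a trained model for the same task exposes only its predictions.

```python
# Hypothetical contrast between a transparent rule-based check and an opaque ML model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rule_based_alert(systolic_bp: float, heart_rate: float) -> bool:
    """Expert-system style: every decision path is an explicit, inspectable if-then rule."""
    if systolic_bp > 180:                      # illustrative threshold only
        return True
    if heart_rate > 120 and systolic_bp > 140:
        return True
    return False

# ML counterpart: the mapping from vitals to alerts is learned from (synthetic) data and
# spread across hundreds of decision trees, so no single rule can simply be read off.
rng = np.random.default_rng(0)
X = rng.random((500, 2)) * [250, 200]          # columns: systolic BP, heart rate
y = (X[:, 0] > 180) | ((X[:, 1] > 120) & (X[:, 0] > 140))
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

print(rule_based_alert(190.0, 80.0))           # True, and we can point to the exact rule
print(model.predict([[190.0, 80.0]]))          # likely the same answer, but the "why" is hidden
```

This is one illustration of why oversight of an ML system's behavior alone is insufficient, and why ethical scrutiny has to reach back into how the system is built and trained.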

In reviewing common tools for translating AI ethics into practice, Morley et al. found that most focus on documenting decision-making rather than guiding how to design ethically aligned AI [4]. In that sense, these tools offer little support for understanding how to achieve ethical AI in practice, or "in the wild". Two common frameworks for AI development, ethics-by-design and responsible research and innovation (RRI), both aim to ensure that those developing AI consider ethical aspects and work towards AI that is ethically acceptable, sustainable, and socially desirable [9, 10, 11]. While both frameworks focus on ethically developing AI, they tend to raise questions and flag potential ethical concerns rather than prescribe actionable steps. Therefore, challenges remain in translating theoretical discussions into practical impact [3, 12, 13, 14].

Evidence from previous qualitative research with participants working in the technology industry shows that the gap between ethical theory and practice is wider than initially assumed [15, 16, 17]. There is a recognition that ethical practices must be defined by sector, application, and project type, as generic guidance may not address context-specific complexities [16, 17]. This is not the current trend: most AI ethics guides are generic and non-sector-specific [2, 3]. Indeed, specific ethical guidelines for AI in healthcare are scarce, and even more so for particular AI applications. Given the lack of practical recommendations, researchers found that most companies first develop AI systems and only then attempt to derive ethical principles and standardize best practices, instead of integrating ethical considerations into their daily operations [17]. In a way, most ethical recommendations for AI's development have a "post" orientation in which ethical values and consequences are considered "afterward and just adapted to actions, but do not align action accordingly" [18]. For example, researchers found that software developers and engineering students did not change their established ways of working even when a code of ethics was widely available and recommended [15].

Rising to the challenge of designing and deploying ethical AI to serve healthcare is essential. Still, many questions remain regarding the characteristics and processes that would support AI's ethical development and implementation. Most researchers have focused on "consumer research" into the conditions under which people accept the use of AI [19]. In two recently published systematic reviews of empirical research on AI in healthcare, most studies explored knowledge of and attitudes towards AI, or factors contributing to stakeholders' acceptance or adoption [20, 21]. However, how AI is developed may itself affect its acceptance and use by stakeholders. According to the systematic review by Tang et al., only 1.8% of the empirical evidence focused on AI's ethical issues, which signals a persistent gap between high-level ethical principles and the practice of AI development [21]. Given that evidence on the integration of ethics into AI's development is limited, this research examines the challenges experts perceive in developing ethical AI for healthcare. The focus is not on theoretical discussion of AI's ethical risks or principles, but on how practical aspects of developing and implementing AI in healthcare may benefit from being approached ethically. As such, this research is a step towards bridging the gap between ethical theory and practice for healthcare AI. As acknowledged by Mittelstadt, "ethics is a process, not a destination," and the real work of AI ethics research must focus on translating and implementing principles into practice, not in a formulaic way but as a path to understanding the real ethical challenges of AI that may go unnoticed in theoretical discussions [5]. Therefore, we used a qualitative approach to explore the topic from the views of professionals involved in developing and using AI (hereafter: AI experts). This paper aims to identify the practical ethical challenges of developing AI for healthcare. Our contribution provides empirical evidence for the debate on practical challenges that may go unnoticed in theoretical ethical discussions, especially when AI is used in Clinical Decision Support Systems (CDSS).

The results presented in this paper are part of a research project funded by the Swiss National Research Program "EXPLaiN", which critically evaluates "Ethical and Legal Issues of Mobile Health-Data". For reporting the methods and results, we followed the Standards for Reporting Qualitative Research (SRQR) [22]. All experimental protocols were approved by the Ethics Committee of Northwestern and Central Switzerland (EKNZ). The project was conducted under the regulatory framework of Switzerland's Human Research Act. After review, the EKNZ issued a waiver (declaration of no objection: AO_2021-00045) stating that written informed consent was not required for experts. Nevertheless, informed consent was obtained verbally from all participants and audio-recorded at the beginning of each interview.

Participants recruitment

To be eligible for recruitment, AI experts had to have experience working with or developing AI for healthcare, allowing us to explore the views of various professional backgrounds. Given that AI for healthcare is a multi- and interdisciplinary field, exploring multiple backgrounds provided insights into AI ethical practices beyond professional silos. We utilized professional networks and contacted authors of academic publications in AI. Using purposive sampling based on experience and exposure to AI allowed us to produce rich, articulated, and expressive data [ 23 ].

Data collection

We used semi-structured interview guides to allow for this study's exploratory approach. An interview guide was developed by the research team and included questions regarding the utilization of AI in healthcare, focusing on key domains: (i) overall perceptions of AI, (ii) AI as a direct-to-patient solution (in the form of wearables), (iii) the dynamics of AI within doctor-patient interactions. After piloting the interview guide with six participants, we decided to contextualize the questions using vignettes (a situation description) to probe for an in-depth discussion. The vignettes were highly plausible scenarios of current and future AI interactions with patients (via smartwatches) or doctors via a CDSS. Vignettes probe for attitudes and beliefs while focusing less on the theoretical knowledge within the research area [ 24 , 25 ]. Although we recognize that vignette responses are primarily based on personal views and moral intuitions rather than being theoretically grounded, how participants interpret the vignette is similar to how they make sense of a situation and make decisions [ 24 ]. The guideline for the semi-structured interview is available in the Supplementary materials .

Two research team members conducted the interviews (L.A.O n  = 21; G.L. n  = 20) between October 2021 and April 2022. All interviews were held in English and audio-recorded using Zoom but stored locally. The audio recordings were transcribed verbatim.

Data analysis

We opted to use reflexive thematic analysis (TA) as our analytical framework, enabling us to contextualize our analysis for healthcare and uncover intricate, underlying patterns of meaning within the available data [26]. In particular, we chose reflexive TA because this study aimed at a deep and nuanced understanding of the data that captures the complexities of developing AI for healthcare without rigid preconceptions [27]. Two authors (L.A.O., M.R.) led the analysis, and all the co-authors supported the process. We carried out inductive and deductive thematic coding of the data, initially line by line, using descriptive or latent labels (MAXQDA software). L.A.O. and G.L. coded all the AI experts' interviews, with coding sessions supported by M.R., S.M., and D.S. The first two authors, L.A.O. and M.R., developed overarching themes that were later reviewed and agreed upon by the entire research team. After iterative analysis and reflection, the team created major themes illustrating the practical ethical concerns of developing AI for healthcare. For this publication, the authors present examples of data without identifying information.

The researchers' backgrounds informed the interpretation of the data and led to actively developed themes that focus on big ethical questions of who is benefiting from AI and why. In behavioral and political science, person-centrism is a widely acknowledged paradigm that helps to question and reflect on power structures and how these affect patients. Although our positionality has informed our analysis, the research group engaged in frequent discussions and included different academic backgrounds (philosophy, ethics, medicine, psychology) to prevent a single or superficial analysis.

We developed three themes presented through representative data extracts (de-identified). Given that AI in healthcare is a multidisciplinary area, most professionals found themselves at the intersection of two or more areas of experience; for example, eight participants were medical professionals with AI experience. The acronyms used aim to illustrate the main field of the expert: MEAI for medical experts with AI experience, BE for bioethicists, DH for digital health experts, LE for legal experts, PE for those experts working in policy analysis, and TE for technical experts either in data, AI techniques or AI product development. To improve readability, the authors removed filler sounds and double words from the data presented in this paper and the Supplementary information . The sample characteristics are described in Table  1 .

Creating AI with an ethical purpose

This theme explores the main challenges of creating AI for healthcare with a purposeful perspective. Several AI experts questioned the reasons behind AI’s development and whether the justifications are enough to deploy it for clinical care ethically. In their words, some experts fear that AI is a “shiny new object” mainly developed to answer the desire for innovation rather than providing actual improvements. Some experts stated that the potential lack of purposeful development may lead to an overestimation of the theoretical benefits of AI while having limited practical application. Viewed this way, a clear purpose becomes vital to creating a useful, ethical AI that answers healthcare needs.

Resisting technology-driven innovation

Some experts challenged the notion that innovation is inherently positive. These experts expressed the (ethical) importance of justifying innovative products beyond their disruptive capabilities. They emphasized avoiding the temptation of treating innovative AI as a panacea capable of solving every healthcare problem (Table  2 ).

Some experts described how defining which problem to solve is a significant hurdle to creating an AI that is useful for healthcare. Experts described that when AI is not designed to solve a specific problem, it can become a hurdle, distraction, or simply ineffective for the application. In their views, AI design should be proactive, focusing on the intention to solve real healthcare problems and not reactive to what technology is capable of doing. One participant [Rn40 (TE)] mentioned the concept of “repairing innovation” and how designing AIs in practice is not about developing a new solution but rather requires adapting AI’s design to the context of the specific application and the (un)expected challenges.

Moving beyond theoretical usability

Several experts highlighted the gap between theoretical and practical objectives. While many AI publications and products have performed well in controlled environments (and in theory), there is a disconnect with clinical practice. There are questions about whether these theoretical results will translate into positive changes “in the wild”. Some experts worried that AI would become “cool” theories with optimistic results that fail to be implemented in the hospital setting because implementation is not the objective or that the results are not transferrable to real-life conditions. A few participants felt concerned about the relative emphasis on publishing AI results that seem good in theory but do not consider whether they can improve patient outcomes. The experts brought attention to the complexity of implementing AI solutions in healthcare and the importance of moving from research and development to actual deployment (Table  3 ).

Balancing AI for different healthcare stakeholders

While the first theme focused on AI as an object of development, this theme explores those who shape and benefit from AI solutions. Some experts were concerned about who benefits from AI and whether AI solutions respond to the needs of patients and doctors. Some experts mentioned the tensions between optimizing processes, increasing profits, and maximizing patients’ benefits. In experts’ opinions, the question of who should decide on the (ethical) acceptability of AI’s development remains open and requires public discussion.

Considering stakeholders’ requirements

AI experts questioned whether AI focuses on the needs of those impacted by its usage. Regarding patient care, a few experts expressed how AI may not be genuinely patient-centric as patients’ views may be systematically omitted from AI’s development (Table  4 ).

A participant [Rn41 (MEAI)] described how AI’s development might be marketing-driven rather than oriented toward patients’ needs. A few experts brought bioethical principles of justice and fairness into the discussion and how important it is to consider the distribution of benefits for patients and doctors. A lingering question is whether AI solves the challenges patients and doctors face, or if it focuses on the goals of the technology industry.

Tensions between incentives

Following the above questions, some experts described the tensions between benefiting those in healthcare and those working for the industry. In contrast to healthcare, where patient benefit is an essential incentive of care provision, those developing AI may be interested in profits or operational efficiencies. A few experts voiced concerns about the entities or people responsible for setting AI standards and pleaded for the critical examination of AI’s adequacy for healthcare requirements (Table  5 ).

Context-sensitive AI development

This theme explores the contextual factors shaping AI within the unique healthcare landscape. Some experts expressed how compared to other industries, healthcare is unique in that risk is high and health is fundamental. A few experts highlighted the importance of considering how established rules and standards govern healthcare. A notable concern voiced by a few experts was the apparent lack of awareness regarding ethical healthcare practices. In some experts’ views, these considerations would help dictate what is expected and ethically acceptable for AI’s development and implementation.

Healthcare is unique

Some experts explicitly expressed how healthcare is a unique context that cannot be compared, regulated, or guided like other industries. In their view, healthcare needs higher standards for AI development and implementation than, for example, retail or autonomous driving. Some experts mentioned that common product development practices, such as time-to-market, testing, and quality assurance standards, may need to be re-considered in healthcare. For example, a participant [Rn25] mentioned that testing a solution during AI product development is not simply a question of iteration as in other industries, because AI may bring unexpected risks and challenges in healthcare. In that sense, a few experts mentioned the importance of including a system perspective during the development of AI and the importance of considering the unique relationships and context dynamics of healthcare (Table  6 ).

No need to "reinvent the wheel"

Some experts pointed out the importance of considering the rules, standards of practice, and ethical codes that dictate what is ethically acceptable in healthcare. In their view, AI is not necessarily a new technique or ethical challenge, and many existing ethical frameworks could be initially applied for its development. A few experts noted how an awareness of ethical healthcare practices could be a solid foundation to guide AI’s development instead of creating new protocols that may be misguidedly technology-focused (Table  7 ).

This research paper explores the development of AI and the considerations perceived by experts as essential for ensuring that AI aligns with ethical practices within healthcare. The experts underlined the ethical significance of introducing AI with a clear and purposeful objective. Experts expressed that beyond being innovative, AI needs to be meaningful for healthcare in practical ways. During the interviews, experts illustrated the ethical complexity of navigating the tension between profit and healthcare benefits, as well as the importance of prioritizing the interests of healthcare professionals and patients, who are the stakeholders most affected by AI's implementation. Experts highlighted the importance of understanding the context, the intrinsic dynamics, and the underlying theoretical foundation of healthcare during the development of AI. The three themes collectively call for delivering AI that serves the interests of doctors and patients and aligns with the intricate and context-specific healthcare landscape. For this to be achieved, those developing AI applications need to be sufficiently aware of clinical and patient interests, and this information transfer to the developers must be prioritized.

To our knowledge, limited evidence exists regarding the practical aspects of developing ethical AI for healthcare. However, in a roundtable discussion by experts, the ideal future agenda for AI and ethics included the questions: “(i) who designs what for whom, and why? (ii) how do we empower the users of AI systems? (iii) how do we go beyond focusing on technological issues for societal problems?” [ 28 ]. Our results validate how integral these questions are within a specific context of application, namely healthcare, and how they can help recognize ethical pitfalls in AI’s development. Our results focus on readily understandable ethical questions such as: Is AI developed for the right reasons? And, is the solution benefiting the right stakeholder? These practical questions can help evaluate the ethical implications of AI in a more understandable and relatable manner [ 29 , 30 ].

One participant mentioned the concept of "repairing innovation", which originates with Madeleine Clare Elish and Elizabeth Anne Watkins. The concept aptly summarizes the challenges our experts described in developing AI solutions for healthcare. Elish and Watkins argue that effective clinical AI solutions must be examined and understood as parts of complex sociotechnical systems throughout their development [31]. They advocate looking beyond AI's potential (and often theoretical) possibilities to investigate whether AI addresses existing problems, exploring how and in what ways AI is integrated into existing processes as well as how it disrupts them [31]. For them, to repair innovation is to establish new practices and possibilities that address the often unexpected changes caused by AI's disruption and to integrate them into an existing professional context. Collectively, our findings suggest experts saw the need to change the way AI for healthcare is currently developed. They often called, implicitly, for repairing the guidance, processes, and incentives that help AI align with ethical frameworks.

The World Health Organization guideline for AI ethics states that implementing ethical principles and human rights obligations into practice must be part of "every stage of a technology's design, development, and deployment" [32]. In line with this statement, ethical AI (and AI ethics) cannot be concerned solely with defining the ethical concepts or principles that must be part of AI, but must help guide its development. However, the current versions of AI ethics guidance have had limited effect in changing the practices or development of AI to make it more ethical [3, 15, 33]. Hallamaa and Kalliokoski (2022) raise the question: "What could serve as an approach that accounts for the nature of AI as an active element of complex sociotechnical systems?" [33]. While our results cannot offer an answer to this question, the insights of this study suggest that developing and implementing ethical AI is a complex, multifaceted, and multi-stakeholder process that cannot be removed from the context in which it will be used. In that sense, AI ethics for healthcare may need to become more practically minded and potentially include moral deliberations on AI's objectives, actors, and the specific healthcare context. In this way, our study focuses on the practical ethical challenges that are part of the puzzle regarding what "ought to be" ethical AI for healthcare. Further research is needed to determine which tools or methods of ethical guidance can, in practice, achieve better ethical alignment of AI for healthcare.

In particular, the experts in our study were concerned about the innovation-first approach. These concerns, however, are not unique to healthcare. While innovation may be positive when it answers the specific needs of stakeholders and is context-sensitive, it can also simply be a new but potentially useless product. Although the RRI framework places great importance on creating innovative products that are ethically acceptable and socially desirable, there are currently no tools that can help determine whether an innovation fulfills the conditions for RRI [34]. RRI is mostly used to determine regulatory compliance, which means the assessment of whether an AI fulfills RRI may come "too late", when the system can no longer be transformed to impact practice [11, 34]. Guidance to develop AI ethically and responsibly may need to shift to a proactive and operationally strategic approach for practical development instead of remaining prescriptive.

Within the frameworks that guide AI’s development, the question remains: Who is in charge or responsible for ethically aligning AI in healthcare? Empirical evidence suggests that development teams are often more concerned with the usefulness and viability of the product rather than its ethical aspects [ 35 ]. In part, these results are expected as software developers are not responsible for strategic decisions regarding how and why AI is developed [ 17 ]. While some academics have suggested embedding ethics into AI’s design by integrating ethicists in the development team [ 36 ], management (including product managers) may be a better entry point to ensure that AI is ethically developed from its initial ideation. In a survey, AI developers felt capable of designing pro-ethical AI, but the question remained whether they were responsible for these decisions [ 37 ]. These developers stressed that although they feel responsible, without senior leadership, their actionability is limited [ 37 ]. This hints at the possibility that operationalizing AI ethics may need to include business ethics and procedural approaches to business practices such as quality assurance [ 30 ].

For our experts, context awareness is undeniably important, and a systemic view of healthcare is essential to understanding how to achieve ethical AI. AI innovations by themselves do not change the interests that determine the way healthcare is delivered or re-engineer the incentives that support existing ways of working, and that is why “ simply adding AI to a fragmented system will not create sustainable change ” [ 38 ]. As suggested by Stahl, rethinking ecosystems to ensure processes and outcomes meet societal goals may be more fruitful than assigning individual responsibility, for example, to developers [ 9 ]. Empirical evidence collected on digital health stakeholders in Switzerland showed that start-up founders may lack awareness or resources to optimize solutions for societal impact or that their vision may be excessively focused on attaining a high valuation and selling the enterprise quickly [ 11 ]. Similar to our results, the participants in Switzerland reflected on the tension between key performance indicators focused on commercial success or maximization of societal goals [ 11 ]. It might be challenging to address this tension without creating regulatory frameworks for AI’s development and business practices.

In contrast to focusing on AI as product development, for example, ethics-by-design, Gerke suggested widening the perspective to design processes that can manage AI ethically, including considering systemic and human factors [ 39 ]. Attention may be needed to evaluate the interactions of AI with doctors and patients and whether it is usable and valuable for them. For example, an AI assisting diagnosis of diabetic retinopathy may not be helpful for ophthalmologists as they already have that expertise [ 6 ]. Along similar lines, digital health stakeholders in Switzerland described that due to the complexities in navigating the health system, innovators may lose sight of the “ priorities and realities of patients and healthcare practitioners ” [ 11 ]. Our results reflect these findings, showing that balancing AI for different stakeholders is challenging. Creating frameworks and regulations that change the incentives of AI’s development may be an opportunity to answer stakeholders' priorities and healthcare needs. For example, to encourage the development of effective and ethical AI applications, reimbursement regulations could incentivize those solutions that offer considerable patient benefit or financial rewards when efforts have been put into bias mitigation [ 40 ].

Strengths and limitations

While research papers are abundant for theoretical discussions, there is limited empirical evidence on the practical challenges perceived by experts to develop AI for healthcare that is ethically aligned. Therefore, our results are important to provide evidence that may help bridge the gap between the theory and practice of AI ethics for healthcare. Given the thematic analysis methodology, we collected rich data and conducted an in-depth exploration of the views and insights of a wide variety of experts.

In the context of our interviews, AI was used as a general term, which can lead experts to interpret AI differently or to focus specifically on machine learning (and its black-box subtypes). However, consensus on the definition of AI remains elusive and is still a topic of academic and governmental discussion. While the European Commission has recently defined AI (Footnote 1), the definition is still broad: it covers any software that can decide, based on data, the best course of action to achieve a goal [41]. While we clarified during the interviews that the focus was on supportive AI in the form of CDSS, some experts brought different understandings of AI to the discussion, delineating scenarios in which it would be more autonomous and unsupervised. This challenge is not exclusive to our research or to healthcare; it reflects the fact that AI is an ever-evolving topic under conceptual and practical construction, where multiple open questions remain. Given that our research aims to be exploratory, identifying different interpretations of AI can be considered part of our results, and it signals a broader challenge: research and ethics guidelines may need to define and study AI as application-, subject-, and context-specific. While our study demonstrates how practical challenges during AI's development may need ethical reflection, as qualitative research our results cannot be generalized beyond the study population, and more research is needed to explore whether similar insights can be obtained in other areas. For example, future quantitative research could investigate whether participants from different healthcare models (commodity vs social service) hold different views or fears regarding AI's development for healthcare.

Moreover, the chosen recruitment strategy of purposive sampling may have introduced selection bias, given the dominance in the field of researchers who are men or who come from high-income countries. While we actively invited participants from non-dominant backgrounds (women and researchers from the global south), only a few agreed to participate. Therefore, our results largely represent the views of those in Western countries, with an emphasis on Europe. The subject of our study must be further researched in different technological, socio-economic, and international systems.

Conclusions

This research paper explored the critical ethical considerations highlighted by experts for developing AI in healthcare. Our main findings suggest the importance of building AI with a clear purpose that aligns with the ethical frameworks of healthcare and the interests of doctors and patients. Beyond the allure of innovation, experts emphasized that ensuring AI genuinely benefits healthcare and its stakeholders is essential. The existing tensions between the incentives of commercial success or benefit demonstrated the importance of guiding the development of AI and its business practices. In that sense, experts considered context awareness vital to understanding the systemic implications of AI in healthcare. In contrast to a narrow product-focused approach, AI guidance may need a systemic perspective for ethical design. This study brings attention to these systemic practical ethical considerations (objectives, actors, and context) and the prominent role these have in shaping AI ethics for healthcare.

Developing practical solutions to the identified concerns may have a high impact. While there is yet to be an answer to addressing these challenges and further research is needed, our findings demonstrate the intricate interplay between AI, ethics, and healthcare as well as the multifaceted nature of the journey toward ethically sound AI.

Availability of data and materials

All data extracts analyzed during this study are included in this published article (and its Supplementary materials ). However, the complete datasets used during the current study cannot be made publicly available but can be shared by the corresponding author upon reasonable request.

Footnote 1: "Software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal" [41].

Abbreviations

AI: Artificial Intelligence

CDSS: Clinical Decision Support Systems

GDPR: General Data Protection Regulation in Europe

ML: Machine Learning

RRI: Responsible Research and Innovation

EKNZ: Ethics Committee of Northwestern and Central Switzerland

HRA: Human Research Act of Switzerland

TA: Thematic Analysis

Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: Addressing ethical challenges. PLOS Med. 2018;15(11):e1002689.


Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99.

Hagendorff T. The ethics of AI ethics: an evaluation of guidelines. Minds Mach. 2020;30(1):99–120.

Morley J, Floridi L, Kinsey L, Elhalal A. From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics. 2020;26(4):2141–68.

Mittelstadt B. Principles alone cannot guarantee ethical AI. Nat Mach Intell. 2019;1(11):501–7.

Hickok M. Lessons learned from AI ethics principles for future actions. AI Ethics. 2021;1(1):41–7.

Coravos A, Goldsack JC, Karlin DR, Nebeker C, Perakslis E, Zimmerman N, et al. Digital medicine: a primer on measurement. Digit Biomark. 2019;3(2):31–71.

Epley N, Tannenbaum D. Treating ethics as a design problem. Behav Sci Policy. 2017;3(2):72–84.

Stahl BC. Who is responsible for responsible innovation? Lessons from an investigation into responsible innovation in health. Int J Health Policy Manag. 2019;8(7):447–9.

European Commission. Ethics By Design and Ethics of Use Approaches for Artificial Intelligence. DG Research & Innovation RTD.03.001 - Research Ethics and Integrity Sector; 2021.

Landers C, Vayena E, Amann J, Blasimme A. Stuck in translation: Stakeholder perspectives on impediments to responsible digital health. Front Digit Health. 2023;5. Available from: https://www.frontiersin.org/articles/10.3389/fdgth.2023.1069410 .

Arnold MH. Teasing out artificial intelligence in medicine: an ethical critique of artificial intelligence and machine learning in medicine. J Bioethical Inq. 2021;18(1):121–39.

Fukuda-Parr S, Gibbons E. Emerging consensus on ‘ethical AI’: human rights critique of stakeholder guidelines. Glob Policy. 2021;12(S6):32–44.

Munn L. The uselessness of AI ethics. AI Ethics. 2022. https://doi.org/10.1007/s43681-022-00209-w .

McNamara A, Smith J, Murphy-Hill E. Does ACM’s code of ethics change ethical decision making in software development? In: Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. New York: Association for Computing Machinery; 2018. p. 729–33. (ESEC/FSE 2018). https://doi.org/10.1145/3236024.3264833 .

Vakkuri V, Kemell KK, Kultanen J, Siponen M, Abrahamsson P. arXiv.org. Ethically Aligned Design of Autonomous Systems: Industry viewpoint and an empirical study. 2019. Available from: https://arxiv.org/abs/1906.07946v1 .

Ibáñez JC, Olmeda MV. Operationalising AI ethics: how are companies bridging the gap between practice and principles? An exploratory study. AI Soc. 2022;37(4):1663–87.

Gogoll J, Zuber N, Kacianka S, Greger T, Pretschner A, Nida-Rümelin J. Ethics in the software development process: from codes of conduct to ethical deliberation. Philos Technol. 2021;34(4):1085–108.

Vaid S, Puntoni S, Khodr A. Artificial intelligence and empirical consumer research: A topic modeling analysis. J Bus Res. 2023;1(166):114110.

Scott IA, Carter SM, Coiera E. Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health Care Inform. 2021;28(1):e100450.

Tang L, Li J, Fantus S. Medical artificial intelligence ethics: a systematic review of empirical studies. Digit Health. 2023;1(9):20552076231186064.

O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med J Assoc Am Med Coll. 2014;89(9):1245–51.

Etikan I. Comparison of Convenience Sampling and Purposive Sampling. Am J Theor Appl Stat. 2016;5(1):1.

Jenkins N, Bloor M, Fischer J, Berney L, Neale J. Putting it in context: the use of vignettes in qualitative interviewing. Qual Res. 2010;10(2):175–98.

Murphy J, Hughes J, Read S, Ashby S. Evidence and practice: a review of vignettes in qualitative research. Nurse Res. 2021;29(3):8–14.

Braun V, Clarke V, Hayfield N, Terry G. Thematic Analysis. In: Liamputtong P, editor. Handbook of Research Methods in Health Social Sciences. Singapore: Springer; 2019. p. 843–60. https://doi.org/10.1007/978-981-10-5251-4_103 .

Finlay L. Thematic analysis: The 'Good', the 'Bad' and the 'Ugly'. Eur J Qual Res Psychother. 2021;20(11):103–16.

Cath C, Zimmer M, Lomborg S, Zevenbergen B. Association of Internet Researchers (AoIR) Roundtable Summary: Artificial Intelligence and the Good Society Workshop Proceedings. Philos Technol. 2018;31(1):155–62.

Hoffmann DE. Evaluating ethics committees: a View from the Outside. Milbank Q. 1993;71(4):677–701.

Morley J, Elhalal A, Garcia F, Kinsey L, Mökander J, Floridi L. Ethics as a Service: a pragmatic operationalisation of AI ethics. Minds Mach. 2021;31(2):239–56.

Elish MC, Watkins EA. Repairing Innovation: A Study of Integrating AI in Clinical Care. 2020.

WHO, World Health Organization. Ethics and governance of artificial intelligence for health. 2021. Available from: https://www.who.int/publications-detail-redirect/9789240029200 . [Cited 2022 Aug 18].

Hallamaa J, Kalliokoski T. AI Ethics as Applied Ethics. Front Comput Sci. 2022;4. Available from: https://www.frontiersin.org/articles/10.3389/fcomp.2022.776837 .

Silva HP, Lefebvre AA, Oliveira RR, Lehoux P. Fostering Responsible Innovation in Health: an Evidence informed assessment tool for innovation stakeholders. Int J Health Policy Manag. 2020;10(4):181–91.

Vakkuri V, Kemell KK. Implementing AI Ethics in Practice: An Empirical Evaluation of the RESOLVEDD Strategy. In: Hyrynsalmi S, Suoranta M, Nguyen-Duc A, Tyrväinen P, Abrahamsson P, editors. Software Business. Cham: Springer International Publishing; 2019. p. 260–75 (Lecture Notes in Business Information Processing).


McLennan S, Fiske A, Celi LA, Müller R, Harder J, Ritt K, et al. An embedded ethics approach for AI development. Nat Mach Intell. 2020;2(9):488–90.

Morley J, Kinsey L, Elhalal A, Garcia F, Ziosi M, Floridi L. Operationalising AI ethics: barriers, enablers and next steps. AI Soc. 2023;38(1):411–23.

Panch T, Mattie H, Celi LA. The “inconvenient truth” about AI in healthcare. Npj Digit Med. 2019;2(1):1–3.

Gerke S, Babic B, Evgeniou T, Cohen IG. The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. Npj Digit Med. 2020;3(1):1–4.

Parikh RB, Helmchen LA. Paying for artificial intelligence in medicine. Npj Digit Med. 2022;5(1):1–5.

European Commission, Joint Research Centre, Samoili S, López Cobo M, Gómez E, De Prato G, Martínez-Plumed F, Delipetrev B. AI watch: defining artificial intelligence: towards an operational definition and taxonomy of artificial intelligence. Publications Office; 2020. Available from: https://doi.org/10.2760/382730 . [Cited 2023 Aug 22].


Acknowledgements

We would like to thank Dr. Tenzin Wangmo for her support during the initial coding of the interviews.

Funding

Open access funding was provided by the University of Basel. The Swiss National Science Foundation enabled this work within the framework of the National Research Program "Digital Transformation", NRP 77 [project number 187263, Grant No. 407740_187263/1; recipient: Prof. Bernice Simone Elger]. Die Freiwillige Akademische Gesellschaft (FAG) in Basel provided additional funding to the first author (L.A.O.) to complete this publication.

Author information

Authors and Affiliations

Institute for Biomedical Ethics, University of Basel, Basel, Switzerland

Laura Arbelaez Ossa, Giorgia Lorenzini, Stephen R. Milford, David Shaw, Bernice S. Elger & Michael Rost

Care and Public Health Research Institute, Maastricht University, Maastricht, Netherlands

David Shaw

Center for Legal Medicine (CURML), University of Geneva, Geneva, Switzerland

Bernice S. Elger


Contributions

Two researchers conducted the interviews (L.A.O. n = 21; G.L. n = 20). Two authors (L.A.O., M.R.) led the analysis, and all the co-authors supported the process. L.A.O. and G.L. coded all the interviews with coding sessions supported by M.R., S.M., and D.S. The first two authors L.A.O. and M.R. developed overarching themes reviewed and agreed upon by the entire research team later. All authors reviewed and edited the manuscript and approved the final version of the manuscript.

Corresponding author

Correspondence to Laura Arbelaez Ossa .

Ethics declarations

Ethics approval and consent to participate.

All methods were approved by the Ethics Committee of Northwestern and Central Switzerland (EKNZ), under Art. 51 of Switzerland's Human Research Act (HRA). The methods were carried out in accordance with the relevant HRA guidelines and regulations. After review, the EKNZ concluded that interviewing AI professionals falls outside the HRA and requires only verbal consent at the beginning of an interview (declaration of no objection: AO_2021-00045).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Interview guideline.

Additional file 2.

Additional data extracts per theme.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Arbelaez Ossa, L., Lorenzini, G., Milford, S.R. et al. Integrating ethics in AI development: a qualitative study. BMC Med Ethics 25 , 10 (2024). https://doi.org/10.1186/s12910-023-01000-0


Received : 18 October 2023

Accepted : 28 December 2023

Published : 23 January 2024

DOI : https://doi.org/10.1186/s12910-023-01000-0


Keywords

  • Artificial intelligence
  • Qualitative research
  • AI development
  • AI guidance
  • Implementation



Ethical, Legal and Social Implications of Emerging Technology (ELSIET) Symposium

Evie Kendal

School of Health Sciences and Biostatistics, Swinburne University of Technology, John St, Hawthorn, Victoria Australia

Establishing ethical guidelines for the development and release of emerging technologies involves many practical challenges. Traditional methods of evaluating relevant ethical dimensions, such as Beauchamp and Childress’s ( 2001 ) Principlist framework, are often not fit for purpose: after all, how can one give autonomous, informed consent to the use of novel technologies whose effects are unknown? How can cost-benefit analyses be conducted in cases where there is a high degree of scientific uncertainty about the severity and likelihood of different risks, and potential benefits have not yet been demonstrated? Nevertheless, it is necessary to promote consideration of the ethical, legal, and social implications of emerging technologies to avoid them being released into what some commentators label a moral, policy, and/or legal vacuum (Moor 2005 ; Edwards 1991 ). Consequently, various methods for approaching the ethics of emerging technologies have arisen over the last few decades, some of the more common of which are summarized below.

Precautionary Approaches and the Precautionary Principle

Moor ( 2005 ) claims that the rapid emergence of new technologies “should give us a sense of urgency in thinking about the ethical (including social) implications” of these technologies (111), noting that when technological developments have significant social impact this is when “technological revolution occurs” (112). He notes, however, that these technological revolutions “do not arrive fully mature,” and their unpredictability yields many ethical concerns (112). For this reason, Wolff ( 2014 ) advocates for a precautionary approach to regulating emerging technologies with unknown risks, noting historical excitement over the benefits of new technologies that were far outweighed by harms that materialized later: asbestos used to fireproof buildings that led to high costs for removal and loss of human life, and chlorofluorocarbons used in refrigerants that caused significant damage to the ozone layer being his main examples (S27). He claims a precautionary approach to any new technology would always ask four questions: 1) is the technology known to have “intolerable risks”, 2) does it yield substantial benefits, 3) do these benefits “solve important problems”, and 4) could these problems be “solved in some other, less risky way” (S28)? According to this approach, unless the answers to questions 1 and 4 are no, and 2 and 3 are yes, technological development should not be permitted to proceed.
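Wolff's four questions amount to a simple decision gate. The sketch below (our illustration, not part of Wolff's paper; the class and field names are hypothetical) makes that logic explicit: development may proceed only when questions 1 and 4 are answered "no" and questions 2 and 3 are answered "yes".

```python
# Hypothetical encoding of Wolff's (2014) four precautionary questions as a decision gate.
from dataclasses import dataclass

@dataclass
class PrecautionaryAssessment:
    has_intolerable_risks: bool        # Q1: is the technology known to have intolerable risks?
    yields_substantial_benefits: bool  # Q2: does it yield substantial benefits?
    solves_important_problems: bool    # Q3: do those benefits solve important problems?
    safer_alternative_exists: bool     # Q4: could the problems be solved in a less risky way?

def may_proceed(a: PrecautionaryAssessment) -> bool:
    """Proceed only if Q1 and Q4 are answered 'no' and Q2 and Q3 are answered 'yes'."""
    return (not a.has_intolerable_risks
            and a.yields_substantial_benefits
            and a.solves_important_problems
            and not a.safer_alternative_exists)

# Example: substantial, problem-solving benefits, no known intolerable risk, no safer alternative.
print(may_proceed(PrecautionaryAssessment(False, True, True, False)))  # True
```

The point of the sketch is only that the approach is conjunctive: a single unfavourable answer blocks development, which is precisely the feature that critics of strong precautionary reasoning, discussed below, object to.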

The Precautionary Principle (PP) formalizes a precautionary approach to emerging technologies, particularly those involving potential harms to human health and the environment (Hansson 2020). Bouchaut and Asveld (2021) note the PP originated in German domestic law in the 1970s before being adopted as the dominant approach to regulating new biotechnologies throughout Europe from the 1990s onward. One of the main regulatory areas where the PP is discussed is in European legislation surrounding genetically-modified foods, with many arguing this precedent will likely impact the treatment of more novel gene-editing techniques, such as CRISPR-Cas9 (Bouchaut and Asveld 2021). At its core, the PP translates any potential for harm in the face of scientific uncertainty into a positive duty for stakeholders to act to prevent or mitigate this harm (Guida 2021). Incorporating risk assessment, management, and communication, for Hansson (2020) the PP represents a "pattern of thought, namely that protective measures against a potential danger can be justified even if it is not known for sure that the danger exists" (250). It is for this reason that use of the PP is often criticized for unreasonably blocking technological developments, including those that could yield significant health benefits for the population (Hester et al. 2015). However, in a study of nine jurisdictions with different levels of regulatory restrictions on biotechnological developments, Gouvea et al. (2012) found research productivity was not enhanced through the "absence of structural or ethical impediments," but rather the presence of transparent guidelines (562):

While one might argue that an environment that lacks all constraints (including ethical barriers) may allow for rapid product development through the provision of an environment where anything goes, the opposite is found to be the case. It is likely that the presence of clearly defined rules, higher levels of disclosure, greater levels of trust, and reduced costs (associated to lower levels of corruption) results in the appropriate set of ethical rules and guidelines providing the best outcomes for development of commercial products (562–563).

This study drew comparisons between the European model applying the PP to advances in nanotechnology and the U.S. wait-and-see approach, which treats these new products the same as their more traditional counterparts (554). Applying Hester et al.’s ( 2015 ) logic to this situation, the European approach imposes restrictions and a requirement to avoid potential harms, while the United States takes a more conventional legal approach that would merely award damages if a harm is sustained.

Hansson ( 2020 ) claims “no other safety principle has been so vehemently contested” as the PP, with many arguing it “stifles innovation by imposing unreasonable demands on the safety of new technologies” (245). However, applying precautionary measures to novel situations with uncertain risks has proven essential in the global response to the COVID-19 crisis, where preventive actions had to be put in place while scientific data were still being collected (Guida 2021 ). For Hansson ( 2020 ), and many others, what the PP lacks is a method of adjusting to new knowledge as it becomes available. Wareham and Nardini ( 2015 ) similarly note that in its strongest formulation, the PP might ban entire research projects going ahead, due to risk, however small, of a significant harm. Mittelstadt, Stahl, and Fairweather ( 2015 ) also note that if the purpose of the PP is to avoid harm, and preventing scientific progress and the development of new technologies can be considered a harm, this results in a precautionary paradox where the principle “would instruct us to refrain from implementing itself” (1034). In all the above cases, the authors advocate instead for an iterative process that allows progressive steps of experimentation, proportional to their associated risk, with regulations adapting as scientific uncertainty “gives way to new scientific knowledge” (Hansson 2020 , 253). This relates to the next approach to be covered here: design ethics.

Design Ethics Versus the Collingridge Dilemma

Design ethics takes into account the social and ethical dimensions of the context in which a product is designed and will be used. One of the more common methods is “value-sensitive design,” which tries to identify relevant human values during technology research and development phases to ensure they are “promoted and respected by the design” (Umbrello and van de Poel 2021 , 283). These might include respect for privacy, environmental sustainability, accountability, and many other values that are at stake in the interaction between people, technology, and the environment (Friedman et al. 2021 ). When it comes to new technologies with unknown risks, this model would support adopting preventive measures to mitigate harm, but as Bouchaut and Asveld ( 2021 ) claim, could also allow for “controlled learning” experiments, where step-by-step potential risks can be explored as the technology develops. These authors refer to this process as responsible learning , noting it would also require a degree of regulatory flexibility that the PP does not currently support. In this way, it is a method of proceeding in the face of uncertainty, reflecting on the ethical and safety concerns of each new stage in development before a technology is fully realized. One such model is the “Safe-by-Design” approach, which these authors note is “associated with learning processes that aim for designing specifically for the notion of safety by iteratively integrating knowledge about the adverse effects of materials” (Bouchaut and Asveld 2021 ). Other models include “participative design,” where the views and values of end-users are sought during the design phase, so designers can incorporate knowledge of the consequences of new technologies on those impacted by them (Mumford 1993 ). The acronym ETHICS was used in the formulation of this approach when considering the impact of new computing technologies on workers’ experiences, referring to Effective Technical and Human Implementation of Computer-based Systems (Mumford 1993 ). At its core, the purpose of participative design is to make new technologies fit-for-purpose and people-friendly.

While intuitively a system that progressively learns about risks as they manifest and adapts accordingly may seem superior to one that might ban an emerging technology from the outset due to unknown risks, one problem with this iterative approach is what has been dubbed the Collingridge dilemma. Mittelstadt, Stahl, and Fairweather ( 2015 ) explain this dilemma as follows:

it is impossible to know with certainty the consequences of an emerging technology at an early stage when it would be comparatively simple to change the technology’s trajectory. Once the technology is more established and it becomes clear what its social and ethical consequences are going to be, it becomes increasingly difficult to affect its outcomes and social context. (1028)

So, while a “Safe-by-Design” approach might be able to pivot easily if issues are discovered early in the process, once the technology has progressed to a certain stage, it is too late to intervene. To prevent the creation and release of potentially dangerous technologies requires a more speculative approach, as is present in the next three models to be discussed.

Technology Assessment (TA) to Ethical Technology Assessment (eTA)

Alongside the PP, technology assessment (TA) is one of the best-known methods of dealing with uncertainty (Mittelstadt, Stahl, and Fairweather 2015 ). Grunwald ( 2020 ) notes that because TA does not focus on technologies that already exist, it is a method that “creates and assesses prospective knowledge about the future consequences of technology,” through evaluating and scrutinizing “ideas, designs, plans, or visions for future technology” (Grunwald 2020 , 97; Grunwald 2019 ). He further notes that participatory versions of this speculative evaluative process are less about imagining technologies per se, and more about envisaging future technologies as situated in a specific “societal environment” (Grunwald 2019 ). The inputs for analysis are thus “models, narratives, roadmaps, visions, scenarios, prototypes” etc. (Grunwald 2020 , 99). Mittelstadt, Stahl, and Fairweather ( 2015 ) note traditional TA arose in response to “undesirable or unintentional side effects of emerging technologies” with a primary focus on considering the impact of technology on “the environment, industry and society” (1035). Its goal is to foster responsible regulation to maximize benefit and prevent harm caused by advances in technology.

While TA has been highly influential, particularly for establishing environmental impact assessments and other forms of risk analysis, Palm and Hansson ( 2006 ) claim ethical and social dimensions of emerging technologies have often been neglected (546). They propose the ethical technology assessment (eTA) approach, which adjusts development in line with ethical concerns and guides decision-making (551). Their ethical “checklist” contains nine items that, if implicated in an emerging technology, indicate an eTA should be conducted. Examples include cases in which the proposed technology might be expected to affect concepts of “privacy” or “gender minorities and justice” (551). Brey ( 2012 ) states the purpose of eTA is to “provide indicators of negative ethical implications at an early stage of technological development … by confronting projected features of the technology or projected social consequences with ethical concepts and principles” (3–4). Palm and Hansson ( 2006 ) note that current obstacles to this process include fear of the unknown, assumptions about the self-regulation of technological development, and citizens feeling ill-equipped to engage in discussions of technologies that are becoming increasingly complex (547). They describe the result in terms of W.F. Ogburn’s concept of “cultural lag,” where technology, as an instance of material culture, is now released into society before “non-material culture has stabilized its response to it” (547). In other words, social, ethical, legal, religious, and cultural systems have not yet grappled with the implications of technologies before they are unleashed on society. Some of these challenges are met by scenario-based approaches to emerging technologies, as demonstrated below.
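Before turning to those approaches, the checklist logic just described can be illustrated with a minimal sketch. The Python fragment below is purely illustrative: the item list reproduces only the checklist entries mentioned in this text (not Palm and Hansson’s full nine items), and the name flag_for_eta is invented for the example rather than drawn from any published tool.

    # Minimal sketch, assuming a hypothetical encoding of an eTA-style checklist.
    # Only items mentioned in this text are listed; the full nine-item checklist
    # of Palm and Hansson (2006) is not reproduced here.
    ETA_ITEMS = [
        "dissemination and use of information",
        "privacy",
        "human reproduction",
        "gender minorities and justice",
        "impact on human values",
    ]

    def flag_for_eta(technology: str, implicated: set) -> bool:
        """Return True when any checklist item is implicated, indicating an eTA."""
        hits = [item for item in ETA_ITEMS if item in implicated]
        if hits:
            print(f"{technology}: eTA indicated ({', '.join(hits)})")
        return bool(hits)

    # Example: artificial womb technology implicates "human reproduction".
    flag_for_eta("artificial womb technology", {"human reproduction"})

Nothing in this fragment is prescriptive; it simply makes visible that the checklist operates as a trigger for further ethical assessment rather than as a verdict on the technology itself.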

Scenario Approaches

While many scenario-based approaches to emerging technology ethics overlap methodologically with eTA, there are some features that are worth discussing separately. Brey’s ( 2012 ) account of the techno-ethical scenario approach describes it as ethical assessment that helps policymakers “anticipate ethical controversies regarding emerging technologies” through analyzing hypothetical scenarios (4). He notes a unique feature of the method is that it not only tries to predict what moral issues will arise with the advent of new technologies but also how those very technologies will impact morality and “the way we interpret moral values” (4). Boenink, Swierstra, and Stemerding’s ( 2010 ) framework breaks this process into three distinct steps: 1) “Sketching the moral landscape,” which provides a baseline narrative against which the introduction of the new technology can be compared; 2) “Generating potential moral controversies” using “New and Emerging Science and Technology” (NEST) ethics, with the aim of predicting realistic ethical arguments and issues regarding emerging technologies; and 3) “Constructing closure by judging plausibility of resolutions,” where arguments and counterarguments are considered in the light of the most likely resolution to the issues raised in step 2 (11–13). The process can draw analogies to existing or historical examples of technological change, and the ethical consequences involved, or construct specific controversies and “alternative futures” (14). The most important step to consider here is the second, of which Brey ( 2012 ) states:

The NEST-ethics approach performs three tasks. First, it identifies promises and expectations concerning a new technology. Second, it identifies critical objections that may be raised against these promises, for example regarding efficiency and effectiveness, as well as many conventionally ethical objections, regarding rights, harms and obligations, just distribution, the good life, and others. Third, it identifies chains of arguments and counter-arguments regarding the positive and negative aspects of the technology, which can be used to anticipate how the moral debate on the new technology may develop. (4–5)

In this way, scenario analysis can consider how technology and ethics change in tandem when new technologies emerge.

Socio-technical scenario approaches are similar to the techno-ethical approach outlined above; however, according to Schick ( 2019 ), they owe their origins to utopian studies and traditional philosophical thought experimentation (261). Claiming they are now used “as a form of moral foresight; an attempt to keep the ethical discourse ahead of the technological curve,” Schick ( 2019 ) suggests the goal of socio-technical speculation is to “guide society toward morally sound decisions regarding emerging technologies” (261). Thus, the scenarios being discussed are deeply embedded in hypothetical future societies. The Collingridge dilemma is also noted as a potential pitfall for this method, which the final technique covered here tries to avoid through engaging anticipatory models of ethics and governance.

Anticipatory Technology Ethics/Governance

It is well recognized that governing emerging technologies is difficult due to uncertainty regarding their impact on human health, the environment, and society. Hester et al. ( 2015 ) suggest one method of addressing this is to develop regulatory systems that rely on “anticipatory ethics and governance, future-oriented responsibility, upstream public engagement and theories of justice” (124). These would be forward-looking and flexible, allowing cautious development of technology instead of enforcing bans or merely being used to impute responsibility for harm after the fact, as is often seen in current legal systems. Noting that existing ethico-legal approaches “tend to be reactive and static,” these authors promote a “future-care oriented responsible innovation” that protects public trust in science and technology (125, 131). Brey ( 2012 ) notes that most anticipatory ethics frameworks apply one of two approaches: restricting discussion to “generic qualities” of technology and their likely ethical ramifications or speculating on possible future devices and their impact on society (2–3). The latter relies on future studies and forecasting techniques to allow ethical reflection on technologies that are yet to materialize. When discussing the European Commission’s Ethical Issues of Emerging ICT Applications (ETICA) approach, Brey ( 2012 ) claims multiple such techniques were used in the aggregate in an attempt to circumvent any individual weaknesses in methodology (5). However, his own theory of “anticipatory technology ethics” (ATE) tries to overcome the limited capability of forecasting by separating ethical evaluation into three levels: “the technology, artifact and application level” (7). Technologies are considered collections of techniques with a common function, and thus the technology level of ATE just focuses on what the technology is, and the general ethical concerns arising from this. At the artifact level, the “functional artifacts, systems and procedures” developed from the technology of interest are ethically evaluated (8). Brey ( 2012 ) provides the example of nuclear technology yielding such artifacts as nuclear reactors, x-ray imaging, and bombs. The artifact level of ATE thus considers what a technology is likely to bring into being and the relevant consequences of this. The application level then focuses on the use and purpose of artifacts in practice. The latter two levels of ATE are included in Brey’s “responsibility assignment stage,” where moral actors are assigned responsibility for the impact of emerging technologies (12).
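Brey’s three-level structure can likewise be sketched as a simple data shape, again purely for illustration. The class name ATEEvaluation is hypothetical, and the entries under “applications” are assumed examples; the artifact entries follow the nuclear technology example given above.

    # Minimal sketch, assuming a hypothetical data structure for the three ATE
    # levels described by Brey (2012); not an implementation of any published tool.
    from dataclasses import dataclass, field

    @dataclass
    class ATEEvaluation:
        technology: str                                    # technology level: what the technology is
        artifacts: list = field(default_factory=list)      # artifact level: systems it yields
        applications: list = field(default_factory=list)   # application level: uses in practice

    nuclear = ATEEvaluation(
        technology="nuclear technology",
        artifacts=["nuclear reactors", "x-ray imaging", "bombs"],
        applications=["electricity generation", "medical diagnostics"],  # assumed examples
    )

    for level in ("technology", "artifacts", "applications"):
        print(level, "->", getattr(nuclear, level))

The point of such a structure is only to make visible that ethical concerns attach at different levels of generality, with responsibility assignment operating on the latter two.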

Other variations on ATE can be found in Nestor and Wilson’s ( 2020 ) anticipatory practical ethics methodology incorporating stakeholder analysis and intuitionism, which allows for ethical consideration of not just future technologies but also future stakeholders, for example, children produced using CRISPR technology (134). These authors distinguish between anticipatory ethics, where ethical theories are applied to novel situations impacting various stakeholders with the goal of providing policy recommendations, and anticipatory governance, which develops policies in line with predictions regarding human behaviour. They claim the two can be combined to produce “future-oriented legal analysis based on theories of justice for rapidly emerging technologies” (134). They suggest such an analysis should include 1) specific ethical principles, including common sense intuitions; 2) “intermediate” principles, such as harm minimisation, utility, justice, etc.; 3) normative ethical theories, such as consequentialism, deontology, social contract theory, etc.; 4) relevant professional ethics codes, e.g. medical ethics; and 5) “the possibility of emergent ethical principles arising due to the uniqueness and rapid pace of development of new technologies” (137). For Nestor and Wilson ( 2020 ), these are all considered legitimate sources for ethical decision-making and can be used in conjunction with stakeholder analysis to produce ethical guidance and policy recommendations (139).

Anticipatory ethical systems are also subject to criticism, including the objection that, because they speculate on future technologies, they may waste time analyzing things that never come to pass. Schick ( 2019 ) also claims it is often unclear what constitutes success in anticipatory ethics, as the goal of settling all ethical concerns and establishing appropriate regulatory systems before a technology is released may be unrealistic (265). Further, Schick ( 2019 ) claims that, in their attempt to pre-empt future applications of new technologies, speculative ethical models may miss crucial stages in the process, as demonstrated by the example of genetic engineering:

the mainstream bioethics discourse on human genetic engineering (i.e. primarily in the US and the UK) was not indexed to the current state of science or slightly ahead of it, but instead took up questions entangled with more distant anticipated future developments. Keeping the discourse well ahead of the curve of emerging biomedical technologies probably generated interesting discussions, but it may also have contributed to the weakness of the consensus-based norms that were thought to be keeping human germline genetic engineering in check. In effect, the forward-looking discourse subjected them to what might be called “anticipatory obsolescence” by asking whether to maintain a distinction between somatic and germline therapies—long before there was a technique up to the task of altering the genome of a human embryo with sufficient efficacy to begin considering preclinical human embryonic interventions. (264)

Once human embryonic gene editing became possible, Schick ( 2019 ) claims “the newly urgent question of whether germline interventions were ethically permissible was no longer where the discussion was centered” as speculations regarding human enhancement had started to dominate bioethical debate on the subject (264). Schick ( 2019 ) continues: “[i]n retrospect, it seems almost inevitable that once germline engineering was accomplished, the ‘old’ question of whether it should be undertaken at all would suddenly become obsolete” (264). Thus, there is a risk that by focusing too much on future applications, ethicists will miss the opportunity to intervene in foundational stages of technological revolution.

While anticipatory ethics and governance systems are becoming a popular way of dealing with the uncertain risks of emerging technologies, Mittelstadt, Stahl, and Fairweather ( 2015 ) claim such prophetic decision-making aids “cannot be given the same epistemic status as facts and norms concerning existing phenomena” (1044). They note some technologies are so novel even the most basic risk data is unavailable when decisions need to be made about their development. This applies to several of the emerging technologies under discussion in this symposium issue.

The Ethical, Legal and Social Implications of Emerging Technologies (ELSIET) Symposium

The Ethical, Legal and Social Implications of Emerging Technologies (ELSIET) research group was established with support from Deakin University’s Science and Society Network in 2018. Over the next two years the group recruited forty members from eighteen academic institutions in six different countries and hosted three seminars focused on the ethics of emerging technologies. This special issue highlights some of the work arising from these meetings. The purpose of the group is to foster collaborations among specialists working in emerging technologies, including ethicists, scientists, lawyers, and artists. The group went on hiatus at the beginning of the COVID-19 pandemic but has resumed regular activities in 2022 under the auspices of the Iverson Health Innovation Research Institute, Swinburne University of Technology. In 2019, ELSIET was awarded a Brocher Foundation symposium grant in conjunction with members of the University of Melbourne’s School of Population and Global Health, Western Australia’s Department of Health, and the Gen(e)quality Network. Originally planned for 2020, the symposium was rescheduled to May 2022, with an online version occurring in May 2021. 1

The papers included in this symposium issue address emerging technologies and situations that would trigger Palm and Hansson’s ( 2006 ) ethical checklist, as they pertain to “dissemination and use of information” and “privacy,” particularly for genetic information, “human reproduction” in the form of artificial womb technology, and “impact on human values,” with particular focus on the potential commodification of human DNA. Each paper also engages with one or more of the practices outlined above for ethically evaluating emerging technologies.

The collection begins with Wise and Borry’s ( 2022 ) discussion of the ethical issues surrounding the use of CRISPR-based technologies for eliminating Anopheles gambiae mosquitoes, the dominant vector for malaria throughout sub-Saharan Africa. These authors consider ethical debates regarding whether the species possesses any intrinsic worth, moral status, or instrumental value in terms of increasing biodiversity. The significance of the CRISPR-based technologies under debate relates to the new-found ability to modify the genes of, and eventually eradicate, this entire species of mosquitoes, rather than just eliminating some of them. The competing demands of minimizing human suffering and avoiding unintended side effects to natural ecosystems are recognized throughout. This paper considers the utility of the PP in addressing these ethical issues, as well as the environmental and risk assessment elements intrinsic to TA.

The second paper, by Ferreira ( 2022 ), considers the ethical implications of artificial womb technologies through the lens of utopian fiction, namely Helen Sedgwick’s The Growing Season (2017) and Rebecca Ann Smith’s Baby X (2016). Viewed as feminist rewritings of Aldous Huxley’s dystopian classic Brave New World (1932), these texts consider the emancipatory potential of ectogenesis for women. For Palm and Hansson ( 2006 ), advances in reproductive technologies represent the site of some of “the most blatant clashes” between “social norms and moral values” in society, influencing perceptions of family and human reproduction (553). The use of utopian fiction to guide ethical evaluation aligns with various elements of the socio-technical scenario approach to emerging technologies.

The third paper, by Koplin, Skeggs, and Gyngell ( 2022 ), similarly falls under one of Palm and Hansson’s ( 2006 ) key criteria for eTA, as these authors propose allowing a commercial market for the sale and purchase of human DNA. For Palm and Hansson ( 2006 ), such a proposal would require ethical evaluation to prevent the “negative consequences of commodification” leading to “reduced respect for human personhood” (554–555). Koplin, Skeggs, and Gyngell ( 2022 ) anticipate these objections when outlining how an ethical market in human DNA might be created, considering related concerns regarding exploitation and undue inducement. This analysis includes various stages of the techno-ethical scenario approach, particularly the sketching of the current moral landscape of gene banking, and exploration of arguments and counterarguments to the hypotheticals presented.

The fourth paper, by Delgado et al. ( 2022 ), provides a scoping review of academic literature focused on biases in artificial intelligence algorithms for predicting COVID-19 risk, triaging, and contact tracing. These authors identify issues with data collection, management, and privacy, as well as a lack of regulation for the use of these programmes as key practical and ethical concerns. With their focus on the impacts of these biases and the social determinants of health on various reported health disparities, these authors highlight a role for Brey’s ( 2012 ) ATE framework, which considers the social application of emerging technologies, and Hester et al.’s ( 2015 ) anticipatory ethics and governance.

The final paper in the collection is Benston’s ( 2022 ) protocol for developing policy recommendations regarding heritable gene editing. In this, potential benefits and harms are identified and evaluated in a way that guides the proposed study design. The focus on anticipatory ethics and governance incorporates several elements present in Nestor and Wilson’s ( 2020 ) anticipatory practical ethics methodology, particularly Benston’s focus on detailed stakeholder analysis.

Technological developments involve uncertainty and carry with them the potential for both significant benefit and harm. While we cannot know the future, various methods for ethically evaluating and regulating emerging technologies have arisen that aim to promote discovery while protecting safety. The more revolutionary a new technology is, the greater its potential impact on society and thus the ethical issues it might generate. The articles in this symposium issue all take a proactive, rather than reactive, approach to discussing such issues in advance of these technologies being fully realized in society.

Funding source

Deakin University Science and Society Network seed grant

1 Talks are available on demand here: https://mspgh.unimelb.edu.au/centres-institutes/centre-for-health-policy/research-group/evaluation-implementation-science/elsi-genomics-symposium .

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • Beauchamp, T.L., and J.F. Childress. 2001. Principles of biomedical ethics, 5th ed. Oxford: Oxford University Press.
  • Benston, S. 2022. Walking a fine germline: Synthesizing public opinion and legal precedent to develop policy recommendations for heritable gene-editing. Journal of Bioethical Inquiry 19(3). doi:10.1007/s11673-022-10186-8.
  • Boenink, M., T. Swierstra, and D. Stemerding. 2010. Anticipating the interaction between technology and morality: A scenario study of experimenting with humans in bionanotechnology. Studies in Ethics, Law, and Technology 4(2): 1–41. doi:10.2202/1941-6008.1098.
  • Bouchaut, B., and L. Asveld. 2021. Responsible learning about risks arising from emerging biotechnologies. Science and Engineering Ethics 27(2): 22. doi:10.1007/s11948-021-00300-1.
  • Brey, P.A.E. 2012. Anticipatory ethics for emerging technologies. NanoEthics 6(1): 1–13. doi:10.1007/s11569-012-0141-7.
  • Delgado et al. 2022. Bias in algorithms of AI systems developed for COVID-19: A scoping review. Journal of Bioethical Inquiry 19(3).
  • Edwards, J. 1991. New conceptions: Biosocial innovations and the family. Journal of Marriage and Family 53(2): 349–360. doi:10.2307/352904.
  • Ferreira, A. 2022. The (un)ethical womb: The promises and perils of artificial gestation. Journal of Bioethical Inquiry 19(3). doi:10.1007/s11673-022-10184-w.
  • Friedman, B., M. Harbers, D. Hendry, J. van den Hoven, C. Jonker, and N. Logler. 2021. Eight grand challenges for value sensitive design from the 2016 Lorentz workshop. Ethics and Information Technology 23(1): 5–16. doi:10.1007/s10676-021-09586-y.
  • Gouvea, R., J. Linton, M. Montoya, and S. Walsh. 2012. Emerging technologies and ethics: A race-to-the-bottom or the top? Journal of Business Ethics 109(4): 553–567. doi:10.1007/s10551-012-1430-3.
  • Grunwald, A. 2020. The objects of technology assessment. Hermeneutic extension of consequentialist reasoning. Journal of Responsible Innovation 7(1): 96–112. doi:10.1080/23299460.2019.1647086.
  • Grunwald, A. 2019. Technology assessment in practice and theory. Abingdon, Oxon: Routledge.
  • Guida, A. 2021. The precautionary principle and genetically modified organisms: A bone of contention between European institutions and member states. Journal of Law and the Biosciences 8(1): lsab012. doi:10.1093/jlb/lsab012.
  • Hansson, S.O. 2020. How extreme is the precautionary principle? NanoEthics 14(3): 245–257. doi:10.1007/s11569-020-00373-5.
  • Hester, K., M. Mullins, F. Murphy, and S. Tofail. 2015. Anticipatory ethics and governance (AEG): Towards a future care orientation around nanotechnology. NanoEthics 9(2): 123–136. doi:10.1007/s11569-015-0229-y.
  • Koplin, J., J. Skeggs, and C. Gyngell. 2022. Ethics of buying DNA. Journal of Bioethical Inquiry 19(3).
  • Mittelstadt, B.D., B.C. Stahl, and N.B. Fairweather. 2015. How to shape a better future? Epistemic difficulties for ethical assessment and anticipatory governance of emerging technologies. Ethical Theory and Moral Practice 18: 1027–1047. doi:10.1007/s10677-015-9582-8.
  • Moor, J.H. 2005. Why we need better ethics for emerging technologies. Ethics and Information Technology 7(3): 111–119. doi:10.1007/s10676-006-0008-0.
  • Mumford, E. 1993. The participation of users in systems design: An account of the origin, evolution, and use of the ETHICS method. Florida: CRC Press.
  • Nestor, M.W., and R.L. Wilson. 2020. Beyond Mendelian genetics: Anticipatory biomedical ethics and policy implications for the use of CRISPR together with gene drive in humans. Journal of Bioethical Inquiry 17(1): 133–144. doi:10.1007/s11673-019-09957-7.
  • Palm, E., and S.O. Hansson. 2006. The case for ethical technology assessment (eTA). Technological Forecasting and Social Change 73(5): 543–558. doi:10.1016/j.techfore.2005.06.002.
  • Schick, A. 2019. What counts as “success” in speculative and anticipatory ethics? Lessons from the advent of germline gene editing. NanoEthics 13(3): 261–267. doi:10.1007/s11569-019-00350-7.
  • Umbrello, S., and I. van de Poel. 2021. Mapping value sensitive design onto AI for social good principles. AI and Ethics 1(3): 283–296. doi:10.1007/s43681-021-00038-3.
  • Wareham, C., and C. Nardini. 2015. Policy on synthetic biology: Deliberation, probability, and the precautionary paradox. Bioethics 29(2): 118–125. doi:10.1111/bioe.12068.
  • Wise, I.J., and P. Borry. 2022. An ethical overview of the CRISPR-based elimination of Anopheles gambiae to combat malaria. Journal of Bioethical Inquiry 19(3). doi:10.1007/s11673-022-10172-0.
  • Wolff, J. 2014. The precautionary attitude: Asking preliminary questions. Hastings Center Report 44(S5): S27–S28. doi:10.1002/hast.393.
