Introduction to Responsible AI
How do we build AI systems responsibly, at scale? Learn about Responsible AI, relevant concepts and terms, and how to implement these practices in products.
Introduction
Artificial intelligence (AI) powers many apps and services that people use in daily life. With billions of users of AI across fields from business to healthcare to education, it is critical that leading AI companies work to ensure that the benefits of these technologies outweigh the harms, in order to create the most helpful, safe, and trusted experiences for all.
Responsible AI considers the societal impact of the development and scale of these technologies, including potential harms and benefits. Google's AI Principles provide a framework that includes objectives for AI applications, as well as applications Google will not pursue in the development of AI systems.
Responsible AI Dimensions
As AI development accelerates and becomes more ubiquitous, it is critical to incorporate Responsible AI practices into every workflow stage from ideation to launch. The following dimensions are key components to Responsible AI, and are important to consider throughout the product lifecycle.
Fairness
Fairness addresses the possible disparate outcomes end users may experience as related to sensitive characteristics such as race, income, sexual orientation, or gender through algorithmic decision-making. For example, might a hiring algorithm have biases for or against applicants with names that are associated with a particular gender or ethnicity?
Read about how products such as Search and Photos improved diversity of skin tone representation.
For more terms related to ML fairness, see Machine Learning Glossary: Fairness | Google for Developers. To learn more, the Fairness module of the Machine Learning Crash Course provides an introduction to ML fairness.
People + AI Research (PAIR) offers interactive AI Explorables, including Measuring Fairness and Hidden Bias, to walk through these concepts.
Accountability
Accountability means being held responsible for the effects of an AI system. This involves transparency , or sharing information about system behavior and organizational process, which may include documenting and sharing how models and datasets were created, trained, and evaluated. Model Cards and Data Cards are examples of transparency artifacts that can help organize the essential facts of ML models and datasets in a structured way.
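To make the idea concrete, a transparency artifact like a model card can be represented as structured metadata. The sketch below is a minimal illustration: the field names are simplified placeholders, not the official Model Card schema.

```python
import json

def make_model_card(name, version, intended_use, training_data, metrics, limitations):
    """Assemble a minimal, illustrative model card as structured metadata.
    Field names are a simplified sketch, not the official Model Card schema."""
    return {
        "model_details": {"name": name, "version": version},
        "intended_use": intended_use,
        "training_data": training_data,
        "evaluation_metrics": metrics,
        "limitations": limitations,
    }

card = make_model_card(
    name="toy-loan-classifier",          # hypothetical model for illustration
    version="0.1",
    intended_use="Demo only; not for real lending decisions.",
    training_data="Synthetic applications, 2 features, balanced labels.",
    metrics={"accuracy": 0.91},
    limitations="Not evaluated for fairness across demographic groups.",
)
print(json.dumps(card, indent=2))
```

The value of the artifact is less in any particular format than in forcing the essential facts (intended use, data provenance, evaluation, known limitations) to be written down in one place.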
Another dimension of accountability is interpretability , which involves the understanding of ML model decisions, where humans are able to identify features that lead to a prediction. Moreover, explainability is the ability for a model's automated decisions to be explained in a way for humans to understand.
Read more about building user trust in AI systems in the Explainability + Trust chapter of the People + AI Guidebook, and the Interpretability section of Google's Responsible AI Practices.
Safety
AI safety includes a set of design and operational techniques to follow to avoid and contain actions that can cause harm, intentionally or unintentionally. For example, do systems behave as intended, even in the face of a security breach or targeted attack? Is your AI system robust enough to operate safely even when perturbed? How do you plan ahead to prevent or avoid risks? Is your system reliable and stable under pressure?
The Safety section of Google's Responsible AI Practices outlines recommended practices to protect AI systems from attacks, including adversarial testing. Learn more about our work in this area and lessons learned in the Keyword blog post, Google's AI Red Team: the ethical hackers making AI safer.
Privacy
Privacy practices in Responsible AI (see the Privacy section of Google's Responsible AI Practices) involve the consideration of potential privacy implications in using sensitive data. This includes not only respecting legal and regulatory requirements, but also considering social norms and typical individual expectations. For example, what safeguards need to be put in place to ensure the privacy of individuals, considering that ML models may remember or reveal aspects of the data that they have been exposed to? What steps are needed to ensure users have adequate transparency and control of their data?
Learn more about ML privacy through PAIR Explorables' interactive walkthroughs:
- How randomized response can help collect sensitive information responsibly
- How Federated Learning Protects Privacy
- Why Some Models Leak Data
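The first of these techniques is simple enough to sketch from scratch. Below is a minimal illustration of classic randomized response: each respondent flips a coin and either answers truthfully or answers at random, so any individual answer is plausibly deniable, yet the population rate can still be estimated. The 0.5 coin probabilities are one common choice, not the only one.

```python
import random

def randomized_response(truth: bool, rng: random.Random) -> bool:
    """Classic randomized response: on heads, answer truthfully;
    on tails, answer with a second coin flip."""
    if rng.random() < 0.5:
        return truth
    return rng.random() < 0.5

def estimate_true_rate(responses) -> float:
    """Invert the mechanism: E[yes] = 0.5 * p + 0.25, so p = 2 * (mean - 0.25)."""
    mean = sum(responses) / len(responses)
    return 2 * (mean - 0.25)

rng = random.Random(0)
true_rate = 0.30  # fraction of the population with the sensitive attribute
population = [rng.random() < true_rate for _ in range(100_000)]
responses = [randomized_response(t, rng) for t in population]
print(round(estimate_true_rate(responses), 2))  # close to 0.30
```

No individual's answer reveals their true attribute, but the aggregate estimate converges to the real rate as the sample grows.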
Responsible AI in Generative Models/LLMs
The advent of large, generative models introduces new challenges to implementing Responsible AI practices due to their potentially open-ended output capabilities and many potential downstream uses. In addition to the AI Principles, Google has a Generative AI Prohibited Use Policy and a Generative AI Guide for Developers.
Read more about how teams at Google use generative AI to create new experiences for users at Google Generative AI. On this site, we also offer guidance on Safety and Fairness, Prompt Engineering, and Adversarial Testing for generative models. For an interactive walkthrough on language models, see the PAIR Explorable: What Have Language Models Learned?
Additional Resources
Why we focus on AI – Google AI
Google AI Review Process
Responsible AI Toolkit | TensorFlow
Last updated 2023-08-08 UTC.
What is responsible AI?
Published: 6 February 2024 | Contributor: Cole Stryker
Responsible artificial intelligence (AI) is a set of principles that help guide the design, development, deployment and use of AI—building trust in AI solutions that have the potential to empower organizations and their stakeholders. Responsible AI involves the consideration of a broader societal impact of AI systems and the measures required to align these technologies with stakeholder values, legal standards and ethical principles. Responsible AI aims to embed such ethical principles into AI applications and workflows to mitigate risks and negative outcomes associated with the use of AI, while maximizing positive outcomes.
This article aims to provide a general view of responsible AI. To learn more about IBM’s specific point of view, see our AI ethics page .
The widespread adoption of machine learning in the 2010s, fueled by advances in big data and computing power, brought new ethical challenges, like bias, transparency and the use of personal data. AI ethics emerged as a distinct discipline during this period as tech companies and AI research institutions sought to proactively manage their AI efforts responsibly.
According to Accenture research: “Only 35% of global consumers trust how AI technology is being implemented by organizations. And 77% think organizations must be held accountable for their misuse of AI.” 1 In this atmosphere, AI developers are encouraged to guide their efforts with a strong and consistent ethical AI framework.
This applies particularly to the new types of generative AI that are now being rapidly adopted by enterprises. Responsible AI principles can help adopters harness the full potential of these tools, while minimizing unwanted outcomes.
AI must be trustworthy, and for stakeholders to trust AI, it must be transparent. Technology companies must be clear about who trains their AI systems, what data was used in that training, and, most importantly, what went into their algorithm’s recommendations. If we are to use AI to help make important decisions, it must be explainable.
IBM has developed a framework to make these principles clear. Let's look at the properties that make up the “Pillars of Trust.” Taken together, these properties answer the question, “What would it take to trust the output of an AI model?” Trusted AI is a strategic and ethical imperative at IBM, but these pillars can be used by any enterprise to guide their efforts in AI.
Machine learning models such as deep neural networks are achieving impressive accuracy on various tasks, but explainability and interpretability are ever more essential for the development of trustworthy AI. IBM's approach to explainability comprises three principles.
Prediction accuracy is a key component of how successfully AI can be used in everyday operation. By running simulations and comparing AI output to the results in the training data set, prediction accuracy can be determined. The most popular technique for this is Local Interpretable Model-Agnostic Explanations (LIME), which explains a classifier's individual predictions by approximating the model locally with a simpler, interpretable one.
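The local-surrogate idea behind LIME can be sketched from first principles: sample perturbations around a point, weight them by proximity, and fit a weighted linear model to the black box. The one-feature toy below (not the `lime` library itself, and with an invented black-box model) illustrates the mechanics.

```python
import math
import random

def local_surrogate_slope(f, x0, rng, n=500, width=0.5):
    """LIME-style sketch: sample perturbations around x0, weight them by
    proximity to x0, and fit a weighted least-squares line to the black
    box f. The fitted slope is the local explanation of f around x0."""
    xs = [x0 + rng.gauss(0.0, width) for _ in range(n)]
    ys = [f(x) for x in xs]
    # Gaussian proximity kernel: nearby samples matter more.
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    wsum = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / wsum
    ybar = sum(w * y for w, y in zip(ws, ys)) / wsum
    num = sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs))
    return num / den

def blackbox(x):
    """Stand-in for an opaque model; invented for illustration."""
    return x * x

slope = local_surrogate_slope(blackbox, x0=2.0, rng=random.Random(0))
print(round(slope, 1))  # locally, f behaves like a line with slope ~4
```

The black box is globally nonlinear, but near x0 = 2 it is well approximated by a line of slope about 4, which is exactly the kind of local, human-readable explanation LIME produces, generalized to many features.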
Traceability is a property of AI that signifies whether it allows users to track its predictions and processes. It involves the documentation of data and how it is processed by models. Traceability is another key technique for achieving explainability, and is accomplished, for example, by limiting the way decisions can be made and setting up a narrower scope for machine learning rules and features.
Decision understanding is the human factor. Practitioners need to be able to understand how and why AI derives conclusions, which is accomplished through continuous education.
Machine learning models are increasingly used to inform high-stakes decision-making that relates to people. Although machine learning, by its very nature, is a form of statistical discrimination, the discrimination becomes objectionable when it places privileged groups at a systematic advantage and certain unprivileged groups at a systematic disadvantage, potentially causing varied harms. Biases in training data, due to either prejudice in labels or under- or over-sampling, yield models with unwanted bias.
Diverse and representative data
Ensure that the training data used to build AI models is diverse and representative of the population it is meant to serve. Include data inputs from various demographic groups to avoid underrepresentation or bias. Regularly check and assess training data for biases. Use tools and methods to identify and correct biases in the dataset before training the model.
Bias-aware algorithms
Incorporate fairness metrics into the development process to assess how different subgroups are affected by the model's predictions. Monitor and minimize disparities in outcomes across various demographic groups. Apply constraints in the algorithm to ensure that the model adheres to predefined fairness criteria during training and deployment.
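As a minimal illustration of such a fairness metric, the demographic parity difference compares positive-prediction rates across groups. The sketch below uses toy data and hypothetical group labels:

```python
def selection_rates(preds, groups):
    """Positive-prediction rate per demographic group."""
    totals, positives = {}, {}
    for p, g in zip(preds, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + p
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(preds, groups):
    """Gap between the highest and lowest group selection rates.
    0.0 means every group is selected at the same rate."""
    rates = selection_rates(preds, groups)
    return max(rates.values()) - min(rates.values())

# Toy predictions with hypothetical groups "a" and "b".
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rates(preds, groups))                # {'a': 0.75, 'b': 0.25}
print(demographic_parity_difference(preds, groups))  # 0.5
```

Libraries such as Fairlearn compute this and many related group metrics; the point of the sketch is that monitoring disparities is a small, concrete computation that can run in any evaluation pipeline.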
Bias mitigation techniques
Apply techniques like re-sampling, re-weighting and adversarial training to mitigate biases in the model's predictions.
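Re-weighting, for example, can be sketched in a few lines. The following illustrates a Kamiran-Calders style reweighing scheme that weights each training example so that group membership and label look statistically independent to the learner; it is a simplified sketch, not a drop-in for library implementations such as AIF360's Reweighing.

```python
from collections import Counter

def reweighing_weights(labels, groups):
    """Weight each example by P(group) * P(label) / P(group, label):
    over-represented (group, label) pairs get weight < 1, rare pairs > 1,
    so group and label appear independent in the reweighted data."""
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group "a" is mostly labeled 1, group "b" mostly labeled 0.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
w = reweighing_weights(labels, groups)
print([round(x, 2) for x in w])
```

Training with these weights (most learners accept a `sample_weight` argument) upweights the rare (group, label) combinations, counteracting the imbalance in the raw data.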
Diverse development teams
Assemble interdisciplinary and diverse teams involved in AI development. Diverse teams can bring different perspectives to the table, helping to identify and rectify biases that may be overlooked by homogeneous teams.
Ethical AI review boards
Establish review boards or committees to evaluate the potential biases and ethical implications of AI projects. These boards can provide guidance on ethical considerations throughout the development lifecycle.
Robust AI effectively handles exceptional conditions, such as abnormalities in input or malicious attacks, without causing unintentional harm. It is also built to withstand intentional and unintentional interference by protecting against exposed vulnerabilities. As our reliance on these models grows, and as they accumulate confidential and proprietary knowledge, they become increasingly attractive targets for attack. These models pose unique security risks that must be accounted for and mitigated.
Users must be able to see how the service works, evaluate its functionality, and comprehend its strengths and limitations. Increased transparency provides information for AI consumers to better understand how the AI model or service was created. This helps a user of the model to determine whether it is appropriate for a given use case, or to evaluate how an AI produced inaccurate or biased conclusions.
Many regulatory frameworks, including GDPR, mandate that organizations abide by certain privacy principles when processing personal information. A malicious third party with access to a trained ML model, even without access to the training data itself, can still reveal sensitive personal information about the people whose data was used to train the model. It is crucial to be able to protect AI models that may contain personal information, and control what data goes into the model in the first place.
Implementing responsible AI practices at the enterprise level involves a holistic, end-to-end approach that addresses various stages of AI development and deployment.
Develop a set of responsible AI principles that align with the values and goals of the enterprise. Consider the key aspects described above in the “Pillars of Trust.” Such principles can be developed and maintained by a dedicated cross-functional AI ethics team with representation from diverse departments, including AI specialists, ethicists, legal experts and business leaders.
Conduct training programs to educate employees, stakeholders and decision-makers about responsible AI practices. This includes understanding potential biases, ethical considerations and the importance of incorporating responsible AI into business operations.
Embed responsible AI practices across the AI development pipeline, from data collection and model training to deployment and ongoing monitoring. Employ techniques to address and mitigate biases in AI systems. Regularly assess models for fairness, especially regarding sensitive attributes such as race, gender or socioeconomic status. Prioritize transparency by making AI systems explainable. Provide clear documentation about data sources, algorithms, and decision processes. Users and stakeholders should be able to understand how AI systems make decisions.
Establish strong data and AI governance practices and safeguards to protect end user privacy and sensitive data. Clearly communicate data usage policies, obtain informed consent and comply with data protection regulations.
Integrate mechanisms for human oversight in critical decision-making processes. Define clear lines of accountability to ensure responsible parties are identified and can be held responsible for the outcomes of AI systems. Establish ongoing monitoring of AI systems to identify and address ethical concerns, biases or issues that may arise over time. Regularly audit AI models to assess compliance with ethical guidelines.
Foster collaboration with external organizations, research institutions, and open-source groups working on responsible AI. Stay informed about the latest developments in responsible AI practices and initiatives and contribute to industry-wide efforts.
1 Technology Vision 2022 (this link resides outside IBM.com), Accenture, 2022
Microsoft: principles and approach
Microsoft's approach, codified in the Microsoft Responsible AI Standard, is to develop AI responsibly and in ways that warrant people's trust. The Standard is organized around six principles:
- Fairness
- Reliability and safety
- Privacy and security
- Inclusiveness
- Transparency
- Accountability
Operationalizing Responsible AI in Practice
Mehrnoosh Sameki discusses approaches to responsible AI and demonstrates how open source and cloud integrated ML help data scientists and developers to understand and improve ML models better.
Mehrnoosh Sameki is a senior technical program manager and tech lead at Microsoft, responsible for leading the product efforts on operationalizing responsible AI in practice within the Open Source and Azure Machine Learning platform. She has co-founded Error Analysis, Fairlearn, and the Responsible-AI-Toolbox, and has been a contributor to the InterpretML offering.
About the conference
QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.
Sameki: My name is Mehrnoosh Sameki. I'm a senior program manager and technical lead of the Responsible AI tooling team at Azure Machine Learning, Microsoft. I'm joining you to talk about operationalizing Responsible AI in practice. First, I would like to start by debunking the idea that responsible AI is an afterthought, or responsible AI is a nice-to-have. The reality is, we are all on a mission to make responsible AI the new AI. If you are putting an AI there, and it's impacting people's lives in a variety of different ways, you have the responsibility to ensure the world that you have created is not causing harm to humans. Responsible AI is a must-have.
Machine Learning in Real Life
The reason why it matters a lot is, besides the very fact that in a traditional machine learning lifecycle, you have data, then you pass it to a learning phase where that learning algorithm takes out patterns from your data. Then that creates a model entity for you. Then you use techniques like statistical cross-validation, accuracy, to validate and evaluate that model and improve it, leading to a model that then spits out information such as approval versus rejection for loan scenarios. Despite the fact that this lifecycle matters, and you want to make sure that you're doing it as reliably as you could, there are lots of human personas in the loop that need to be informed in every single stage of this lifecycle. One persona, or many of you, ML professionals, or data scientists, they would like to know what is happening in their AI systems, because they would like to understand if their model is any good, whether they can improve their model, what features of their models should they use in order to make reliable decisions for humans? The other persona are business or product leaders. Those are people who would like to approve the model, should we put it out there? Is it going to put us on the first page of the news another day? They ask a lot of questions from data scientists regarding, is this model racist? Is this biased? Should I let it be deployed? Are these predictions matching some domain experts' insights that I've got from surgeons, doctors, financial experts, insurance experts?
The other persona is end-users, or solution providers. By that I mean either the banking person who works at a bank and is providing people with that end result of approved versus rejected on their loan, or a doctor who is looking at the AI results and is providing some diagnosis or insights to the end user or patient, in this case. Those are people who deal with the end user. Or they might be the end user themselves. They might ask, why did the model say this about me, or about my patient or my client? Can I trust these predictions? Can I make some actionable movements based on that or not? One persona that I'm not showing here, but is overseeing the whole process, are the regulators. We all have heard about the recent European regulations and GDPR's right to explanation, or California act. They're all adding lots of great lenses to the whole equation. There are risk officers, regulators who want to make sure that your AI is following the regulations as it should.
Microsoft's AI Principles
With all of these great personas in the loop, it is important to ensure that your AI is being developed and deployed responsibly. However, even if you are a systematic data scientist or machine learning developer who really cares about this area, truth be told, the path to deploying responsible and reliable machine learning is still unpaved. Often, I see people using lots of different fragmented tools, or a spaghetti of visualizations or visualization primitives, in order to evaluate their models responsibly. It's our team's mission to help you operationalize responsible AI in practice. Microsoft has six principles to inform your AI development and deployment: fairness, reliability and safety, privacy and security, and inclusiveness, underpinned by two more foundational ones, transparency and accountability. Our team specifically works on the items shown in blue, which are fairness, reliability and safety, inclusiveness, and transparency. The reason we work on them is that they share a theme: all of them are supposed to help you understand your model better, whether through the lens of fairness, or how it's making its predictions, or its reliability, safety, and errors, or whether it's inclusive to everyone. Hopefully, they help you build trust, improve the model, debug it further, and make actionable insights.
Azure Machine Learning - Responsible AI Tools
Let's go through that ecosystem. In order to guide you through this set of tools, I would like to first start by a framing. Whenever you are having a machine learning lifecycle, or even just data, you would like to go through this cycle. First, you would like to take your model and identify all the issues, aka, fairness issues, errors, that are happening inside that. Without identification stage, you don't know exactly what is going wrong. Next, another important step is to diagnose why that thing is going wrong. The diagnosis piece might look like that, now I understand that there are some issues or errors in my data. Now I diagnose that the imbalance in my data is causing it. The diagnosis stage is quite important, because that discovers the root cause of the issue. That's how you can take more efficient, targeted mitigations in order to improve your model. Naturally, then you move to the mitigation stage where, thanks to your identification and diagnosis skills, now you can mitigate those issues that are happening. One last step that I would like to highlight is take action, sometimes you would like to inform a customer or a patient or a financial loan applicant about, for instance, what can they do, so next time they get a better outcome. Or, you want to inform your business stakeholders as what can you give some of the clients in order to boost sales. Sometimes you want to take real-world actions, some of them are model driven, some of them are data driven.
Identification Phase
Let's start with identify and the set of open source tools and Azure ML integrated tools that we provide for you to identify your model issues. Those two tools are error analysis and fairness tools. First, starting with error analysis, the whole motivation behind us putting this tool out there is the fact that we see people often use one metric to talk about their model's goodness, like they say, my model is 73% accurate. While that is a great proxy into identifying the model goodness and model health, it often hides this important information, that error is not uniformly distributed in your data. There might be the case that there are some erroneous packets of data, like this packet of data that is only 42% accurate. Versus, in contrast, this packet of data is getting all of the right predictions. If you go with one number, you're losing this very important information that my model has some erroneous packets, and I need to investigate why that cohort is getting more errors. We released a toolkit called error analysis, which is helping you to validate different cohorts, understand and observe how the error has been distributed across your dataset, and basically see a heat map of your errors as well.
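The cohort idea the speaker describes can be illustrated in a few lines: compute accuracy per slice of the data and compare it with the single overall number. This toy sketch, with a hypothetical `region` cohort feature, is not the Error Analysis toolkit itself.

```python
def cohort_accuracy(records, by):
    """Slice (prediction, label, features) records into cohorts and report
    per-cohort accuracy, exposing pockets of error that the single
    overall number hides."""
    cohorts = {}
    for r in records:
        cohorts.setdefault(r[by], []).append(r)
    return {
        key: sum(r["pred"] == r["label"] for r in rs) / len(rs)
        for key, rs in cohorts.items()
    }

# Toy records: overall accuracy looks acceptable, but one cohort is much worse.
records = [
    {"pred": 1, "label": 1, "region": "north"},
    {"pred": 0, "label": 0, "region": "north"},
    {"pred": 1, "label": 1, "region": "north"},
    {"pred": 1, "label": 1, "region": "north"},
    {"pred": 0, "label": 1, "region": "south"},
    {"pred": 1, "label": 0, "region": "south"},
    {"pred": 1, "label": 1, "region": "south"},
    {"pred": 0, "label": 1, "region": "south"},
]
overall = sum(r["pred"] == r["label"] for r in records) / len(records)
print(overall)                                # 0.625 overall
print(cohort_accuracy(records, by="region"))  # north: 1.0, south: 0.25
```

The single 62.5% figure hides that the "south" cohort is almost always wrong, which is exactly the kind of erroneous pocket the identification stage is meant to surface.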
Next, we worked on another tool called Fairlearn, which is also open source; it helps you understand your model's fairness issues. It focuses on two different types of harm that AI often gives rise to. One is harm of quality of service, where AI provides different quality of service to different groups of people. The other is harm of allocation, where AI allocates information, opportunities, or resources differently across different groups of people. An example of harm of quality of service is a voice detection system that might not work as well for, say, females versus males or non-binary people. An example of harm of allocation is a loan allocation AI or a job screening AI that might be better at picking candidates among white men compared to other groups. The hope behind our tool is to ensure that you are looking at fairness metrics through the lens of group fairness: how different groups of people are treated. We provide a variety of fairness and performance metrics and rich visualizations, in order for you to observe fairness issues as they occur in your model.
Both of these tools support a variety of model formats: Python models using the scikit-learn predict convention, as well as scikit-learn, TensorFlow, PyTorch, and Keras models. They also support both classification and regression. An example of a company putting our fairness tool into production is Philips Healthcare, which put fairness into production in its ICU models. They wanted to make sure that the ICU models they have out there perform uniformly across patients with different ethnicities and gender identities. Another example is Ernst & Young, in a financial scenario, where they used this tool to understand how their loan allocation AI provides the opportunity of getting a loan across different genders and ethnicities. They were also able to use our mitigation techniques.
Diagnosis Phase
After the identification phase, now you know where the errors are occurring, and you know your fairness issues. You move on to the diagnosis piece. I cover two of the most important diagnosis capabilities, interpretability and perturbations and counterfactuals. One more to just like momentarily touch on is, we're also in the process of releasing a data exploration and data mitigation library. The diagnosis piece right now entails the more basic data explorer. I will show that to you in a demo. It also includes interpretability, that's the module we provide to you, which basically tells you what are the top key important factors impacting your model predictions. How your model is making its predictions. It covers both global explanation and local explanation. How overall the model is making its prediction, and how individual data points for them, how the model has made its predictions.
We do have different packages under Interpret ML capabilities that we have. It's a collection of black box interpretability techniques that can literally cover any model that you bring to us, no matter if it's Python, or Scikit, or TensorFlow, PyTorch, Keras. We also have a collection of glassbox models that are intrinsically interpretable models, if you have the flexibility of basically changing your model and training an interpretable model from scratch. An example of that is Scandinavian Airlines. They basically used our interpretability capabilities via Azure Machine Learning to build trust with their fraud detection model of their loyalty program. Of course, you can imagine that in such cases, you want to reduce and minimize and remove mistakes, because you don't want to tell a very loyal customer that they've done some fraudulent activity, or flag their activity by mistake. That is a very bad customer experience. They wanted to understand how their fraud detection model is making their predictions, and so they used interpretability capabilities to understand that.
Another important diagnosis piece is counterfactual and perturbations. You can do lots of freeform perturbations, do what-if analysis, change features of a data point, and see how the model predictions change for that. Also, you can look at counterfactuals and that is simply telling you what is the bare minimum changes to a data point's feature values that could lead into a different prediction. Say, Mehrnoosh's loan is getting rejected, what is the bare minimum change that I can apply to her features so that the AI predicts approved next time?
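A toy version of counterfactual search can make this concrete: greedily apply the single-feature nudge that most improves a scoring model until the decision flips, and report the changes. The scoring model, feature names, and step sizes below are invented for illustration; real counterfactual tools (such as DiCE) solve a constrained optimization instead of this greedy sketch.

```python
def loan_score(x):
    """Stand-in scoring model; weights are purely illustrative."""
    return 0.03 * x["income"] + 0.01 * x["credit_score"] - 0.2 * x["debts"]

def minimal_counterfactual(score, point, steps, threshold, max_iters=200):
    """Greedy counterfactual sketch: repeatedly apply the single-feature
    nudge that raises the score most, until score >= threshold, then
    return the feature changes relative to the original point."""
    candidate = dict(point)
    for _ in range(max_iters):
        if score(candidate) >= threshold:
            return {k: candidate[k] - point[k]
                    for k in point if candidate[k] != point[k]}
        best = max(steps,
                   key=lambda k: score({**candidate, k: candidate[k] + steps[k]}))
        candidate[best] += steps[best]
    return None  # no counterfactual found within the budget

applicant = {"income": 40, "credit_score": 600, "debts": 20}  # score 3.2: rejected
steps = {"income": 5, "credit_score": 20, "debts": -5}        # smallest realistic moves
changes = minimal_counterfactual(loan_score, applicant, steps, threshold=5.0)
print(changes)  # {'debts': -10}: paying down debt by 10 flips the decision
```

The answer is directly actionable for the applicant ("reduce your debts by 10"), which is why counterfactuals are so useful for the "why did the model say this about me?" persona.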
Mitigate, and Take Action Phase
Finally, we go to the mitigation stage, and also take action stage. We do cover a class of unfairness mitigation algorithms that could literally encompass any model. They have different flexibilities. Some of them are just post-processing methods and could adjust your model predictions in order to improve it. Some of them are more like reductions method, combination of pre-processing and in-processing. They can update your model objective function in order to retrain your model and not just minimize error, but also put control on a fairness criteria that you specify. We also do have pre-processing methods that will readjust your data in terms of better balancing it and better representing the underrepresented groups. Then, hopefully, the model that is trained on that augmented data is going to be a fairer model. Last, we realized that a lot of people are using our model, Responsible AI insights, for decision making in the real world. We all know models sometimes take on correlations rather than causation. We wanted to provide you with a tool that works on your data, just historic data, and uses a technique called double machine learning in order to understand whether there are any causal effects of a certain feature on the real-world phenomenon. Say, if I provide promotion to a customer, would that really increase the sales that that customer will generate for me? Causal inference is another capability we just released.
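One of the simplest post-processing mitigations can be sketched directly: choose a per-group decision threshold so that each group is selected at the same target rate. This is an illustrative sketch of the idea only, not Fairlearn's ThresholdOptimizer, and the scores and group labels are toy data.

```python
def group_thresholds(scores, groups, target_rate):
    """Post-processing sketch: pick a per-group score threshold so each
    group's selection rate matches the target, approximating demographic
    parity without retraining the underlying model."""
    out = {}
    for g in set(groups):
        g_scores = sorted((s for s, gg in zip(scores, groups) if gg == g),
                          reverse=True)
        k = max(1, round(target_rate * len(g_scores)))
        out[g] = g_scores[k - 1]  # threshold admits the top-k scored members
    return out

# Toy model scores; group "b" systematically scores lower than group "a".
scores = [0.9, 0.8, 0.6, 0.3, 0.7, 0.5, 0.4, 0.2]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
th = group_thresholds(scores, groups, target_rate=0.5)
selected = [s >= th[g] for s, g in zip(scores, groups)]
print(th)        # per-group thresholds: {'a': 0.8, 'b': 0.5}
print(selected)  # half of each group is now selected
```

A single global threshold would have favored group "a"; the per-group thresholds equalize selection rates. Whether this trade-off is appropriate depends on the fairness criterion the team has chosen, which is why such mitigations belong after the identification and diagnosis stages.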
Looking forward, one thing I want to mention is that, while I went through the identify, diagnose, and mitigate parts separately, we have brought every single tool I just presented under one roof, called the Responsible AI dashboard. The Responsible AI dashboard is a single pane of glass bringing together a variety of these tools: the same set of APIs, one customizable dashboard. You can do both model debugging and responsible decision making with it, depending on how you customize it and what you pass to it. Our next steps are to expand the portfolio of Responsible AI tools to non-tabular data and to enable Responsible AI reports for non-technical stakeholders; we have some exciting work on PDF reports you can share with your regulators, risk officers, and business stakeholders. We are working on enabling model monitoring at scoring time, to take all these capabilities beyond evaluation time and make sure that as the model sees unseen data, it can still detect fairness issues, reliability issues, and interpretability issues. We are also working on compliance infrastructure, because nowadays there are so many stakeholders involved in the development, deployment, testing, and approval of an AI system. We want to provide that whole ecosystem to you.
We believe in the potential of AI for improving and transforming our lives. We also know there is a need for tools that assist data scientists, developers, and decision makers in understanding and improving their models, to ensure AI is benefiting all of us. That's why we have created a variety of tools to help operationalize Responsible AI in practice. Data scientists tend to use these tools together in order to holistically evaluate their models. We are now introducing the Responsible AI dashboard, a single pane of glass bringing together a number of Responsible AI tools. With this dashboard, you can identify model errors, diagnose why those errors are happening, mitigate them, and then provide actionable insights to your stakeholders and customers. Let's see this in action.
First, I have here a machine learning model that can predict whether a house will sell for more than the median price or not, and provide the seller with some advice on how best to price it. Of course, I would like to avoid underestimating the actual price, as an inaccurate price could impact seller profits and the ability to access financing from a bank. I turn to the Responsible AI dashboard to look closely at this model. Here is the dashboard. First, I can do error analysis to find issues in my model. You can see it has automatically separated cohorts by error counts. I found that big old houses have a much higher error rate, almost 25%, compared with large new houses, which have an error rate of only 6%. This is an issue; let's investigate further. First, let me save these two cohorts as new and old houses, and go to the model statistics for further exploration. I can look at the accuracy, false positive rate, and false negative rate across these two cohorts. I can also observe the prediction probability distribution and see that older houses have a higher probability of getting predictions of less than the median. I can go further into the Data Explorer and explore the ground truth values behind those cohorts. Let me set that to look at the ground truth values, starting with my new houses cohort. As you can see here, most of the newer homes sell for a higher price than the median; it's easy for the model to predict that and get higher accuracy. If I switch to the older houses, you can see that I don't have enough data representing expensive old houses. One possible action for me is to collect more of this data and retrain the model.
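The cohort comparison in this walkthrough boils down to computing error rates per slice rather than one overall number. A minimal sketch of that idea, with invented cohort labels and records standing in for the dashboard's automatic cohort discovery:

```python
# Cohort error analysis sketch: compare per-cohort error rates against
# the overall error rate. Records and cohorts are invented examples.

def error_rate(records):
    wrong = sum(1 for r in records if r["predicted"] != r["actual"])
    return wrong / len(records)

houses = [
    {"cohort": "new", "predicted": "above", "actual": "above"},
    {"cohort": "new", "predicted": "above", "actual": "above"},
    {"cohort": "new", "predicted": "below", "actual": "above"},
    {"cohort": "old", "predicted": "below", "actual": "above"},
    {"cohort": "old", "predicted": "below", "actual": "below"},
    {"cohort": "old", "predicted": "below", "actual": "above"},
]

# Group records by cohort, then score each cohort separately.
by_cohort = {}
for record in houses:
    by_cohort.setdefault(record["cohort"], []).append(record)

rates = {cohort: error_rate(records) for cohort, records in by_cohort.items()}
overall = error_rate(houses)
```

The overall error rate (50% here) hides that the old-house cohort fails twice as often as the new-house cohort, which is exactly the kind of disparity the error analysis view surfaces.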
Let's now look at the model explanations and understand how the model has made its predictions. I can see that the overall finish quality, the above-ground living area, and the total basement square footage are the top three factors impacting my model's predictions. I can click on any of these, like overall finish quality, and see that a lower finish quality impacts the price prediction negatively. This is a great sanity check that the model is doing the right thing. I can also go to the individual feature importances, click on one or a handful of data points, and see how the model made predictions for them. Further, when I come to the what-if counterfactuals, I can see, for any of these houses, the minimum change I could apply so that the model predicts the opposite outcome. Take this particular house, which has a high probability of being predicted below the median: looking at its counterfactuals, only if the house had a higher overall quality, from 6 to 10, would the model predict that it sells for more than the median. To conclude, I learned that my model is making predictions based on factors that make sense to me as an expert, and that I need to augment my data in the expensive-old-house category, potentially bringing in more descriptive features that help the model learn what an expensive old house looks like.
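One common way to estimate the kind of global feature importance described above is permutation importance: shuffle one feature at a time and measure how much accuracy drops. This is a sketch of the concept, not the dashboard's actual explainer (which uses dedicated explanation techniques); the toy model and data are invented, and with only four rows we can average over every permutation exactly.

```python
# Permutation-importance sketch: a feature the model relies on shows a
# large accuracy drop when shuffled; an ignored feature shows none.
from itertools import permutations

def toy_model(x):
    # Predicts "above" when overall quality is high; ignores year_built.
    return "above" if x["quality"] >= 7 else "below"

data = [
    ({"quality": 9, "year_built": 2001}, "above"),
    ({"quality": 8, "year_built": 1950}, "above"),
    ({"quality": 4, "year_built": 2010}, "below"),
    ({"quality": 3, "year_built": 1930}, "below"),
]

def accuracy(model, rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

def permutation_importance(model, rows, feature):
    """Average accuracy drop over every possible reshuffle of one feature."""
    base = accuracy(model, rows)
    values = [x[feature] for x, _ in rows]
    drops = []
    for perm in permutations(values):
        shuffled = [({**x, feature: v}, y) for (x, y), v in zip(rows, perm)]
        drops.append(base - accuracy(model, shuffled))
    return sum(drops) / len(drops)

imp_quality = permutation_importance(toy_model, data, "quality")
imp_year = permutation_importance(toy_model, data, "year_built")
```

Shuffling quality costs the model accuracy on average, while shuffling year_built costs nothing, matching the sanity-check intuition in the demo: important features should actually move predictions.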
Now that we understand the model better, let's provide homeowners with insights as to what to improve in these houses to get a better asking price in the market. We only need some historic data from the housing market to do so. I now go to the causal inference capabilities of this dashboard. There are two different functionalities that can be quite helpful here. First, the aggregate causal effect shows how changing a particular factor, like garages, fireplaces, or overall condition, would impact the house price in this dataset on average. I can then go to the treatment policy to see the best future intervention, say for the screen porch. For instance, I can see that for some houses, if I want to invest in transforming the screen porch, I should shrink or remove it, while for other houses it recommends expanding it. Finally, there is also an individual causal effect capability that tells me how this works for a particular data point, a certain house. I can see how each factor would impact the actual market price of that house. I can even do causal what-if analysis: if I change the overall condition to a higher value, what boost would I see in this house's market price?
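The double machine learning technique mentioned earlier can be illustrated with a much-simplified partialling-out sketch: regress both the outcome and the treatment on the confounder, then relate the residuals to each other. Real implementations (EconML, for instance) use flexible models and cross-fitting; everything below, including the synthetic data whose true treatment effect is 2.0, is purely illustrative.

```python
# Partialling-out sketch of double machine learning with one confounder.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def residuals(xs, ys):
    a, b = fit_line(xs, ys)
    return [y - (a + b * x) for x, y in zip(xs, ys)]

# Synthetic data: confounder x drives both treatment t and outcome y,
# and the true causal effect of t on y is 2.0 by construction.
x = list(range(8))
s = [1, -1, -1, 1, 1, -1, -1, 1]     # treatment variation independent of x
t = [0.5 * xi + si for xi, si in zip(x, s)]
y = [2.0 * ti + 3.0 * xi for ti, xi in zip(t, x)]

# Partial the confounder out of both variables, then relate the residuals.
rt = residuals(x, t)
ry = residuals(x, y)
effect = sum(a * b for a, b in zip(rt, ry)) / sum(a * a for a in rt)
```

A naive regression of y on t alone would be biased by x; the residual-on-residual step recovers the true effect of 2.0 despite the confounding.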
We looked at how these tools help you identify and diagnose errors in a house price prediction model and make effective data-driven decisions. Imagine if this were a model predicting the cost of healthcare procedures, or a model detecting potential money laundering behavior; identifying, diagnosing, and making effective data-driven decisions would have even higher consequences for people's lives there. Learn more about the tool at aka.ms/responsibleaidashboard, and try it on Azure Machine Learning to boost trust in your AI-driven solutions.
Questions and Answers
Breviu: Ethics in AI is something I'm very passionate about. There's so much harm that can be done if it's not thought about. I think it's valuable to show the different tools and the different thought processes you have to go through to make sure you're building models that not only predict well on accuracy, but also are not going to cause harm.
Sameki: That is absolutely true. The technology is not going to slow down. We're just getting started with AI, expanding its capabilities and including it in more aspects of our lives, from financial scenarios, to healthcare scenarios, to even retail and our shopping experience. It's even more important to have technology that accompanies that fast growth of AI and takes care of all those harms: understanding them and providing solutions or mitigations for them. I'm quite excited to build on these tools and help different companies operationalize this super complicated buzzword in practice, really.
Breviu: That's true. So many companies might want to do it, but they don't really know how. I think it's cool that you showed some of the different tools that are out there; there was that short link you provided for going to look at some of them. You also mentioned some new tooling that is coming out, some data tooling.
Sameki: There are a couple of capabilities. One is, we fully realized that the model story is incomplete without the right data tools, the data story. Data is always a huge part, probably the most important part, of a machine learning lifecycle. We are accompanying this with a more sophisticated data exploration and data mitigation library, which is going to land under the same Responsible AI toolbox. It will help you understand your data balance, and it provides lots of APIs that can rebalance and resample parts of your data that are underrepresented. Besides this, at Microsoft Build we're going to release a variety of capabilities of this dashboard integrated inside Azure Machine Learning. If your team is on Azure Machine Learning, you will get easy access not just to this Responsible AI dashboard and its platform, but also to a scorecard, a PDF report summarizing the insights of this dashboard for non-technical stakeholders. It was quite important for us to work on that scorecard, because there are tons of stakeholders involved in an end-to-end ML lifecycle, and many of them are not super data science savvy or super technical. They might be surgeons, financial experts, or business managers. It was quite important for us to create that scorecard to bridge the gap between the super technical and non-technical stakeholders in an ML lifecycle.
Breviu: That's a really good point. You have the people that understand the data and how to build the model, but they might not understand the business application side of it. You have all these different people that need to be able to communicate and understand how their model is being understood. It's cool that these tools can do that.
You talked about imbalanced data as well. What are some of the main contributing factors to ethical issues within models?
Sameki: Definitely, imbalanced data is one of them, and that can mean many different things. You might be completely underrepresenting a certain group in your data, or you might be representing that group, but in the training data that group is associated with unfavorable outcomes. For instance, you have a certain ethnicity in your loan allocation dataset; however, all of the data points you have from that ethnicity happen to have their loans rejected. The model then creates an association between belonging to that ethnicity and rejection. So it's either not representing a certain group at all, or representing them without checking whether the outcomes affiliated with them are represented fairly.
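Both failure modes described above (a group that is underrepresented, and a group represented only with unfavorable outcomes) show up in a simple per-group tally of counts and favorable-outcome rates. A tiny sketch with an invented loan dataset:

```python
# Data check sketch: for each group, count examples AND look at how the
# favorable outcome is distributed within the group. Data is invented.

def outcome_rates(rows, group_key, label_key, favorable):
    """Return {group: (count, favorable_rate)} for a labeled dataset."""
    stats = {}
    for row in rows:
        g = row[group_key]
        count, fav = stats.get(g, (0, 0))
        stats[g] = (count + 1, fav + (row[label_key] == favorable))
    return {g: (count, fav / count) for g, (count, fav) in stats.items()}

loans = [
    {"group": "A", "label": "approved"},
    {"group": "A", "label": "approved"},
    {"group": "A", "label": "rejected"},
    {"group": "B", "label": "rejected"},
    {"group": "B", "label": "rejected"},
]

rates = outcome_rates(loans, "group", "label", favorable="approved")
```

Group B here is both underrepresented and associated only with rejections, exactly the pattern a model can wrongly learn as "group B implies reject".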
There are some other interesting things as well, after the data, which is probably the most important issue. There is the issue of problem definition. Sometimes you're rushing to train a machine learning model on a problem, and you end up using the wrong proxy as a predictor for something else. To give you a tangible example: imagine you have a model that you're training to assign risk scores to neighborhoods, like security scores. Then you ask, how is that model trained? Imagine it is trained on data coming from police arrest records. Using arrest records as a proxy for the security score of a neighborhood is a very wrong assumption to make, because we all know that policing practices, at least in the U.S., are quite unfair. It might be the case that more police officers are deployed to certain areas where certain ethnicities reside, and far fewer to other areas where other ethnicities reside. Just because there are more police officers there, there might be more reporting of even minor misdemeanors, or things an officer simply didn't like, and that bumps up the number of arrest records. Using that purely as a proxy for the safety score of a neighborhood has the dangerous outcome of associating the race of that neighborhood's residents with its security.
Breviu: When those kinds of questions come up, I think about whether we're building a model that even should be built. There are two kinds of questions when it comes to ethics in AI: is my model ethical, and then the opposite, is it ethical to build my model? When you're talking about arrest records and using data like that, I start worrying about what that model is actually going to do. What will they use that model for? Is there even a fair way to build a model on that type of data?
Sameki: I absolutely agree. A while ago, there was this project from Stanford called Gaydar. It trained machine learning models on top of a bunch of photos captured from the internet and from different public datasets, with the goal of predicting whether a person belonged to the LGBTQ community or not. When I saw that, I thought, who is supposed to use this, and for what reason? It started getting a lot of attention in the media: we know that maybe AI could do things like that, questionable as they are, but what is the point of this model? Who is going to use it? How are we going to guarantee that this model is not going to be used to perpetuate biases against the LGBTQ community, who are historically marginalized? There are tons of deep questions we have to ask about whether machine learning is an appropriate thing to do for a problem, and what type of consequences it could have. If we do have a legitimate case for AI, where it could help make processes more efficient or expedite super lengthy processes, then we have to accompany it with enough checks and balances, scorecards, and terms of service for how people use that model, and make sure we have a means of hearing people's feedback in case they observe the model being misused in bad scenarios.
Breviu: That's a really good example of one that just shouldn't have happened. It always tends to be the marginalized or oppressed parts of society that are hurt the most, and oftentimes they aren't the ones involved in building it, which is one of the reasons why having a diverse engineering team matters for these types of models. Because I guarantee you, if you had somebody who was part of that community building that model, they probably would have said, this is really offensive.
Sameki: They would catch it. I always knew about companies' focus on the concept of diversity and inclusion before I joined this Responsible AI effort, but now I understand it from a different point of view: it matters that we have representation in the room from people who are impacted by that AI, to be able to catch these harms. This is an area where a growth mindset is the most important thing. I am quite sure that even if we are systematic engineers who truly care about this area and put in all of these checks and balances, things still happen, because this is a very sociotechnical area where we cannot fully claim that we are debiasing a model. This is a concept that has been studied by philosophers and social scientists for centuries; we can't suddenly come out of the tech world and say we've found a solution for it. But progress can be made: figuring out these harms, catching them early on, diagnosing why they happen, mitigating them based on your knowledge, documenting what you could not resolve, and putting diverse groups of people in the decision making to catch some of those mistakes. Then, have a good feedback loop where you capture thoughts from your audience and are able to act fast, along with a very solid monitoring lifecycle.
Breviu: That's actually a good point, because it's not only the ideation of it: should I do this? Ok, I should. Now I'm building it; now make sure it's ethical. Then there's data drift and models getting stale, and needing to monitor what's happening in [inaudible 00:35:32], to make sure that it continues to predict well.
Any of these AI tools that you've been showing, are they able to be used in a monitoring format as well?
Sameki: Yes, most of these tools can be. For instance, interpretability: we support scoring-time interpretability, which basically allows you to call the deployed model, get the model predictions, then call the deployed explainer and get the model explanations for that prediction at runtime. The fairness and error analysis pieces are a little trickier. For fairness, you can specify the favorable outcome and keep monitoring the distribution of that favorable outcome across different ethnicities, genders, or whatever sensitive groups mean to you. For the rest of the fairness metrics, or error analysis, you might need to periodically take a piece of your new data, use crowdsourcing or human labelers to label it, and then pass it in. The general answer is yes, with some caveats. We're also working on a very strong monitoring story that works around these caveats and helps you monitor during runtime.
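The scoring-time idea described here, tracking the favorable-outcome distribution across sensitive groups as predictions stream in, can be sketched with a small stateful monitor. The group names, alert budget, and prediction stream below are all illustrative; a production system would add windowing, persistence, and statistically sound thresholds.

```python
# Scoring-time fairness monitoring sketch: track the favorable-outcome
# rate per group and alert when the gap (a demographic-parity-style
# metric) exceeds a budget. All values here are illustrative.

class ParityMonitor:
    def __init__(self, max_gap=0.2):
        self.max_gap = max_gap
        self.counts = {}          # group -> (n_predictions, n_favorable)

    def record(self, group, favorable):
        n, f = self.counts.get(group, (0, 0))
        self.counts[group] = (n + 1, f + bool(favorable))

    def gap(self):
        """Largest difference in favorable-outcome rate between groups."""
        rates = [f / n for n, f in self.counts.values()]
        return max(rates) - min(rates)

    def alert(self):
        return len(self.counts) > 1 and self.gap() > self.max_gap

monitor = ParityMonitor(max_gap=0.2)
stream = [("A", True), ("A", True), ("A", False),
          ("B", False), ("B", False), ("B", True)]
for group, favorable in stream:
    monitor.record(group, favorable)
```

After this stream, group A's favorable rate is 2/3 versus 1/3 for group B, a gap over the 0.2 budget, so the monitor raises an alert.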
Breviu: Another example I think of, one that makes me uncomfortable and actually happens a lot, is machine learning models as part of the interview process. There are already so many microaggressions and unconscious biases in interviewing, and I've read so many stories about how quickly these models become biased, even just on resumes. How do you feel about that particular use case? Do you think these tools can work on that type of problem? Could we solve it enough that it would be ethical to use in the interviewing process?
Sameki: I have seen both. Some external companies are using AI in candidate screening and have been interested in using the Responsible AI tools. LinkedIn is now also part of the Microsoft family, and I know LinkedIn is very careful about how these models are trained and tested. I actually think these models could be good initial proxies for surfacing promising candidates. However, if you want to trust the top-ranked candidates, it's super important to understand how the model picked them, and so to look at the model explainability, because often there have been cases of spurious associations.
There are two examples I can give you. I remember a public case study from LinkedIn: they had trained a model for job recommendations, where you go to LinkedIn and it says, apply for this and this. They realized early on that one of the ways the LinkedIn algorithm matched profiles with job opportunities was whether the person provided enough description about what they do and what they are passionate about: you have a bio section, and your current position, which you can add text to. A follow-up study by LinkedIn found that women tend to share fewer details there; in a way, women tend to market themselves less aggressively than men. That's why men were getting better-quality recommendations and a lot more matches compared with women and non-binary members. That was a great wake-up call for LinkedIn: ok, this algorithm is doing this matching, and we have to change it so as not to put so much emphasis on that extra commentary. First of all, we should maybe give recommendations to people who have not filled in those sections, like signals saying your profile is this much complete, go and add more context.
Also, we have to revisit our algorithms to look at the bare-minimum information, like the latest position and experience. Even then, women go on maternity leave and family care leave all the time. I still feel that when companies receive so many candidates and resumes, there is a role AI could play in bringing some candidates up. However, before deploying it in production, we have to look at the examples. We should have a small team of diverse stakeholders in the loop to take those predictions and look at them from the point of view of diversity and inclusion and of the explainability of the AI, and intervene with some human rules to make sure it's not unfair to underrepresented candidates.
Breviu: That speaks to the interesting thing you said at the beginning, how errors are not evenly distributed throughout the data. This is an example where your model might get really great accuracy overall, but when you look beyond the holistic view, you realize that for women and non-binary members it had a very high error rate. That's a really good example of the point you made at the beginning, which I found really interesting. Many times when we're building these models, we look at our overall accuracy and our validation and loss scores. Those look at the model holistically, not necessarily on an individual or cohort basis.
Sameki: It's very interesting, because many people use platforms like Kaggle to learn about applied machine learning. Even on those platforms, we often see leaderboards where one factor picks the winner: accuracy of the model, area under the curve, whatever it might be. That implicitly gives the impression that there are a couple of proxies, and if they look good, great, go ahead and deploy. That's the mindset we would love to change in the market through presentations like this. It's great to look at model goodness, accuracy, false positive rate, and all those metrics we're familiar with for different types of problems. However, they're not sufficient to tell you the nuances of how that model truly impacts underrepresented groups, and they're not going to show you blind spots. It's not even always about fairness. Imagine you realize your model is 89% or 95% accurate, but those 5% of errors happen every single time your autonomous car AI encounters weather that is foggy, dark, and rainy, and a pedestrian with a darker skin tone wearing dark clothes; 99% of the time that pedestrian is missed. That's a huge safety and reliability issue in your model, a huge blind spot that is potentially killing people. If you go with one score for the goodness of the model, you miss the important information that your model has these blind spots.
Breviu: Your point about Kaggle shows how ethics was an afterthought in a lot of this; that's why those popular platforms don't necessarily have these tools built in, as Azure Machine Learning does. As people come to understand data privacy better, something we as data scientists have always understood the importance of, it's becoming more mainstream. That, plus understanding ethics more, really will change how people build and think about building models. AI is going to keep moving forward exponentially, in my opinion, and it needs to move forward in an ethical, fully thought-out way.
Sameki: We built all of these tools in open source first, to help everyone explore them, augment them with us, build on them, and bring their own capabilities and components into that Responsible AI dashboard. If you're interested, check out our open source offering and send us a GitHub issue with your requests. We are quite active on GitHub, and we'd love to hear your thoughts.
Recorded at:
Mar 31, 2023
Mehrnoosh Sameki
Big Idea: Artificial Intelligence Responsible AI
In collaboration with guest editor Elizabeth Renieris, MIT Sloan Management Review
The responsible AI initiative looks at how organizations define and approach responsible AI practices, policies, and standards. Drawing on global executive surveys and smaller, curated expert panels, the program gathers perspectives from diverse sectors and geographies, with the aim of delivering actionable insights on this nascent yet important focus area for leaders across industries.
Training Materials
Did you know you can earn a Facilitator Badge from Credly to put on your LinkedIn profile, showing that you completed the Azure Responsible AI Workshop training? Once you do, you can run events to help others skill up and earn a Course Completion Badge for their professional profiles. Here are some resources to help.
Host a Workshop!
Want to host this Responsible AI Dashboard workshop with your community? Here are three things you can do to get ready:
- Share Your Event With Us - We can add it to our calendar of events.
- Use the Discussion Board - Give the community a place to ask questions and share insights.
- Copy & Customize Slides - Change the title slide to show your profile, then adapt the slides.
Customize and Present
If you are a trained Facilitator, you can now 🔻 Download The Training Powerpoint Deck and customize it to suit your audience and delivery style. Here are a few changes we suggest you make to start with.
- Update the Title Slide - Replace the speaker name and image with yours.
- Add Links to Your Resources - Share any articles, repos, or links you found useful.
- Customize the Use Case - Connect examples for responsible AI to stories in your region.
Share With Audience
The link below goes to a hosted version of the slides on SpeakerDeck that matches the video walkthrough that you see in the sidebar. This can be useful for the audience to review if they are working on the labs in a self-guided manner during the event:
- Hosted Slides - click for a downloadable PDF.
- Video Walkthrough - click for timestamped video.
Accelerating Responsible AI Adoption
Harnessing the power of AI comes with great responsibility
Emerging regulations for AI systems, such as the EU AI Act and Canada's Artificial Intelligence and Data Act, contemplate financial penalties of up to 6% of revenue and even criminal liability for noncompliant systems. The New York City law on automated employment decision tools carries a penalty of up to $1,500 per violation, per user, per day. Organizations not thinking about responsible AI are accruing technical debt and increasing both the risk of doing business and the risk of causing irreversible harm.
Independent assessments give you a much-needed responsible AI benchmark
The RAI Institute’s independent and accredited conformity assessments provide assurance that AI systems are aligned with existing and emerging internal policies, regulations, laws, best practices and standards for the responsible use of technology. Available as self-assessments, independently delivered assessments, or as a certification delivered by accredited auditors, our assessments provide a much-needed layer of trust and assurance for all stakeholders.
Working together for AI we can trust
The RAI Institute is a member-driven non-profit organization focused on supporting leading organizations and AI practitioners. By becoming a member, organizations have the unique opportunity to demonstrate their leadership, to enhance their responsible AI practices with proven tools, processes, and policies, and to join an engaged community defining the future of AI at the intersection of industry, civil society, academia and government.
Featured News from RAI Institute
Discover more about RAI Institute’s work, stay connected to what’s happening in the responsible AI ecosystem, and read articles written by responsible AI experts. Don’t forget to sign up to our newsletter!
Nicole McCaffrey
Managing the Risks of Generative AI
Responsible AI Institute
Case Study: AltaML
Responsible AI Institute Announces New Members, Launches Responsible AI Hub to Support AI Ecosystem
Michael Chapman
Insights from the “GenAI in Healthcare” Series: Navigating Responsibility and Opportunity
Amanda Lawson
Towards Responsible AI in Employment: Insights from Our Employment Working Group
Leaders in Responsible AI: A Member’s Story
Frequently Asked Questions
If you’re looking for answers to questions that others have already asked, you’re in the right place! If you don’t find the explanation you seek, feel free to communicate directly with us using the contact form at the bottom of this page.
More From Forbes
Six Essential Elements of a Responsible AI Model
VP of Data & AI at ECS; previous roles include co-founder of a data analytics startup, VP of AI at Booz Allen, and Global Analytics Lead at Accenture.
New ethical and moral questions continue to emerge as we expand how we use artificial intelligence in business and government. This is undoubtedly a good thing. Developing new technologies without incorporating ethics, morals or values would be careless at best, catastrophic at worst.
This is also a gray area. For years, I’ve used “ethical AI” as a catchall phrase for the standards and practices that principled organizations should build into their data science programs. But what exactly is ethical? What is moral? According to whom?
Ethics are principles of right and wrong, usually recognized by certain institutions, that shape individuals’ behavior. Morals are shared social or cultural beliefs that dictate what is right or wrong for individuals or groups. You don’t need to be moral to be ethical, and vice versa, though the two terms are often used interchangeably.
This quandary motivates a needed shift in framework, one that focuses on “responsible AI” to better capture these nuanced and evolving ideas.
What Is Responsible AI?
Responsible AI is composed of autonomous processes and systems that explicitly design, develop, deploy and manage cognitive methods with standards and protocols for ethics, efficacy and trustworthiness. Responsible AI can’t be an afterthought or a pretense. It has to be built into every aspect of how you develop and deploy your AI, including in your standards for:
• Data
• Algorithms
• Technology
• Human Computer Interaction (HCI)
• Operations
• Ethics, morals and values
It’s easy to announce your values to the world, but it's much harder to take the actions, requiring daily operational discipline, needed to live those values. A responsible AI framework is rooted in both big ideas, like what is ethical or moral, and everyday decisions, like how you treat your data and develop your algorithms.
Six Key Elements Of A Responsible AI Framework
I’m a firm believer in the maxim, often credited to Albert Einstein: “Everything should be made as simple as it can be, but not simpler.” This has been a guiding principle as I’ve been studying different AI models and developing a universal model for reference across industries and academia.
Within the proposed framework, responsible AI must be all of the following:
1. Accountable: Algorithms, attributes and correlations are open to inspection.
2. Impartial: Internal and external checks enable equitable application across all participants.
3. Resilient: Monitored and reinforced learning protocols with humans produce consistent and reliable outputs.
4. Transparent: Users have a direct line of sight to how data, output and decisions are used and rendered.
5. Secure: AI is protected from potential risks (including cyber risks) that may cause physical and digital harm.
6. Governed: Organization and policies clearly determine who is responsible for data, output and decisions.
Imagine the framework as a pie chart cut into six slices. You’re not aiming for perfection overnight; instead, you scale from the center toward the edges, gradually filling in more of each slice. You can ultimately right-size the capability of each wedge according to need and resources. For example, your transparency might only be at 15% now, but after a year of concentrated effort, it could go up to 45%, with a goal state of 80%.
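The six-slice maturity model above can be sketched in code. This is a purely illustrative toy, not part of the author's framework: the dimension names come from the list above, while the scoring scheme, class, and threshold logic are hypothetical.

```python
# Illustrative sketch of the six-slice maturity model described above.
# Dimension names come from the framework; the scoring scheme is hypothetical.

from dataclasses import dataclass, field

DIMENSIONS = ["accountable", "impartial", "resilient",
              "transparent", "secure", "governed"]

@dataclass
class RaiMaturity:
    # Percent filled in for each slice, 0-100.
    current: dict = field(default_factory=lambda: {d: 0 for d in DIMENSIONS})
    goal: dict = field(default_factory=lambda: {d: 80 for d in DIMENSIONS})

    def gap(self, dimension: str) -> int:
        """Remaining progress needed to reach the goal state for one slice."""
        return max(0, self.goal[dimension] - self.current[dimension])

    def weakest(self) -> str:
        """The slice with the largest remaining gap -- a candidate focus area."""
        return max(DIMENSIONS, key=self.gap)

m = RaiMaturity()
m.current["transparent"] = 15   # the article's example starting point
m.goal["transparent"] = 80
print(m.gap("transparent"))     # 65 points of transparency work remaining
```

Tracking current and goal percentages per slice makes the "right-size each wedge" idea concrete: the largest gap, not the lowest absolute score, identifies where concentrated effort pays off first.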
The Department of Defense’s framework has five components, the White House has nine and the intelligence community has 10. Various software, technology and other solutions providers have frameworks that range from three to 14. I recommend starting in the middle with this consolidated and focused list of six and subsequently fine-tuning it to the needs of your business. Always keep the model as simple as possible. If you want to expand it, examine your reasons first. Does the component you want to add not fit in any existing category? Are you tempted to grow your list due to some bias? For example, the intelligence community broke “goals and risks” and “legal and policy” into two separate items, whereas I think they could be combined in one governance category.
If the size, mission and application of AI warrants more oversight, I advise considering an additional step of establishing an AI ethics board. This isn’t necessary until you are ready to make a full investment and formalize a board to review what features characterize a bespoke responsible AI framework for your organization. Otherwise, it’s best to keep your responsible AI focused on the distilled and resilient six-part framework shared above. If you are considering creating an ethics board, ask what I call “salty questions” to take an honest look at your motivations and next steps:
• Is an AI ethics board appropriate or necessary?
• What should be our core ethical considerations?
• What kind of strategy do we need?
• How could we assess risk?
• Are there particular areas where we will need board oversight?
• How could we determine if the use of AI will result in discriminatory outcomes?
• How could we assess bias?
• Should we require our AI systems and algorithms (and those of our partners) to be open to inspection? How will we communicate resulting decisions?
• Who will be accountable for unintended outcomes of AI?
• Who will be responsible for making things right?
Responsible AI is the path forward as we counterbalance risk, earn trust and overcome bias while taking advantage of AI’s unlimited potential. Future AI, both the humans and the systems involved, must have strong and growing measures of accountability, impartiality, resiliency, transparency, security and governance.
AI ethics & governance
Design and deploy Responsible AI solutions that are ethical, transparent, and trustworthy.
Benefits of Responsible AI
The art of AI maturity: Advancing from practice to performance
Responsible AI: Scale AI with confidence
AI brings unprecedented opportunities to businesses, but also incredible responsibility. Its direct impact on people’s lives has raised considerable questions around AI ethics, data governance, trust and legality. In fact, Accenture’s 2022 Tech Vision research found that only 35% of global consumers trust how AI is being implemented by organizations. And 77% think organizations must be held accountable for their misuse of AI.
The pressure is on. As organizations start scaling up their use of AI to capture business benefits, they need to be mindful of new and pending regulation and the steps they must take to make sure their organizations are compliant . That’s where Responsible AI comes in.
So, what is Responsible AI?
Responsible AI is the practice of designing, developing, and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society—allowing companies to engender trust and scale AI with confidence.
From AI compliance to competitive advantage
Explore organizations' attitudes towards AI regulation and their readiness to embrace it.
With Responsible AI, you can shape key objectives and establish your governance strategy, creating systems that enable AI and your business to flourish.
Minimize unintended bias
Build responsibility into your AI to ensure that the algorithms – and underlying data – are as unbiased and representative as possible.
Ensure AI transparency
To build trust among employees and customers, develop explainable AI that is transparent across processes and functions.
Create opportunities for employees
Empower individuals in your business to raise doubts or concerns with AI systems and effectively govern technology, without stifling innovation.
Protect the privacy and security of data
Leverage a privacy and security-first approach to ensure personal and/or sensitive data is never used unethically.
Benefit clients and markets
By creating an ethical underpinning for AI, you can mitigate risk and establish systems that benefit your shareholders, employees and society at large.
Responsible AI in HR
Responsible AI practices can be applied to any industry or function. Take Human Resources (HR) as an example. When done correctly, AI systems can allow organizations to make more ethical, effective and efficient talent decisions by eliminating potential sources of bias. Explore more in our interactive report.
Enabling trustworthy AI
Principles and governance.
Define and articulate a Responsible AI mission and principles, while establishing a transparent, governance structure across the organization that builds confidence and trust in AI technologies.
Risk, policy and control
Strengthen compliance with current laws and regulations while monitoring future ones, develop policies to mitigate risk and operationalize those policies through a risk management framework with regular reporting and monitoring.
Technology and enablers
Develop tools and techniques to support principles such as fairness, explainability, robustness, traceability and privacy, and build them into the AI systems and platforms that are used.
Culture and training
Empower leadership to elevate Responsible AI as a critical business imperative and require training to provide all employees with a clear understanding of Responsible AI principles and criteria for success.
Identify AI bias before you scale
The Algorithmic Assessment is a technical evaluation that helps identify and address potential risks and unintended consequences of AI systems across your business, to engender trust and build supportive systems around AI decision making.
Use cases are first prioritized to ensure you are evaluating and remediating those that have the highest risk and impact.
Once priorities are defined, they are evaluated through our Algorithmic Assessment, involving a series of qualitative and quantitative checks to support various stages of AI development. The assessment consists of four key steps:
- Set goals around your fairness objectives for the system, considering different end users.
- Measure & discover disparities in potential outcomes and sources of bias across various users or groups.
- Mitigate any unintended consequences using proposed remediation strategies.
- Monitor & control systems with processes that flag and resolve future disparities as the AI system evolves.
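The "measure & discover" step above can be made concrete with a generic fairness check: comparing positive-outcome rates across user groups. This is a minimal sketch of a standard demographic-parity calculation, not Accenture's actual Algorithmic Assessment tooling; the function names and toy data are illustrative.

```python
# A minimal sketch of the "measure & discover" step: comparing positive-outcome
# rates across user groups. This is a generic demographic-parity check; the
# names and data below are hypothetical, for illustration only.

from collections import defaultdict

def selection_rates(outcomes, groups):
    """Fraction of positive outcomes (1) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        positives[g] += y
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(outcomes, groups):
    """Difference between the highest and lowest group selection rates.
    A gap near 0 suggests similar treatment across groups; a large gap
    flags a disparity to investigate in the mitigate step."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Toy hiring-style data: 1 = advanced to interview.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(parity_gap(outcomes, groups))  # 0.75 - 0.25 = 0.5
```

In practice, a check like this would be one of many quantitative probes in the assessment, rerun in the "monitor & control" step so that disparities emerging as the system evolves are flagged rather than silently accumulated.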
What we think
A new era of generative AI for everyone
The technology underpinning ChatGPT will transform work and reinvent business.
Cloud data: A new dawn for dormant data
As the data landscape grows more complex, companies can unleash its intrinsic value by first building a modern data platform on cloud.
Getting AI results by "going pro"
Responsible AI: From principles to practice
Building trustworthy systems
The EU’s AI Act: An initial assessment
Case studies
Creating a sense of belonging
A global retailer and Accenture co-created a multiyear inclusion and diversity strategy to facilitate a greater sense of belonging for...
Evolving financial services
The Monetary Authority of Singapore and Accenture established the Veritas industry consortium to provide groundbreaking...
Related capabilities
No two clients are alike. Whether it's using a proven turnkey solution or a custom program, we'll bring our deep industry expertise to solve your unique needs, backed by a team spanning more than 125 countries. Some of our offerings include:
Artificial Intelligence
Data-led transformation
Solutions.AI
Frequently Asked Questions
What is ethical AI?
Responsible AI enables the design, development and deployment of ethical AI systems and solutions. Ethical AI acts as intended, fosters moral values and enables human accountability and understanding. Organizations may expand or customize their ethical AI requirements, but fundamental criteria include soundness, fairness, transparency, accountability, robustness, privacy and sustainability.
What are concerns involving AI ethics?
AI—if built without the right algorithmic considerations, if trained on data that has inherent bias in it, or if left ungoverned—has the potential to perpetuate unintended consequences and not perform the task it was designed and intended to perform. All of which puts customer privacy and safety at risk, and weakens trust in the technology (and the company using it) in the process. Any company that has an intention of scaling AI needs to think about the ethical implications of using AI to make decisions that will impact not just the business, but its employees and customers.
What are the key principles of responsible AI?
The key principles of Responsible AI are:
- Soundness: Comprehend context as well as uphold data quality and model performance
- Fairness: Identify and remove discrimination and support diversity and inclusion
- Transparency: Provide explainability, understandability and traceability
- Accountability: Manage oversight, redress and auditability
- Robustness: Ensure security and resilience of systems against breaches or tampering, as well as readiness of a response plan
- Privacy: Safeguard personally identifiable information, data ethics and human rights, and comply with data-owner consents
- Sustainability: Consider human-centred ethics as well as societal and environmental well-being
What are steps to ensure AI is ethical?
Organizations should use the four pillars of Responsible AI to apply AI ethically and responsibly:
- Principles and governance : define and articulate a Responsible AI mission and principles and establish a cross-organization governance structure that builds confidence in AI technologies.
- Risk, policy and control : strengthen compliance with current laws and regulations, while monitoring for developments; develop policies to mitigate risk that you operationalize through a risk management framework.
- Technology and enablers : develop tools and techniques to support ethical AI principles and build them into AI systems and platforms.
- Culture and training : empower leadership to elevate Responsible AI as a critical business imperative; and require training to give all employees a clear understanding of ethical AI principles and success criteria.
Implementing Responsible AI Leadership Presentation
Author(s): Neal Rosenblatt
This presentation tool uses sample business capabilities from the public health and healthcare business capability maps to provide examples of candidate use cases for AI applications. With customization, the final leadership presentation should highlight the value-based initiatives driving AI applications, the benefits and risks involved, how the proposed AI use cases align to the organization’s strategy and goals, the success criteria for the proofs of concept, and the project roadmap.
Related Content
NeurIPS Invited Talk
The Many Faces of Responsible AI
Hall E (Level 1)
Conventional machine learning paradigms often rely on binary distinctions between positive and negative examples, disregarding the nuanced subjectivity that permeates real-world tasks and content. This simplistic dichotomy has served us well so far, but because it obscures the inherent diversity in human perspectives and opinions, as well as the inherent ambiguity of content and tasks, it poses limitations on model performance aligned with real-world expectations. This becomes even more critical when we study the impact and potential multifaceted risks associated with the adoption of emerging generative AI capabilities across different cultures and geographies. To address this, we argue that to achieve robust and responsible AI systems we need to shift our focus away from a single point of truth and weave in a diversity of perspectives in the data used by AI systems to ensure the trust, safety and reliability of model outputs.
In this talk, I present a number of data-centric use cases that illustrate the inherent ambiguity of content and natural diversity of human perspectives that cause unavoidable disagreement that needs to be treated as signal and not noise. This leads to a call for action to establish culturally-aware and society-centered research on impacts of data quality and data diversity for the purposes of training and evaluating ML models and fostering responsible AI deployment in diverse sociocultural contexts.
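One simple way to treat disagreement as signal, in the spirit of the talk, is to score each item by the entropy of its annotators' labels: high entropy marks genuinely ambiguous content that a single "ground truth" label would hide. The data and threshold below are illustrative assumptions, not material from the talk itself.

```python
# Sketch of treating annotator disagreement as signal rather than noise:
# per-item label entropy over multiple raters' labels. The labels and the
# 0.9-bit threshold here are hypothetical, for illustration only.

import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (bits) of the label distribution for one item."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Three items, each rated by five annotators.
ratings = {
    "item_1": ["safe", "safe", "safe", "safe", "safe"],      # full agreement
    "item_2": ["safe", "unsafe", "safe", "unsafe", "safe"],  # contested
    "item_3": ["safe", "unsafe", "unsafe", "unsafe", "unsafe"],
}

for item, labels in ratings.items():
    flag = "ambiguous" if label_entropy(labels) > 0.9 else "clear"
    print(item, round(label_entropy(labels), 3), flag)
```

Items flagged as ambiguous can then be modeled with soft labels or per-subgroup labels rather than forced into a majority vote, which is one way to weave the diversity of perspectives the talk calls for into training and evaluation data.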
RAI Summit 2023
April 26 – 28, 2023.
Summit of Responsible AI
A global, industry/academic bilateral summit focused on Responsible AI approaches that can lead to principled, practical, and scalable solutions, thereby enabling AI to do good and prevent harm.
Preliminary Tracks
Global perspective
– State of play of research and industry collaborations – Outlook and opportunities
Application areas
– In HR and future of workforce skilling – In Health – In media, information delivery, creativity and new content creation – In automation, industrial, and logistics
– LLM impact on AI adoption and democratization of solutions – Safety and trustworthiness of early solutions – Playbook for implementation
Collaboration pathways
– Needed input from policy and governance – Standards, open knowledge, and collaboration opportunities – Common safety and risk databases
Our Speakers
AI scientists, engineers and product leaders working on RAI, as well as experts in legal, policy and ethical aspects of RAI
Amir Banifatemi
AI Commons/Xprize Co-Chair
Eric Horvitz
Microsoft Chief Scientist
James Manyika
Google SVP of Technology & Society
Pascale Fung
HKUST, CAiRE Co-Chair
April 26th – 28th, 2023
Time in PST
Setup: Registration
Opening intro and general welcome.
HKUST, CAiRE, Co-Chair
Welcome on behalf of WEF
Short Keynote on RAI and why Generative AI
- 4x lightning talks: 3 views of the future
- Panel: State of play in RAI (global view)
- Break: Cocktail
- Registration
- Opening intro
- Keynote: State of play in generative AI
- Panel: Outlook and opportunities for Gen AI applications
- Morning break
- Workshop: Gen AI applications
Workshop 1. Workforce skilling with Gen AI
Workshop 2. Media and creative content applications
Workshop 3. Health and wellbeing support
Workshop 4. Automation, industrial and logistics
Workshop 1: Master room
Workshop 2: Room 1
Workshop 3: Room 2
Workshop 4: Room 3
Panel: Reporting and discussion
Workshop: Gen AI applications safety and scaling
Workshop 1. Safety and Trustworthiness in Gen AI
Workshop 2. LLM impact on AI adoption and democratization of future solutions
Workshop 3. Playbook approaches for implementing GenAI solutions
Afternoon Break
- Keynote/presentation: Safety and risk framework needed to scale responsible generative AI
- Leaving for dinner
- Kick off Day 2 – intro
- Keynote: Challenges for responsible AI at scale
- Panel: Misinformation risk
- Parallel sessions – Global support for responsible Gen AI
Session 1: Needed input from Policy and Governance for Generative AI
Session 2: Standards, Open Knowledge and Collaboration Opportunities for Generative AI
Session 3: Academic and Industrial Collaboration around Gen AI
Session 1: Main room
Session 2: Room 1
Session 3: Room 2
Reporting from parallel sessions
- Panel: Towards common collaboration and path forward
- Discussion on the establishment of the Special Interest Group in RAI
- Farewell and closing address
Summit Location
Express your interest
This is an invitation-only event; your request will be reviewed and invitations will be sent out shortly.
AI presentation maker
When lack of inspiration or time constraints are something you’re worried about, it’s a good idea to seek help. Slidesgo comes to the rescue with its latest functionality—the AI presentation maker! With a few clicks, you’ll have wonderful slideshows that suit your own needs . And it’s totally free!
Generate presentations in minutes
We humans make the world move, but we need to sleep, rest and so on. What if there were someone available 24/7 for you? It’s time to get out of your comfort zone and ask the AI presentation maker to give you a hand. The possibilities are endless : you choose the topic, the tone and the style, and the AI will do the rest. Now we’re talking!
Customize your AI-generated presentation online
Alright, your robotic pal has generated a presentation for you. But, for the time being, AIs can’t read minds, so it’s likely that you’ll want to modify the slides. Please do! We didn’t forget about those time constraints you’re facing, so thanks to the editing tools provided by one of our sister projects —shoutouts to Wepik — you can make changes on the fly without resorting to other programs or software. Add text, choose your own colors, rearrange elements, it’s up to you! Oh, and since we are a big family, you’ll be able to access many resources from big names, that is, Freepik and Flaticon . That means having a lot of images and icons at your disposal!
How does it work?
Think of your topic.
First things first, you’ll be talking about something in particular, right? A business meeting, a new medical breakthrough, the weather, your favorite songs, a basketball game, a pink elephant you saw last Sunday—you name it. Just type it out and let the AI know what the topic is.
Choose your preferred style and tone
They say that variety is the spice of life. That’s why we let you choose between different design styles, including doodle, simple, abstract, geometric, and elegant . What about the tone? Several of them: fun, creative, casual, professional, and formal. Each one will give you something unique, so which way of impressing your audience will it be this time? Mix and match!
Make any desired changes
You’ve got freshly generated slides. Oh, you wish they were in a different color? That text box would look better if it were placed on the right side? Run the online editor and use the tools to have the slides exactly your way.
Download the final result for free
Yes, just as envisioned those slides deserve to be on your storage device at once! You can export the presentation in .pdf format and download it for free . Can’t wait to show it to your best friend because you think they will love it? Generate a shareable link!
What is an AI-generated presentation?
It’s exactly “what it says on the cover”. AIs, or artificial intelligences, are in constant evolution, and they are now able to generate presentations in a short time, based on inputs from the user. This technology allows you to get a satisfactory presentation much faster by doing a big chunk of the work.
Can I customize the presentation generated by the AI?
Of course! That’s the point! Slidesgo is all for customization since day one, so you’ll be able to make any changes to presentations generated by the AI. We humans are irreplaceable, after all! Thanks to the online editor, you can do whatever modifications you may need, without having to install any software. Colors, text, images, icons, placement, the final decision concerning all of the elements is up to you.
Can I add my own images?
Absolutely. That’s a basic function, and we made sure to have it available. Would it make sense to have a portfolio template generated by an AI without a single picture of your own work? In any case, we also offer the possibility of asking the AI to generate images for you via prompts. Additionally, you can also check out the integrated gallery of images from Freepik and use them. If making an impression is your goal, you’ll have an easy time!
Is this new functionality free? As in “free of charge”? Do you mean it?
Yes, it is, and we mean it. We even asked our buddies at Wepik, who are the ones hosting this AI presentation maker, and they told us “yup, it’s on the house”.
Are there more presentation designs available?
From time to time, we’ll be adding more designs. The cool thing is that you’ll have at your disposal a lot of content from Freepik and Flaticon when using the AI presentation maker. Oh, and just as a reminder, if you feel like you want to do things yourself and don’t want to rely on an AI, you’re on Slidesgo, the leading website when it comes to presentation templates. We have thousands of them, and counting!
How can I download my presentation?
The easiest way is to click on “Download” to get your presentation in .pdf format. But there are other options! You can click on “Present” to enter the presenter view and start presenting right away! There’s also the “Share” option, which gives you a shareable link. This way, any friend, relative, colleague—anyone, really—will be able to access your presentation in a moment.
Discover more content
This is just the beginning! Slidesgo has thousands of customizable templates for Google Slides and PowerPoint. Our designers have created them with much care and love, and the variety of topics, themes and styles is, how to put it, immense! We also have a blog, in which we post articles for those who want to find inspiration or need to learn a bit more about Google Slides or PowerPoint. Do you have kids? We’ve got a section dedicated to printable coloring pages! Have a look around and make the most of our site!
IMAGES
VIDEO
COMMENTS
Accountability. Accountability means being held responsible for the effects of an AI system. This involves transparency, or sharing information about system behavior and organizational process, which may include documenting and sharing how models and datasets were created, trained, and evaluated. Model Cards and Data Cards are examples of ...
Responsible AI. Jul 31, 2019 •. 7 likes • 1,842 views. Neo4j. Speaker: Amy Hodler, Graph Analytics and AI Program Manager, Neo4j. Technology. Responsible AI - Download as a PDF or view online for free.
Responsible artificial intelligence (AI) is a set of principles that help guide the design, development, deployment and use of AI—building trust in AI solutions that have the potential to empower organizations and their stakeholders. Responsible AI involves the consideration of a broader societal impact of AI systems and the measures required ...
Responsible AI! Responsible AI is a framework that guides how we should address the challenges around artificial intelligence from both an ethical, technical and legal point of view[1] We must resolve ambiguity for where responsibility lies if something goes wrong! This framework relies on fundamental principles[2]:
Responsible AI Mitigations, and Responsible AI Tracker Now that we saw the demo, I just want to introduce two other tools as well. We released two new tools as a part of the Responsible AI Toolbox.
damage exists if Responsible AI isn't included in an organization's approach. In response, many enterprises have started to act (or in other words, to Professionalize their approach to AI and data). Those that have put in place the right structures from the start, including considering Responsible AI, are able to scale with confidence,
The four pillars of Responsible AI. Organizations need to tackle a central challenge: translating ethical principles into practical, measurable metrics that work for them. To embed these into everyday processes, they also need the right organizational, technical, operational, and reputational scaffolding. Based on our experience delivering ...
At Microsoft, we put responsible AI principles into practice through governance, policy, and research. Learn more. Policy . Advancing AI policy. Discover the latest perspectives on AI policy from Microsoft experts. Skip Advancing AI policy section. Previous Slide. Next Slide. Blog . Governing AI: A Blueprint for the Future.
When teams have questions about responsible AI, Aether provides research-based recommendations, which are often codified into official Microsoft policies and practices. Members Aether members include experts in responsible AI and engineering, as well as representatives from major divisions within Microsoft.
The Responsible AI dashboard is a single pane of glass, bringing together a variety of these tools under one roof, same set of API, a customizable dashboard. You can do both model debugging and ...
Introduction to Responsible AI in the Generative AI Era. •. This is a microlearning course with a single module. At the end of this unit, you will be able to discuss the challenges posed by Generative AI, the need for Responsible AI, and the principles that form the foundation of Responsible AI. What's included. 3 videos 2 readings 2 assignments.
Create a Responsible AI Strategy. Responsible AI makes artificial intelligence a positive force, rather than a threat to society and to itself. Responsible AI is an umbrella term for many aspects of making the right business and ethical choices when adopting AI that organizations often address independently. These include business and societal ...
The responsible AI initiative looks at how organizations define and approach responsible AI practices, policies, and standards. Drawing on global executive surveys and smaller, curated expert panels, the program gathers perspectives from diverse sectors and geographies with the aim of delivering actionable insights on this nascent yet important focus area for leaders across industry.
If you are a trained Facilitator, you can now download the training PowerPoint deck and customize it to suit your audience and delivery style. Here are a few changes we suggest you make to start with. Customize the use case: connect examples for responsible AI to stories in your region.
Accelerating Responsible AI Adoption. Artificial intelligence offers remarkable benefits but can also create significant new risks, and the journey towards responsible AI is complex. The Responsible AI Institute is a global non-profit dedicated to equipping organizations and AI professionals with the tools and knowledge to create, procure, and deploy AI systems that are safe and trustworthy.
Within the proposed framework, responsible AI must be all of the following:
1. Accountable: algorithms, attributes, and correlations are open to inspection.
2. Impartial: internal and external ...
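The accountability pillar above asks that algorithmic decisions be open to inspection. One minimal way to support that is to record, for every decision, the inputs and the rule that fired. The rules, thresholds, and field names in this sketch are illustrative assumptions, not part of the proposed framework:

```python
# Minimal sketch of the "accountable" pillar: a decision function that
# logs the rule behind every outcome so it is open to inspection.
# Rules, thresholds, and field names are illustrative assumptions.

audit_log = []

def approve_loan(income, debt_ratio):
    if debt_ratio > 0.5:
        decision, reason = False, "debt_ratio above 0.5"
    elif income < 20_000:
        decision, reason = False, "income below 20000"
    else:
        decision, reason = True, "passed all checks"
    # Every decision is recorded with its inputs and the rule that fired.
    audit_log.append({"income": income, "debt_ratio": debt_ratio,
                      "decision": decision, "reason": reason})
    return decision

approve_loan(45_000, 0.3)   # approved
approve_loan(15_000, 0.2)   # declined
print(audit_log[-1]["reason"])  # income below 20000
```

In a production system the same idea would use structured logging and durable storage, but the principle is identical: no decision without a traceable reason.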
At its core, artificial intelligence (AI) relies on algorithms and data. While AI has made great strides in recent years and offers a host of practical applications, there is growing recognition that ethical considerations must play a role in its development and use.
Responsible AI: Scale AI with confidence. AI brings unprecedented opportunities to businesses, but also incredible responsibility. Its direct impact on people's lives has raised considerable questions around AI ethics, data governance, trust, and legality.
Implementing Responsible AI Leadership Presentation. This presentation tool uses sample business capabilities from the public health and healthcare business capability maps to provide examples of candidate use cases for AI applications.
The Many Faces of Responsible AI. Lora Aroyo, Hall E (level 1). Abstract (excerpt): ... critical when we study the impact and potential multifaceted risks associated with the adoption of emerging generative AI capabilities across different cultures and geographies. To address this, we argue that to achieve robust and responsible AI systems we need to shift ...
Summit of Responsible AI: a global, industry/academic bilateral summit focused on Responsible AI approaches that can lead to principled, practical, and scalable solutions, thereby enabling AI to do good and prevent harm. Themes include high-level workshops, governance, practices, AI experts, innovative solutions, and opportunities.