Responsible AI: Tools and Frameworks for Developing AI Solutions
Mrinal Karvir, Senior Cloud Software Engineering Manager, Intel Corporation

AI Incidents in the News (https://incidentdatabase.ai/)
- Harm to human life
- Loss of trust
- Fines for non-compliance with regulations
- Introduction of systemic bias
- Misinformation
- Breach of privacy

Cost of AI Incidents

Defining Responsible AI Principles
- Equity and Inclusion: Focus on the data used for training and on the algorithm development process to help prevent bias and discrimination.
- Transparency: Understand and explain where the data came from and how the model works.
- Enable Human Oversight: Human oversight of AI solutions to ensure they positively benefit society and do no harm.
- Security, Safety, Sustainability: Ethical review and enforcement of end-to-end AI safety; low-resource implementation of AI algorithms.
- Personal Privacy: Maintaining personal privacy and consent; focusing on protecting the collected data.
- Respect Human Rights: AI solutions should not support or tolerate usages that violate human rights.

Designing with a Human-Centric Approach
- Definition: Does AI add value? Who are the intended users of the system? Identify unintended potential harm and plan for remediations. Translate user needs into data needs.
- Development: Source high-quality, unbiased data responsibly. Get input from domain experts. Enable human oversight. Build in safety measures.
- Deployment: Provide ways for users to challenge the outcome. Provide manual controls for when AI fails. Offer high-touch customer support.
- Marketing: Focus on the benefit, not the technology. Transparently share the limitations of the system with users. Be transparent about privacy and data settings. Anchor on familiarity.

Historical bias
- Can arise even if data is perfectly measured and sampled.
- The world as it is or was leads to a model that produces harm
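The point that historical bias survives perfectly measured and sampled data can be checked numerically: if past outcomes were discriminatory, labels drawn faithfully from them still encode that discrimination. A minimal sketch, assuming a simple demographic parity difference metric; the function name, the toy data, and the 0.1 threshold are illustrative assumptions, not part of the original deck:

```python
# Sketch: flag possible historical bias in training labels by measuring
# the gap in positive-label rates between groups (demographic parity
# difference). All names, data, and the threshold are illustrative.

def demographic_parity_difference(labels, groups, positive=1):
    """Largest difference in positive-label rate across groups."""
    counts = {}
    for label, group in zip(labels, groups):
        n, pos = counts.get(group, (0, 0))
        counts[group] = (n + 1, pos + (label == positive))
    rates = {g: pos / n for g, (n, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Perfectly sampled historical data can still reflect past discrimination:
labels = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]      # e.g., past loan approvals
groups = ["a"] * 5 + ["b"] * 5                # group a: 0.8, group b: 0.2

gap = demographic_parity_difference(labels, groups)
print(f"demographic parity difference: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance
    print("warning: labels may encode historical bias")
```

A large gap does not prove unfairness by itself, but it is a cheap signal that the training labels deserve the kind of review the deck's development-phase guidance calls for.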
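The deployment-phase guidance to "provide manual controls for when AI fails" is often implemented as a human-in-the-loop fallback: accept the model's output only above a confidence threshold, and route everything else to manual review. A minimal sketch under that assumption; the names and the 0.75 threshold are illustrative, not from the deck:

```python
# Sketch: confidence-gated human-in-the-loop routing. Predictions below
# the (illustrative) threshold are flagged for manual review instead of
# being acted on automatically.

from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    needs_human_review: bool

def decide(model_label: str, confidence: float,
           threshold: float = 0.75) -> Decision:
    """Accept the model's output only when it is confident enough;
    otherwise flag the case for a human reviewer."""
    return Decision(model_label, confidence,
                    needs_human_review=confidence < threshold)

print(decide("approve", 0.92))  # confident -> automated path
print(decide("deny", 0.55))     # uncertain -> human in the loop
```

Pairing this gate with an appeal channel covers the deck's other deployment bullet, giving users a way to challenge automated outcomes.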