Artificial intelligence has dominated headlines across the globe over the past year and become a source of trepidation for many, prompting retail giant Walmart to assure its customers it intends to use the technology ethically.
Walmart released its “Responsible AI Pledge” Tuesday, laying out six commitments when using the technology: transparency, security, privacy, fairness, accountability and customer-centricity.
The pledge follows an AI summit in July sponsored by the Biden-Harris Administration, where leading AI companies voluntarily committed to responsible management of the emerging technology. Companies that have signed the White House pledge include Amazon, Google, Microsoft, Meta, OpenAI, Anthropic, Inflection, Adobe, Cohere, Palantir, Salesforce, Scale AI, and Stability.
Walmart said in a press release that its AI Pledge aims to ensure that “those we serve feel confident and comfortable with the ways we use technology.”
Walmart’s six commitments include the following:
Transparency: We commit to helping customers, members and associates understand how data and technology, including AI, are being used by our company and what our goals are as we use it.
Security: We will use advanced security measures to protect your data. We commit to continuously reviewing security practices aimed at mitigating current and emerging threats.
Privacy: We commit to evaluating AI systems so that the sensitive or confidential information we store is used in ways that protect privacy.
Fairness: We will evaluate for bias any AI tools that have the potential to affect the lives of our customers, members and associates. We seek to mitigate bias and commit to regular evaluations.
Accountability: We will use AI managed by people. We commit to holding ourselves accountable for its impact.
Customer-centricity: We will measure customer satisfaction with AI interactions and listen to feedback. We commit to continual reviews of our AI tools to ensure the technology is accurate, relevant and helping those we serve live better.
“It comes down to this: While technology and shopping habits evolve, our purpose and values stay the same,” the company said. “The Walmart Responsible AI Pledge is about more than just AI. It is a moment in time for us to speak directly to our customers, members and associates; be transparent and address the concerns they may have with the rapid pace of technological innovation; and reinforce our commitment to using technology in ways that are safe and beneficial to them.”
The Biden-Harris Administration’s pledge includes the following:
Ensuring Products are Safe Before Introducing Them to the Public
The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.
The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.
Building Systems that Put Security First
The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered.
The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Some issues may persist even after an AI system is released, and a robust reporting mechanism enables them to be found and fixed quickly.
Earning the Public’s Trust
The companies commit to developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system. This action enables creativity and productivity with AI to flourish while reducing the dangers of fraud and deception.
The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. These reports will cover both security risks and societal risks, such as the effects on fairness and bias.
The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the potential magnitude and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.
The companies commit to developing and deploying advanced AI systems to help address society's greatest challenges. From cancer prevention to mitigating climate change and much in between, AI, if properly managed, can contribute enormously to the prosperity, equality, and security of all.