Artificial Intelligence
Intel advocates for risk-based, principles-driven, and interoperable AI policy and regulatory measures.
Intel plays a vital role in AI. Its products, both hardware and software, help solve today’s most complex challenges. For example, in healthcare and life sciences, we accelerate research and patient outcomes with faster, more accurate analysis across precision medicine, medical imaging, lab automation, and more. In manufacturing, we transform data into insights that help our customers optimize plant performance, minimize downtime, improve safety, and drive profitability. In research, we work with academics around the world to address global challenges with AI innovations.
To continue leveraging AI to solve some of the world’s most complex challenges, Intel supports a regulatory and policy environment that facilitates the responsible adoption of AI. Intel advocates for risk-based, principles-based AI policy measures that limit compliance burdens to what is strictly necessary and leverage internationally accepted standards.
Key Issues
Overall Approach
Intel supports a risk-based, multi-stakeholder approach to trustworthy AI, informed by international standards (e.g., ISO/IEC) and frameworks such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework. These provide key guidance on the requirements underpinning the trust and safety of AI, such as data governance, transparency, accuracy, robustness, and bias.
Sector Specific
Numerous existing laws and regulations, such as privacy and consumer financial laws, already apply to the deployment and use of AI technology. Regulatory agencies should evaluate the use and impact of AI in relevant, specific use cases to clarify how existing laws apply to AI and how AI can be used in compliance with them. If necessary, regulatory agencies may consider developing appropriate requirements in collaboration with industry and stakeholders to address additional concerns.
Openness
Intel encourages AI policy measures that permit an open ecosystem. An open AI ecosystem drives accessibility for all actors in the AI value chain and promotes a level playing field where innovation can thrive.
Secure AI
In the era of digital transformation, AI is reshaping industries and improving aspects of our lives.
Yet the history of cybersecurity is a constant battle to keep technological innovation ahead of evolving threats. As AI becomes increasingly capable of performing complex calculations, bad actors can use AI tools to exploit security vulnerabilities. It is crucial to develop cybersecurity mechanisms that meet this evolving threat rather than relying on traditional security measures alone.
Secure AI is the basis of future digital interactions, protecting data with advanced security technologies embedded directly into hardware. Any approach to Secure AI must account for both Security for AI, which strengthens AI infrastructure against cyber threats, and AI for Security, which uses AI to enhance cybersecurity. This dual approach means that as AI systems advance, they can be equipped with sophisticated hardware and software defenses against digital vulnerabilities.
One central tenet of AI policy should be to enable innovation without compromising trust. This balance should be reflected by: 1) promoting secure AI standardization and adoption; 2) investing in cybersecurity and AI R&D; 3) developing AI cybersecurity skills; and 4) fostering international cooperation, not only between governments but also among stakeholder groups such as the Coalition for Secure AI (CoSAI) and the Open Platform for Enterprise AI (OPEA).
Standards
Intel emphasizes the importance of voluntary AI standards to ensure responsible development and global adoption, and to guide emerging regulations. These standards help reduce trade barriers by serving as a basis for technical regulations while supporting market competition and innovation.