Together, We Create a World of Opportunities
With industry-leading software resources and capabilities, Intel drives exponential innovation through co-engineering, collaboration, and open source contributions.
Intel Leadership in Software
With more than 15,000 software engineers, Intel has optimized more than 100 different operating systems. We’re the #1 contributor to the Linux* kernel, a top-3 contributor to Chromium* OS, and a top-10 contributor to OpenStack*.
Millions of developers use Intel® software development tools and libraries to take advantage of Intel® hardware and software, with impressive “before and after” performance gains:
- Up to 30x Inference Throughput Improvement with Intel® Deep Learning Boost (Intel® DL Boost)1
- Up to 24.8x Performance Gains on Data Warehousing Queries2
- Up to 4x More VMs with Intel® Optane™ DC persistent memory3
Co-Engineering Partnerships
Jointly designing, building, and validating new products with software industry leaders accelerates mutual technology advancements and helps new software and hardware work better together.
Technology Collaboration
Intel® software tools allow developers to optimize, test, and tune applications for faster performance, greater capacity, and improved security, and to build more advanced features.
Open Source Contributions
As the leading Linux* kernel contributor, Intel also delivers a steady stream of open source code and optimizations for projects across virtually every platform and usage model.
- Linux* kernel (#1 contributor)
- Apache* Hadoop and Spark (unified AI & analytics pipeline)
- Kubernetes (container orchestration)
- Kerberos (network security)
- Celadon (Android* on Intel)
- ...and many, many more
Disclaimers
1 30x inference throughput improvement on Intel® Xeon® Platinum 9282 processor with Intel® Deep Learning Boost (Intel® DL Boost): Tested by Intel as of 2/26/2019. Platform: Dragon Rock 2-socket Intel® Xeon® Platinum 9282 processor (56 cores per socket), HT ON, Turbo ON, total memory 768 GB (24 slots / 32 GB / 2933 MHz), BIOS: SE5C620.86B.0D.01.0241.112020180249, CentOS* 7, kernel 3.10.0-957.5.1.el7.x86_64, Intel® Deep Learning Framework: Intel® Optimization for Caffe* version https://github.com/intel/caffe d554cbf1, ICC 2019.2.187, MKL-DNN version v0.17 (commit hash: 830a10059a018cd2634d94195140cf2d8790a75a), model: https://github.com/intel/caffe/blob/master/models/intel_optimized_models/int8/resnet50_int8_full_conv.prototxt, BS=64, no datalayer, synthetic data: 3x224x224, 56 instances/2 sockets, datatype: INT8; vs. baseline tested by Intel as of July 11, 2017: 2S Intel® Xeon® Platinum 8180 processor @ 2.50 GHz (28 cores), HT disabled, Turbo disabled, scaling governor set to “performance” via intel_pstate driver, 384 GB DDR4-2666 ECC RAM, CentOS* Linux release 7.3.1611 (Core), Linux* kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel® SSD Data Center S3700 Series (800 GB, 2.5in SATA 6 Gb/s, 25nm, MLC). Performance measured with environment variables KMP_AFFINITY='granularity=fine,compact' and OMP_NUM_THREADS=56; CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe: (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with the “caffe time --forward_only” command, training measured with the “caffe time” command. For “ConvNet” topologies, a synthetic dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (ResNet-50). Intel® C++ Compiler ver. 17.0.2 20170213, Intel® Math Kernel Library (Intel® MKL) small libraries version 2018.0.20170425. Caffe run with “numactl -l”.
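For context, the Caffe* timing run described above reduces to a short shell sequence. The sketch below is a composite of the commands quoted in this disclaimer; the caffe binary path and the iteration count are assumptions, and the environment variables and frequency settings shown are the ones quoted for the baseline system:

```shell
# Composite sketch of the measurement commands quoted above.
# Assumptions: caffe built at ./build/tools/caffe; iteration count is illustrative.
export KMP_AFFINITY='granularity=fine,compact'
export OMP_NUM_THREADS=56

# Pin the CPU frequency range and governor as described in the baseline setup.
cpupower frequency-set -d 2.5G -u 3.8G -g performance

# Time forward-only (inference) passes over the INT8 ResNet-50 topology,
# keeping memory allocations node-local via numactl.
numactl -l ./build/tools/caffe time --forward_only \
    -model models/intel_optimized_models/int8/resnet50_int8_full_conv.prototxt \
    -iterations 100
```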
2 Up to 24.8x performance gains on data warehousing queries on the new 2nd Gen Intel® Xeon® Platinum 8280 processor with Windows* Server 2016 vs. a 4-year-old legacy server with older hardware and software. Baseline: 1-node, 2x Intel® Xeon® processor E5-2699 v3 on Wildcat Pass with 768 GB (24 slots / 32 GB / 2666) total memory (workload uses 691 GB), ucode 0x3D on Windows* Server 2008 R2, 1x S710 (200 GB), 1x S3500 (1.6 TB), 2x P4608 (6.4 TB), SQL Server 2008 R2 SP1 (Enterprise Edition), HT on, Turbo on; result: queries per hour at 1 TB = 33,681; test by Intel on 12/21/2018. New configuration: 1-node, 2x Intel® Xeon® Platinum 8280 processor on Wolf Pass with 1536 GB (24 slots / 64 GB / 2666 (1866)) total memory (workload uses 691 GB), ucode 0xA on Windows* Server 2016 (RS1 14393), 1x S710 (200 GB), 1x S3500 (1.6 TB), 4x P4610 (7.6 TB), SQL Server 2017 RTM CU13 (Enterprise Edition), HT on, Turbo on; result: queries per hour at 1 TB = 836,261; test by Intel on 3/13/2019.
3 Up to 4x more VMs when quadrupling memory capacity with Intel® Optane™ DC Persistent Memory Module (DCPMM) running Redis + Memtier. Baseline: 1-node, 2x Intel® Xeon® Platinum 8280L processor on Intel reference platform with 768 GB (12 slots / 32 GB / 2666) total memory, ucode 0x400000A on Fedora-27, 4.20.4-200.fc29.x86_64, 2x 40 GB, Redis 4.0.11, memtier_benchmark-1.2.12, KVM, one 45 GB instance/VM, CentOS*-7.0, ww06'19 BKC, HT on, Turbo on; score: VMs = 14; test by Intel on 2/21/2019. New configuration: 1-node, 2x Intel® Xeon® Platinum 8280L processor on Intel reference platform with 192 GB DDR + 3072 GB Intel® Optane™ DC Persistent Memory Module (DCPMM) (12 slots / 16 GB / 2666 DDR + 12 slots / 256 GB / 2666) total memory, ucode 0x400000A on Fedora-27, 4.20.4-200.fc29.x86_64, 2x 40 GB, Redis 4.0.11, memtier_benchmark-1.2.12, KVM, one 45 GB instance/VM, CentOS*-7.0, ww06'19 BKC, AEP firmware 5346, HT on, Turbo on; score: VMs = 56; test by Intel on 2/21/2019.
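For reference, the Redis density workload above pairs redis-server instances (one per KVM guest) with the memtier_benchmark load generator. A minimal single-instance sketch follows; the port, memory cap, and load parameters are illustrative assumptions, not the tested values:

```shell
# Minimal sketch: one Redis instance plus a memtier_benchmark load.
# Port, maxmemory, thread/client counts, and duration are assumptions.
redis-server --port 6379 --maxmemory 45gb --daemonize yes

memtier_benchmark --server=127.0.0.1 --port=6379 --protocol=redis \
    --threads=4 --clients=50 --ratio=1:1 --test-time=300
```

In the published measurement, this pattern was replicated across the VMs on each system to arrive at the density scores of 14 and 56.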