
Bringing AI Everywhere with Intel: The Opportunities and Challenges for Developers

At the third edition of Intel Innovation, the company presented a series of technologies to bring AI everywhere. The goal is to make AI more accessible across all workloads, from the client to the edge to the network and the cloud.

Intel: AI can and should be used everywhere

In the keynote addressed to developers at the opening of the event, Intel CEO Pat Gelsinger showed how Intel is bringing artificial intelligence capabilities into its hardware products and making them accessible through open, multi-architecture software solutions. He also highlighted how artificial intelligence is helping to drive the “siliconomy”, “a phase of economic development made possible by the magic of silicon and software”. Today, silicon powers a $574 billion industry that in turn powers a global tech economy worth nearly $8 trillion.

Intel is AI: new advances in silicon technology, packaging and multi-chiplet solutions

Gelsinger also showed off an Intel 20A wafer with the first test chips for Intel’s Arrow Lake processor, targeted at the client computing market in 2024. Intel 20A will be the first process node to include PowerVia, Intel’s technology for delivering power from the back of the chip, along with RibbonFET, the new gate-all-around transistor design. Intel 18A, which also uses PowerVia and RibbonFET, remains on schedule to reach production in the second half of 2024.

Intel also showed off a test chip package made with Universal Chiplet Interconnect Express (UCIe). The next wave of Moore’s Law will come in multi-chiplet packages, Gelsinger said, and it will come sooner if open standards can reduce the challenges of integrating intellectual property (IP).

Created last year, the UCIe standard will allow chiplets from different vendors to work together, enabling new designs for expanding different AI workloads. This open specification is supported by more than 120 companies.

The test chip brings together an Intel UCIe IP chiplet fabricated on Intel 3 and a Synopsys UCIe IP chiplet fabricated on TSMC’s N3E process node. The chiplets are connected using advanced EMIB (embedded multi-die interconnect bridge) packaging technology. The demonstration highlights the commitment of TSMC, Synopsys and Intel Foundry Services to supporting an open, standards-based chiplet ecosystem built on UCIe.

Increase performance and bring AI everywhere

Gelsinger highlighted the range of AI technologies available to developers today on Intel platforms and how that range will increase dramatically over the next year. Recent MLPerf AI inference results further strengthen Intel’s commitment to addressing every stage of the AI continuum, including generative AI and large language models.

The results also position the Intel Gaudi2 accelerator as the only viable alternative on the market for artificial intelligence computing needs. Gelsinger announced that a large AI supercomputer will be built entirely on Intel Xeon processors and 4,000 Intel Gaudi2 AI hardware accelerators, with Stability AI as the primary customer.
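For developers curious about what targeting Gaudi2 looks like in practice, here is a minimal sketch. It assumes the Intel Gaudi (Habana SynapseAI) PyTorch bridge is installed, and it simply moves a toy module onto the “hpu” device; it is illustrative only, not Stability AI’s actual workload.

```python
# Hedged sketch: running a PyTorch module on a Gaudi2 device ("hpu").
# Assumption: the Habana/Intel Gaudi PyTorch bridge is installed on the machine.
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device

model = torch.nn.Linear(1024, 1024).to("hpu")  # toy module, not a real workload
x = torch.randn(8, 1024).to("hpu")

y = model(x)
htcore.mark_step()  # flush the lazily accumulated graph to the accelerator
print(y.shape)
```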

Zhou Jingren, chief technology officer of Alibaba Cloud, explained how Alibaba applies fourth-generation Intel Xeon processors with integrated AI acceleration to “our large-scale generative AI and language model, Alibaba Cloud’s Tongyi Foundation models.” Intel technology, he said, delivers “dramatic improvements in response times, with an average acceleration of 3x.”
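On fourth-generation Xeon, the integrated AI acceleration referenced here is the Intel AMX matrix engine. As a hedged illustration of how a developer might tap it, the sketch below uses Intel Extension for PyTorch to run a Hugging Face model in bfloat16; the model name and input are arbitrary placeholders, not Alibaba’s Tongyi setup.

```python
# Hedged sketch: bfloat16 inference on a 4th Gen Xeon CPU via Intel Extension for PyTorch.
# Assumptions: intel_extension_for_pytorch and transformers are installed;
# "bert-base-uncased" is only an illustrative model choice.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("bert-base-uncased").eval()
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# ipex.optimize applies operator fusions; with dtype=torch.bfloat16 the heavy
# matrix multiplications can run on the AMX tile units of 4th Gen Xeon CPUs.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("Intel Innovation 2023", return_tensors="pt")
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```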

Intel also previewed the next generation of Intel Xeon processors, revealing that the fifth-generation Intel Xeon processors will offer a combination of performance improvements and faster memory, while using the same amount of power, when they launch on December 14.


Developers become protagonists of the Siliconomy

To help developers unlock this future, Intel announced:

  • General availability of Intel Developer Cloud: Intel Developer Cloud helps developers accelerate AI using the latest Intel hardware and software innovations, including Intel Gaudi2 processors for deep learning, and provides access to the latest Intel hardware platforms, such as fifth-generation Intel Xeon Scalable processors and the Intel Data Center GPU Max 1100 and 1550 series. Using Intel Developer Cloud, developers can build, test and optimize AI and HPC applications.
  • The 2023 release of the OpenVINO toolkit: OpenVINO is the AI inference and deployment runtime of choice for developers on client and edge platforms. The release includes pre-trained models optimized for integration across operating systems and different cloud solutions, including many generative AI models, such as Meta’s Llama 2. At Intel Innovation, companies including ai.io and Fit:match demonstrated how they use OpenVINO to accelerate their applications: ai.io to analyze any athlete’s performance, and Fit:match to revolutionize the retail and wellness sectors by helping consumers find the garments that fit them best (see the OpenVINO sketch after this list).
  • Project Strata and the development of a native edge platform: The platform will launch in 2024 with modular building blocks, premium services and additional offerings. It takes a horizontal approach to scaling the infrastructure needed for the intelligent edge and hybrid AI, and will bring together an ecosystem of Intel and third-party vertical applications. The solution will enable developers to build, deploy, run, manage, connect and secure distributed edge infrastructures and applications.
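As referenced in the OpenVINO item above, here is a minimal sketch of what deploying a model with the toolkit can look like. It assumes an OpenVINO 2023.x install and a model already converted to OpenVINO IR format; the "model.xml" path and the static [1, 3, 224, 224] input shape are hypothetical placeholders.

```python
# Minimal OpenVINO inference sketch (illustrative, not Intel's reference code).
# Assumptions: OpenVINO 2023.x is installed; "model.xml" is a hypothetical IR file
# with a static [1, 3, 224, 224] input.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")           # hypothetical IR file
compiled = core.compile_model(model, "CPU")    # target a client or edge CPU

dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
result = compiled([dummy])[compiled.output(0)]
print(result.shape)
```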