Pushing the AI Barrier, NVIDIA Israel Launches Platform To Bolster Generative AI Services
On Monday, at the COMPUTEX conference in Taiwan, computing behemoth NVIDIA announced the launch of a new platform designed to scale out generative artificial intelligence services. The NVIDIA Spectrum-X networking platform, developed in the company’s facilities in Israel, promises to empower developers to build software-defined, cloud-native AI applications.
Spectrum-X, built from the ground up specifically for AI, is touted by the company as “an accelerated Ethernet platform designed to improve the performance and efficiency of Ethernet-based AI clouds,” according to a statement. The statement added that the platform would deliver “1.7x better overall AI performance and power efficiency, along with consistent, predictable performance in multitenant environments.”
Gilad Shainer, senior vice president of networking at NVIDIA, emphasized that this innovation will boost Israel’s standing in the AI revolution, thereby bolstering local development in the field.
“Transformative technologies such as generative AI are forcing every enterprise to push the boundaries of data center performance in pursuit of competitive advantage,” said Shainer. “NVIDIA Spectrum-X is a new class of Ethernet networking that removes barriers for next-generation AI workloads that have the potential to transform entire industries,” he added.
As a blueprint for the new platform, NVIDIA is building Israel-1, a hyperscale generative AI supercomputer set to be among the world’s fastest. Valued at several hundred million dollars, Israel-1 is expected to enter early production by the end of 2023.
The company stated that NVIDIA Spectrum-X “enables an unprecedented scale of 256 200Gb/s ports connected by a single switch, or 16,000 ports in a two-tier leaf-spine topology to support the growth and expansion of AI clouds while maintaining high levels of performance and minimizing network latency.”
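The quoted port counts are consistent with a standard leaf-spine calculation. A rough sketch of the arithmetic (the even split of leaf ports between hosts and spine uplinks is a common non-blocking design assumption, not a detail from NVIDIA’s statement):

```python
# Back-of-the-envelope check of the quoted Spectrum-X port counts.
# Assumption: in the two-tier topology, each 256-port leaf switch
# splits its ports evenly between host-facing ports and spine uplinks.
PORTS_PER_SWITCH = 256

# Single switch: every port can face a host.
single_switch_ports = PORTS_PER_SWITCH  # 256

# Two-tier leaf-spine: half of each leaf's ports go to hosts,
# half go up to the spine, allowing up to 128 fully connected leaves.
leaves = PORTS_PER_SWITCH // 2           # 128 leaf switches
hosts_per_leaf = PORTS_PER_SWITCH // 2   # 128 host ports per leaf
two_tier_ports = leaves * hosts_per_leaf

print(single_switch_ports)  # 256
print(two_tier_ports)       # 16384, i.e., roughly the "16,000 ports" quoted
```

Under this assumption, the two-tier design yields 16,384 host ports, matching the approximately 16,000 ports the company cites.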