About
31 March 2025
Marriott Marquis
San Francisco, California USA
Optica Executive Forum at OFC 2025
Don't miss the most important annual event for leaders in optical networking and communications. This one-day event features C-level panelists in an informal, uncensored setting discussing the latest issues facing the industry and your business. Leaders from top companies will discuss critical technology advancements and business opportunities that will shape the network in 2025 and beyond.
This is your opportunity to:
- Spend a full day with senior executives
- Connect with the leaders and decision-makers in the industry
- Participate in high-level networking
- Ask your challenging questions
- Leave with critical information
Who Should Attend?
- Service Provider Network Executives
- Service Provider Technology Evaluators
- Data Center Managers
- Enterprise Network Managers
- Business Development Executives
- Communications Technology Development Managers
- Optical Systems Developers and Managers
- Core Router Developers
- Test Equipment Vendors
- Data Center Switch and Router Vendors
- Semiconductor Manufacturers
- Venture Investors
- Financial and Market Analysts
It’s a Hyperscaler AI World at Executive Forum 2025
By Sterling Perrin, Senior Principal Analyst, Heavy Reading
We are in the midst of the 5th epoch of distributed computing, according to Google keynoter Amin Vahdat, VP and GM, Machine Learning, Systems, and Cloud AI, who kicked off Optica’s 2025 Executive Forum in San Francisco. The machine learning and data center computing epoch, which began in 2020, is characterized by ultra-low latency between computers (10µs), terabit-per-second networks, gigawatt-scale infrastructure, and specialized compute hardware like TPUs and GPUs.
According to Vahdat, several key themes are driving the evolution of hyperscaler optics for intra-DC communications and, to a lesser extent, inter-DC communications, including:
- Adding more compute beats any other approach to scaling AI (citing Rich Sutton’s 2019 paper “The Bitter Lesson”). This means that exponential increases in GPU/TPU processing will drive future AI improvements.
- Networking is the number one bottleneck that Google encounters (across all distances from centimeters in the rack to 100s of km in DCI).
- Power, not dollars, is now the limiting factor in scaling infrastructure. Google has shifted from designing for performance per TCO to performance per watt.
The intense focus on the lowest power consumption per bit is at the heart of trends in linear pluggable optics (LPO) and co-packaged optics (CPO). Speaking on an Executive Forum panel, Arista Networks Chairman and Chief Development Officer Andy Bechtolsheim presented metrics for power consumption at 1.6 Tbps, showing 3nm DSPs consuming 25 watts versus CPO and LPO modules consuming 10 watts (i.e., 60% less). While Bechtolsheim continued to champion the role of LPO, other participants made the case for CPO adoption, including Nvidia Networking Director Ashkan Seyedi, who highlighted Nvidia’s Spectrum-X Photonics as the world’s first 200G/SerDes co-packaged optics.
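The comparison above can be restated in the industry's usual energy-per-bit terms. The short sketch below simply converts the panel's module-level figures (25 W for a 3nm DSP module versus 10 W for LPO/CPO, both at 1.6 Tbps) into picojoules per bit; the function name and structure are illustrative, not from any vendor tool.

```python
def picojoules_per_bit(power_watts: float, data_rate_tbps: float) -> float:
    """Energy per bit in pJ: joules per second divided by bits per second."""
    joules_per_bit = power_watts / (data_rate_tbps * 1e12)
    return joules_per_bit * 1e12  # convert J/bit to pJ/bit

dsp_pj = picojoules_per_bit(25, 1.6)  # 3nm DSP module at 1.6 Tbps
lpo_pj = picojoules_per_bit(10, 1.6)  # LPO/CPO module at 1.6 Tbps
savings = (25 - 10) / 25              # fractional power reduction

print(f"DSP: {dsp_pj:.3f} pJ/bit, LPO/CPO: {lpo_pj:.3f} pJ/bit, "
      f"savings: {savings:.0%}")
```

At 1.6 Tbps the DSP module works out to about 15.6 pJ/bit against roughly 6.3 pJ/bit for LPO/CPO, which is the 60% reduction cited on the panel.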
Looking beyond the data center walls, power has also become an increasingly crucial metric in optics but for somewhat different reasons. Cisco/Acacia VP Tom Williams did an excellent job of explaining why coherent pluggable optics have succeeded and will continue to do so, noting that just a few years ago the industry was debating whether pluggables would even be adopted at all.
Maturation of optics and DSPs is one factor driving coherent pluggables’ success. The newest generation of 3nm-based 800G pluggables from Acacia, and others, will deliver 800 Gbps connectivity at 1,000km+. Perhaps equally significant, these new modules have 400G ultra long-haul (ULH) modes, allowing service providers to transmit traffic at 400 Gbps over reaches of 3,000km and beyond. New coherent pluggables will meet 80%+ of global market need in terms of capacities and reach. And each new generation improves on the last.
The Shannon Limit is another driving factor. Beginning with 600 Gbps optics introduced around 2017, spectral efficiency gains have largely plateaued. Higher data rates per channel require larger channel widths, reducing the total number of channels contained in the C band. As a result, adding capacity per fiber is no longer the main driver in the majority of applications. In its place, standardization, lower costs, and lower power consumption have risen to the top of the priorities list. These requirements all favor coherent pluggable optics versus embedded optics.
Another trend evident at the Executive Forum – and during OFC itself – is that the major coherent optics suppliers are moving into the data center, chasing volumes and dollars. “Coherent Lite” optics at 1.6T remove high-end DSP processing (and power requirements) to deliver high data rates in the 2-20km range, making them a viable alternative to direct detect optics for campus DCI, particularly at 1.6T and higher data rates.
Going one step further in their strategic evolution, coherent vendors are moving into the data center itself with non-coherent options. Infinera (now part of Nokia) launched this trend with its ICE-D photonic integrated circuits (PICs) for data center optics, announced at OFC 2024. At a media briefing at this year’s OFC, Nokia executives made clear that ICE-D very much remains part of the combined company’s hyperscaler strategy. Additionally, at OFC 2025, Cisco/Acacia and Ciena both announced their own entries into direct detect optics for datacom applications, for 1.6T and higher. Both companies announced they are developing DSPs for pulse amplitude modulation (PAM), aimed at hyperscaler data centers.
Lastly, I will touch on the connectivity role of communications service providers in the new AI-driven era. At the Executive Forum and at OFC this year, I had hoped to glean new insights into how CSPs will profit, but I came up largely short. Lumen’s $8 billion in custom fiber deals with hyperscalers is primarily based on dark fiber builds. Asked about the future of hyperscaler connectivity, Lumen SVP of Product Segments James Feger said he expects dark fiber to remain the dominant requirement for hyperscalers in the coming years.
As noted by Nokia’s CTO of Systems, Walid Wakim, and others throughout the week, inferencing could very well open up big opportunities in AI for CSPs. But AI inferencing was not the theme of Optica’s Executive Forum or OFC 2025. While the hyperscalers and those supplying them are riding high in 2025, the CSP ecosystem will need to fight harder for prime seating at the AI table.
Getting Ready for Optica’s Executive Forum and OFC 2025
By Sterling Perrin, Senior Principal Analyst, Optical Networks & Transport, Heavy Reading
Like many of my colleagues in the optical communications industry, I’m getting ready to attend OFC 2025 in San Francisco – the industry’s biggest annual get-together, now celebrating its historic 50th year. I’m particularly interested in Optica’s Executive Forum, which takes place on March 31st, the day before OFC exhibits open. The Executive Forum always serves as a great barometer for the big trends that will define the rest of the week – and the rest of the year.
This year’s Executive Forum focuses (not surprisingly, but rightly) on optical networks to support AI. There are several ways to segment the optics-for-AI opportunity. One of the main segmentations is: optics in the data center (i.e., intra-data center) and optics between data centers (i.e., data center interconnect, or DCI).
At OFC 2024, the focus was squarely on intra-data center optics. Within the data center, power, capacity, and latency are the key metrics for hyperscalers operating AI clusters to train their models. GPUs are the workhorse of the AI data center. They are indispensable, they consume massive power, and they are prioritized for power budgets. Reducing power used for interconnection within the data center frees power budget for more GPUs.
Power reduction is the key driver for linear pluggable optics (LPO) modules that eliminate the DSP inside the module, as well as the less-complex compromise option of half-retimed optics. Looking at AI bandwidth needs, hyperscalers are quickly moving beyond 400G within the data center. Omdia forecasts a strong year for 800G optics in datacom, with 1.6T optics on the horizon. And planning has begun in earnest for 3.2T. The afternoon panel session titled “Photonic Interconnects in AI Clusters,” which includes Arista founder Andy Bechtolsheim, is not to be missed.
Moving beyond the walls of the data center, the impact of AI traffic on data center interconnection has gained increasing attention over the past 12 months. A year ago, questions about AI and DCI were largely met with shrugs and blank stares. This year, at the Executive Forum and at OFC, I expect meaningful discussions and debate about DCI to support AI. Lumen fired the starting gun last July when it announced a strategic partnership with Microsoft to supply fiber capacity to support the hyperscaler’s AI DC connectivity needs. By November, Lumen reported that it had secured more than $8 billion in custom fiber deals with hyperscalers including Microsoft, Google, Amazon, and Meta, as well as securing 10% of Corning’s global fiber capacity for each of the next two years.
Notably, Lumen’s SVP Product Segments James Feger, Microsoft’s General Manager of Cloud Network Engineering, Azure Fiber, Colin Wallace, and Corning’s VP Technology Development Aleksandra Boskovic will share a panel at the Executive Forum.
Lastly, and of particular interest to me this year, is the question: what are the roles in AI connectivity for others in the communications ecosystem including telecom operators, cloud service providers, content neutral/data center operators, and enterprises?
The early hyperscaler focus has been on obtaining fiber, but managed optical fiber networks (MOFNs), high-capacity wavelength services, and IP routed capacity are all options for DCI connectivity in the age of AI. The evolution from model training to inference moves AI to the edge and brings new requirements for connecting enterprises with their AI applications, including low latency. Who will be the winners in inference? And what new revenue models and partnerships will emerge? I expect increased attention on the optical connectivity implications of AI inference.
In all, there will be a lot to discuss, debate, and learn.