
PRIMER
23 March 2026

China Semiconductors

Future of Tech: AI Datacenter Networking Primer

Qingyuan Lin, Ph.D.
+852 2123 2654
qingyuan.lin@

Stacy A. Rasgon, Ph.D.
+1 213 559 5917
stacy.rasgon@

Francis Ma
+852 2123 2626
francis.ma@

Zheng Cui
+852 2123 2694
zheng.cui@

Arpad von Nemes
+1 917 344 8461
arpad.vonnemes@

Alrick Shaw
+1 917 344 8454
alrick.shaw@

As AI model sizes and computational requirements scale exponentially, single chips are no longer sufficient. Modern AI workloads demand massive clusters of accelerators that must operate as a unified computing fabric, making AI datacenter (AIDC) networking a critical determinant of system-level efficiency. As a result, we see AIDC networking evolving into one of the fastest-growing domains, a trend that could last for multiple years.

Demand for AIDC networking chips is exploding due to the compound bandwidth effect, with a TAM roughly estimated to reach ~USD 100Bn in 2030 at a ~30% CAGR. In the multi-tier networking structures needed for large-scale clusters, adding a single accelerator not only increases point-to-point bandwidth but also multiplies traffic across higher tiers of the cluster, requiring many more networking components. When the number of chips exceeds a certain threshold, additional layers of connections are also required. This compounding behavior means that total network throughput rises super-linearly relative to the number of chips deployed. As hyperscalers accelerate the build-out of 100k+ GPU clusters, networking components become the second-largest cost, positioning the AIDC networking TAM to grow faster than that of xPUs.

AIDC networking can be categorized into three major connection types. DC-DC connections focus on wide-area bandwidth and reliability across multiple DCs; CPU-centric connections manage data flows between the CPU and accelerators/NICs/SSDs, etc.; and xPU-to-xPU connections (GPU/TPU/NPU) deliver the high-bandwidth, low-latency pathways needed to form large AI compute clusters. Within xPU-to-xPU, scale-up networks connect multiple chips and share their computing power so they perform logically like one chip (or 'node'), while scale-out networks stitch thousands of 'nodes' across a cluster. These needs have given rise to multiple protocols (NVLink, PCIe, Ethernet, UALink, and China's UB), each tailored to trade-offs in bandwidth, latency, cost, etc.

Competition in the scale-up networking domain remains intense and far from settled. Nvidia's NVLink still sets the performance benchmark with tight hardware-software integration and proven performance, but industry players are pushing alternative approaches because it is a closed system. UALink and Ethernet-based SUE architectures aim to challenge Nvidia by promoting open ecosystems, reduced vendor lock-in, and lower cost structures. PCIe continues to gain traction with cloud service providers seeking mature, inexpensive solutions for certain workloads. Meanwhile, China may pursue a distinct path: Huawei's proprietary Unified Bus (UB), designed as a unified architecture across multiple network layers, reflects a strategic emphasis on larger cluster scale as single-chip performance lags. The competitive landscape is fluid, and the eventual winners could differ meaningfully by region and workload type.

For networking vendors, the sector offers strong industry beta and structurally attractive margins. The technological and capital barriers in high-performance interconnects are immense, limiting new entrants and resulting in high margins. Nvidia and Huawei represent closed ecosystems that benefit from a fully integrated end-to-end fabric across computing and networking. Broadcom and Marvell stand to gain share from CSP adoption of open ecosystems (such as Ethernet). Montage and other vendors in China are well positioned as domestic ecosystems localize core networking hardware.

See the Disclosure Appendix of this report for required disclosures, analyst certifications and other important information. Alternatively, visit our Global Research Disclosure Website.

First Published: 22 Mar 2026 21:00 UTC. Completion Date: 22 Mar 2026 20:11 UTC


INVESTMENT IMPLICATIONS

We rate Hygon and Cambricon Outperform, with TPs at CNY280 and CNY2,000, respectively.

NVDA (Outperform, $300 PT): The datacenter opportunity is enormous, and still early, with material upside still possible.

AVGO (OP, $525 PT): A strong 2025 AI trajectory seems set to accelerate into 2026 and beyond, bolstered by software, cash deployment, and superb margins & FCF.

CHINA SEMICONDUCTORS | BERNSTEIN SOCIÉTÉ GÉNÉRALE GROUP


Table Of Contents

Networking becoming a key pillar in AI infrastructure

Sizing the TAM for AIDC networking chips

The compound bandwidth effect

Architecture of AIDC network

Three connectivity layers in AIDC network

Protocols applicable to each connectivity layer

Silicon hardware enabling AIDC network

AIDC networking chips competitive dynamics

Key suppliers for networking chips

Competition within Scale-Up network

Competition within Scale-Out network

Appendix: MoE LLM architecture increases demand for networking

DETAILS

NETWORKING BECOMING A KEY PILLAR IN AI INFRASTRUCTURE

SIZING THE TAM FOR AIDC NETWORKING CHIPS

In the GenAI era, networking has transcended its legacy role as a peripheral utility to become one of the core bottlenecks in AI infrastructure, on par with compute accelerators and memory. As LLMs scale toward trillions of parameters, the industry has hit a "compute wall", where single-chip computing power is no longer the sole determinant of performance. Roadmaps of both GPGPU and AI ASIC vendors reflect this shift: the industry focus is moving from maximizing individual accelerator performance to optimizing the efficiency of a large-scale cluster. In this "AI factory" paradigm, the interconnect functions as the cluster's central nervous system. If the network cannot maintain low-latency, high-bandwidth communication across thousands or even millions of nodes in the cluster, the most expensive xPUs risk remaining underutilized, constrained by data starvation or communication overhead.

Given that many AIDC networking technologies remain early in their deployment cycle and continue to evolve rapidly, forecasting the absolute size of the market carries a relatively wide margin of uncertainty. Nevertheless, several industry estimates provide a useful reference range for the potential scale of this opportunity.

According to the latest 2026 forecast from Dell'Oro Group, a widely cited research firm specializing in networking and data center infrastructure markets, spending on switches deployed in AI data center networks alone is projected to exceed USD 100 billion by 2030. Notably, this represents a significant upward revision from Dell'Oro's prior projections. In its 2025 report, the firm estimated AIDC switch spending would reach USD 48 billion by 2029. The substantial increase in its latest forecast underscores the rapid acceleration in AI infrastructure build-outs and the growing importance of networking within AI clusters.

Another way to frame the opportunity is through a top-down view of server spending. Bernstein's global semi team projects that global AI server spending could reach approximately USD 800 billion by 2030. Based on our estimates, networking components account for roughly 20% of total rack-level CAPEX in NVIDIA GPU-based systems, compared with approximately 37% in ASIC-based racks, reflecting the lower margins typically charged by ASIC vendors vs. NVIDIA. On a weighted-average basis, we estimate that networking components represent around 25% of the total rack system cost. This implies AI networking spend could reach approximately USD 200 billion by 2030, and we estimate that about half of it is networking chip value (roughly USD 100Bn, Exhibit 1).
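The top-down arithmetic above can be restated in a few lines (a sketch reproducing the report's estimates; variable names are ours):

```python
# Bernstein's top-down sizing, restated. All figures are the report's
# estimates; the variable names are illustrative.
ai_server_spend_2030 = 800e9        # global AI server spending by 2030 (USD)
networking_share = 0.25             # weighted-average networking share of rack cost
chip_share = 0.50                   # chips as a share of networking hardware value

networking_tam_2030 = ai_server_spend_2030 * networking_share
chip_tam_2030 = networking_tam_2030 * chip_share
print(f"networking: USD {networking_tam_2030/1e9:.0f}Bn, chips: USD {chip_tam_2030/1e9:.0f}Bn")
# networking: USD 200Bn, chips: USD 100Bn
```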

Within this market:

• At the module level (Exhibit 2), we expect Connectivity/optics (transceivers, cables/fibers, and retimers) and Switches to represent the two largest segments, each accounting for approximately 35-45% of total networking TAM. NIC/DPU takes the remaining ~20%.

• Within switch systems, switch ASIC chips typically represent ~30% of total switch module revenue, reflecting the high value of merchant and proprietary switching silicon.

• For Connectivity/optics, the two primary tech paths are pluggable optical transceivers and CPO. In pluggable optical transceiver modules, active semiconductor components, including DSPs, laser drivers, and optical engines, represent roughly 40% of total module value. In emerging CPO-based switch architectures, the optical engine and switch ASIC together may represent roughly 30% of the CPO switch module ASP, reflecting both the higher margin captured by switch system vendors (e.g., NVIDIA) and the inclusion of additional non-silicon components in CPO's BOM.

• Within the NIC/DPU module, chips account for 90%+ of the total value, with only a small portion coming from other module components. Consequently, NIC/DPU silicon expands its weight in the networking chips TAM composition.
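As a consistency check, the module-mix and chip-content figures above can be rolled up into an implied chip share of the hardware TAM (midpoint assumptions and names are ours, for illustration):

```python
# Roll-up of the module-level assumptions above. The mix midpoints (40/40/20)
# and per-module chip-content shares come from the bullets; names are ours.
hardware_tam_2030 = 200e9
module_mix = {"switch": 0.40, "connectivity_optics": 0.40, "nic_dpu": 0.20}
chip_content = {"switch": 0.30, "connectivity_optics": 0.40, "nic_dpu": 0.90}

implied_chip_tam = sum(hardware_tam_2030 * module_mix[m] * chip_content[m]
                       for m in module_mix)
print(f"implied chip TAM: ~USD {implied_chip_tam/1e9:.0f}Bn")
# ~USD 92Bn, broadly consistent with the report's roughly-half (~USD 100Bn) estimate
```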

Overall, the rapid scaling of AI clusters, combined with increasingly bandwidth-intensive training workloads and evolving network architectures, suggests that AI networking will become one of the fastest-growing segments within the broader AI infrastructure stack. Broadcom (AVGO, not covered) projects that total bandwidth of AI clusters will double every 2 years (Exhibit 3), highlighting the speed of growth.
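Broadcom's doubling claim translates into an annual rate as follows (a one-line check of ours, not a Broadcom figure):

```python
# Bandwidth doubling every 2 years implies an annual growth rate of
# 2**(1/2) - 1, i.e. roughly 41% per year.
annual_growth = 2 ** (1 / 2) - 1
print(f"~{annual_growth:.0%} per year")  # ~41% per year
```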

EXHIBIT 1: We project the global AIDC networking hardware TAM will reach ~USD 200Bn in 2030, with a '25-'30 CAGR of ~30%. Within the hardware, chips account for roughly half of the value

[Chart: Global AI server spending grows from ~USD 250Bn in 2025 to USD 800Bn in 2030 (~25% CAGR). The global AIDC networking hardware TAM grows from ~USD 50Bn in 2025 to ~USD 200Bn in 2030 (~30% CAGR), split roughly 50/50 between AI networking chips and other networking hardware. A pie shows the 2030 AIDC networking chips mix across switch chips, connectivity/optics chips, and NIC/DPU.]

1. Chips include both electronic and photonic dies. Source: Bernstein analysis and estimates


EXHIBIT 2: Illustration of how networking hardware is connected and the chip content inside

[Diagram: hardware/modules (transceivers/optics, NIC/DPU module, connectivity, switch device) and the chips inside each: NIC/DPU chip, retimer, laser driver chip, laser diode, optical receiver chip, DSP, Ethernet switch chip.]

Source: Bernstein analysis

EXHIBIT 3: Broadcom projects total bandwidth of AI clusters will double every 2 years

Source: Broadcom investor day presentation, Bernstein analysis


THE COMPOUND BANDWIDTH EFFECT

We expect total bandwidth deployed in AIDC networks to grow faster than accelerator compute capacity, driven by what we describe as a compound bandwidth effect. Each additional accelerator added to a cluster increases not only its direct I/O demand within the first-layer (L1) network, but also generates incremental traffic across higher network layers (L2/L3) as distributed workloads scale. As clusters expand, communication patterns increasingly involve multi-hop exchanges across the network fabric. As a result, total network bandwidth grows at a super-linear rate relative to the number of accelerators deployed, driving AI networking device volume to expand faster than accelerator unit volumes.

To illustrate this dynamic, we use the most widely adopted Scale-Out topology in AI clusters, the fat-tree architecture. As the number of accelerators within a cluster increases, network configurations typically evolve from a two-layer (Exhibit 4) to a three-layer (Exhibit 5) fat-tree topology in order to maintain full bisection bandwidth. Importantly, the three-layer architecture requires a higher switch-to-xPU ratio than a two-layer design. Assuming CSPs deploy 8-port switches in the Scale-Out network, Exhibit 4 shows that the switch-to-xPU ratio in a 2-layer fat-tree is 12:32 (or 0.375). When the network expands to a 3-layer fat-tree architecture, Exhibit 5 shows the ratio increases to 80:128 (or 0.625), a 67% increase compared to 2L, reflecting the greater number of switch devices required to maintain bidirectional All-to-All communication across the cluster.

In fact, researchers have long established that the maximum number of end nodes (AI accelerators or NIC endpoints) supported by a fat-tree network, as well as the total switch count, is determined by the port density of each switch. If K represents the number of ports per switch:

• A two-layer fat-tree requires 3K/2 switches and supports up to 2(K/2)² end nodes.

• A three-layer fat-tree requires 5(K/2)² switches and supports up to 2(K/2)³ end nodes.

As more layers are added, the switch-to-xPU ratio continues to grow, resulting in faster growth in networking than in computing.
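The sizing rules above can be sketched in a few lines (a minimal helper of ours, assuming K-port switches and full bisection bandwidth):

```python
# Fat-tree sizing rules quoted above: switch count and maximum end-node
# count as a function of switch port count K. Helper name is illustrative.
def fat_tree_size(k: int, layers: int):
    """Return (switch_count, max_end_nodes) for a 2- or 3-layer fat-tree."""
    if layers == 2:
        switches = 3 * k // 2          # K leaf + K/2 spine = 3K/2
        nodes = 2 * (k // 2) ** 2      # 2 * (K/2)^2
    elif layers == 3:
        switches = 5 * (k // 2) ** 2   # 5 * (K/2)^2
        nodes = 2 * (k // 2) ** 3      # 2 * (K/2)^3
    else:
        raise ValueError("only 2- or 3-layer trees handled here")
    return switches, nodes

# Reproducing the report's 8-port-switch example:
for layers in (2, 3):
    sw, xpu = fat_tree_size(8, layers)
    print(f"{layers}-layer: {sw} switches, {xpu} xPUs, ratio {sw / xpu:.3f}")
# 2-layer: 12 switches, 32 xPUs, ratio 0.375
# 3-layer: 80 switches, 128 xPUs, ratio 0.625
```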

EXHIBIT 4: 2-layer fat-tree topology network, interconnecting 32 end nodes through 12 switches with 8 ports per switch

[Diagram: 32 xPUs connected through 8 L1 switches and 4 L2 switches.]

Source: Bernstein analysis


EXHIBIT 5: 3-layer fat-tree topology network, interconnecting 128 end nodes through 80 switches with 8 ports per switch

Source: A. Minakhmetov, C. Ware, L. Iannone, "Hybrid and Optical Packet Switching Supporting Different Service Classes in Data Center Network," International IFIP Conference on Optical Network Design and Modeling (ONDM); Bernstein analysis

Beyond switch count, aggregate switch bandwidth within the network also scales faster than the total I/O bandwidth of the accelerators themselves. Since network hardware dollar value is largely anchored to switch bandwidth, this structural shift meaningfully expands the direct revenue opportunity for networking vendors. The migration from two-layer to three-layer fat-tree architectures increases the switch-to-xPU bandwidth ratio from roughly 3:1 to 5:1, and increases the transceiver-to-xPU bandwidth ratio from 4:1 to 6:1, further amplifying network spending as clusters grow in scale.

Taken together, these dynamics suggest that the AI networking hardware TAM is likely to expand at a faster pace than the TAM for AI accelerators over the coming decade.
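The bandwidth ratios above generalize to any layer count under the same assumptions (uniform port speed, full bisection); the helper below is our illustration, not a formula from the report:

```python
# Per-layer bandwidth bookkeeping behind the 3:1 -> 5:1 and 4:1 -> 6:1
# ratios. Function name is ours, for illustration.
def bandwidth_ratios(layers):
    """Switch-to-xPU and transceiver-to-xPU bandwidth ratios for an
    L-layer fat-tree, assuming every port runs at the same line rate."""
    # Each lower switch layer contributes 2x the xPU bandwidth (down + up
    # ports); the top layer contributes 1x (down ports only).
    switch_ratio = 2 * layers - 1
    # Each inter-device hop needs a transceiver at both ends: 2 per layer.
    transceiver_ratio = 2 * layers
    return switch_ratio, transceiver_ratio

print(bandwidth_ratios(2))  # (3, 4): the 3:1 and 4:1 ratios for 2 layers
print(bandwidth_ratios(3))  # (5, 6): the 5:1 and 6:1 ratios for 3 layers
```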


EXHIBIT 6: Migrating from a 2-layer switch architecture to 3-layer, the switch-to-xPU bandwidth ratio increases from 3:1 to 5:1, and the transceiver-to-xPU bandwidth ratio increases from 4:1 to 6:1

[Diagram: two xPUs at 400G each connected through a 2-layer (L1/L2) vs. a 3-layer (L1/L2/L3) switch hierarchy, with 400G links at every hop.]

Total switch bandwidth:
• Assume bandwidth per xPU/NIC is 400G.
• In a cluster with 2 xPUs in a 2-layer switch architecture, total switch bandwidth is 400G × 6 = 2,400G.
• In a cluster with 2 xPUs in a 3-layer switch architecture, total switch bandwidth is 400G × 10 = 4,000G.
• Hence, the migration from 2-layer to 3-layer fat-tree architectures increases the switch-to-xPU bandwidth ratio from 3:1 to 5:1.

Total transceiver bandwidth:
• 4 optical fibers with 8 pluggable modules in a 2-layer architecture.
• 6 optical fibers with 12 pluggable modules in a 3-layer architecture.
• Hence, the migration from 2-layer to 3-layer architecture increases the transceiver-to-xPU bandwidth ratio from 4:1 to 6:1.

Source: company reports, Bernstein analysis


ARCHITECTURE OF AIDC NETWORK

THREE CONNECTIVITY LAYERS IN AIDC NETWORK

The architecture of an AI data center (AIDC) network can be organized into three primary connectivity layers: Datacenter-to-Datacenter connectivity, xPU-to-xPU connectivity, and CPU-centric connectivity. Each layer differs markedly in physical reach, performance requirements, and its functional importance in enabling large-scale AI training clusters.

At the broadest scope, Datacenter-to-Datacenter (DC-to-DC) connectivity, often referred to in the industry as the Frontend network, or scale-across for AIDC, links multiple campuses or zones through long-haul or metro-scale optics. DC-to-DC connectivity supports multi-region workload distribution, data replication, and communication with end users. These links rely on coherent DWDM optics and tolerate higher latency, as most traffic consists of inter-region communication rather than latency-sensitive training operations. Distances can be tens to hundreds of kilometers; thus, optical fiber is required, with bandwidth per wavelength ranging from 400G to 800G and scaling to 1.6T.

The second layer, xPU-to-xPU connectivity, is the most strategically important layer in the AI era. This layer tightly connects GPUs, TPUs, NPUs, or other accelerators across multiple physical scopes: within a compute tray, intra-rack, or increasingly across adjacent racks. It directly determines the efficiency and scale at which AI models can be trained. As workloads grow more complex and require tighter synchronization across a larger number of accelerators, networking technologies at this layer are evolving rapidly. The industry now categorizes xPU connectivity into three sub-layers, inherited from NVIDIA's terminology of Scale-Up and Scale-Out domains:

• Intra-Tray connectivity connects accelerators directly within a server or multi-accelerator module. It carries the most latency-sensitive and bandwidth-intensive communication in distributed training, such as fine-grained tensor operations. Vendors therefore deploy customized, AI-native protocols (NVIDIA's NVLink, AMD's UALink, and Google's ICI) to deliver multi-terabyte-per-second bidirectional throughput at sub-microsecond latency. Intra-Tray connectivity is the core of the Scale-Up domain, and its efficiency is often the single biggest determinant of overall model-training performance.

• Tray-to-Tray connectivity links accelerator trays within a rack or, increasingly, across adjacent racks. This layer historically faced physical barriers because copper cables and thermal constraints limited reach to a single server enclosure. However, the adoption of short-reach optics, active electrical cables (AEC), and advances in high-quality direct-attached cable (DAC) materials have allowed this intra-pod network to extend beyond the rack without compromising bandwidth or latency. The ability to stretch Scale-Up connectivity across multiple racks is a major architectural breakthrough, enabling much larger pod sizes and pushing the performance frontier of AIDC designs.

• Rack-to-Rack connectivity forms the Scale-Out network that links multiple racks or pods into a coherent training cluster. This layer relies almost entirely on optical transceivers operating at 400G, 800G, and soon 1.6T speeds, and employs hierarchical topologies such as 2-layer and 3-layer fat trees. Its primary challenges include congestion control, cost efficiency, and the coordination of up to tens of thousands of accelerators. The performance of this layer directly influences overall training throughput, job completion times, and data-parallel scaling efficiency.

Lastly, the bottom layer consists of CPU-centric connectivity. This layer coordinates communication between CPUs, accelerators, NICs, SSDs, and other peripherals, and is widely adopted in both general-purpose servers and AI servers. While it is the shortest-reach layer physically, its role in memory coherence, device enumeration, and CPU-GPU coordination is foundational to the entire system. Peripherals are connected to the CPU through PCB traces over distances of centimeters, with extremely low latency of only nanoseconds.
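The three layers described above can be summarized side by side (the structure and field names are ours; the figures are those quoted in the text):

```python
# A compact restatement of the three AIDC connectivity layers, for
# side-by-side comparison. Illustrative only; figures come from the text.
layers = {
    "DC-to-DC (Scale Across)": {
        "reach": "tens to hundreds of km (coherent DWDM optics)",
        "bandwidth": "400G-800G per wavelength, scaling to 1.6T",
        "latency": "highest; tolerated by inter-region traffic",
    },
    "xPU-to-xPU (Scale Up / Scale Out)": {
        "reach": "intra-tray, intra-rack, and increasingly across racks",
        "bandwidth": "multi-TB/s bidirectional in the Scale-Up domain",
        "latency": "sub-microsecond for intra-tray links",
    },
    "CPU-centric": {
        "reach": "centimeters (PCB traces to the CPU)",
        "bandwidth": "lanes to xPUs, NICs, and SSDs within a tray",
        "latency": "nanoseconds",
    },
}
for name, attrs in layers.items():
    print(f"{name}: {attrs['reach']}; latency {attrs['latency']}")
```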


EXHIBIT 7: The architecture of an AIDC network can be organized into three primary connectivity layers: Datacenter-to-Datacenter connectivity, xPU-to-xPU connectivity, and CPU-centric connectivity

DC-to-DC connectivity / Scale Across:
• Connects the LAN (local area network) of the local datacenter to the outside world through the WAN (wide area network), including other datacenters, end users, and clients' devices.

xPU-to-xPU connectivity:
• Intra-tray connectivity / Scale Up: links xPUs within a compute tray at ultra-high bandwidth.
• Tray-to-Tray connectivity / Scale Up: vertically connects xPUs across compute trays within the same rack to form a pod.
• Rack-to-Rack connectivity / Scale Out: horizontally connects many racks/servers together via fat-tree topology to form a cluster.

CPU-centric connectivity:
• Interconnects the CPU and peripheral devices, such as xPUs, NICs, and SSDs, within a tray.

[Diagram: racks 1…N, each with compute trays 1…M containing a CPU, SSDs, xPUs, and NICs, linked by switches and a frontend network/WAN.]

Source: company reports, Bernstein analysis

EXHIBIT 8: Aerial view of AIDC networking. "Scale-up" is narrowly defined in this chart; in practice, scale-up connectivity now extends across adjacent racks.

Source: Ciena website, Bernstein analysis


EXHIBIT 9: Connectivity protocol solutions for each layer

AIDC connectivity layer                            | Global                                          | China
DC-to-DC (Scale Across)                            | Ethernet                                        | Ethernet
xPU-to-xPU: Intra-tray / Tray-to-Tray (Scale Up)   | NVLink, SUE, UALink, PCIe, Infinity Fabric, ICI | Ethernet upgrade, UALink, PCIe, UB
xPU-to-xPU: Rack-to-Rack (Scale Out)               | InfiniBand, Ethernet                            | Ethernet, UB
CPU-centric                                        | NVLink C2C, PCIe                                | PCIe, UB

Source: Bernstein analysis

PROTOCOLS APPLICABLE TO EACH CONNECTIVITY LAYER

The landscape of networking protocols in AIDC infrastructure remains far from converged, particularly in the Scale-Up and Scale-Out domains, where performance, cost, and vendor strategies diverge significantly. A wide range of technologies, including NVLink, InfiniBand, Ethernet, UALink, PCIe, CXL, and Huawei's UB protocol, exist because each one optimizes a different combination of bandwidth, latency, openness, and cost structure. CSPs selectively adopt these protocols to match their architectural priorities and budget considerations.

NVLink represents the industry gold standard for Scale-Up networking. Designed natively for NVIDIA GPUs, NVLink provides extremely high bandwidth and ultra-low latency, delivering unmatched performance for intra-pod communication. However, NVLink is proprietary and commands a substantial cost premium, which has raised concerns among hyperscalers seeking to mitigate vendor dependence.

Ethernet is the dominant protocol for Scale-Out connectivity in gen
