5G-A Ignites the Three-Types of New Intelligent Services

Version: V1.0.0
Deliverable Type: Procedural Document / Working Document
Confidential Level: Open to GTI Operator Members / Open to GTI Partners / Open to Public
Working Group: 5G Technology and Product Program
Source members: China Mobile, Huawei, Leju Robotics
Support members: ZTE, Nokia, CICT Mobile, DEEPRobotics, HONOR, Qualcomm, MTK, Fibocom, TD Tech, ASR, Quectel, UNISOC
Last Edit Date: 14-06-2025
Approval Date:

Confidentiality: This document may contain information that is confidential, and access to this document is restricted to the persons listed in the Confidential Level. This document may not be used, disclosed, or reproduced, in whole or in part, without the prior written authorization of GTI, and those so authorized may only use this document for the purpose consistent with the authorization. GTI disclaims any liability for the accuracy, completeness, or timeliness of the information contained in this document. The information contained in this document may be subject to change without prior notice.

Document History
Date: 06-14-2025 | Meeting#: | Version#: V1.0.0 | Revision Contents:

Content

Introduction ... 1
Chapter 1 Synergistic Progression of Technology Revolution & Mobile Communication ... 1
1.1. Evolutionary Journey of Mobile Communication Services ... 1
1.2. The Technological Revolution and the Synergistic Evolution of Mobile Communications ... 3
Chapter 2 Typical Scenarios and Requirements ... 5
2.1. Overview of New Information Consumption Services ... 5
2.1.1. The Three-Types of New Intelligent Services Incubated by Mobile Communications ... 5
2.1.2. Development Drivers and Policies of New Information Consumption Services ... 9
2.2. Typical Scenarios and Requirements of Intelligent Robots ... 12
2.2.1. Typical Scenarios ... 12
2.2.2. The Workflow ... 14
2.2.3. Service Requirements ... 16
2.3. Typical Scenarios and Requirements of AI Agent ... 20
2.3.1. Introduction of AI Agent ... 20
2.3.2. Typical Scenarios ... 22
2.3.3. The Workflow ... 23
2.3.4. Service Requirements ... 27
2.4. Typical Scenarios and Requirements of AI Glasses ... 32
2.4.1. Typical Scenarios ... 32
2.4.2. The Workflow ... 33
2.4.3. Service Requirements ... 34
2.5. Typical Scenarios and Requirements of Intelligent Connected Vehicles ... 37
2.5.1. Typical Scenarios ... 37
2.5.2. The Workflow ... 38
2.5.3. Service Requirement ... 41
Chapter 3 5G-A Empowers New Information Consumption Services ... 43
3.1. 5G-A Technology Scope ... 43
3.2. Enhanced Connectivity Guarantee ... 44
3.3. Computing Power Resources Opening and Sharing ... 51
3.4. Data Acquisition and Storage ... 54
Chapter 4 Summary and Outlook ... 55
Appendix: The Theoretical Analysis of Embodied Intelligence Service Requirement ... 58

Introduction

At present, the world is accelerating towards a new era of digitization and intelligence in which everything is connected, and the pattern of information services is ushering in structural changes driven by 5G-Advanced (5G-A) technologies. As a key stage in the evolution from 5G to 6G, 5G-A is becoming the core engine of information services through a tenfold upgrade of technical capabilities and multi-dimensional integration and innovation. This white paper focuses on the three-types of new intelligent services, namely intelligent robots, intelligent devices, and intelligent connected vehicles, and systematically compiles their typical scenario requirements, with the aim of providing forward-looking guidance for the intelligent upgrading of the industry. In addition, this white paper also focuses on the scale development of the three-types of new
intelligent services enabled by 5G-A. Through standardization, cross-area collaboration, and ecological co-construction, 5G-A will accelerate the transition from capability enhancement to value creation, injecting value into the digital economy. In the future, the in-depth convergence of 5G-A and AI will further unleash the potential of "Human-Vehicles-Device" interconnection and open a new era of intelligent society.

Chapter 1 Synergistic Progression of Technology Revolution & Mobile Communication

1.1. Evolutionary Journey of Mobile Communication Services

Communication is an old but modern word. Before the 19th century, people communicated mainly by letter. In the 19th century, with the enlightenment and rapid development of communication theories and basic sciences, communication technology witnessed great progress and entered the era of mobile communication.

In the 1980s, the First Generation analog mobile communication system (1G), based on the concept of "cellular", achieved large-scale commercialization, using Frequency Division Multiple Access (FDMA) technology to realize analog modulation of voice signals. The Second Generation digital mobile communication system (2G) is based on Time Division Multiple Access (TDMA) technology to transmit voice and low-speed data services and realize global roaming. With the further development of data and multimedia communications, the Third Generation mobile communication system (3G) came into being, using Code Division Multiple Access (CDMA) technology
to enhance the security of data communications, not only providing high-quality voice services and simultaneous transmission of voice and data, but also supporting multimedia services and access to the mobile Internet. 3G realized the goal of mobile broadband multimedia communication, but that has not stopped people from further research on communication technology. The 4th Generation (4G) mobile communication technology is represented by the 3GPP LTE/LTE-A system, which introduces Orthogonal Frequency Division Multiplexing (OFDM), Multi-Input Multi-Output (MIMO), and other key technologies. In 4G, benefiting from the improvement in network performance (such as rate and delay), IP-based multimedia communication services and the integration of mobile communication and WLAN services are realized, bringing new experiences to users. On this basis, along with the vigorous development and innovation of upper-layer applications (including the needs of individual consumers and industry customers), the mobile communication network needs to provide diversified, flexible, and customizable services, and rate is no longer the only pursuit. The 5th Generation (5G) mobile communication technology contains three typical application scenarios, enhanced Mobile Broadband (eMBB), ultra-Reliable Low Latency Communications (uRLLC), and massive Machine Type Communication (mMTC), which put forward brand-new requirements on rate, delay, reliability, connection scale, and other capabilities.

Table 1 Summary of the evolution of mobile communication networks and services

1G - Network capabilities: /; Services and applications: analog voice services
2G - Network capabilities: rate 10-100 kbps; Services and applications: digital voice, text
3G - Network capabilities: rate (DL) 3.6 Mbps, (UL) 384 kbps; Services and applications: high-quality voice services, multimedia services, mobile Internet services
4G - Network capabilities: rate 10 Mbps-1 Gbps, radio delay 10 ms, traffic density 0.1-1 Mbps/m², connection density 10⁵/km², mobility 350 km/h; Services and applications: IP-based multimedia communication services, VoLTE, IoT
5G - Network capabilities: rate 100 Mbps-10 Gbps, radio delay at ms level, traffic density 10 Mbps/m², connection density 10⁶/km², mobility 500 km/h; Services and applications: eMBB (HD videos, VR/AR), uRLLC (verticals such as V2X and smart factory), mMTC (AIoT such as smart logistics and smart cities)

Throughout the development of mobile communications, the network, terminals, and services are interdependent and promote each other: the evolution of the network promotes the rapid development of terminals and services, incubating more new user needs, while users' growing demand for a better life injects driving force into the continuous development of mobile communications technology.

1.2. The Technological Revolution and the Synergistic Evolution of Mobile Communications

The technological revolution and mobile communication technology are reshaping the global industrial landscape through a spiral of synergistic evolution. Scientific and technological changes centered on artificial intelligence (AI), the Internet of Things (IoT), and cloud computing are driving service forms toward deep intelligence. AI models break through the boundaries of cognition and decision-making, equipping machines with human-like capabilities, and mobile communication technology (5G/5G-A) is becoming the "neural network" of intelligent transformation through the characteristics
of ultra-low latency, ultra-large bandwidth, and ubiquitous connectivity. In the future, service development will be characterized by intelligence-infused applications, 3D-enabled content, and cloud-migrated services.

1. Intelligence-infused applications

Breakthroughs in AI technology and in AI-communication network convergence give rise to "AI+" application scenarios. AI applications with powerful data processing capabilities and deep learning technology can more accurately understand user needs and behavior, so as to provide more personalized, intelligent services. The integration of technologies across multiple fields has promoted the development of mobile AI services and incubated three new types of information consumption services: embodied intelligent robots, intelligent devices, and intelligent connected vehicles. With the wide application of emerging AI services, the demand for connectivity and computing power has also shown significant growth.

2. 3D-enabled content

With the continuous maturing of 3D display technology and content generation technology, service content will gradually shift from 2D to 3D. 3D applications offer a richer experience for users, thanks to their unique immersive, high-quality display and interactive capabilities. At present, the main forms are XR, glasses-free 3D, etc. This shift from 2D to 3D not only helps to improve the efficiency of users' work and life, but also signals that future entertainment applications will pay more attention to user experience and interactivity, promoting the innovative development of the whole industry.

3. Cloud-migrated services

With the continuous maturation and popularization of cloud computing technology, cloud-based services are developing rapidly alongside the widespread deployment of 5G networks. The bandwidth and latency guarantees provide strong support for cloud-based services, enabling users to access them anytime and anywhere and realizing truly mobile work and life. In addition, the development of cloud-based services not only reduces operation and maintenance costs and improves resource utilization efficiency, but also enables services to respond to market changes more flexibly. In the future, cloud-based services will be applied in more fields, bringing users more convenient, efficient, and rich experiences.

Chapter 2 Typical Scenarios and Requirements

2.1. Overview of New Information
Consumption Services

2.1.1. The Three-Types of New Intelligent Services Incubated by Mobile Communications

1. Embodied intelligent robots: breakthroughs in embodied intelligence

Embodied AI realizes a qualitative change in the "Perception-Decision-Motion" closed loop by empowering robots with the ability to interact with real-time environments, promoting the transformation of robots from a single tool into an autonomous collaborative partner with an "embodied brain", and realizing autonomous learning and complex environment interaction. The technological breakthroughs of embodied intelligent robots focus on three major directions. The first is the integration of multi-modal perception, building human-like senses with the help of LiDAR, visual sensors, and tactile feedback; for example, Boston Dynamics robots dynamically adapt to complex terrain, and China Mobile's "Body-Brain" for robot training, based on the Jiutian Big Model, can integrate multi-source data to improve scene adaptability. The second is autonomous learning and generalization ability: through deep reinforcement learning and simulation training, robots can quickly transfer skills to real tasks, such as Tesla Optimus learning housework operations. The third is the upgrading of human-machine emotional interaction, breaking through the traditional command mode; for instance, an eldercare robot can sense the user's emotions and provide proactive health support.

The embodied intelligent robot architecture includes the robot "brain" for planning and decision-making, the robot "cerebellum" for motion control, and the robot "body". There are two types of architecture models: the hierarchical decision-making model and the end-to-end model. The hierarchical decision-making model breaks the task down into different layers, trains each layer with its own neural network, and then combines them in a pipeline. There are data transfer, interaction, and coordination needs between the different layers of the hierarchical decision-making model, with a relatively smaller need for training data and a stronger scene generalization ability. Take the robot in Figure 1 as an example: the robot brain accesses an AI multi-modal big model and provides visual inference and language understanding, the robot cerebellum performs motion control and generates torso sensing and execution, and the robot body accepts motion commands from the neural network policy for control execution. The end-to-end model combines the robot brain and the robot cerebellum into one, directly converting task objectives into control signals through a single neural network and realizing a seamless connection from input to output; it can exploit abundant data and computing power to support complex end-to-end training; however, the amount of data required is estimated to be at the level of hundreds of billions, and the speed of reasoning and response is slow. At present, most robotics companies employ the hierarchical decision-making model, such as Figure, AgiBot, Leju, etc., since multi-layer industrial collaboration reduces product development and application cycles. Tesla and Google employ the end-to-end model, which has lower development complexity and saves the transmission and coordination costs brought about by multi-layer collaboration.

Figure 1 The embodied intelligent robot architecture

For the applications, embodied intelligent robots are extending from basic services to high-value scenarios. In family scenarios, AI dietitian robots achieve personalized health
management. In industry, multi-robot collaborative assembly improves automobile manufacturing efficiency. In dangerous environments, robots replace humans in high-risk tasks, such as overhauling nuclear power plants. In the future, with 5G-A/6G networks optimizing delay and quantum computing accelerating decision-making, 2030 is expected to usher in the era of ubiquitous intelligent agents, in which robots will become the core carrier of the convergence of the digital economy and the real economy.

2. Intelligent devices: accelerating ubiquitous intelligence

As the core entrance of information consumption, intelligent devices have evolved from communication tools into AI-driven multi-modal interaction platforms. Ubiquitous intelligence aims at seamless convergence and active service, relying on the technological integration and ecological synergy of intelligent devices to weave intelligent services seamlessly into human life and realize the deep interconnection and active response of Human-Machine-Device-Environment. As a carrier, intelligent devices are accelerating this process through technological innovation and scene expansion. At the technological level, the end-side computing leap (AI cell phone NPU computing power up to 100 TOPS) and multi-modal interaction upgrades (AR navigation, intent understanding) support the localized operation of large models with tens of billions of parameters. The end-cloud synergistic architecture (Hongmeng OS cross-device latency of less than 0.5 seconds) and AI-native operating systems reconstruct the logic of the service, realizing the transformation from a "human-controlled device" to "equipment predicting demand". For the applications, in the ToC market, intelligent devices have entered our daily life.
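The end-side figures above (NPUs around 100 TOPS hosting models with tens of billions of parameters) can be sanity-checked with a back-of-envelope weight-memory estimate: a model's weight footprint is roughly its parameter count times the bytes per parameter after quantization. The sketch below is illustrative only; the model sizes and bit widths are assumptions, not figures from this paper.

```python
# Illustrative back-of-envelope estimate (assumed model sizes and bit widths):
# weight memory of a quantized model = parameter count x bits per parameter / 8.

def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

for params_b in (7, 13, 30):        # billions of parameters (assumed sizes)
    for bits in (16, 8, 4):         # FP16, INT8, INT4
        gb = weight_memory_gb(params_b * 1e9, bits)
        print(f"{params_b}B params @ {bits}-bit: ~{gb:.1f} GB")
```

At 4-bit quantization a 13B-parameter model needs roughly 6.5 GB for weights alone, which is why aggressive quantization is a precondition for running tens-of-billions-parameter models locally on a phone.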
For example, whole-house smart devices are actively linked to handle security and energy consumption, and the AI cell phone has become an intelligent personal assistant. In ToB scenarios, industry AI devices optimize the production line through digital twins, and the smart city promotes vehicle-road-cloud synergy to reduce congestion. Moreover, emerging applications, such as low-altitude UAV cluster scheduling and metaverse XR, are building an integrated virtual-reality experience and continuing to expand the boundaries.

The accelerated realization of ubiquitous intelligence is essentially the triple resonance of technology, scene, and ecology. From the breakthrough of end-side computing power to whole-scene synergy, the intelligent device is evolving from an isolated device into a ubiquitous intelligent agent, reconfiguring the interaction mode and productivity form of human society. In the future, with the integration of 6G communication, quantum computing, and other technologies, intelligent devices will be more deeply integrated into the physical world, reshape the social operation paradigm, and become the core hub for the convergence of the digital economy and the real economy.
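The end-cloud synergy described above ultimately comes down to a latency trade-off: offloading a task pays off only when the network round trip plus cloud compute time beats local on-device execution. A minimal sketch of that decision rule follows; all timing numbers are assumed for illustration, not measurements from this paper.

```python
# Hedged sketch of an end-cloud offload decision (all numbers are illustrative):
# offload only if network round trip + cloud compute beats local compute.

def should_offload(local_ms: float, rtt_ms: float, cloud_ms: float) -> bool:
    """Return True when cloud execution (RTT + cloud compute) is faster."""
    return rtt_ms + cloud_ms < local_ms

# Heavy multi-modal inference: slow locally, fast in the cloud -> offload.
print(should_offload(local_ms=900, rtt_ms=20, cloud_ms=150))  # True
# Lightweight task: local execution already beats the round trip -> stay local.
print(should_offload(local_ms=80, rtt_ms=20, cloud_ms=150))   # False
```

In practice the decision also weighs energy, privacy, and link reliability, but this latency inequality is the core of why sub-0.5-second cross-device paths matter for "equipment predicting demand".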
3. Intelligent connected vehicles: reconstructing the traffic ecology

Intelligent connected vehicles take the deep integration of vehicle, road, cloud, network, and map as their core and, through technological breakthroughs and ecological restructuring, upgrade the traditional travel mode into an efficient, safe, and sustainable intelligent transportation system. At the technological level, multi-modal sensing and decision-making (LiDAR plus high-precision maps to realize centimeter-level environment recognition) and vehicle-road-cloud synergy (5G-A/6G networks supporting millisecond-level interactions) break down data silos, and the automotive software system (Tesla FSD, Hongmeng OS cross-end synergy) promotes function iteration and ecological openness. For the applications, the Robotaxi per-kilometer cost has been reduced to RMB 1.8, and the intelligent cockpit reshapes the driving experience through AR-HUD and emotional interaction.

Intelligent connected vehicles are not only the product of technology integration but also the engine of travel ecology reconstruction. Through vehicle-road-cloud collaboration, data-driven operation, and ecological openness, they are promoting the transformation of the transportation system from "people adapting to vehicles" to "vehicles serving people". In the future, with the integration of 6G communication, quantum computing, and other technologies, intelligent connected vehicles will be deeply embedded in the smart city and become a core node in the synergistic development of the digital economy and the real economy.

2.1.2. Development Drivers and Policies of New Information Consumption Services

As a new growth point in the era of the digital economy, the development of emerging businesses represented by the "New Three Kinds" of information consumption has been strongly supported by Chinese national and local policies and accelerated by multiple driving forces, such as technological breakthroughs, market demand, and industry chain synergy.

1. National policy support

China's national policies have provided all-round support for the "three new types" of information consumption through a combination of top-level design, financial incentives, scenario opening, and commercial support. Intelligent terminals, robots, and intelligent connected vehicles are accelerating towards scale and industrialization under the dual drive of technological breakthroughs and market demand. With continued policy support and steady growth in market size, China's intelligent emerging businesses are about to enter a period of rapid development.

(1) Top-level design and strategic planning

National strategic positioning: The Chinese government's 2025 work report explicitly lists intelligent connected new energy vehicles, AI cell phones and computers, and intelligent robots as new-generation intelligent terminals, positioning them as the "three new things" of information consumption, aiming at releasing the potential of consumption and fostering new quality productive forces.

Chinese local policy support: Local governments in Shenzhen, Chongqing, and other places have issued special documents to support the research and development of embodied intelligent robots. For example, the "Shenzhen Action Plan for Technological Innovation and
Industry Development of Embodied Intelligent Robotics (2025-2027)", issued in 2025, mentions a focus on supporting research and development of embodied intelligent robotic core components, AI chips, bionic manipulators, and other key core technologies.

(2) Application scenarios and commercial support

Scenario-based trialing and commercialization support: Many Chinese local governments promote trials and demonstrations of the three-types of new intelligent services in logistics, industry, transportation, and other scenarios.

Government procurement and benchmarking demonstration: Many Chinese local governments have included intelligent robots and intelligent terminals in their government procurement catalogs; for example, Hefei plans to promote intelligent robotic solutions in education, healthcare, and other fields.

2. Industry development drivers

(1) Technology integration and innovation breakthroughs: The deep integration of 5G networks, AI big models, cloud computing, and other multi-disciplinary technologies promotes the upgrading of intelligent devices towards low power consumption and high computing power; for example, end-side big models significantly improve the natural interaction capability of cell phones and wearable devices. Intelligent connected vehicles rely on the development of vehicle-grade chips and the integration of vehicle-road-cloud architecture to realize the leap from single-vehicle intelligence to full-domain collaboration. Intelligent robots benefit from the technology iteration of the "Perception-Decision-Control" chain, combined with multi-modal AI breakthroughs, in their ability to adapt to complex scenes. Moreover, collaborative innovation across the upstream and downstream of the industry chain accelerates the commercialization of technology, while the cloud collaborative architecture further reduces the computing threshold, giving rise to the scale application of lightweight devices and high-precision services and forming a technology-driven multiplier effect.

(2) Market demand and consumption upgrading: The first driver is aging and the demand for intelligence: under the aging trend, the demand for home robots and intelligent health devices is surging, and the market expansion of intelligent connected vehicles is accelerated by the rising penetration of new energy vehicles. The second is the upgrading of consumer electronics, which is driving device market iteration: cell phones are transforming into AI phones, and wearables are extending to more scenarios such as health monitoring.

(3) Industry chain synergy and ecological construction: The first is the transformation of the role of operators. From "pipeline provider" to "ecological
builder", operators are gradually building the industrial ecosystem through pan-alliance integration with terminal vendors and chip companies. The second is the effect of regional industry clusters. Local governments are actively laying out industrial cluster demonstration areas; for example, Shenzhen is creating a "4+3" intelligent robotics industry layout, and Hangzhou Qiantang District and Ningbo Qianwan New District form the Hangzhou Bay Intelligent Connected Vehicle Cluster, reducing the cost of enterprise collaboration.

2.2. Typical Scenarios and Requirements of Intelligent Robots

2.2.1. Typical Scenarios

Through the in-depth integration of the physical body and intelligent decision-making, embodied intelligent robots are accelerating their penetration into multiple consumer and industry scenarios and reconfiguring the way humans live and produce.

1. Scenarios for consumers

(1) Life housekeeper: Humanoid robots with dexterous hands and upright walking ability can complete household chores such as clothing folding, cleaning, cooking, carrying, and delivering. In addition, bionic interactive robots can provide emotional support through facial expressions and voice interaction, and can be used for health management and emotional comfort in retirement communities.

(2) Intelligent toys: The embodied intelligent robot is deeply integrated into the field of intelligent toys, reconfiguring children's growth experience through programming education, emotional accompaniment, and robot performance and entertainment, and accelerating the integration of the IP economy (such as the materialization of anime characters) and the upgrading of affective computing, promoting toys from "short-term entertainment" to "long-term companionship" and making them a new carrier of family life and education.

2. Scenarios for industries

(1) Industrial manufacturing: In automotive production, electronics manufacturing, and other industrial fields, embodied intelligent robots can replace manual labor on the production line for sorting and handling, safety inspection, precision operation, quality control, etc., and are suitable for carrying heavy objects, manufacturing, and other tasks in complex, dangerous industrial environments, enhancing production efficiency and safety.

(2) Transmission and distribution for logistics: Through the integration of LiDAR and deep vision, the embodied intelligent robot can autonomously plan obstacle-avoidance paths, realize intelligent sorting, handling, and transportation of goods, and improve the efficiency of warehousing.

(3) Park inspection: The embodied intelligent robot can realize all-weather autonomous
inspection through multi-modal perception (LiDAR, infrared thermal imaging, etc.) and edge computing, reducing risks to personnel. In addition, it can be combined with self-driving vehicles to expand the robot's scope of action, with dynamic obstacle avoidance and real-time monitoring capabilities, becoming the core node of intelligent park operation and maintenance.

(4) Medical surgery: Surgical robots integrate medical models and high-precision robotic arms, intelligently generating surgical navigation recommendations by analyzing surgical imaging data to promote minimally invasive and precise operation innovation, and
in the future they will be combined with AI to realize multi-disciplinary collaborative decision-making and accelerate the construction of an intelligent medical ecosystem.

(5) Emergency rescue: Embodied intelligent robots can break through physical limitations, including temperature difference, poisonous gas, and
other extreme environmental limitations, replacing manual operation in community patrol, detonation, reconnaissance, data collection, etc., in dangerous scenarios such as firefighting, nuclear power plant inspection, and chemical combustion detection. Meanwhile, they can be applied to earthquake rubble search and rescue, fire rescue, and other critical scenes, shortening the golden rescue time and enhancing the safety of high-risk tasks.

(6) Welcome guide: The robot optimizes the service experience through multi-modal interaction (voice recognition, emotion recognition), providing users with problem consultation, route guidance, merchandising, and other services, reshaping both the efficiency and the warmth of public-service interaction.

2.2.2. The Workflow

The workflow of embodied intelligence can be divided into four parts: environment perception, decision analysis, motion generation, and motion execution. The workflow achieves learning and adjustment optimization through interaction with the environment to complete task planning and command execution.

Figure 2 The workflow of embodied intelligent robots

Among them:

1. Environment perception: mainly performed by the robot body (such as limbs, hands, feet, skin, etc.), including object and scene perception in the external environment, as well as perception of the user's behavior and expressions. The perception results are used for data acquisition and serve as input for subsequent decision analysis.

2. Decision analysis: mainly performed by the robot brain, which analyzes data and produces decision-making results in three categories: task planning (combining atomic skills to generate motion commands), environment understanding and analysis (analyzing real environment information in route planning/navigation scenarios), and intelligent Q&A (human-robot Q&A interaction).

3. Motion generation: mainly executed by the robot cerebellum, which generates motion commands based on the decision-making results, including commands based on fixed rules (dance choreography), commands based on interactive learning (adjusting hand posture to complete item grasping), and commands based on the big model (question answering).

4. Motion execution: mainly completed by the robot body, including motion or Q&A according to the commands, and interaction with the environment and the user for feedback before the next round of the process.

As the robot brain and the robot cerebellum of the embodied intelligent robot need high computing power for decision analysis and command generation, and to minimize the complexity and cost of robot hardware deployment while enhancing the flexibility of the robot itself, detaching the robot brain and cerebellum from the robot body and deploying them remotely to improve their capability will be the future development trend. Based on the hierarchical decision-making model, the current deployment method offers two options: end-cloud collaboration and cloud-edge-end collaboration. End-cloud collaboration is the current mainstream deployment option. In the future, for multi-robot collaboration scenarios, cloud-edge-end collaboration will become one of the possible evolution directions. By deploying the motion model of the robot cerebellum in edge nodes (e.g., base stations, edge servers, etc.) and making full use of their computing power resources, the communication and work efficiency of embodied intelligent robots can be effectively improved.

Table 2 The comparison between end-cloud collaboration and cloud-edge-end collaboration

End-cloud collaboration:
- Cloud: robot brain (human-robot interaction, intent understanding, task planning, etc.)
- Edge: /
- End: robot cerebellum (generation of motion commands and trajectory); robot body (motion execution)
- Pros: suitable for single-robot scenarios; fast deployment and scenario generalization
- Cons: demanding manufacturing precision and stringent cost requirements for robot bodies

Cloud-edge-end collaboration:
- Cloud: robot brain (human-robot interaction, intent understanding, task planning, etc.)
- Edge: robot cerebellum (generation of motion commands and trajectory)
- End: robot body (motion execution)
- Pros: suitable for multi-robot networking scenarios; the motion-model inference of multiple robots can share edge computing resources at the same time, effectively reducing the power consumption of the robots themselves
- Cons: multi-robot tasks place high demands on network connectivity and computing power

Industry support: end-cloud collaboration is the main option for the industry
106、 currently/2.2.3.Service RequirementsTypical services of embodied intelligence include real-time interaction,designedmotion,and remote operation,in which the robot obtains recognizable multi-modalcommands to complete the execution of motions and interactions.Designed motiondoes not require network t
107、ransmission of data,whereas real-time interaction andremote operation require the cloud server to parse the commands,and thereforerequire mobile communication networks to transmit the commands and data.Figure 3 The service flow of embodied intelligent robots1.Requirements for network connection5G-A
Embodied intelligent robots can be applied to different ToC and ToB scenarios, from the daily life of the public to industrial production. According to the analysis of the service flow (shown in Figure 3), the downlink transmission data of embodied intelligent robots are mostly motion commands, while the uplink transmission data differ slightly according to the type of demand. Real-time interaction, a rigid demand of embodied intelligent robot services that requires high network performance (rate, delay, etc.), includes voice Q&A, environment analysis, etc. Voice Q&A involves audio acquisition, text conversion, text transmission, audio parsing, etc. Environment analysis refers to human-robot interaction Q&A after recognizing the physical world, objects, etc. In addition to the voice Q&A-related tasks, it involves capturing, encoding, transmitting, and parsing images or videos. Remote operation requires the transmission, decoding, and rendering of images captured by remote headsets or cameras, and therefore requires mobility and reliability in addition to rate and latency.

This white paper focuses on analyzing the latency and rate requirements placed on the network by embodied intelligent robots. The embodied-intelligence end-to-end delay consists of the segmented delays of each part. Currently, the response delay of a single task is 1-5 seconds, of which the large-model processing delay accounts for more than 50%.

Figure 4 The end-to-end delay analysis of the embodied intelligent robots

In the laboratory test environment, the test results for multi-modal Q&A are as follows, and are basically in line with the theoretical analysis.

(1) Rate test
Service model: transferring 3 images (720P, 400KB) per second
Test results: see Table 3

Table 3 The network test result of the embodied intelligent robot application

| Round-trip delay | Estimation of radio delay | UL frame-level rate |
| 2 s (feeling good) | UL: 1 s, DL: 180 ms | 3.25 Mbps |
| 1.2 s (feeling normal) | UL: 350 ms, DL: 30 ms | 9.4 Mbps |

Note: The real test proved that the delay of a single-picture analysis by the large model is about 800 ms, and the transmission delay of the backbone network in the laboratory environment is about 20 ms.

(2) Delay test
Service model: capture and store human and robot audio, analyze the intervals between two audio segments, and derive the round-trip response delay
Test result: the Q&A delay of the voice-interaction prototype is 2.15 seconds (an acceptable response time).

2. Requirements for computing power
(1) Home service robots: The home environment has random obstacles (such as pets and toys), light changes, etc., and the robots need to adapt flexibly, but the real-time requirements are low (hundreds of milliseconds).
(2) Industrial robots: Industrial scenarios (e.g., automotive welding, precise assembly) require the closed-loop control of robot motions and sensor feedback to be completed within microseconds (μs), with stringent real-time requirements.

The autonomous action and cooperative operation of intelligent robots in complex manufacturing scenarios rely on high-intensity AI computation, including environment perception (e.g., target recognition, obstacle detection, three-dimensional reconstruction), dynamic planning (e.g., path planning, task assignment), and intelligent decision-making (e.g., real-time strategy adjustment, motion prediction). Due to the power consumption, computing power, and weight constraints of the robot body, it is difficult for the robot itself to independently undertake high-intensity computational tasks.
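The rate and delay figures from the lab tests above can be cross-checked with simple arithmetic. A minimal sketch, assuming 1 KB = 1000 bytes and 8 bits per byte, and taking the measured segment delays (radio UL/DL, about 800 ms large-model processing, about 20 ms backbone) as inputs:

```python
# Cross-check of the embodied-robot lab figures reported above.
# Assumption (not from the white paper): 1 KB = 1000 bytes, 8 bits per byte.

IMAGES_PER_SECOND = 3
IMAGE_SIZE_KB = 400  # one 720P snapshot, per the test service model

ul_rate_mbps = IMAGES_PER_SECOND * IMAGE_SIZE_KB * 8 / 1000
print(f"required UL rate: {ul_rate_mbps:.1f} Mbps")  # ~9.6 Mbps vs. 9.4 Mbps measured

def round_trip_s(ul_s: float, dl_s: float,
                 model_s: float = 0.8, backbone_s: float = 0.02) -> float:
    """Round trip = UL radio + DL radio + large-model processing + backbone."""
    return ul_s + dl_s + model_s + backbone_s

print(round_trip_s(1.0, 0.18))   # ~2.0 s  ("feeling good" row of Table 3)
print(round_trip_s(0.35, 0.03))  # ~1.2 s  ("feeling normal" row of Table 3)
```

The frame-level rate adds protocol overhead on top of the raw payload rate, which is consistent with the measured 9.4 Mbps sitting close to the 9.6 Mbps payload estimate.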
Therefore, the embodied intelligent robots put forward a clear demand for offloading computing and AI processing to the network side:
(1) Edge-side AI computing requirements: Offloading the robot-brain functions to the edge requires that the edge nodes have strong AI computing capability for real-time processing of multi-channel high-resolution video streams and LiDAR point-cloud data, and for completing complex AI-model inference tasks.
(2) Fast AI inference: The edge needs to deploy a high-performance AI inference engine to support real-time inference of large-scale AI models. The delay needs to be controlled at the millisecond level to meet the needs of the robots' real-time decision-making.
(3) Dynamic update and training of AI models: Changes in the industrial production environment and the diversity of tasks require that the AI model can be updated quickly and iteratively according to field data, and the edge side needs online learning and model-updating capabilities to respond quickly to environmental changes and task requirements.

The network can exploit its advantage of being close to users and perceiving users, and make full use of the computing advantage of network edge nodes through the computing-power sharing of cloud-edge-end collaboration to provide low-latency and high-reliability computing services.

3. Requirements for data

The collection and storage of training data for embodied-intelligence large models is difficult; for example, the multi-modal dataset of Google's intelligent robots (RT-1, RT-2) required collecting millions of data samples from 160,000 tasks on 22 robots. There is therefore a need for data collection, storage, and sharing in the network.
(1) Data collection and aggregation: The edge network needs to receive sensor data uploaded by multiple robots in real time and quickly perform preprocessing and integration.
(2) Data preprocessing and compression: Edge nodes need to support data preprocessing, feature extraction, data compression, etc., to reduce the transmission and storage burden and improve the efficiency of subsequent AI calculations.
(3) Data storage and synchronization: The edge side needs data storage and synchronization mechanisms to ensure data consistency when multiple robots collaborate, and to avoid collaboration errors caused by data delay or desynchronization.

The network can provide data collection, storage, and sharing, and build intra-network data management to support data sharing among multiple embodied intelligent robots, realizing real-time analysis and processing of data in the vicinity and reducing data transmission and processing overhead.

2.3. Typical Scenarios and Requirements of AI Agent

2.3.1. Introduction of AI Agent

An AI agent is a software entity or system capable of autonomously perceiving, making decisions, and executing actions in order to achieve specific goals. It can gradually accomplish given goals through independent thinking and tool invocation.

The collaboration between humans and AI can be divided into three modes: AI embedding mode, AI copilot mode, and AI Agent mode. Compared to the first two modes, the AI Agent mode is more efficient and will be the main mode of collaboration between
humans and AI in the future.
1. AI Embedding Mode: Users communicate with AI through language, use prompts to set goals, and then AI assists users in achieving these goals. The role of AI is a tool for executing commands, while humans play the roles of decision makers and commanders.
2. AI Copilot Mode: Humans and AI participate in the workflow together, each playing their own role and complementing each other's abilities. AI intervenes in the workflow, from providing suggestions to assisting. The role of AI is a knowledgeable partner.
3. AI Agent Mode: Humans set goals and provide necessary resources (such as computing power), then AI independently undertakes most of the work, and finally humans supervise the process and evaluate the final results. In this mode, AI fully embodies the interactive, autonomous, and adaptive characteristics of intelligent agents acting as independent actors, while humans act more as supervisors and evaluators.

Figure 5 The development of AI Agent

Currently, multi-modal dialogue applications are a companion feature of the AI-native applications launched by OTT players. Mainstream AI-native applications, such as Alibaba's Tongyi, Zhipu AI's ChatGLM, and Doubao, already support vision-based multi-modal dialogue. Users can capture images with their mobile-phone cameras and have real-time conversations with AI. Nevertheless, the resolution of these applications is still relatively low, basically remaining at the 720P level.

Table 4 AI native applications supporting multi-modal real-time dialogue

| | Doubao | Zhipu AI | iFLYTEK SPARK | Alibaba Tongyi |
| Users | 118 Million | 8 Million | 8 Million | 5 Million |
| Resolution | 720P | 720P | 720P | 720P |

2.3.2. Typical Scenarios

The combination of AI Agent with different device forms can be widely applied in personal consumption and industry production, improving user experience and production efficiency. The typical application scenarios of AI agents include:
1. Mobile AI Assistant: AI mobile phones have reached the AI-native stage, with personal AI assistants as the core. This is the first typical application where AI Agent lands. After the user issues a voice command, the mobile AI assistant can complete tasks such as e-commerce shopping, ordering takeout, booking train tickets, and interacting with friends on social media.
2. Embodied Intelligence: AI Agent can interact with the environment through physical entities to perceive the environment, recognize information, make autonomous decisions, and take action. It is applied in scenarios such as life management, industrial manufacturing, and campus inspections.
3. Intelligent Customer Service: Based on multi-round dialogue management and intention recognition, AI Agent technology can automatically handle high-frequency scenarios such as bank wealth-management consultation and e-commerce after-sales disputes, improving service efficiency and enhancing user satisfaction.
4. Driving Assistant: AI Agent can combine real-time road-condition analysis and driving-behavior monitoring to achieve functions such as dangerous lane-change warnings and dynamic optimization of freight routes, reducing the incidence of traffic accidents.
5. Financial Assistant: AI Agent can use big-data correlation analysis and dynamic risk
modeling to complete high-value decisions such as intelligent stock-portfolio adjustment and credit anti-fraud detection, optimizing asset-allocation returns.
6. Marketing Assistant: AI Agent can rely on user-profile mining and generative AI to automatically generate trending social-media copy and predict advertising effectiveness, improving the return on investment of marketing.
7. Medical Assistant: AI Agent can integrate cross-modal medical data and diagnosis-and-treatment knowledge graphs, assist in CT-imaging lesion localization, and generate personalized medication plans, to improve clinical diagnosis efficiency and accuracy.

2.3.3. The Workflow

The workflow of AI Agent can be divided into three steps: perception and input, inference and decision-making, and execution and feedback. Using large models and memory data for decision inference, execution plans are generated and stored for the next decision execution.

Figure 6 The workflow of AI Agent

1. Perception and Input
(1) Intention understanding: With the help of a large model, accurately understand the user's questions and intentions.
(2) Problem analysis: Task decomposition, breaking the question down into several sub-questions.
2. Inference and Decision-making
(1) Parameter matching: Utilizing cloud and local memory to learn from and summarize previous problem-solving experience.
(2) Decision planning: Determining the solutions and steps and generating the order of execution-tool calling according to the solutions.
3. Execution and Feedback
(1) Execution call: Calling various tool components (such as third-party apps, native functions on mobile phones, etc.) to execute.
(2) Memory storage: Observing the user's feedback, adjusting execution in the next interaction, and storing the execution results for subsequent reasoning and decision-making.

The implementation of AI Agent mainly includes three solutions: device AI, cloud AI, and hybrid AI.
1. Device AI: Both data processing and model inference are completed on the device, with fast response speed, high privacy, and low network requirements, but weak capability to handle complex tasks and a low level of intelligence.
(1) Advantages:
Low-latency response: No need to upload data to the cloud and wait for processing results, providing instant feedback.
High privacy: All data processing is carried out locally.
Offline availability: Even if the phone has no network connection, the AI assistant can still work normally.
(2) Disadvantages:
Limited computing resources: The hardware performance of smartphones is relatively limited, making it difficult to run complex, large-scale AI models.
Difficulty in updating models: Due to limitations in mobile
phone storage space and processing power, updating local models requires a significant amount of time and traffic.
2. Cloud AI: All computing and data storage are placed in the cloud, utilizing the powerful computing resources of the cloud to handle complex tasks, with high network requirements and privacy and security risks.
(1) Advantages:
Powerful computing power: Capable of running complex, large-scale AI models and handling various complex tasks.
Convenient model updates: The cloud can update and optimize AI models at any time.
Rich knowledge resources: The cloud can integrate a large amount of knowledge and data, providing AI Agent with more comprehensive and accurate information.
(2) Disadvantages:
High latency: Data transmission takes time, especially in unstable networks or weak-signal conditions, where latency becomes longer.
Privacy risk: Users' voice and interaction data need to be uploaded to the cloud, which poses a certain risk of privacy leakage.
High cost: Long-term use of cloud services is expensive.
3. Hybrid AI solution (Device AI + Cloud AI): Simple tasks are executed on the device side, while complex tasks are uploaded to the edge/cloud, balancing performance, privacy, and network dependencies.
(1) Advantages:
Balanced performance and privacy: Utilize the low-latency and privacy-protection advantages of device AI to handle local tasks, and leverage the computing power of cloud AI to handle complex tasks.
Improved resource utilization: Flexibly allocate tasks based on the characteristics of the tasks and the resource status of devices.
Enhanced reliability: In the event of network instability or interruption, device AI can continue to complete some basic tasks.
(2) Disadvantages:
High system complexity: Requiring coordination of task allocation and data
exchange between the device side and the cloud side, resulting in high complexity.
High development difficulty: Development requires consideration of both device-side and cloud-side technologies and environments, resulting in relatively long development cycles and high costs.

The hybrid AI solution combines the advantages of both device AI and cloud AI: it protects privacy and has low latency, while also providing the powerful computing capability to support complex tasks, making it the mainstream solution for personal AI assistants today.

Table 5 The comparison of AI Agent solutions

| Solution | Pros | Cons | Application Scenarios | Computing Requirements |
| Device AI | Low latency and high privacy | Limited computing capability | Face recognition, voice assistant | Device: 1-10 billion model parameters |
| Cloud AI | Strong computing capability and rich data resources | Relies on the network, with latency problems | Search engine | Server: 100 billion model parameters |
| Hybrid AI | Low latency and high computing capability | Complex task allocation | Smart voice assistant | Device (1-10 billion model parameters) + Server (100 billion model parameters) |

2.3.4. Service Requirements

1. The smartphone assistant service based on the device AI solution
The smartphone assistant based on the device AI solution obtains voice commands through the phone microphone. The smartphone then performs text conversion, intent understanding, and planning and decision-making, and generates commands that are subsequently executed.
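The on-device flow described above can be sketched as a simple pipeline. Every function body below is an illustrative stub (none of the names come from the white paper); only the stage order follows the text:

```python
# Illustrative sketch of the device-AI assistant pipeline: voice -> text ->
# intent -> plan, all running on the device. All stage implementations are
# placeholders; only the data flow follows the service description.

def speech_to_text(audio: bytes) -> str:
    return "book a train ticket to Beijing"  # stubbed ASR result

def understand_intent(text: str) -> dict:
    return {"intent": "book_ticket", "destination": "Beijing"}  # stubbed NLU

def plan(intent: dict) -> list[str]:
    # planning and decision-making also run locally in the device-AI solution
    return ["open_ticket_app", f"search:{intent['destination']}", "confirm_order"]

def run_assistant(audio: bytes) -> list[str]:
    text = speech_to_text(audio)       # text conversion
    intent = understand_intent(text)   # intent understanding
    return plan(intent)                # commands to be executed on the device

print(run_assistant(b""))
```

In the real solution each stage is an on-device model rather than a stub, which is why this mode needs no network except for the periodic model updates and data synchronization discussed next.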
Figure 7 The service flow of smartphone assistant based on the device AI

This service has low network requirements; mainly, the timed model updates and data synchronization place demands on the network. The downlink rate for a model update is approximately 1 Gbps, and the uplink rate for data synchronization is about 100 Mbps.

2. The smartphone assistant service based on the cloud AI solution
The smartphone assistant service based on the cloud AI solution obtains voice commands through the phone's microphone, decomposes the commands on the device side, and uploads them to the cloud server via the mobile communication network for data and command parsing, task planning, and execution. The server then generates downlink response information and sends it back to the smartphone to display the results for the user to supervise.

Figure 8 The service flow of smartphone assistant based on the cloud AI (taking take-out delivery ordering as an example)

This service places high demands on the network. Its main service data is transferred in the uplink, including user requests, user profiles, screenshots, service-request commands, text frames, etc.

Figure 9 The test results of smartphone assistant based on the cloud AI

The test results of the AI assistant service based on the cloud AI solution show that, for the takeout-ordering service, the uplink rate is up to 10.9 Mbps (one interaction) and 4.4 Mbps (consecutive interactions), and the downlink rate is up to 3.1 Mbps. During the test, finishing one takeout-ordering task needed 15 interactions on average, and the total time for one task was 60 s on average; hence the latency of a single interaction is about 4 s. In the future, with the improvement of cloud processing capacity and the reduction of processing delay, the single-interaction delay will drop to 2-3 seconds.

Table 6 Test results of the smartphone assistant service based on the cloud AI solution

| Interaction type | Data type | UL Rate | DL Rate | Data Volume |
| Agent Data Interaction | User requests and user history information | 10.9 Mbps | / | 3.5 Mb |
| Agent Data Interaction | Motion plans/commands | / | 0.2 Mbps | 70 kb |
| Agent Data Interaction | Screenshots | 4.4 Mbps | / | 1.4 Mb |
| Service App Data Interaction | Commands for service requests | 1.6 Mbps | / | 500 kb |
| Service App Data Interaction | Text frames + images | / | 1.6-3.1 Mbps | 500 kb - 1 Mb |

Note: Radio delay is calculated at 320 ms (for smooth interaction).

3. The smartphone assistant service based on the hybrid AI solution
The smartphone assistant based on the hybrid AI solution obtains the user's voice commands through the phone microphone. The device agent first pre-processes the commands and converts them to text. The text is then uploaded to the agent server via the mobile communication network for intent understanding, task planning, and command generation, and commands are sent back in the downlink. The device agent then calls a third-party APP (e.g., Meituan Delivery) to execute the commands. The pages and data of the third-party APP are generated according to the commands, and the device agent then processes the screenshot of the APP page and uploads the processed data to the agent server via the mobile communication network for task analysis and decision-making. The server then generates commands that are sent back to the device-side agent in the downlink.

Figure 10 The service flow of smartphone assistant based on the hybrid AI (taking take-out delivery ordering as an example)

This service places high demands on the network. Its main service data is transferred in the uplink, including screenshots, service-request commands, text frames, etc.

Figure 11 The test results of smartphone assistant based on the hybrid AI

The test results of the AI assistant service based on the hybrid AI solution show that, for the takeout-ordering service, the uplink rate is up to 2.5 Mbps and the downlink rate is up to 3.1 Mbps. During the test, finishing one takeout-ordering task needed 10 interactions on average, and the total time for one task was 22 s on average; hence the latency of a single interaction is about 2.2 s. In the future, with the improvement of cloud processing capability and the reduction of processing latency, the single-interaction latency will be reduced to 1-1.5 seconds.

Table 7 Test results of the smartphone assistant service based on the hybrid AI solution

| Interaction type | Data type | UL Rate | DL Rate | Data Volume |
| Agent Data Interaction | User requests and user history information | 10.9 Mbps | / | 3.5 Mb |
| Agent Data Interaction | Motion plans/commands | / | 0.2 Mbps | 70 kb |
| Agent Data Interaction | Screenshots | 4.4 Mbps | / | 1.4 Mb |
| Service App Data Interaction | Commands for service requests | 1.6 Mbps | / | 500 kb |
| Service App Data Interaction | Text frames + images | / | 1.6-3.1 Mbps | 500 kb - 1 Mb |

Note: Radio delay is calculated at 320 ms (for smooth interaction).
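The per-interaction latencies quoted for the cloud and hybrid tests follow directly from the reported averages (total task time divided by the number of interactions); a quick check:

```python
# Per-interaction latency = average total task time / average number of
# interactions, using the takeout-ordering averages reported above.

def per_interaction_s(total_task_s: float, interactions: int) -> float:
    return total_task_s / interactions

cloud_s = per_interaction_s(60, 15)   # cloud AI solution: 60 s, 15 interactions
hybrid_s = per_interaction_s(22, 10)  # hybrid AI solution: 22 s, 10 interactions
print(cloud_s, hybrid_s)  # ~4.0 s vs. ~2.2 s
```

The hybrid solution roughly halves both the interaction count and the per-interaction latency, which is the quantitative basis for calling it the mainstream personal-assistant option.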
2.4. Typical Scenarios and Requirements of AI Glasses

2.4.1. Typical Scenarios

AI glasses are the product form of traditional glasses iterated toward AR/MR glasses; they are lightweight wearable devices integrating AI and intelligent hardware. Their core function is to combine AI technology with the glasses form. Through core modules such as cameras, sensors, and large language models, AI glasses can realize environment perception, intention analysis, AI large-model processing, and the superposition of virtual and real information, and can provide users with voice assistants, real-time translation, navigation reminders, etc., becoming an interactive portal for users to connect to the digital world.

The application scenarios of AI glasses are expanding into many specialized fields such as medical treatment, industry, and education. They include:
1. Consumer market: AI glasses can be used for environment and picture recognition and for real-time translation in personal life and entertainment activities, enhancing the user experience.
2. Industry applications: AI glasses can provide real-time information and guidance in equipment maintenance, production management, and quality inspection.
3. Education and training: AI glasses can be used for online education and skills training to provide an immersive learning experience.
4. Medical field: In telemedicine and surgical assistance, AI glasses can help doctors access real-time data and guidance.
5. Augmented-reality meetings: Through AI and AR technology, users can interact in a virtual environment to enhance the sense of participation in a meeting.

Figure 12 The panoramic view of AI glasses applications

These application scenarios demonstrate the potential and value of AI glasses in multiple fields, and the market growth of AI glasses provides strong support.

2.4.2. The Workflow

The workflow of AI glasses can be divided into four steps: information acquisition, information recognition and command inference, decision-making and execution, and feedback
and presentation. Through the LLM and information-based data decision inference, AI glasses generate response information and present feedback to the user.

Figure 13 The workflow of AI glasses

1. Information Recognition and Command Inference
(1) Information recognition: Utilizing AI capabilities to parse the environment and commands and accurately extract key command information.
(2) Command inference: Utilizing AI large models and knowledge to understand the user's intent.
2. Decision-making and Execution
(1) Decision-making: Making decisions based on the intent, planning the task flow, and decomposing the required steps.
(2) Task execution: Executing the task steps, calling various functional components, generating task results, and making decision adjustments based on the results.
3. Feedback and Presentation: Judging the results, generating the corresponding audio and video information, and presenting the information and results on the user's device.

2.4.3. Service Requirements

1. AI glasses service flow
The camera and microphone of the glasses body are activated by voice command to capture images and voice commands, and the data is uploaded to the cloud server through mobile networks for content parsing, intent understanding, and planning and decision-making. The downlink response information is generated and sent back to the glasses for voice playback.
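The average uplink rates reported later in Table 8 can be approximated from the traffic volumes and capture durations. A small sketch, assuming 1 KB = 1000 bytes, 1 MB = 10^6 bytes, and 8 bits per byte:

```python
# Approximate average UL rate from traffic volume and capture duration.
# Assumptions (not from the white paper): 1 KB = 1000 B, 1 MB = 1e6 B, 8 bits/byte.

def avg_rate_kbps(volume_bytes: float, duration_s: float) -> float:
    return volume_bytes * 8 / duration_s / 1000

image_qa = avg_rate_kbps(100e3, 1.5)  # 100 KB over a 1.5 s voice question
video_qa = avg_rate_kbps(8.3e6, 140)  # 8.3 MB over 140 s of 2 fps shooting
print(f"{image_qa:.0f} Kbps, {video_qa:.0f} Kbps")  # ~533 and ~474 Kbps,
# consistent with the 545 Kbps and 475 Kbps lab averages in Table 8
```

The small gap between the payload estimate and the measured average is plausibly protocol overhead (TCP for image Q&A, UDP for video Q&A).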
From the point of view of the data exchanged between the AI glasses and the server, the uplink data mainly consists of voice commands plus an image snapshot (for photo Q&A) or video (for video Q&A), while the downlink data is mainly the voice response. In the future, the downlink for AI glasses may also include other data, such as AR-augmented images.

Figure 14 The service flow of AI glasses

2. Network requirements

Table 8 Estimated statistics of data transmission in different services of AI glasses

| Service | Data Content | UL Traffic Volume | Rate | Protocol |
| Image Q&A | Voice command + image (1 snapshot) | 100 KB (duration of a single voice question: 1.5 s) | 545 Kbps (avg), 840 Kbps (max) | TCP |
| Video Q&A | Voice command + video (2 fps) | 8.3 MB (continuous shooting duration: 140 s) | 475 Kbps (avg), 1.6 Mbps (max) | UDP |

Note: The data is sourced from laboratory validation.

(1) AI glasses image Q&A

Figure 15 The test result of AI glasses image Q&A

(2) AI glasses video Q&A

Figure 16 The test result
of AI glasses video Q&A

The above data are the laboratory test results of AI glasses services. In the future, AI smart glasses will have a richer variety of service types and a wider range of application scenarios. In order to meet the requirements of a real-time human-machine dialogue experience, AI glasses and server platforms will interact more frequently and transmit higher-quality content, which will place higher bandwidth requirements on future networks.

2.5. Typical Scenarios and Requirements of Intelligent Connected Vehicles

2.5.1. Typical Scenarios

Intelligent connected vehicles are intelligent transportation tools that integrate automotive engineering, smart technologies, and connectivity capabilities. They are built upon vehicles equipped with perception, computing, and execution capabilities, centered around intelligent systems capable of understanding, analyzing, and learning, and rely on interconnection technologies enabling communication, interaction, and collaborative connectivity as their core foundation.

These vehicles find applications in toC scenarios such as intelligent cockpits, AR navigation, and assisted driving, as well as toB scenarios like autonomous taxis and unmanned logistics delivery, including:
1. Intelligent Cockpit: Leveraging AI-driven voice interaction and facial-recognition technology to deliver personalized services and seamless multi-modal human-vehicle collaboration, dynamically optimizing the in-vehicle environment and entertainment systems.
2. AR Navigation: Utilizing AI-based real-time environmental perception and dynamic path planning to overlay virtual navigation information onto real-world traffic imagery, enhancing driving-decision accuracy in complex road conditions.
3. Assisted Driving: By
integrating multi-sensor data through AI, functionalities such as lane keeping, adaptive cruise control, and emergency obstacle avoidance are achieved, with continuous optimization of driving safety and scenario adaptability via deep learning.
4. Autonomous Taxi Services: Relying on AI algorithms for full-scenario perception and multi-objective decision-making, vehicles can autonomously accept orders, avoid obstacles, and optimize routes on urban roads, providing 24/7 efficient mobility services.
5. Unmanned Logistics Delivery: AI-powered high-precision positioning and dynamic obstacle detection ensure that delivery vehicles can navigate complex urban environments autonomously, perform real-time obstacle avoidance, and complete the last mile of delivery.
6. Campus Unmanned Transportation: AI-coordinated scheduling systems integrate fleet control and route planning to achieve all-weather automated material transportation within industrial parks, significantly reducing labor and energy costs.
7. Autonomous Sanitation Vehicles: Employing AI-based visual recognition to identify garbage distribution and road boundaries, these vehicles autonomously generate efficient cleaning routes and dynamically adjust operational modes in response to weather changes, enhancing the intelligence level of urban operations.

2.5.2. The Workflow

1. The intelligent cockpit with the co-pilot AI assistant
The intelligent cockpit refers to a digital in-vehicle space centered around the
in-vehicle infotainment system, integrating technologies such as LCD instrument clusters, head-up displays, voice- and gesture-based multi-modal interaction, and network connectivity features. It delivers personalized services through multi-modal interaction (combining voice, touch, gestures, etc.). The AI co-pilot assistant, powered by AI Agent technology, enhances the intelligent-cockpit experience by enabling multi-modal interaction, proactive engagement, and personalized development.

The workflow of the intelligent cockpit with the co-pilot AI assistant can be divided into three steps: perception and analysis, inference and decision-making, and execution and feedback. This closed-loop system ensures continuous adaptation to user behavior, environmental changes, and evolving service demands, driving the intelligent cockpit toward seamless, intuitive, and user-centric experiences.

Figure 17 The workflow of the intelligent cockpit with the co-pilot AI assistant

(1) Perception and Input
Intention understanding: With the help of a large model, accurately understand the user's questions and intentions.
Problem analysis: Task decomposition, breaking the question down into several sub-questions.
(2) Inference and Decision-making
Parameter matching: Utilizing cloud and local memory to learn from and summarize previous problem-solving experience.
Decision planning: Determining the solutions and steps and generating the order of execution-tool calling according to the solutions.
(3) Execution and Feedback
Execution call: Calling various tool components (such as third-party apps, native functions on mobile phones, etc.) to execute.
Memory storage: Observing the user's feedback, adjusting execution in the next interaction, and storing the execution results for subsequent reasoning and decision-making.
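The three-step loop above can be sketched as a minimal class. All names and stage bodies are purely illustrative assumptions, not part of the white paper; only the perceive -> decide -> act structure with a memory store follows the text:

```python
# Minimal sketch of the co-pilot loop: perceive -> decide -> act, with a
# memory store feeding later decisions. Stage implementations are stubs.

class CoPilot:
    def __init__(self) -> None:
        self.memory: list[dict] = []  # past executions, used for parameter matching

    def perceive(self, utterance: str) -> dict:
        return {"question": utterance}  # intent understanding (stubbed)

    def decide(self, intent: dict) -> list[str]:
        # decision planning: choose tools and an execution order (stubbed)
        return ["navigation.set_destination", "cabin.adjust_temperature"]

    def act(self, plan: list[str], feedback: str) -> None:
        self.memory.append({"plan": plan, "feedback": feedback})  # memory storage

assistant = CoPilot()
steps = assistant.decide(assistant.perceive("take me home and warm up the cabin"))
assistant.act(steps, feedback="ok")
print(len(assistant.memory))  # 1
```

The memory appended in `act` is what the "parameter matching" step would consult on the next interaction, closing the loop.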
2. Cloud-assisted intelligent driving
Currently, autonomous driving is evolving from "single-vehicle intelligence" to "vehicle-road-cloud collaborative driving". Single-vehicle intelligence relies on onboard sensors to collect external environmental data and uses algorithms for autonomous vehicle control; this approach demands high-performance onboard sensors and computational platforms. Vehicle-road-cloud collaboration, by contrast, gathers external information from multiple endpoints (vehicle side, road side, and cloud side) and leverages synergistic algorithmic processing
212、 between onboard andcloud-based systems to achieve high-level autonomous driving.Figure 18 The development of intelligence drivingCloud-assisted intelligent driving services refer to the real-time uploading ofcomplex semantic traffic information(e.g.,irregular traffic signs,tide lanes,orambiguous ro
213、ad markings)that vehicles cannot locally interpret.A cloud-based largeAI model processes and identifies this data in real-time,then instantly transmits theresults back to the vehicle.This reduces traffic accidents,enables real-time optimalroute selection,and enhances autonomous driving efficiency.Th
214、e workflow can be divided into four stages:environment perception,DecisionAnalysis,command Generation,and Command Execution.The vehicle executescommandslocally(e.g.,rerouting,braking,orsteeringadjustments)andcontinuously learns and optimizes through interaction with the environment(e.g.,adapting to
215、new traffic patterns).This integrated framework combines localexecution with cloud intelligence,creating a robust solution for next-generation5G-A Ignites the Three-Types of New Intelligent Services41autonomous driving.Figure 19 The workflow of the cloud-assisted intelligent driving(1)Environment pe
216、rceptionRoad:Perception of scenario and physical world.Thing:Perception of things around vehicles and on the road.Human:Perception of human voice and facial expression.(2)Decision analysisTask Planning:Combining capabilities to generate commands.Environment analysis:Analyzing the real environment in
217、formation,such asthe road,trajectory,navigation.(3)Command generationCommands based on rules.Commands based in learning.Commands based on large models.2.5.3.Service Requirement1.The intelligent cockpit with the co-pilot AI assistant5G-A Ignites the Three-Types of New Intelligent Services42The intell
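The three command-generation tiers listed above suggest a natural fallback chain: deterministic rules first, then a learned policy, then the cloud large model for complex semantics. The sketch below illustrates that idea only; the handlers and scene fields are invented placeholders, not part of any real driving stack.

```python
# Illustrative sketch of tiered command generation: rules, then a learned
# policy, then a large-model fallback. All handlers are hypothetical stubs.

def rule_based(scene: dict):
    # Hard rules cover unambiguous cases, e.g. a recognized stop sign
    if scene.get("sign") == "stop":
        return "brake"
    return None

def learned_policy(scene: dict):
    # A trained model covers common situations (stubbed here)
    if "trajectory" in scene:
        return "follow_trajectory"
    return None

def large_model(scene: dict):
    # Cloud large model handles complex semantics (irregular signs, tidal lanes)
    return "request_cloud_analysis"

def generate_command(scene: dict) -> str:
    for generator in (rule_based, learned_policy, large_model):
        command = generator(scene)
        if command is not None:
            return command

print(generate_command({"sign": "stop"}))       # brake
print(generate_command({"sign": "irregular"}))  # request_cloud_analysis
```

Ordering the tiers this way keeps latency-critical decisions on the vehicle and reserves the uplink to the cloud model for cases local logic cannot resolve.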
2.5.3. Service Requirements
1. The intelligent cockpit with the co-pilot AI assistant
The intelligent cockpit service can utilize any AI Agent solution. Take hybrid AI as an example. The AI Agent assistant on the vehicle obtains the user's voice commands through the cell phone microphone, pre-processes the commands, and converts them to text. The text is then uploaded to the agent server via the mobile communication network for intent understanding, task planning, and command generation, and commands are sent back in the downlink transmission. The vehicle agent then calls a third-party APP to execute the commands. Screenshots of the third-party APP are processed and uploaded to the agent server via the mobile communication network for task analysis and decision-making. The server then generates commands to be sent back to the vehicle-side agent in the downlink transmission, and the vehicle agent calls the local system to execute the commands.
Figure 20 The service flow of the intelligent cockpit with the co-pilot AI assistant
This service places high demands on the network. Its main service data is transferred in the uplink, including screenshots, service request commands, text frames, etc.
2. Cloud-assisted intelligent driving
Figure 21 The service flow of the cloud-assisted intelligent driving
The cloud-assisted intelligent driving service collects and encodes images and videos from car cameras, which are then uploaded to cloud servers via mobile communication networks for image and video analysis and task analysis. The service then generates commands to be transmitted back to the vehicle for execution.
This service places high demands on the network, with uplink image transmission being the main type of data transferred.
Chapter 3 5G-A Empowers New Information Consumption Services
3.1. 5G-A Technology Scope
5G-Advanced (5G-A) narrowly refers to the new te
chnologies defined in the 3GPP R18 specification. Broadly, it has become a general term for a new development phase: all new features defined in the 3GPP R18 standard, as well as innovative technical solutions based on product implementation, are collectively termed 5G-A. 5G-A acts as a bridge between 5G deployment and next-generation 6G communication, driving innovation in industries such as autonomous driving, Ambient IoT, and immersive digital experiences.
Figure 22 5G-A Top 10 Innovations
3.2. Enhanced Connectivity Guarantee
New information consumption services such as embodied intelligence, AI glasses, AI Agents, and intelligent connected vehicles are characterized by diverse service models, scenarios crossing indoor and outdoor environments, and stringent network performance requirements (e.g., bandwidth, delay, reliability). 5G-A, with its significantly enhanced connectivity capabilities, provides robust support for the development of these new services through five core capabilities. Service perception identifies high-priority transmission demands and dynamically allocates resources to ensure critical data flows receive priority. Rate improvements guarantee seamless delivery of ultra-HD video streams and critical data, ensuring timely and smooth transmission. Latency assurance enables instantaneous response to commands and real-time interaction, critical for applications like remote control and immersive experiences. Mobility enhancement ensures seamless handover in high-speed scenarios, maintaining low latency and stability during transitions between networks or environments. Cluster communication provides efficient device-to-device coordination that supports high-density device interactions, enabling synchronized operations in scenarios like smart cities or industrial automation. The synergistic effect of these five core capabilities collectively constructs a highly reliable, high-performance connectivity foundation, which empowers the innovation and application of diverse AI-driven services across broader domains, driving advancements in areas such as autonomous systems, immersive media, and intelligent infrastructure.
1. Service perception
(1) 5G slicing: Leveraging the Traffic Descriptor in 5G network slicing to identify and distinguish service traffic enables the network to perceive the service type and characteristics, associate it with th
e corresponding slice identifier, establish an appropriate slice session connection, and configure end-to-end connectivity across core network, transport network, and radio access network components. This mechanism instructs each network element node to provide service assurance. The 5G slicing architecture offers precise service identification with diverse and flexible granularity levels. It features an immediate-effect capability whereby service initiation automatically triggers transmission service assurance. Service assurance is simultaneously implemented across the radio access, transport, and core network domains, achieving comprehensive end-to-end network-service collaboration assurance throughout the entire service lifecycle.
Figure 23 The technique principle of 5G slicing
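The traffic-descriptor matching described above can be sketched as a simple rule lookup. This is purely illustrative: the descriptor fields and slice names below are invented, whereas a real deployment would use operator-defined URSP rules and S-NSSAI slice identifiers.

```python
# Illustrative sketch of traffic-descriptor matching for slice selection.
# Descriptor fields and slice identifiers are invented example values.

SLICE_RULES = [
    # (matching condition, slice identifier)
    ({"dnn": "robot-control"}, "slice-urllc"),
    ({"app_id": "cloud-driving"}, "slice-urllc"),
    ({"dnn": "video-upload"}, "slice-embb"),
]
DEFAULT_SLICE = "slice-default"

def select_slice(traffic_descriptor: dict) -> str:
    # First matching rule wins; unmatched traffic goes to the default slice
    for condition, slice_id in SLICE_RULES:
        if all(traffic_descriptor.get(k) == v for k, v in condition.items()):
            return slice_id
    return DEFAULT_SLICE

print(select_slice({"dnn": "robot-control", "app_id": "nav"}))  # slice-urllc
print(select_slice({"dnn": "web"}))                             # slice-default
```

Once a flow is associated with a slice identifier, the end-to-end session setup across core, transport, and radio access proceeds as described in the text.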
(2) QoS: The QoS Flow represents the smallest differentiation granularity of QoS (Quality of Service), where different QoS Flows provide varying levels of service assurance capabilities. AI-driven intelligent services typically involve multiple correlated data streams, each with distinct QoS requirements. The network can identify service types by analyzing the 5QI (5G QoS Identifier) carried in the QoS Flow and prioritize traffic based on importance. By assigning different QoS requirements to data streams according to their criticality, the network allocates appropriate transmission resources. For instance, base-layer data is configured with high-reliability QoS to ensure priority transmission of critical information, while non-critical data employs standard QoS. Under the premise of maintaining user experience, the air interface may tolerate certain transmission errors for non-priority data, thereby enhancing transmission efficiency and improving both service quality and the effectiveness of wireless communication.
Figure 24 The technique principle of QoS
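The 5QI-based prioritization above amounts to scheduling flows by their priority level. The sketch below uses a small subset of the standardized 5QI priority values (lower value means more important); the flow payloads themselves are invented examples.

```python
# Minimal sketch of priority scheduling across QoS Flows. The 5QI-to-priority
# mapping uses a few standardized values (lower = more important); the flow
# contents are invented for illustration.

# Subset of 3GPP 5QI priority levels (5QI: priority level)
PRIORITY = {1: 20, 5: 10, 7: 70, 9: 90}

flows = [
    {"5qi": 9, "payload": "app telemetry"},    # best-effort buffered data
    {"5qi": 5, "payload": "control command"},  # signalling-grade priority
    {"5qi": 7, "payload": "voice stream"},
]

def schedule(flows):
    # Transmit the most important flows first (lowest priority value)
    return sorted(flows, key=lambda f: PRIORITY[f["5qi"]])

print([f["payload"] for f in schedule(flows)])
# ['control command', 'voice stream', 'app telemetry']
```

This is how a control command and a telemetry stream from the same robot can share one connection while receiving different air-interface treatment.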
2. Rate improvements
(1) 3CC: 3CC enhances network performance by bundling three distinct frequency-band carriers into a unified channel. It combines joint scheduling algorithms to dynamically allocate data streams, precisely synchronizes inter-carrier timing, and leverages terminal-side multi-carrier processing capabilities. This technology significantly boosts peak network throughput, reduces latency, and enhances capacity. This approach provides critical support for emerging applications like embodied intelligence: its ultra-wide bandwidth and low-latency characteristics enable real-time transmission of multi-channel high-definition environmental perception videos, massive sensor data, and precise control commands from intelligent devices such as robots. It facilitates seamless autonomous navigation, human-robot collaboration, and immersive remote operation, delivering essential connectivity assurance for next-generation services.
Figure 25 The technique principle of 3CC
(2) SUL (Supplementary Uplink): SUL deploys an independent uplink carrier on a low-frequency band while sharing the same
downlink carrier with mid/high-frequency primary carriers. By leveraging the strong coverage and penetration capabilities of low-frequency signals, the network dynamically schedules terminals to switch to the SUL carrier for data transmission in weak-signal scenarios (e.g., when downlink RSRP falls below a threshold). This addresses the challenges of limited 5G high-frequency uplink coverage and terminal transmit power constraints. Through high-low frequency coordination and time-frequency domain resource aggregation, SUL significantly improves uplink edge rates and reduces delay. It enables emerging applications such as embodied intelligence robots (real-time synchronization of multi-sensor data streams for autonomous navigation and human-robot collaboration), intelligent connected vehicles (massive traffic video backhaul for real-time road condition monitoring and AI-based decision-making), industrial AI quality inspection, and XR (Extended Reality) immersive interactions. This technology provides critical uplink connectivity assurance for AI-driven human-machine collaboration scenarios, ensuring reliable and efficient data transmission in resource-constrained environments.
Figure 26 The technique principle of SUL
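The RSRP-threshold rule for SUL selection described above reduces to a simple comparison. The threshold value below is an invented example; in practice it is an operator-configured parameter.

```python
# Illustrative sketch of the SUL carrier-selection rule: when downlink RSRP
# drops below a threshold, uplink traffic moves to the low-band SUL carrier.
# The -110 dBm threshold is a hypothetical operator-configured value.

SUL_RSRP_THRESHOLD_DBM = -110.0

def select_uplink_carrier(downlink_rsrp_dbm: float) -> str:
    # Good mid/high-band coverage: keep uplink on the normal (NUL) carrier
    if downlink_rsrp_dbm >= SUL_RSRP_THRESHOLD_DBM:
        return "NUL"
    # Weak coverage: switch uplink to the low-band SUL carrier
    return "SUL"

print(select_uplink_carrier(-95.0))   # NUL
print(select_uplink_carrier(-118.0))  # SUL
```

The key design point is that the decision is driven by downlink measurements, since the shared downlink carrier is what the terminal can observe continuously.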
3. Latency assurance
(1) L4S (Low Latency, Low Loss, Scalable Throughput):
For AI-driven intelligent applications, interactive response latency is a critical parameter for ensuring a smooth user experience during cloud-AI interaction. If the response delay cannot be maintained at a low level, users will find AI-based services intolerable. Therefore, reliable and deterministic latency performance must be guaranteed for such delay-sensitive applications.
L4S congestion control is specifically designed for delay-sensitive services. It alleviates congestion by performing real-time congestion detection and proactive congestion control across end-to-end (E2E) transmission links, ensuring a seamless user experience.
The L4S mechanism operates as follows: network devices evaluate congestion status by analyzing the waiting time of data packets in the transmission queue. This congestion information is then conveyed to the application layer, enabling it to adjust service transmission rates dynamically based on network conditions.
L4S represents an end-to-end perception-feedback-response framework under network-service-terminal collaboration. By dynamically and adaptively matching transmission rates with the perceived congestion state of the link in real time, it reduces the waiting duration of L4S packets in the buffer queue, ensuring interactive latency and preventing severe user-experience degradation caused by full congestion.
Figure 27 The technique principle of L4S
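The perception-feedback-response loop above can be sketched as a sender that scales its rate down when the network marks packets for excessive queueing delay, and probes gently upward otherwise. The constants are illustrative only, not taken from the L4S RFCs.

```python
# Minimal sketch of L4S-style sender adaptation: the network marks packets
# when queueing delay builds up; the sender backs off on marks and probes
# upward otherwise. All constants are invented example values.

QUEUE_DELAY_MARK_MS = 5.0   # mark packets queued longer than this

def network_marks(queue_delay_ms: float) -> bool:
    # Congestion detection: evaluate waiting time in the transmission queue
    return queue_delay_ms > QUEUE_DELAY_MARK_MS

def adapt_rate(rate_mbps: float, queue_delay_ms: float) -> float:
    if network_marks(queue_delay_ms):
        return rate_mbps * 0.9   # back off before the queue overflows
    return rate_mbps + 0.5       # probe for more bandwidth

rate = 20.0
for delay in [1.0, 2.0, 8.0, 9.0, 3.0]:   # simulated queue-delay samples
    rate = adapt_rate(rate, delay)
print(round(rate, 2))  # 17.51
```

Because marking is triggered by queueing delay rather than packet loss, the sender reacts before buffers fill, which is what keeps interactive latency bounded.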
(2) PSDB (PDU Set Delay Budget): In practical network transmission, AI-based services involve diverse data formats. Application-layer data is fragmented into multiple packets, which collectively form a data packet set. The packets within the set exhibit interdependencies, meaning the loss or delay of any single packet can compromise the accurate decoding of the entire application-layer data. Additionally, retransmission or delayed reception of packets introduces prolonged waiting latency. Therefore, scheduling data transmission at the granularity of the packet set as a fundamental unit can significantly enhance latency-assurance performance, thereby improving the overall user experience and increasing user satisfaction with service quality.
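Packet-set-granularity scheduling can be illustrated with a toy admission check: a set is only worth transmitting if every packet in it can arrive within the delay budget, since a partially delivered set cannot be decoded. The set sizes and timings below are invented examples.

```python
# Illustrative sketch of packet-set-granularity scheduling: admit a whole set
# only if all its packets fit within the delay budget; otherwise skip the set
# rather than waste air-interface resources on undecodable fragments.

from dataclasses import dataclass

@dataclass
class PacketSet:
    name: str
    packets: int
    per_packet_ms: float   # transmission time per packet (example values)

def schedule_sets(sets, budget_ms: float):
    delivered = []
    for s in sets:
        # All packets of a set must fit in the budget to be decodable
        if s.packets * s.per_packet_ms <= budget_ms:
            delivered.append(s.name)
    return delivered

sets = [PacketSet("I-frame", 10, 1.0),
        PacketSet("P-frame", 3, 1.0),
        PacketSet("bulk-log", 40, 1.0)]
print(schedule_sets(sets, budget_ms=12.0))  # ['I-frame', 'P-frame']
```

Dropping the oversized set up front is what converts wasted retransmissions into capacity for the sets that can still meet their budget.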
4. Mobility enhancement
(1) Frame-level mobility enhancement: In AI-driven applications such as intelligent connected vehicles and outdoor smart robotic dog inspections, mobility scenarios are prevalent. However, cell handover during movement significantly impacts service throughput and user experience. Frame-level mobility enhancement technology mitigates these effects primarily by controlling handover timing to minimize disruptions to data transmission. When the currently transmitted data stream exhibits frame-level characteristics, the technology identifies the features of frame header and trailer packets. It initiates the handover process during the interval between two frames, thereby reducing the impact of handover interruption delay on the complete transmission of a single frame.
Figure 28 The technique principle of frame-level mobility enhancement
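The timing rule above, hand over only in the inter-frame gap, can be sketched as follows. The packet markers are illustrative; a real implementation identifies frame boundaries from header/trailer packet features.

```python
# Minimal sketch of frame-level handover timing: trigger the handover only
# after a frame-trailer packet, so the interruption falls in the inter-frame
# gap rather than mid-frame. Packet markers are invented for illustration.

def handover_allowed(last_packet: str) -> bool:
    # 'trailer' marks the last packet of a frame; the inter-frame gap follows
    return last_packet == "trailer"

stream = ["header", "payload", "payload", "trailer", "header", "payload"]
decision_points = [handover_allowed(p) for p in stream]
print(decision_points.index(True))  # 3 -> hand over after the 4th packet
```

Deferring the handover by a few packet times is a cheap trade: the interruption delay then overlaps the gap where no frame data would be transmitted anyway.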
(2) Link Adaptation Optimization in Mobility Scenarios: The MCS (Modulation and Coding Scheme) optimization strategy for mobility scenarios addresses the issue of overly conservative initial MCS selection after handover. Two approaches are employed. First, at the target base station, historical MCS data collected before and after previous handovers is used to set the initial MCS after the handover. Second, the device reports measurement reports or CQI (Channel Quality Indicator) information from the source cell to the target cell in the reconfiguration completion command (e.g., Physical Channel Reconfiguration Completion). The target cell estimates the SINR (Signal-to-Interference-plus-Noise Ratio) based on the measurement report, or references the CQI measurements from the source cell prior to handover, to determine the post-handover initial MCS. By implementing these methods, the initial MCS selection level can be elevated, effectively mitigating the abrupt rate drop and excessive service latency during handover processes.
Figure 29 The technique principle of link adaptation optimization
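The two approaches above combine naturally into a fallback: prefer the source cell's reported CQI, and otherwise use the historical post-handover MCS. The CQI-to-MCS lookup below is a coarse invented mapping, much simpler than a real link-adaptation table.

```python
# Illustrative sketch of post-handover initial MCS selection: prefer the
# source cell's reported CQI, else fall back to historical post-handover MCS.
# The CQI-to-MCS mapping values are invented examples.

CQI_TO_MCS = {3: 4, 7: 12, 11: 20, 15: 27}

def initial_mcs(source_cqi, mcs_history):
    if source_cqi is not None:
        # Use the closest CQI entry at or below the reported value
        usable = [c for c in CQI_TO_MCS if c <= source_cqi]
        if usable:
            return CQI_TO_MCS[max(usable)]
    if mcs_history:
        # Fall back to the historical average of post-handover MCS values
        return round(sum(mcs_history) / len(mcs_history))
    return 0  # conservative default when nothing is known

print(initial_mcs(12, []))          # 20
print(initial_mcs(None, [10, 14]))  # 12
```

Either path starts the target cell well above the conservative default of 0, which is exactly the abrupt rate drop the technique is meant to avoid.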
5. Cluster communication
Currently, intelligent connected vehicles generally support updating in-vehicle systems and software via over-the-air (OTA) download technology; OTA upgrades can improve the response speed of the in-vehicle system, enhance battery endurance, and enrich the functions of the entertainment system. The problem is that when automobile manufacturers carry out OTA updates for intelligent connected vehicles, they generally choose specific times for large-scale upgrades, and the upgrade packages are large. This results in high-volume downlink transmission at the same time, occupying a large portion of the network bandwidth and leading to network congestion. For a single vehicle, the limited resources allocated by the network result in a long upgrade time.
5G MBS technology adopts broadcast or multicast to realize point-to-multipoint distribution and can perform OTA upgrades for a large number of intelligent connected vehicles at the same time. It not only effectively saves network bandwidth but also significantly shortens the upgrade time of a single vehicle. 5G MBS technology can also be used for simultaneous OTA upgrades of large numbers of intelligent robots.
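The bandwidth saving from point-to-multipoint delivery is easy to quantify with a back-of-the-envelope comparison; all numbers below are invented examples.

```python
# Back-of-the-envelope comparison of unicast vs. multicast OTA delivery,
# illustrating why MBS saves bandwidth. All numbers are invented examples.

PACKAGE_GB = 2.0        # OTA upgrade package size
VEHICLES = 10_000       # vehicles upgrading in the same window

unicast_gb = PACKAGE_GB * VEHICLES   # every vehicle gets its own copy
multicast_gb = PACKAGE_GB            # one broadcast copy serves all vehicles

print(f"unicast: {unicast_gb:.0f} GB, multicast: {multicast_gb:.0f} GB")
print(f"savings factor: {unicast_gb / multicast_gb:.0f}x")
# unicast: 20000 GB, multicast: 2 GB -> savings factor: 10000x
```

The saving scales linearly with the number of receivers in the same multicast session, which is why scheduled large-scale upgrade windows are the ideal case for MBS.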
3.3. Computing Power Resources Opening and Sharing
In the 5G-A era, base station computing power is undergoing a profound transformation from dedicated physical hardware to an integration of general-purpose processors and dedicated hardware, supporting increasingly diversified service requirements. Traditionally, base stations have mainly relied on dedicated chips such as ASICs, DSPs, and FPGAs to handle communication tasks. With the emergence of new applications such as AI, high-definition video, intelligent connected vehicles, and embodied intelligence, service demand for computing power has grown exponentially. For this reason, 5G-A base stations are beginning to strengthen general-purpose processors such as CPUs, GPUs, and NPUs. With the advantages of wide distribution and proximity to users, 5G-A base stations have become the core carrier of wireless connection services and edge computing power.
As a core enabling technology, the sharing of wireless computing power resources plays a key role in empowering service development. Currently, 5G-A wireless computing power sharing faces two major challenges. The first is how to uniformly manage and orchestrate wireless computing power across a large number of nodes with heterogeneous forms and distributed deployments. The second is how to provide wireless computing power through the wireless access network.
For unified management and orchestration of wireless distributed heterogeneous computing power, the wireless access network management layer needs enhanced computing power orchestration capabilities. After distributed wireless computing power is connected to the network, it actively registers with the management and orchestration layer, enabling automatic discovery and globally unified management and orchestration of wireless computing power. Considering that wireless computing power is markedly distributed and dynamically fluctuating, the management and orchestration layer can adopt resource pooling technology to combine discrete wireless computing power nodes into a logically unified deep-edge computing power pool, effectively improving the utilization rate and flexibility of wireless computing power resources.
In terms of wireless computing power sharing, wireless computing power nodes can provide wireless computing power services and wireless communication-computing integrated services. For the wireless computing power service, the management and orchestration layer realizes logical isolation and on-demand allocation of wireless computing power through virtualization technology and establishes a full-lifecycle management mechanism covering resource registration, discovery, reservation, elastic expansion, and release. OTT service providers and vertical-industry users obtain available computing power resources through service discovery, make optimal choices based on factors such as geographical location and computing capabilities, and deploy their applications on wireless computing power.
The wireless communication-computing integrated service realizes deep hosting and intelligent maintenance of applications on the wireless access network side by constructing a new type of network service with deep integration of communication and computing. The wireless management and orchestration layer can obtain multi-dimensional requirements, such as service experience requirements, through standardized interfaces, including key SLA indicators such as end-to-end delay, throughput, jitter, and packet loss rate. It first translates the application SLA requirements into requirements for two types of resources, wireless communication and computing, deploys applications on matching wireless computing power, and generates service quality guarantee strategies. It then monitors application status indicators, the wireless network environment, and the status of wireless computing power. When it predicts or detects that application experience may deteriorate, it can achieve multi-dimensional experience guarantees through the joint adjustment of wireless communication resources and wireless computing power.
The sharing of wireless computing power is key to supporting the development of 5G-A services. Through unified management and orchestration of wireless computing power resources, wireless computing power services, and wireless communication-computing integrated services, it supports flexible service deployment and joint optimization of communication and computing, effectively guaranteeing the quality of the service experience.
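The SLA-translation step described above, mapping an application's delay and computing requirements onto candidate nodes, can be sketched as a filter-and-select. The node attributes and numbers below are invented examples, not a real orchestration API.

```python
# Illustrative sketch of SLA translation and placement: filter nodes that
# satisfy both the communication (delay) and computing requirements, then
# pick the lowest-latency match. Node attributes are invented examples.

nodes = [
    {"name": "edge-A", "rtt_ms": 8,  "free_tops": 20},
    {"name": "edge-B", "rtt_ms": 25, "free_tops": 80},
    {"name": "cloud",  "rtt_ms": 60, "free_tops": 500},
]

def place_application(sla_delay_ms: float, need_tops: float):
    # Translate the SLA into two resource requirements and filter nodes
    candidates = [n for n in nodes
                  if n["rtt_ms"] <= sla_delay_ms and n["free_tops"] >= need_tops]
    if not candidates:
        return None  # no node meets both requirements
    # Prefer the lowest-latency node that still has enough computing power
    return min(candidates, key=lambda n: n["rtt_ms"])["name"]

print(place_application(sla_delay_ms=30, need_tops=50))  # edge-B
print(place_application(sla_delay_ms=10, need_tops=50))  # None
```

A `None` result corresponds to the case where the orchestration layer would need to adjust resources jointly, for example freeing computing power at a low-latency node, before admitting the application.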
3.4. Data Acquisition and Storage
Facing the data requirements of differentiated communication-computing integrated application scenarios, the traditional centralized data-resource supply mode represented by “data centers + data pipelines” can no longer meet requirements in terms of real-time performance, synchronism, usability, security, and privacy. Therefore, the wireless communication-computing integrated 5G-A network needs to achieve real-time data collection, routing and forwarding, and processing and management functions in dynamic spatial, temporal, and system environments. It is advisable to introduce the following data-related functions.
1. Data collection: Through customized data collection capabilities, the wireless communication-computing integrated 5G-A network can collect cross-protocol-layer (e.g., PHY, MAC, PDCP) and cross-system-domain data from various devices and wireless networks according to native AI task requirements and unified rules. Meanwhile, it can provide data distribution services based on multiple collection modes to support distributed AI computing tasks.
2. Data preprocessing: This function preprocesses the collected data, for example through data cleaning, denoising, and feature extraction, so that subsequent computing tasks can be executed more efficiently. It can also dynamically deploy data preprocessing functions according to customized dataset requests from computing tasks, and integrate data from different sources. In addition, data privacy protection is an important part of preprocessing: data services must comply with the corresponding privacy protection regulations, protecting user privacy through de-identification, anonymization, and other capabilities.
3. Data routing and forwarding: Considering the diversity of data sources, the complexity of transmission links, and the flexibility of transmission modes, this function ensures the timeliness, reliability, and integrity of collected data during transmission and forwarding in all scenarios (such as mobility scenarios). It can intelligently select the most suitable path and transmission scheme to transfer data from data sources to computing nodes among clouds, edges, and devices.
4. Data storage: In the wireless communication-computing integrated 5G-A network environment, heterogeneous computing nodes have significant performance differences, and some intelligent devices have weak data-caching capabilities, making it difficult to synchronize multi-source received data, while data synchronization is crucial for maintaining data consistency. Data caching and synchronization can solve the problem of synchronizing data from different times and spaces through a data caching mechanism, achieving synchronization of communication-computing data collected from clouds, edges, and devices, ensuring that computing nodes can obtain multi-source synchronized data at low cost, and ensuring the reliability and availability of stored data.
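The four data functions above form a pipeline: collect, preprocess (including de-identification), route, and store. The sketch below strings them together; the record fields, masking rule, and node-selection heuristic are all invented for illustration.

```python
# Minimal sketch of the four data functions as a pipeline:
# collect -> preprocess (with de-identification) -> route -> store.
# Field names and the masking rule are invented for illustration.

def collect(sources):
    # 1. Data collection across layers/domains (stubbed as a merge)
    return [record for src in sources for record in src]

def preprocess(records):
    # 2. Cleaning plus privacy protection: drop empty records, mask user IDs
    cleaned = [r for r in records if r.get("value") is not None]
    for r in cleaned:
        r["user"] = "anon-" + str(hash(r["user"]) % 1000)  # de-identification
    return cleaned

def route(records, nodes):
    # 3. Pick the destination computing node (stubbed: least-loaded node)
    target = min(nodes, key=lambda n: n["load"])
    return target["name"], records

def store(storage, node_name, records):
    # 4. Cache records under the chosen node for synchronized access
    storage.setdefault(node_name, []).extend(records)

sources = [[{"user": "alice", "value": 1}], [{"user": "bob", "value": None}]]
nodes = [{"name": "edge-1", "load": 3}, {"name": "edge-2", "load": 1}]
storage = {}
node, recs = route(preprocess(collect(sources)), nodes)
store(storage, node, recs)
print(node, len(storage[node]))  # edge-2 1
```

The ordering matters: privacy-protecting preprocessing happens before routing, so raw identifiers never leave the collection point.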
Chapter 4 Summary and Outlook
The new generation of information consumption services is profoundly reshaping personal life, industrial ecosystems, and social operation. For individuals, AI assistants enable efficient and natural interactive services, AI glasses expand immersive virtual-real integrated experiences, and embodied intelligence promotes personalized companionship and collaboration with robots. For industries, embodied intelligent robots become the hub of mobile productivity and empower remote control and decision-making in scenarios such as industrial campuses and medical healthcare. For society, new intelligent services such as intelligent connected vehicles will jointly promote the universal accessibility of barrier-free services, the refinement of urban governance, and low-carbon development. The capabilities of these new services, such as multi-modal perception, real-time decision-making, and cross-domain collaboration, rely on the network foundation of 5G-A, with its ultra-low latency, ultra-large bandwidth, high reliability, and mobility guarantees. 5G-A provides core support for intelligent services to evolve from single-point innovation to universal penetration by constructing an integrated foundation of connection + computing power + data. Addressing the requirements that these services place on mobile communication networks, this white paper clarifies the processes and network metrics of different intelligent services, providing technical references for the development of emerging intelligent services in the current network.
Looking to the future, with the development of complex service scenarios such as embodied intelligence and networked autonomous driving, multi-technology integration and cross-industry collaboration will become the core driving forces. First, we need to continuously deepen 5G-A capabilities: strengthen the foundation of 5G-A networks through technologies such as communication-sensing integration, network intelligence, and immersive communication; build a collaborative ecosystem; work with industry partners from multiple fields, including communications, automotive, robotics, and cloud computing, to deeply explore network capabilities and service requirements; and create benchmark applications. Second, for the 6G era, the technological layout needs to leap further toward multi-agent collaboration. Through research in fields such as intelligent-agent collaboration networks, breakthroughs in digital-twin interaction, and the exploration of communication-sensing-intelligence-computing integrated architectures, we need to promote the development and implementation of closed loops of universal intelligent-agent cognition, decision-making, and action. Only by advancing both the continuous deepening of current network capabilities and the forward-looking layout of next-generation technologies can we achieve leapfrog development from single-agent intelligence to collective intelligence, and from scenario empowerment to social reshaping.
Appendix: The Theoretical Analysis of Embodied Intelligence Servic
e Requirement
For the theoretical analysis of the requirements of emerging services such as embodied intelligence, the radio delay is first estimated from the segmented delays, and the radio rate is then calculated based on the radio delay. The derivation formula for the radio delay is:

Radio Delay = E2E Delay − Robot Processing Delay − Backbone & CN Delay − Cloud Processing Delay   (1)

The radio rate can be estimated based on the radio delay. For data types such as voice, text, commands, and images, the derivation formula for the air interface rate is:

Radio Rate = (Transmission Data + Communication Overhead) / Radio Delay   (2)

wherein Communication Overhead = Transmission Data × 0.2.

When the robot's uplink transmission service is video-based, since video is transmitted in a frame-level continuous manner, and different resolutions and frame rates correspond to different data volumes per frame and different radio delays, the derivation formula for the radio rate of video-based services is:

Radio Rate = (Data per Frame + Communication Overhead) / Radio Delay   (3)

wherein Communication Overhead = Data per Frame × 0.2, Data per Frame = Bit Rate / Frame Rate, and Radio Delay = 1 / Frame Rate.

Based on the above formulas, Table 9 provides theoretical estimates of the air interface transmission delay and air interface rate, taking voice transmission, 1080p images, and 1080p/30fps video as examples. The service scenario for video transmission is set as remote control of robots via VR headsets:

Table 9 The theoretical analysis of embodied intelligence service requirements

| Transmission Data | End-to-end Delay | Service Data | UL Radio Delay | UL Radio Rate |
| --- | --- | --- | --- | --- |
| Voice/Text/Command | 1 s | 100 KB | 400 ms¹ | 2 Mbps |
| Image (1080P) | 2 s | 400 KB | 900 ms² | 4.5 Mbps |

| Transmission Data | Bit Rate | Frame Rate | Radio Delay | Radio Rate |
| --- | --- | --- | --- | --- |
| Video (1080P, 30fps): UL Video Transmission | 8 Mbps | 30 fps | 33.33 ms | 10 Mbps |
| Video (1080P, 30fps): DL VR Presentation | 12 Mbps³ | — | 30 ms⁴ | 29 Mbps⁵ |

Note:
1. Assumed per-segment delays: terminal 50 ms, backbone network 50 ms, large model 500 ms.
2. Assumed per-segment delays: terminal 50 ms, backbone network 50 ms, large model 1000 ms.
3. After 3D rendering and encoding, the bit rate is approximately 1.5 to 2 times the original video bit rate.
4. Taking XR as an example, a single-loopback delay within 100 ms can basically meet user experience requirements; assumed per-segment delays: terminal 10 ms, backbone network 30 ms, cloud rendering 30 ms.
5. Video frame-level services exhibit burst characteristics, with the instantaneous rate approximately 2 to 3 times the average rate.
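Formulas (1) and (3) can be checked numerically against the table rows; the sketch below uses only the values and assumptions stated in the appendix and its notes.

```python
# Worked check of formulas (1) and (3) against Table 9, using the delay
# assumptions from the table notes. Values are taken from the appendix text.

def radio_rate_video(bit_rate_mbps: float, frame_rate: float) -> float:
    data_per_frame = bit_rate_mbps / frame_rate       # Mbit per frame
    overhead = 0.2 * data_per_frame                   # formula (3) overhead
    radio_delay = 1.0 / frame_rate                    # one frame interval, s
    return (data_per_frame + overhead) / radio_delay  # Mbps

# UL video transmission: 8 Mbps at 30 fps -> about 10 Mbps radio rate
print(round(radio_rate_video(8.0, 30.0), 1))  # 9.6

def radio_delay_ms(e2e_ms, terminal_ms, backbone_ms, cloud_ms):
    # Formula (1): subtract the non-radio delay segments from the E2E budget
    return e2e_ms - terminal_ms - backbone_ms - cloud_ms

# Voice/text/command row: 1 s E2E with note-1 assumptions -> 400 ms
print(radio_delay_ms(1000, 50, 50, 500))  # 400
```

The computed 9.6 Mbps rounds to the 10 Mbps shown in the table's UL video row, and the 400 ms radio delay matches the voice/text/command row.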