Bad bots have long been a scourge of the internet, lurking within streams of genuine human traffic. Many businesses underestimate the negative impact of unrestricted automated traffic, but others know that bad bots are far from benign: they are driven by highly focused money-making motives. We have come a long way since the early days of basic ticket-scalping bots. Today, sites openly sell an array of sophisticated, customized spinners, checkers, ticket pullers, and PDF generators for buying tickets to any event on any platform worldwide. Anyone looking to snap up limited-edition sneakers can just as easily buy hype bots or sneaker bots online.

But the next evolution of bad bots has already begun: bad bots are trying to clean up their image and appear legitimate. A new wave of bot operators is building businesses that scrape proprietary data from websites, package it, and sell competitive data feeds, all branded as "business intelligence" services, to any company willing to pay. This rebranding of "bad bots as a service" shows itself in several ways. First, operators adopt professional-looking websites offering business-intelligence services under names such as pricing intelligence, alternative financial data, or competitive insights; these businesses typically offer data products focused on specific industries. Second, pressure is mounting within industries to buy scraped data: no business wants to lose in the market because a competitor had access to data that was available for purchase. Finally, a growing number of job postings seek to fill roles with titles such as web data extraction specialist or data cleansing specialist. In this environment, it is hard to see the bot problem disappearing any time soon.

Beyond content and price scraping, the biggest bad-bot problems are credential stuffing and credential cracking. Every website with a login page suffers these attacks, and a new phenomenon is emerging: the rise of giant credential-stuffing attacks aimed at a single company. One recent attack mitigated by Imperva lasted 60 hours and comprised 44 million login attempts. In general, the availability of billions of compromised credentials has fueled the rise of credential stuffing, but attacks at this scale can put severe strain on infrastructure, causing slowdowns or outages. For any organization unprepared to handle such a volume of bad-bot requests, these large application-layer credential-stuffing attacks can be as disruptive as a full-scale DDoS attack.
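To get a sense of the infrastructure strain such an attack implies, the figures quoted above (44 million login attempts over 60 hours) work out to a sustained average of roughly 200 login attempts per second, on top of all legitimate traffic. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope scale of the credential-stuffing attack described
# above: 44 million login attempts sustained over 60 hours.
attempts = 44_000_000
duration_s = 60 * 3600  # 60 hours in seconds

avg_rps = attempts / duration_s
print(f"{avg_rps:.0f} login attempts per second on average")  # ~204/s
```

Unlike a volumetric DDoS, each of those attempts is a full application-layer request (TLS handshake, database lookup, password-hash verification), which is why the text compares the load to a DDoS attack.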
2020-12-01
28 pages
5 stars
This 130-page report, "Drug Discovery, Biomarker Development and Advanced R&D Landscape Overview 2020," marks the ninth installment in a series of reports by DKA's pharma division, published since 2017, on the application of artificial intelligence (AI) in the pharmaceutical research industry. The main goal of the series is to provide a comprehensive industry overview of the adoption of AI in drug discovery, clinical research, and other aspects of pharmaceutical R&D. The overview highlights trends and insights in the form of informative mind maps and infographics, and benchmarks the performance of the key players that shape the industry's landscape and relationships. It is an overview analysis intended to help readers understand what is currently happening in the industry, and perhaps to suggest what comes next.

Substantial updates have been introduced since the previous edition, highlighting rapidly evolving industry dynamics and the overall growth in investment and business-development activity in the pharmaceutical AI sector. The lists of biotech companies, biotech investors, and pharmaceutical organizations have been expanded to include new entities, and a new list of leading contract research organizations (CROs) has been added, outlining the contract research industry's growing interest in advanced data-analytics technologies. We also revisit the data and chapters of the previous edition and reflect on what has changed since then. Beyond investment and business trends, the report offers technical insights into some of the latest achievements in applied and basic research.

The report was prepared in the context of the unprecedented global COVID-19 pandemic, which has seriously affected every industry on the planet, pharma and biotech being no exception. Notably, COVID-19 has brought opportunities for drug makers and vaccine developers, and overall it has accelerated rapid development in the pharmaceutical sector. In the first half of 2020, not only were many AI-assisted drug-repurposing projects launched, but the research environment also became more collaborative, with numerous open-science initiatives and many companies opening free access to their platforms. COVID-19 has also posed major challenges for the pharmaceutical industry, such as disruptions to clinical trials, routine research projects, and laboratory workflows. Our findings on the impact of COVID-19 on the biotech investment landscape, however, will be presented in a forthcoming report dedicated to that topic.
2020-12-01
40 pages
5 stars
#Tech2021: Ideas for Digital Democracy
Washington, DC / Ankara / Belgrade / Berlin / Brussels / Bucharest / Paris / Warsaw
Edited by Karen Kornbluh and Sam duPont
With Forewords by Rep. Will Hurd and Christopher Schroeder
November 2020

Table of Contents
Foreword (Will Hurd)
Foreword (Christopher Schroeder)
Introduction (Karen Kornbluh, Sam duPont, and Eli Weiner)
Unlocking Digital Governance (Toomas Ilves)
Investing in the Future with a National Bank for Green Tech (Reed Hundt)
Leveraging Open Data with a National Open Computing Strategy (Lara Mangravite and John Wilbanks)
Building Civic Infrastructure for the 21st Century (Ellen P. Goodman)
Mitigating Supply Chain Risk: Component Security is Not Enough (Edward Cardon, Harvey Rishikof, and Thomas Hedberg, Jr.)
Addressing the Harmful Effects of Predictive Analytics Technologies (Rashida Richardson)
Advancing Digital Trust with Privacy Rules and Accountability (Quentin Palfrey)
Prioritizing Workforce Mobility in the Age of Digital Transformation (Laura Taylor-Kale)
Launching a Cyber Risk Grand Challenge (Adam Bobrow)
Strengthening the Global Internet with a Digital Trade Agreement (Sam duPont)
Establishing a Tech Strategist Cohort Across the Federal Government (Ian Wallace)
Upgrading Digital Financial Infrastructure for Fairness (Kabir Kumar and Tilman Ehrbeck)
Reforming the Patent System to Support Innovation (Lisa Larrimore Ouellette and Heidi Williams)
Averting a Crisis of Confidence in Artificial Intelligence (R. David Edelman)
Protecting Democracy and Public Health from Online Disinformation (Karen Kornbluh)

Foreword: Will Hurd

A critical factor in the United States' economic and military success has been the achievement of global leadership in advanced technology; however, the next administration will inherit the country's most tenuous global position in this area since the Second World War.
In today's Fourth Industrial Revolution, technological change over the next 30 years will make the last 30 years look insignificant. The next administration will also deal with a dramatically shifting global landscape influenced by the long-term effects of the coronavirus pandemic and a Chinese government that is trying to rapidly erode U.S. technological advantages through legal and illegal means. Winning this generation-defining struggle for global leadership in advanced technology will not just affect the U.S. economy but will also shape the rest of the century for the entire world. The next administration must have a comprehensive technology agenda to spur innovation in the United States, leverage innovative technologies within government to better serve citizens, mitigate the challenges posed by technological disruption, and work with allies to ensure our democratic values drive development of these new tools. Though artificial intelligence (AI) is just one of many critical emerging technologies, the blueprint for achieving global leadership in AI can be a useful guide for how the next administration could foster innovation across a number of technologies. The explosion of data and computational capability has made advances in AI possible; but these resources are concurrently chokepoints preventing the maturity of the industry. Continued AI innovation will require large amounts of data, and if the federal government provided more high-quality data sets to the public, entrepreneurs and researchers could compete more closely on the quality of their ideas, rather than their access to proprietary data sets. Open data does not just advance innovation; it can also promote equity by reducing one source of bias in AI: inferior training data. While vetted government data sets will not eliminate bias, this, coupled with investment in digital infrastructure, can go a long way in addressing digital equity.
Whether it is increasing access to supercomputing resources for academic researchers to advance basic knowledge or providing broadband access so underserved communities can participate in the digital economy, the United States will not reach its full AI potential if bright minds are left behind. Bringing these technologies into the public sector will also allow governments at all levels to better serve citizens. In the face of a global pandemic, government information technology systems at the federal, state, and local levels have been tested. When citizens needed government the most, paper-based processes and legacy digital systems failed to scale, causing unnecessary delays and suffering. Rapidly scaling capacity is just one benefit of moving to the cloud. With the public sector's data safely in the cloud, civil servants will be able to use modern tools, like those powered by machine learning and AI, to draw insights that were previously impossible. Armed with this new intelligence, civic leaders can offer Americans a better, more efficient version of government. The effort to modernize government systems should not cease after the coronavirus pandemic. Instead, we should use this as an impetus to supercharge modernization efforts. While technology can be used to improve society, these same digital tools will be used against us by our adversaries. Russian disinformation operations have turned tools designed to bring us together into weapons to drive us apart. While the United States first experienced this in full force during the 2016 elections, many of its European allies, from the United Kingdom to Montenegro, have been dealing with the effects of Russian interference for years. In the summer of 2020, National Counterintelligence and Security Center Director William Evanina stated that not only did this malicious activity show no signs of abating, but that countries like China and Iran were also starting to take a page out of the Russian playbook. In addition to disinformation, we have to be prepared for our adversaries' continued use of cyberattacks to steal intellectual property, probe critical infrastructure, and violate the privacy of Americans. The next administration will be unable to tackle these challenges alone. Beginning with the Marshall Plan that rebuilt Europe after the Second World War and served as the bedrock commitment enabling the creation of NATO in 1949, the center of international prosperity and security has been U.S.-led alliances, not the United States alone. We stood up to despots and tyrants and helped our friends stand on their own. We did not take spoils but showed leadership and worked toward shared goals with our allies. If the next administration embraces the understanding that the United States has become an exceptional nation not because of what we have taken but because of what we have given, then we will continue our position as the global leader in advanced technology despite uncertain times.

Foreword: Christopher Schroeder

I am often asked about the most exciting developments in technology, and I like to cite the potential of artificial intelligence and data science, advancements in robotics and genomics, and more. But perhaps the greatest leap globally in technology is not the tech itself, but increasingly universal access to it. Ten years ago, analysts predicted that by 2020, two-thirds of humanity would have a smart device, each "phone" with more computing power than NASA had to put a man on the moon. Today most communities have blown through those predictions, dramatically expanding the ability of people everywhere to connect, collaborate, and learn.
What is more, this shift has unleashed talent and innovation, forever changing who can compete in the new global economy, and how they do so. The coronavirus pandemic has accelerated all these trends: perhaps ten years of technology adoption and embrace of digital life has happened in a matter of months. Compelled to buy daily staples online, attend virtual classes, and video chat with their doctors, millions have embraced behavioral changes that will only reinforce and intensify the speed of technological advancement. That expanded access to technology is unleashing so much bottom-up innovation should not mask the top-down impact that governments and other institutions can have. It is tempting, especially in the business world, to hope these institutions merely "get out of the way," and sometimes they should. At the same time, the physical infrastructure, education systems, regulatory environments, and rule of law created by these institutions are at the center of what allows a society to survive and thrive in the midst of rapid change. In the United States and around the globe, the stakes could not be higher. While billions of people have rapidly entered the digital age, millions in the United States lack access. We have long paid lip service to the "digital divide," and some efforts to bridge it have made progress. But in the 21st century, asking someone to work, live, and learn without the Internet is like asking them to get by in the 20th century without a road to drive on. Since the Second World War, succeeding in the global economy has meant making technology in, or selling a product to, the United States. This assumption no longer holds. As innovative talent is unleashed in every country, globally competitive enterprises are being built everywhere. China is the prime example of a rising market that now stands toe to toe with the United States, and it has succeeded by developing technology that is increasingly popular worldwide.
And there are many "mini Chinas" rising: from Indonesia to Vietnam, Egypt to Kenya, Estonia to Brazil. We are witnessing a new globalism, whether we wish to believe it or not. And we are in the earliest stages of these momentous shifts. So where are these shifts discussed in the U.S. political debate? It is shocking that the answer is "almost nowhere." Not one question in the presidential debates focused seriously on the United States' place in global innovation, or how new tools will reshape how to learn, engage, heal, buy, or sell domestically. When technology does enter the political discussion, it is often treated as a side show, a ribbon-cutting PR event for politicians and nothing more. Or it is viewed solely for the threats it creates: from data breaches to political manipulation. It is typical of Washington to look backward and try to drive policy change through old-fashioned models. Do we need a START treaty for cyberwar? Should fintech innovators be regulated under the regimes created for banking systems decades ago? This instinct is antithetical to the ethos of innovation. Washington cannot get caught in the tar of bureaucracy and regulatory constraint, lest we fail to achieve what citizens expect and our country needs. What has been most seriously lacking is a coherent, cohesive, fact-driven analysis of where we are, what we want, and how we get there. We risk a haphazard approach with no overarching plan or vision for the future. The German Marshall Fund's Digital Innovation and Democracy Initiative (GMF Digital) has leapt out as a leader in advancing innovation and increasing economic opportunity for all, while strengthening democratic values at home and abroad. The breadth and coherence of #Tech2021 (honest, expert-led, digestible, and action-oriented) is astounding.
It pushes us to stop sleepwalking toward predictable outcomes and offers ideas that will light up conversation in the United States and among its allies and partners. Technology knows no party or border. U.S. leadership requires the will to move beyond political oversimplification and demands a grounding in the facts as we understand them, a coherent debate about 21st century strategy, and clear, actionable ideas that the next administration must prioritize. #Tech2021, in the end, is an inspiring call to action.

Introduction: Karen Kornbluh, Sam duPont, and Eli Weiner

Congressman Will Hurd and Chris Schroeder underscore in their forewords that the United States finds itself at a pivot point when it comes to innovation. New technologies will bring enormous new opportunity we must seize to address our existing challenges, and new disruption to which we must respond. Fortunately, good ideas abound for how to ensure these innovations improve lives, increase national security, and strengthen democratic values. #Tech2021 offers strategic, turnkey reforms from experts for how the U.S. government can leverage technology to ensure individuals and society thrive in the midst of rapid change. Despite the diversity of these briefs, some themes emerge:
- Innovation is fundamentally a bottom-up phenomenon, so opportunity to participate must be broadly distributed. As Schroeder observes, while many may wish for the government to simply "get out of the way," governments and other institutions working from the top down are needed to spur physical infrastructure (especially broadband access), education and training, and smart rules of the road that unlock the technological potential of our society and economy.
- Privacy protections and positive corrections to systemic inequities must be built in to ensure democratic values are protected and strengthened.
- Innovation happens in a global context.
Democratic allies should work together to ensure that new technologies support and strengthen democratic values. The ideas offered up are varied and specific.

Digital identities and resilient data architecture. Estonia's former president Toomas Ilves urges we learn from the Estonian model to improve the delivery of government services by creating a functional framework for digital governance. He urges two critical policy interventions: creating secure digital identities for individuals and creating resilient data architectures for government.

A national bank for green tech. Reed Hundt proposes closing the gap in funds needed to convert to 100 percent clean energy by financing catalytic investments that drive private capital toward a clean, technology-driven economy that creates new, high-paying jobs. A National Green Bank would focus on directly financing clean-energy projects, supporting state and local green banks, purchasing additional greenhouse-gas reductions, and ensuring a just transition.

A national open computing strategy. Lara Mangravite and John Wilbanks argue the government should provide subsidized cloud computing to lower cost barriers for scientific researchers to analyze large data sets and leverage its negotiating power to protect federal resources and the privacy of citizens whose data are analyzed.

Civic infrastructure fo
2020-11-26
47 pages
5 stars
The dramatic growth of video streaming, gaming, video conferencing, and DDoS has had a real impact on service providers and the internet service delivery chain. Data from European and North American service providers from February to September show a 30% increase in video subscribers, a 23% increase in VPN endpoints in the United States, and DDo…
2020-11-26
51 pages
5 stars
In 2020 we have seen strong expansion of 5G worldwide, with networks delivering new communication capabilities and services that are changing society. As we continue to witness this success, we should remember that the first steps toward 5G were taken a decade ago, when 4G had only just been launched, and even after that…
2020-11-25
26 pages
5 stars
CEA-Leti, technology research institute, 17 avenue des Martyrs, 38054 Grenoble Cedex 9, France

Edge Artificial Intelligence
Press contact: Marion Levy, T. +33 438 781 817, marion.levy@cea.fr
Technical contact: Elisa Vianello, T. +33 438 789 092, elisa.vianello@cea.fr

AT-A-GLANCE | EDGE ARTIFICIAL INTELLIGENCE | NOVEMBER 2020

Contents
About edge artificial intelligence
Bringing edge artificial intelligence to life
Soft & hardware: the perfect match
Edge AI: the fast & chip need
The need for technologies
Supporting industrials
A multi-domain expertise
Collaborative projects success stories
Ecosystem
Grenoble: a center of excellence
MIAI Grenoble-Alpes institute
Appendix
AI tutorials
Biography of Emmanuel Sabonnadière, CEO of CEA-Leti
CEA-Leti in brief
50 years of R&D for industrial innovation

About edge artificial intelligence

Bringing edge artificial intelligence to life
Artificial intelligence is no longer an abstract concept; it already fuels our everyday life through communication tools (e.g. Google's "Smart Compose" feature, Siri, etc.). Tomorrow, AI will play a greater and perhaps more important societal role, predicting and assessing our health risks, providing customer support, and easing traffic congestion. Cars will be packed with AI features, including speech and gesture recognition, eye tracking, and so on. Some of these applications will require unprecedented responsiveness (e.g. braking systems). In such a context, the cloud alone will not do. AI will also need to be supported locally, meaning at the edge: algorithms will need to be processed directly on the hardware device. With the General Data Protection Regulation in mind, the European Commission for the Internal Market has set the challenge: 80% of data will need to be processed directly within the hardware over the next five years. Currently, only 20% is processed locally. The mission of CEA-Leti's experts will consist of combining high-performance computing capacity with low energy consumption in ultra-miniaturized systems, at low cost. Systems supporting AI at the edge will need to perform thousands of billions of operations per second while consuming a single watt or even less.

Bridging the gap: privacy and efficiency
Because connection to the cloud or to any kind of network will no longer be required, systems will be fully independent, able to process data and make decisions by themselves. Operating independently translates into increased cybersecurity: the absence of back-and-forth between the object and a distant platform will help keep citizens' data safe and private. Beyond privacy, edge AI addresses several current technical challenges by offering:
- Energy sobriety: more than 90% of data sent to the cloud is never used again. Beyond trimming waste, it has become vital to drastically reduce data transfers and cut data-storage costs.
- Greater autonomy with fully independent systems: complex decisions made without depending on the cloud are key for medical devices providing continuous treatment (e.g. for diabetics).
- Continuous safe operation: applications such as autonomous vehicles or production lines will require continuous safe operation.
- Low latency: today, a data round trip from a sensor to a cloud located 1,500 km away takes about 10 ms. Edge AI will help reduce latency to 1 ms or less.

Diabeloop
Diabeloop is an independent French company developing, in partnership with CEA-Leti, disruptive technological innovations to automate the treatment of Type 1 diabetes. Its first product, the DBLG1 System, is an integrated system that provides glycemic control in an automatic and highly efficient way. The core of this innovation is an artificial intelligence hosted on a terminal that connects via Bluetooth to a continuous glucose monitor (CGM) and an insulin pump. The algorithm makes and executes the many therapeutic decisions that patients currently have to handle by themselves; patients are only expected to log meals and physical activities.

Soft & hardware: the perfect match
Until recently, everything you needed to know about AI was software-related. Edge AI has slightly changed the tune, with the need for on-device algorithm processing. As algorithms become more and more complex to support the most advanced AI, hardware requires new microelectronic solutions to meet the evolving demands of IoT packed with AI. Transferring cloud-based software solutions into a highly miniaturized chip is no small task. Of central importance is the need to achieve unprecedented efficiency and speed in the collection and analysis of data, while also managing power consumption and form factor. In the hardware domain, this will require innovative thinking and new paradigms in sensors, processors, memory, interconnection, and packaging. Both memory and computing capacity are key to determining the cost/performance ratio of an AI solution, including the design of algorithms. Because software and hardware now go hand in hand, CEA-Leti collaborates with CEA-List to develop the best hardware/software solutions to support AI locally. The program offers common laboratories to tailor, in partnership with industrial companies, complete solutions, from algorithms to chip design.

Spirit
CEA-Leti introduced SPIRIT, the world's first fully integrated neural network on-chip with non-volatile resistive memory. Until now, memories were placed outside the chip, leading to high energy consumption. With this co-integration, in the same die, of analog spiking neurons and resistive synapses leveraging resistive random-access memory (RRAM) cells, CEA-Leti enables the push for distributed computing devices that support artificial intelligence at the edge. The spiking neural networks are designed by CEA-Leti, and the RRAM is fabricated in a post-process at CEA-Leti on CMOS-based wafers.

Edge AI: the fast & chip need
Because conventional chips cannot keep up with the arrival of the most sophisticated AI supported within devices, a growing need for semiconductor technologies has emerged. Yesterday's cloud-based web giants are now investing heavily in the semiconductor industry and actively looking for new R&D solutions to migrate most AI to the edge. To help industry keep up in the race and integrate AI into already ultra-miniaturized chips, CEA-Leti has launched a dedicated Edge AI program to pioneer fast and reliable semiconductor solutions, from computing and sensing to data storage.

In-memory computing
One focus is a fundamental problem of modern computing: moving data between memory and processor now costs vastly more than computation. Data transfer and memory access account for up to 90% of system energy usage. CEA-Leti is specifically developing neuro-inspired architectures to explore new programming models, and "in-memory computing" to bridge the gap between memory and logic units. CEA-Leti and its partners are involved in MYCUBE, an ERC-backed project to stack memories onto processors, which is setting its sights on the first-ever in-memory computing technology. The goal is to carry out simple computations directly in a circuit's memory. A demonstrator built on silicon nanowires (the most appropriate for the application) and non-volatile resistive memory will be completed in 2022, using 20 times less energy than a conventional circuit. CEA-Leti also works on advanced 3D stacking strategies to integrate an additional memory layer on top of the logic unit.

Increasing data storage capacities
For data storage, the institute leverages its expertise in resistive non-volatile memories, including OxRAM and PCM. Resistive non-volatile memories are very power-frugal, as they can quickly shift from an active mode to a sleep mode. Compatible with standard CMOS processes, they are key to future edge-AI chips, both for embedded high-performance applications, such as in cars or satellites, and for ultra-low-power smart sensors.

CEA-Leti and Intel partnership on 3D technologies
In October 2020, CEA-Leti and Intel announced a new collaboration on advanced 3D and packaging technologies for processors to advance chip design. The research focuses on the assembly of smaller chiplets, optimizing interconnection technologies between the different elements of microprocessors, and new bonding and stacking technologies for 3D ICs, especially for high-performance computing (HPC) applications.

Supporting industrials

Industrial solutions for 2025
In 2019, CEA-Leti launched a program dedicated to responding to industry's growing and urgent need for solutions to successfully migrate artificial intelligence to the edge. The program brings together some 50 multidisciplinary experts through various partnerships, possessing all the skills necessary to develop hardware and software solutions capable of supporting edge AI. The goal is to tailor highly reliable and low-power solutions leveraging new approaches inspired by neural networks, combining digital and analog technologies. Consequently, the von Neumann approach has been replaced by neuromorphic approaches and innovative hardware architectures featuring in-memory computing. From the design phase to manufacturing, the mission of CEA-Leti's experts and the program's partners is to develop solutions that will be marketable by 2025.

Rethinking the architecture of electronic chips
To create hardware solutions from scratch that combine high-performance computing and energy efficiency using low-cost, integrated SoC components, the program's experts are looking into ways of designing innovative architectures, and neuromorphic architectures in particular, capable of:
- Bringing the computing units and the data storage units closer together
- Making full use of the potential of non-volatile memories, capable of retaining information even when the power supply is off
- Positioning the memories above the computing units or leveraging in-memory computing
- Combining sensors and imagers with AI computing units, and
- Developing algorithms specific to edge AI.

The need for technologies

A multi-domain expertise

Software to hardware: a multidisciplinary team
The development of high-performance, low-cost, and low-consumption edge AI solutions requires a broad range of skills, from the development of non-volatile memories, sensors, and circuits to the development of advanced algorithms, such as incremental machine learning. CEA-Leti's Edge AI program relies on a multidisciplinary team, drawn from several institutes, capable of providing competitive, made-to-measure solutions that can be industrialized quickly. The program brings together some 50 experts from institutes such as CEA-Leti and CEA-List.

Bio-inspired solutions
The research engineers of the Edge AI program are working with biologists and researchers in cognitive psychology to draw inspiration from mechanisms in the living world that are both energy-efficient and possess an unbelievable capacity to adapt. They are developing neuromorphic hardware systems, equipped with artificial neurons and synapses, that optimize energy-consuming interactions. Specifically, the team is exploring three paths of research:
- Spike-based data coding, similar to the brain's neurons, that is both efficient and noise-resistant
- The development of dense, non-volatile memory technologies to implement the synapses, in order to bring them and the neurons as close together as possible
- The development of spiking sensors, vision sensors, and micro-electromechanical systems (MEMS), taking inspiration from the communication mechanisms of the natural world.

Incremental machine learning
The recently developed, bio-inspired hardware will host the incremental machine-learning solutions that the program is also developing. Unlike current AI solutions, which require enormous learning databases, future edge AI solutions will be able to learn gradually and economically.

The program demonstrators
- Sigma Fusion: safe perception with a low-power sensor-fusion solution based on the second-generation Aurix
- Spirit: spiking neural networks enabling massively parallel, low-power, and low-latency computation
- Intact: 3D technologies for chiplet-based advanced systems
- Retine: a programmable vision chip enabling high-frame-rate and low-latency image analysis

Collaborative projects success stories
The program is open to all profiles of industrial manufacturers looking for competitive, tailor-made solutions for incorporating artificial intelligence into their products now or in the future. The team develops innovative and competitive technological solutions for major groups, SMEs, and even start-ups. It also offers both differentiating technological building blocks and complete solutions, from software to hardware, from design to packaging, and from prototyping to small production runs.

Ecosystem

Grenoble: a center of excellence
In 2019, Grenoble was cited by an international jury and the French government as one of the four French centers for artificial intelligence. In addition to drawing from the concentration of AI expertise in the Grenoble-Alpes region, this program includes other French and international experts from the worlds of research, education, and private enterprise to harness scientific excellence and build a French and European AI offer. The regional partner companies and institutes include STMicroelectronics and Schneider, plus Inria, Grenoble-Alpes University, and INP-Grenoble.

MIAI Grenoble-Alpes institute
CEA-Leti is a special partner of the MIAI Grenoble Alpes Institute (Multidisciplinary Institute in Artificial Intelligence), located at Université Grenoble Alpes (UGA). MIAI plays a major role in:
- Artificial intelligence research related to the health, environment, and energy fields
- Offering attractive courses for students and professionals of all levels
- Supporting innovation in large companies, SMEs, and start-ups.

In concrete terms, the program's €54 million budget enables all these multidisciplinary players to manage a number of public-private collaborative projects focusing on application-oriented subjects, and it finances 28 chairs of excellence in seven subjects, including built-in AI, health, industry 4.0, the environment and energy, and societal issues. CEA-Leti actively contributes to four of these chairs as part of the Edge AI program, in an effort to overcome the limits of neuromorphic architectures for AI, to provide better support for patients in the management of their treatment, and to optimize telecommunications networks.

Edge AI in European projects
- NEURAM3: neural computing architectures in advanced monolithic 3D-VLSI nanotechnologies
- NEUROTECH: neuromorphic technology
- TEMPO: technology and hardware for neuromorphic computing
European alliances and partnerships are vital to the success of the many programs in which CEA-Leti is involved, as a project coordinator or as a partner. CEA-Leti has formed an alliance with IMEC (Belgium) and Fraunhofer/FMD (Germany) to develop a pan-European technological platform to design, manufacture, and test prototypes, including those using edge AI or serving it.

Appendix

AI tutorials
CEA-Leti hosts a series of tutorials to help anyone trying to understand the technologies behind artificial intelligence.

Biography of Emmanuel Sabonnadière, CEO of CEA-Leti
Emmanuel Sabonnadière has been the Director of CEA-Leti since 2017, after occupying various strategic positions. From 2014 to 2017, Mr. Sabonnadière headed the "Professional" division of Philips Lighting in Amsterdam (Netherlands), following five years at the head of Alstom T&D's transformer division. From 2008 to 2014, he held the position of CEO of General Cable Europe in Barcelona (Spain); and from 2005 to 2008, he was CEO of NKM Noell in Würzburg (Germany). Mr. Sabonnadière started his career in 1992 with Schneider Electric, where he held several strategic positions in product development. Serving the industry for over 25 years, Mr. Sabonnadière improves operational performance by building motivated teams. He has acquired solid experience in multicultural management, developing new markets both in Europe and globally. Innovation is key to all his actions. Emmanuel Sabonnadière's outlook is shaped by operational excellence, technological innovation, talent management, and enthusiastic team building. He holds a Ph.D. in physics from the École Centrale de Lyon, a master's degree in electrical engineering from the Université de Technologie de Compiègne, and an MBA from the École Supérieure des Affaires de Grenoble. Emmanuel Sabonnadière also chairs the Nanoelec IRT (Technological Research Institute) and is president of the CEA-Leti Carnot Institute. He is president of France's industry strategic committee (CSF) for bioproduction and a member of the CSF for the electronics industry. Finally, Mr. Sabonnadière is a member of SEMI's European Council and has chaired the JESSICA France Association (CAPTRONIC program) since January 9, 2020.

CEA-Leti in brief
- Founded in 1967
- 330 industrial partners (40% are SMEs and VSEs)
- 1,900 researchers
- 700 publications per year
- 2,760 patents held
- Creation of 65 start-ups
- €315M annual budget
- ISO 9001 certified since 2000
- 250 thesis and post-doctoral students
- 10,000 m² of cleanroom space, 200 mm and 300 mm semiconductor production units
- Based in France (Grenoble), with offices in Belgium (Brussels), the USA (Silicon Valley), and Japan (Tokyo)

50 years of R&D for industrial innovation
In 1957, an "integrated electronics" research group was formed at the CEA in Grenoble. It was tasked with the design and maintenance of nuclear reactor electronic systems and with a range of civil and military nuclear engineering needs. At that time, many integrated circuits were produced in American factories, and this motivated Leti's integrated electronics group to develop its own transistor technology. In 1963, the institute produced its first integrated circuit and, in 1966, it announced production of the first MOS transistor. The CEA integrated electronics group became the "Laboratoire d'électronique et de technologie de l'information" (Leti) on October 10th, 1967. Very quickly, Leti was organized to work with and set up partnerships with industry. The "Étude et fabrication de circuits intégrés spéciaux" (design and production of special integrated circuits) subsidiary, known as Efcis, was founded in 1972. In 1982, it was integrated into Thomson Semiconducteurs, a company that merged with the Italian SGS to form STMicroelectronics. In 1976, CEA-Leti produced and installed the first French scanner at Grenoble's general hospital. Six years later, in 1982, the institute completed construction of 6,000 m² of buildings, including 2,000 m² of cleanrooms, in response to development needs in microelectronics, infrared technologies, and magnetometry. Initial developments in micro-electro-mechanical systems (MEMS), especially accelerometers, were achieved at this time, and CEA-Leti lodged a first generic patent for silicon-based comb capacitive lateral micro-accelerometers. Minatec was founded in 2006 around Leti's activity, the aim being to bring together academic research, R&D laboratories, and industry. Minatec focuses on micro- and nanotechnologies, and constitutes a new model for the research-education-innovation "knowledge
triangle”.Today,this model structures the formation of major French university campuses like Paris-Saclay and Giant(Grenoble).CEA-LetiCEA-Leti,technology research institute17 avenue des Martyrs,38054 Grenoble Cedex 9,Francecea-San FranciscoTokyoSan FranciscoTokyoCEA-Leti Head OfficeAbout CEA-Leti(France)CEA-Leti,a technology research institute at CEA,is a global leader in miniaturization technologies enabling smart,energy-efficient and secure solutions for industry.Founded in 1967,CEA-Leti pioneers micro-&nanotechnologies,tailoring differentiating applicative solutions for global companies,SMEs and startups.CEA-Leti tackles critical challenges in healthcare,energy and digital migration.From sensors to data processing and computing solutions,CEA-Letis multidisciplinary teams deliver solid expertise,leveraging world-class pre-industrialization facilities.With a staff of more than 1,900,a portfolio of 3,100 patents,11,000 sq.meters of cleanroom space and a clear IP policy,the institute is based in Grenoble,France,and has offices in Silicon Valley and Tokyo.CEA-Leti has launched 70 startups and is a member of the Carnot Institutes network.Follow us on cea- and CEA_Leti.Technological expertiseCEA has a key role in transferring scientific knowledge and innovation from research to industry.This high-level technological research is carried out in particular in electronic and integrated systems,from microscale to nanoscale.It has a wide range of industrial applications in the fields of transport,health,safety and telecommunications,contributing to the creation of high-quality and competitive products.For more information:cea.fr/english Brussels
2020-11-20
17 pages




5 stars
The 2020 World Manufacturing Report: Manufacturing in the Age of Artificial Intelligence presents the state of AI worldwide, surveys the many applications of AI across the manufacturing value chain, examines how AI is transforming the workforce, explains the ethical and policy issues surrounding AI in manufacturing, and closes with 10 key recommendations for the successful and trustworthy adoption of AI in manufacturing.

AI is not new to manufacturing. Over the past decade, however, advances in AI algorithms, computing power, connectivity and data science have made it increasingly important as companies come to see AI as a driver of competitive advantage. This is evident in the projected growth of global revenue from enterprise AI applications and in manufacturing's large share of AI spending. Still, shortages of experienced AI talent, a lack of expertise and the need for accurate data remain significant challenges for organizations adopting AI. Even so, companies increasingly recognize AI's potential not only to raise productivity but also to deliver new capabilities for staying competitive.

AI's impact is significant across all manufacturing activities and is expected to grow in the coming years. At the broadest level, that of the digital supply network (DSN), AI adds value in demand forecasting and the associated synchronized planning, automated warehouse management, automated design and development, and connected services. At the factory or shop-floor level, AI applications target energy efficiency, product and process quality, optimized scheduling, robotics, and augmenting the capabilities of human operators. At the most basic machine-tool level, where AI applications are most mature, automated quality inspection, monitoring and control, data-driven tool-wear models and predictive maintenance all benefit from AI.

As for the manufacturing workforce, AI has important implications for the future of work. Manufacturing will benefit from AI because a large number of tasks can be automated, yet AI cannot replicate many human-centric tasks, leaving the human role as important as ever. AI will augment many existing roles in organizations and will also create entirely new ones, not only in research and technology development for AI-related technologies but also in business strategy, ethics and compliance, and human-AI interaction. The skills needed to work with AI matter as well: workers will need not only more technical AI and manufacturing skills, but also human-centric skills such as ethical/trustworthy AI and transversal skills. This means organizations should prioritize employee education and training so their people can succeed in an AI-centric manufacturing environment.

While AI promises a positive impact on growth and productivity, it also raises ethical dilemmas and regulatory questions that could hinder its adoption. In this regard, many governments and organizations have proposed ethics initiatives and frameworks to identify, and attempt to address, the associated ethical challenges. The report identifies the main ethical challenges of deploying AI in manufacturing: transparency of AI systems, privacy and data protection, technical robustness and safety, human agency, and legality and compliance. Standards and regulations that do not stifle innovation can support AI deployment and guide the development of trustworthy AI systems.

Finally, to guide the sustainable adoption of AI in manufacturing, the report offers a set of 10 key recommendations addressed to different stakeholders. Covering key topics ranging from raising societal awareness of AI to implementing standards and regulations, they aim to help stakeholders address critical issues and harness AI's potential in manufacturing now and in the future.
2020-11-17
108 pages




5 stars
AI-enabled interactions are fundamental to customer-service excellence today. The COVID-19 pandemic has increased customer demand for contactless transactions, especially in financial services. After all, who wants to risk their health to sign a deposit slip? Financial institutions and their customers both stand to gain a great deal from AI-enabled interactions.

In the Capgemini Research Institute's latest report, Smart Money: How to drive AI at scale to transform the financial services customer experience, we surveyed more than 5,000 customers and over 300 banking and insurance executives. We found that financial services firms are increasingly turning to AI for customer interactions, and reaping significant benefits from it. In fact, 51% of customers interact with banks and insurers through AI every day, while financial services firms that deployed AI in customer-facing functions cut operating costs by 13% and increased revenue per customer by 10%. The trouble starts when it comes to scaling these initiatives.

To avoid being left behind, financial institutions should: invest in value-driven AI to transform the customer experience; create an AI governance approach grounded in trust and ethics to drive broad customer adoption; deliver AI experiences that account for the "signature moments" that call for empathy and emotion; build the technology foundation AI needs to engage customers; establish senior leadership roles for AI aimed at accelerating adoption; and educate customers about what AI can do for them while making AI systems explainable and transparent.
2020-11-17
44 pages




5 stars
In the wake of the COVID-19 crisis, our reliance on artificial intelligence has risen sharply. Today, more than ever, we expect AI to help us limit physical interactions, predict the next wave of the pandemic, disinfect medical facilities, and even deliver our food. But can AI be trusted? ...
2020-11-17
44 pages




5 stars
The AI-powered enterprise: Unleashing the potential of AI at scale. The billions of dollars pouring into AI come as no surprise, because AI truly has the power to revolutionize the way we do business. A growing number of companies across industries are running pilots and proofs of concept for AI applications. But AI at scale ...
2020-11-16
40 pages




5 stars
AI Chips: What They Are and Why They Matter. An AI Chips Reference. April 2020. Authors: Saif M. Khan, A.
2020-11-01
72 pages




5 stars
THE MORALS OF ALGORITHMS: A contribution to the ethics of AI systems. When discussing artificial int.
2020-11-01
14 pages




5 stars
Artificial Intelligence and National Security. Updated August 26, 2020. Congressional Research Service, https://crsreports.congress.gov, R45178.

Summary: Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles. Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology's development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption. AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military. A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes. Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. 
Currently, China is primarily focused on using AI to make faster and more well-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics. Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations. While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary, if not revolutionary, effect. Military AI development presents a number of potential issues for Congress: What is the right balance of commercial and government funding for AI development? How might Congress influence defense acquisition reform initiatives that facilitate military AI development? What changes, if any, are necessary in Congress and DOD to implement effective oversight of AI development? How should the United States balance research and development related to artificial intelligence and autonomous systems with ethical considerations? What legislative or regulatory changes are necessary for the integration of military AI applications? What measures can Congress take to help manage the AI competition globally?

Contents: Introduction; AI Terminology and Background; Issues for Congress; AI Applications for Defense (Intelligence, Surveillance, and Reconnaissance; Logistics; Cyberspace Operations; Information Operations and "Deep Fakes"; Command and Control; Semiautonomous and Autonomous Vehicles; Lethal Autonomous Weapon Systems (LAWS)); Military AI Integration Challenges (Technology; Process; Personnel; Culture); International Competitors (China; Russia; International Institutions); AI Opportunities and Challenges (Autonomy; Speed and Endurance; Scaling; Information Superiority; Predictability; Explainability; Exploitation); AI's Potential Impact on Combat (Minimal Impact on Combat; Evolutionary Impact on Combat; Revolutionary Impact on Combat). Figures: Figure 1. Relationships of Selected AI Definitions; Figure 2. Chinese Investment in U.S. AI Companies, 2010-2017; Figure 3. Value of Autonomy to DOD Missions; Figure 4. AI and Image Classifying Errors; Figure 5. AI and Context; Figure 6. Adversarial Images. Tables: Table 1. Taxonomy of Historical AI Definitions. Contacts: Author Information; Acknowledgments.

Introduction:[1] Artificial intelligence (AI) is a rapidly growing field of technology that is capturing the attention of commercial investors, defense intellectuals, policymakers, and international competitors alike, as evidenced by a number of recent initiatives. On July 20, 2017, the Chinese government released a strategy detailing its plan to take the lead in AI by 2030. Less than two months later Vladimir Putin publicly announced Russia's intent to pursue AI technologies, stating, "Whoever becomes the leader in this field will rule the world."[2] Similarly, the U.S. National Defense Strategy, released in January 2018, identified artificial intelligence as one of the key technologies that will "ensure the United States will be able to fight and win the wars of the future."[3] The U.S. 
military is already integrating AI systems into combat via a spearhead initiative called Project Maven, which uses AI algorithms to identify insurgent targets in Iraq and Syria.[4] These dynamics raise several questions that Congress addressed in hearings during 2017, 2018, and 2019: What types of military AI applications are possible, and what limits, if any, should be imposed? What unique advantages and vulnerabilities come with employing AI for defense? How will AI change warfare, and what influence will it have on the military balance with U.S. competitors? Congress has a number of oversight, budgetary, and legislative tools available that it may use to influence the answers to these questions and shape the future development of AI technology.

AI Terminology and Background:[5] Almost all academic studies in artificial intelligence acknowledge that no commonly accepted definition of AI exists, in part because of the diverse approaches to research in the field. Likewise, although Section 238 of the FY2019 National Defense Authorization Act (NDAA) directs the Secretary of Defense to produce a definition of artificial intelligence by August 13, 2019, no official U.S. government definition of AI yet exists.[6] The FY2019 NDAA does, however, provide a definition of AI for the purposes of Section 238: 1. Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets. 2. An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.

[1] This report was originally written by Daniel S. Hoadley, U.S. Air Force Fellow. It has been updated by Kelley M. Sayler, Analyst in Advanced Technology and Global Security. 
[2] China State Council, "A Next Generation Artificial Intelligence Development Plan," July 20, 2017, translated by New America, https://www.newamerica.org/documents/1959/translation-fulltext-8.1.17.pdf; and Tom Simonite, "For Superpowers, Artificial Intelligence Fuels New Global Arms Race," Wired, August 8, 2017, story/for-superpowers-artificial-intelligence-fuels-new-global-arms-race. [3] Department of Defense, Summary of the 2018 National Defense Strategy, p. 3, https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf. [4] Marcus Weisgerber, "The Pentagon's New Algorithmic Warfare Cell Gets Its First Mission: Hunt ISIS," Defense One, May 14, 2017, first-mission-hunt-isis/137833/. [5] For a general overview of AI, see CRS In Focus IF10608, Overview of Artificial Intelligence, by Laurie A. Harris. [6] P.L. 115-232, Section 2, Division A, Title II, 238.

3. An artificial system designed to think or act like a human, including cognitive architectures and neural networks. 4. A set of techniques, including machine learning, that is designed to approximate a cognitive task. 5. An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting.[7] This definition encompasses many of the descriptions in Table 1 below, which summarizes various AI definitions in academic literature. 
The field of AI research began in the 1940s, but an explosion of interest in AI began around 2010 due to the convergence of three enabling developments: (1) the availability of "big data" sources, (2) improvements to machine learning approaches, and (3) increases in computer processing power.[8] This growth has advanced the state of Narrow AI, which refers to algorithms that address specific problem sets like game playing, image recognition, and navigation. All current AI systems fall into the Narrow AI category. The most prevalent approach to Narrow AI is machine learning, which involves statistical algorithms that replicate human cognitive tasks by deriving their own procedures through analysis of large training data sets. During the training process, the computer system creates its own statistical model to accomplish the specified task in situations it has not previously encountered. Experts generally agree that it will be many decades before the field advances to develop General AI, which refers to systems capable of human-level intelligence across a broad range of tasks.[9] Nevertheless, the rapid advancements in Narrow AI have sparked a wave of investment, with U.S. venture capitalists investing an estimated $
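The machine-learning passage above (a system that derives its own statistical model from training data, then applies it to situations it has not previously encountered) can be illustrated with a deliberately tiny sketch. Everything here (the nearest-centroid method, the `train`/`predict` names, and the two-class data) is an invented toy for illustration, not anything taken from the CRS report:

```python
# Toy "Narrow AI" sketch: the program is never told the classification rule;
# it derives a statistical model (per-class centroids) from labeled training
# data, then applies that model to inputs it has never seen.

def train(samples):
    """samples: list of (features, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Invented training set: two feature dimensions, two classes.
training = [([1.0, 1.2], "benign"), ([0.8, 1.0], "benign"),
            ([3.1, 2.9], "hostile"), ([2.9, 3.2], "hostile")]
model = train(training)
print(predict(model, [0.9, 1.1]))  # an input not present in the training data
print(predict(model, [3.0, 3.0]))
```

Real machine-learning systems use far richer models than centroids, but the shape is the same: the "procedure" is learned from data rather than hand-coded.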
2020-10-30
43 pages




5 stars
ETSI White Paper No. 34: Artificial Intelligence and future directions for ETSI. 1st edition, June 2020. ISBN No. 979-10-92620-30-1. Authors: Lindsay Frost (document editor, NEC), Tayeb Ben Meriem (ORANGE), Jorge Manuel Bonifacio (PT Portugal), Scott Cadzow (Cadzow Consulting), Francisco da Silva (Huawei), Mostafa Essa (Vodafone), Ray Forbes (Huawei), Pierpaolo Marchese (Telecom Italia), Marie-Paule Odini (HPE), Nurit Sprecher (Nokia), Christian Toche (Huawei), Suno Wood (eG4U). ETSI, 06921 Sophia Antipolis CEDEX, France. Tel +33 4 92 94 42 00, info@etsi.org, www.etsi.org.

Contents: Executive Summary; Special Foreword: AI and Covid-19; 1 Introduction; 2 Importance of AI Issues across Europe and Globally; 3 AI in ETSI Standardization Today; 3.1 AI in 5G Systems; 3.2 Network Optimization and End-to-End Service Assurance; 3.3 IoT, Data Acquisition ...

... whereas other activities falling under the 5G umbrella take place elsewhere in ETSI. The overall trend with 5G expectations at the macro level is for a unifying connectivity platform for many applications, including those enabled by AI. In 3GPP 5G specifications, AI is broadly referenced in the two main areas of Core Network capabilities (5G NG Core) and Radio Access Network (5G RAN). In both areas, AI plays the role of an ancillary layer that can increase 5G network automation and effective management and orchestration. This layer can also provide an augmented user experience by expanding 5G device capabilities using cloud-based AI functionality. AI has become an additional function in the management of the RAN and the evolution towards the model of a SON (Self-Organizing Network). In this field, ML (Machine Learning) can provide radio systems with the ability to automatically learn and improve from experience, without being explicitly programmed. 
This could become beneficial in radio contexts such as selecting the optimal 5G beam(s) and power level(s) configuration of a 5G cell at each transmission interval. Training of ML-based models can be based on the standardized collection of network configuration data together with corresponding network performances and traffic distribution, in order to predict network behaviour. Once trained, ML-based models could be deployed in the RAN to obtain optimal antenna and radio resource configurations. In the 5G Core Service Based Architecture (SBA), the role of AI engines can be envisaged in the Network Data Analytics Function (NWDAF; see ETSI TS 129 520 V15.0.0 [18]), which provides the various Network Functions in the architecture with monitoring capabilities for the network or for the behaviour of specific customers. The 3GPP standard does not specify the architectural model of an AI solution in NWDAF, but just the service capabilities that are exposed and the way other 5G Core Network Functions can access the results.

3.2 Network Optimization and End-to-End Service Assurance: The pivotal deployment of 5G and network slicing has triggered the urgent need for a radical change in the way networks and services are managed and orchestrated. The ultimate automation target is to create largely autonomous networks that will be driven by high-level policies; these networks will be capable of self-configuration, self-monitoring, self-healing and self-optimization without further human intervention. Machine Learning and, in general, Artificial Intelligence are key enablers for increasing automation. To deliver their full potential, AI-powered mechanisms require fast access to data, abstraction of intelligent and contextual information from events and rule-based systems, supervision, streamlined workflows and lifecycle management. 
Data includes known events in the near future and past cycles of usage (daily, weekly, monthly, annual, etc.). Data is gathered from many sources: (1) data from network functional elements; (2) data from infrastructure (including cloud); (3) data from user equipment; (4) data from management systems; (5) data from external systems (databases, applications, etc.). It is possible that AI may be localized, may be used co-operatively across the communications network, or may sit within the individual services (e.g. eHealth, eTransport, etc.). Network optimization with the aid of AI can operate at different time scales and may have a broader scope that includes intelligent management and control of resources and parameters of a network and of particular services. Examples of such network and service management and control intelligence are: autonomic (i.e. closed-loop) configuration management; autonomic fault management; autonomic performance management; autonomic security management; autonomic monitoring management; etc.

Within ISG NFV (Network Function Virtualization), AI is being considered as a tool that eventually becomes part of the Management and Orchestration (MANO) stack. NFV virtualization is not explicitly considering AI, except in requirements to properly feed data to and collect actions from AI modules (wi 1): "Although NFV-MANO has already been equipped with fundamental automation mechanisms (e.g., policy management), it is still necessary to study feasible improvements on the existing NFV-MANO functionality with respect to automation ... to investigate the feasibility on whether those automation mechanisms can be adapted to NFV-MANO during the NFV evolution to cloud-native."

ISG ZSM (Zero-touch Network and Service Management) was formed with the goal of introducing a new end-to-end architecture and related solutions that will enable automation at scale and at the required minimal total cost of ownership (TCO), as well as fostering a larger utilization of AI technologies. The ZSM end-to-end architecture framework (see ETSI GS ZSM 002 [19]) has been designed for closed-loop automation and optimized for data-driven machine learning and AI algorithms. The architecture is modular, flexible, scalable, extensible and service-based. It supports open interfaces as well as model-driven service and resource abstraction. Closed loops (e.g. using the OODA model of Observe, Orient, Decide, Act) enable automation of management tasks and allow e.g. continuous self-optimization, improvement of network and resource utilization, and automated service assurance and fulfilment. The ISG ZSM is currently working to specify the closed-loop automation enablers, including automatic deployment and configuration of closed loops, means for interaction between closed loops (for coordination, delegation and escalation), use of policies, rules, intents and/or other forms of inputs to steer their behaviour, etc. In addition, the ISG is working to specify closed-loop solutions for particular end-to-end service and network automation use cases, based on the generic enablers and ZSM architectural elements for closed loops as defined in ETSI GS ZSM 002 [19]. The ETSI group ISG ENI designs "Experiential Networked Intelligence" based on data collection and processing using closed-loop decision-making. 
The specification ETSI GS ENI 001 [20] demonstrates a number of use cases on service assurance, fault management and self-healing, resource configuration, performance configuration, energy optimization, security and mobility management. The specification ETSI GS ENI 005 [21] shows, as a functional architecture, how the data is collected, normalised and recursively processed to extract knowledge and wisdom from it. This data is used for decision-making and the results are returned to the network, where the behaviour is continually monitored. The requirements document ETSI GR ENI 007 [3] on network classification of AI details the use of AI in a network in six stages, from no AI to full AI deployment. Clearly, no network is at either extreme of the six stages. ISG ENI is specifying training methods in document ETSI GS ENI 005 version 2 [21]. Training is often done with big data. Learning is the method used by the AI system to extract knowledge from the training data. Learning can take many forms: dictionary learning, rule-based learning, federated learning, supervised learning, reinforcement learning and "pure machine" unsupervised learning, or combinations of these. Training can be centralised, federated, umbrella-like or distributed peer-to-peer. An AI system that is trained and has learning in a particular field (e.g. image recognition, eHealth, networking and resource management, IoT, robotics, etc.) may continually adapt with further online learning, or may have offline learning to refresh its awareness of the situation (re-training). TC INT (Core Network and Interoperability Testing) created TS 103 195-2 for the Generic Autonomic Network Architecture (GANA) [22] and TS 103 194, "Scenarios, Autonomic Use Cases and Requirements for Autonomic/Self-Managing Future Internet" [23]. The optimization can be categorized as: actions that are performed on network configuration parameters or network resources, e.g. transmission power, antenna tilt, routing policies, bandwidth allocation; and actions that are performed on the network structure, e.g. adding/removing network elements (either physical or virtualized instances), which imply configuration changes in order to accommodate the structural change. TC INT specifications consider events that can trigger a network to dynamically change network properties. Events vary depending on the specific AI systems deployed in the network and the level where they operate, external or internal to the network. These events can occur in a chain-like fashion, e.g. a policy change can trigger several secondary events in lower-level functional units. In conclusion, AI systems are already cited in many ETSI network specifications in ISG ZSM, NFV, ENI and TC INT, with an emphasis on dynamic optimization.

3.3 IoT, Data Acquisition ... DGR/SAI-004 wi 6 will define and prioritize potential AI threats along with recommended topics for the ISG SAI to consider. The recommendations contained in this specification will be used to define the scope and timescales for the follow-up work. 2. Threat Ontology for AI, to align terminology; DGR/SAI-001 wi 7 seeks to define AI threats and how they differ from threats to traditional systems. In doing so, the AI Threat Ontology specification will attempt to provide a path to align terminology across different stakeholders and multiple industries, including adversarial AI attack analysis. 3. Data Supply Chain Report, focused on data issues and risks in training AI; DGR/SAI-002 wi 8 considers that data is a critical component in the development of AI systems, and access to suitable data is often limited, or may even have been compromised so as to be a viable attack vector against an AI system. 
This report will summarize the methods currently used to source data for training AI, review existing initiatives for developing data-sharing protocols, and analyse requirements for standards for ensuring integrity/confidentiality in the shared information. 4. Mitigation Strategy Report, with guidance to mitigate the impact of AI threats; DGR/SAI-005 wi 9 summarizes and analyzes existing and potential mitigations against threats to AI-based systems, and produces guidelines for mitigating threats introduced by adopting AI into systems. 5. Security testing of AI specification, in DGS/SAI-003 wi 10, will identify methods and techniques for security testing of AI-based components, and produce a thorough gap analysis to identify the limitations and capabilities in security testing of AI. In addition to the new work of ISG SAI, the ISG ZSM conducts security studies within its scope to identify security threats and motivate new standards. Security aspects are essential to address because the threat surface is extensive in the ZSM environment, due to the openness of the ZSM framework and the nature of emerging technologies such as AI/ML. In addition, compliance with country/region/industry security laws and regulations, including those related to AI, is and will be an obligation for ZSM service providers and their suppliers. To summarize, security and privacy issues require assuring the protection of users of the ICT systems that embed AI. ETSI has core competence in these areas. 
3.5 Testing: TC MTS (Methods for Testing ...) ... (2) foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge; (3) ensure a policy environment that will open the way to deployment of trustworthy AI systems; (4) empower people with the skills for AI and support workers for a fair transition; (5) co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI. Within Europe, there are a number of committees that are in a dialogue with regulators, such as: the EC HLEG on AI, which published in April 2019 the Ethics Guidelines for Trustworthy Artificial Intelligence [1], followed in June 2019 by a document with 33 recommendations (see High-Level Expert Group, Policy and Investment Recommendations for Trustworthy Artificial Intelligence, published 26th June 2019 [40]); and the Multi-stakeholder Platform, which is co-responsible for the EC Rolling Plan for ICT Standardization, published in March 2019 [12] and March 2020 [41] with significant recommendations for AI.

4.4 Government Sponsored Research Projects: Within Europe, as also in the USA, China, India, etc., there are a number of government-sponsored research programmes which may impact AI technology, e.g. concerning explainability. 
The EU Horizon Europe research programme beginning in 2020 has a strong emphasis on AI, and many projects from the previous Horizon 2020 programme are still delivering results, for example: AI4EU, a Horizon 2020 project that brings together researchers, innovators and European talents currently working in the field of artificial intelligence; Humane-AI, a Horizon 2020 project to design the principles for a new science that will make artificial intelligence based on European values; and the LOGISTAR project, which proposes the intensive use of the Internet of Things, Open Data, Artificial Intelligence, optimization techniques and other advances in ICT for the effective planning and optimization of transport in the logistics sector. 4.5 Open Source R… [Footnotes:] Definition of Categories for AI Application to Networks, https://portal.etsi.org/webapp/WorkProgram/Report_WorkItem.asp?WKI_ID=56393. [4] G20 Ministerial Statement on Trade and Digital Economy, published 9th June 2019, https://www.mofa.go.jp/files/000486596.pdf. [5] COM(2018) 795 final, Coordinated Plan on Artifici
2020-10-30
32 pages




5 stars
A major new report led by the Novartis Foundation and Microsoft shows that investment in data and artificial intelligence is critical to driving the health-system improvements needed to respond to the COVID-19 pandemic and to the world's other biggest health challenges. "Reimagining Global Health through Artificial Intelligence: The Roadmap to AI Maturity" was published in September 2020 by the Broadband Commission working group on Digital and AI in Health, co-chaired by the Novartis Foundation and Microsoft. Based on a review of more than 300 existing use cases of AI in health, the report shows how AI is already disrupting health and care. It then proposes a roadmap to help countries use AI to shift their health systems from reactive to proactive, predictive and even preventive. Low- and middle-income countries (LMICs), which face systemic health challenges such as health-worker shortages, underserved populations, rapid urbanization and misinformation, stand to gain the most from AI, but also to lose the most. The response to the COVID-19 pandemic is just one example of how global health now depends on data. Yet most countries still need to make such data usable and actionable, and governments that fail to invest risk further widening health inequities among their populations. Many examples from LMICs are already world-leading in the use of AI for health: in Rwanda, virtual health consultations already reach one third of the adult population, and hospitals in India are using AI to predict the risk of a heart attack up to seven years in advance. High-income countries also stand to benefit greatly from AI in health. The shortage of health workers, for example, is a global challenge, with the worldwide gap projected to reach 18 million by 2030. This heightens the need for investment in supportive AI tools that can help nurses and community health workers diagnose and treat patients. AI should not replace humans but augment them, by taking on tasks such as processing big data to make the diagnosis of health problems faster and more accurate. "On top of the existing burden of infectious diseases and a rising tide of chronic conditions, many countries were not prepared for emerging diseases such as COVID-19. Digital technology and AI are essential enablers for redesigning health systems," said Ann Aerts of the Novartis Foundation, co-chair of the Broadband Commission working group on Digital and AI in Health. AI can expand access and improve outcomes while lowering costs, by detecting potential health issues before they actually occur. "AI will have a huge impact not only in low-income countries but across all health systems," said Paul Mitchell, working group co-chair at Microsoft. "It is clear that COVID-19 is driving enormous change in the use of technology in health; we have seen in months what I would otherwise have expected to take years or even decades." Dr Aerts said the biggest changes in healthcare will be driven by partnerships between businesses, innovators, health professionals and governments. "We must develop a sustainable ecosystem for AI in the countries that need healthcare most," she said. "This must be done while ensuring equity and access for all. As health systems rebuild after the pandemic, technology innovation must be a core part of the agenda."
2020-10-30
140 pages




5 stars
The art of customer-centric artificial intelligence: How organizations can unleash the full potential…
2020-10-22
48 pages




5 stars
NOVEMBER 2018, McKinsey Analytics, Neil Webb. Notes from the AI frontier: AI adoption advances, but foun…
2020-10-13
11 pages




5 stars
State of AI Report, October 1, 2020. #stateofai | stateof.ai. Nathan Benaich and Ian Hogarth. About the authors: Nathan is the General Partner of Air Street Capital, a venture capital firm investing in AI-first technology and life science companies. He founded RAAIS and London.AI, which connect AI practitioners from large companies, startups and academia, and the RAAIS Foundation, which funds open-source AI projects. He studied biology at Williams College and earned a PhD from Cambridge in cancer research. Ian is an angel investor in 60 startups. He is a Visiting Professor at UCL working with Professor Mariana Mazzucato. Ian was co-founder and CEO of Songkick, the concert service used by 17M music fans each month. He studied engineering at Cambridge, where his Masters project was a computer vision system to classify breast cancer biopsy images. He is the Chair of Phasecraft, a quantum software company. stateof.ai 2020 | Introduction | Research | Talent | Industry | Politics | Predictions | #stateofai. Artificial intelligence (AI) is a multidisciplinary field of science and engineering whose goal is to create intelligent machines. We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence. The State of AI Report is now in its third year. New to the 2020 edition are several invited content contributions from a range of well-known and up-and-coming companies and research groups. Consider this report a compilation of the most interesting things we've seen, with the goal of triggering an informed conversation about the state of AI and its implications for the future. We consider the following key dimensions in our report: Research: technology breakthroughs and their capabilities. Talent: supply, demand and concentration of talent working in the field.
Industry: areas of commercial application for AI and its business impact. Politics: regulation of AI, its economic implications and the emerging geopolitics of AI. Predictions: what we believe will happen in the next 12 months, and a 2019 performance review to keep us honest. Collaboratively produced by Ian Hogarth (soundboy) and Nathan Benaich (nathanbenaich). Thank you to our contributors. Thank you to our reviewers: Jack Clark, Jeff Ding, Chip Huyen, Rebecca Kagan, Andrej Karpathy, Moritz Müller-Freitag, Torsten Reil, Charlotte Stix, and Nu (Claire) Wang. Definitions. Artificial intelligence (AI): a broad discipline with the goal of creating intelligent machines, as opposed to the natural intelligence demonstrated by humans and animals. It has become a somewhat catch-all term that nonetheless captures the long-term ambition of the field to build machines that emulate and then exceed the full range of human cognition. Machine learning (ML): a subset of AI that often uses statistical techniques to give machines the ability to learn from data without being explicitly given the instructions for how to do so. This process is known as “training” a “model” using a learning “algorithm” that progressively improves model performance on a specific task. Reinforcement learning (RL): an area of ML concerned with developing software agents that learn goal-oriented behavior by trial and error in an environment that provides rewards or penalties in response to the agent's actions (called a “policy”) towards achieving that goal. Deep learning (DL): an area of ML that attempts to mimic the activity in layers of neurons in the brain to learn how to recognise complex patterns in data.
The “deep” in deep learning refers to the large number of layers of neurons in contemporary ML models that help to learn rich representations of data to achieve better performance. Algorithm: an unambiguous specification of how to solve a particular problem. Model: once an ML algorithm has been trained on data, the output of the process is known as the model; this can then be used to make predictions. Supervised learning: a model attempts to learn to transform one kind of data into another kind of data using labelled examples. This is the most common kind of ML algorithm today. Unsupervised learning: a model attempts to learn a dataset's structure, often seeking to identify latent groupings in the data without any explicit labels. The output of unsupervised learning often makes for good inputs to a supervised learning algorithm at a later point. Transfer learning: an approach to modelling that uses knowledge gained in one problem to bootstrap a different or related problem, thereby reducing the need for significant additional training data and compute. Natural language processing (NLP): enabling machines to analyse, understand and manipulate language. Computer vision: enabling machines to analyse, understand and manipulate images and video. Research highlights: A new generation of transformer language models is unlocking new NLP use-cases. Huge models, large companies and massive training costs dominate the hottest area of AI today: natural language processing. Biology is experiencing its “AI moment”: from medical imaging, genetics, proteomics and chemistry to drug discovery. AI is mostly closed source: only 15% of papers publish their code, which harms accountability and reproducibility in AI.
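The supervised-learning definition above ("learning to transform one kind of data into another using labelled examples") can be made concrete with a minimal sketch. The closed-form one-variable linear fit below is purely illustrative and not taken from the report; function and variable names are ours:

```python
# Minimal supervised learning: fit y ≈ w*x + b to labelled pairs (x, y)
# using closed-form simple linear regression (ordinary least squares).
def fit_linear(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b  # the trained "model": slope and intercept

# Labelled training examples generated by the rule y = 2x + 1:
w, b = fit_linear([0, 1, 2, 3], [1, 3, 5, 7])
print(w, b)  # 2.0 1.0
```

The "model" here is just the pair (w, b); prediction is then w*x + b for new inputs, mirroring the report's definition of a model as the reusable output of training.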
Talent: American institutions and corporations further their dominance of major academic conference paper acceptances. Multiple new institutions of higher education dedicated to AI are formed. Corporate-driven academic brain drain is significant and appears to negatively impact entrepreneurship. The US AI ecosystem is fuelled by foreign talent, and the contribution of researchers educated in China to world-class papers is clear. Industry: The first trial of an AI-discovered drug begins in Japan, and the first US medical reimbursement for an AI-based imaging procedure is granted. Self-driving car mileage remains microscopic, and open sourcing of data grows to crowdsource new solutions. Google, Graphcore, and NVIDIA continue to make major advances in their AI hardware platforms. NLP applications in industry continue to expand their footprint and are implemented in Google Search and Microsoft Bing. Politics: After two wrongful arrests involving facial recognition, ethical risks that researchers have been warning about come into sharp focus. Semiconductor companies continue to grow in geopolitical significance, particularly Taiwan's TSMC. The US military is absorbing AI progress from academia and industry labs. Nations pass laws to let them scrutinize foreign takeovers of AI companies, and the UK's Arm will be a key test. Executive Summary. Scorecard: reviewing our predictions from 2019 (prediction — grade — evidence). New natural language processing companies raise $100M in 12 months — Yes: Gong.io ($200M), Chorus.ai ($45M), Ironscales ($23M), ComplyAdvantage ($50M), Rasa ($26M), HyperScience ($60M), ASAPP ($185M), Cresta ($21M), Eigen ($37M), K Health ($48M), Signal ($25M), and many more! No autonomous driving company drives 15M miles in 2019 —
Yes: Waymo (1.45M miles), Cruise (831k miles), Baidu (108k miles). Privacy-preserving ML adopted by a F2000 company other than GAFAM (Google, Apple, Facebook, Amazon, Microsoft) — Yes: the Machine Learning Ledger Orchestration for Drug Discovery (MELLODDY) research consortium with large pharmaceutical companies and startups including GlaxoSmithKline, Merck and Novartis. Universities build de novo undergrad AI degrees — Yes: CMU graduates its first cohort of AI undergrads, Singapore's SUTD launches an undergrad degree in design and AI, NYU launches a data science major, and Abu Dhabi builds an AI university. Google has a major quantum breakthrough and 5 new startups focused on quantum ML are formed — Sort of: Google demonstrated quantum supremacy in October 2019! Many new quantum companies were launched in 2019, but only Cambridge Quantum Computing, Rahko and Xanadu.ai are explicitly working on quantum ML. Governance of AI becomes a key issue and one major AI company makes a substantial governance model change — No: business as usual. Section 1: Research. [Chart: code availability by paper publication date, 2017–2020, 0–25%.] Research paper code implementations are important for accountability, reproducibility and driving progress in AI. The field has made little improvement on this metric since mid-2016. Traditionally, academic groups are more likely to publish their code than industry groups. Notable organisations that don't publish all of their code are OpenAI and DeepMind. For the biggest tech companies, their code is usually intertwined with proprietary scaling infrastructure that cannot be released. This points to centralization of AI talent and compute as a huge problem.
AI research is less open than you think: only 15% of papers publish their code. Papers With Code tracks openly published code and benchmarks model performance, hosting 3,000 state-of-the-art leaderboards, 750 ML components, and 25,000 research papers along with code. [Chart: share of PyTorch papers among all TensorFlow/PyTorch papers, as % of total framework mentions.] Of the 20–35% of conference papers that mention the framework they use, 75% cite the use of PyTorch but not TensorFlow. Of 161 authors who published more TensorFlow papers than PyTorch papers in 2018, 55% have switched to PyTorch, while the opposite happened in only 15% of cases. Meanwhile, the authors observe that TensorFlow, Caffe and Caffe2 are still the workhorses for production AI. Facebook's PyTorch is fast outpacing Google's TensorFlow in research papers, which tends to be a leading indicator of production use down the line. PyTorch is also more popular than TensorFlow in paper implementations on GitHub: 47% of these implementations are based on PyTorch vs. 18% for TensorFlow. PyTorch offers greater flexibility and a dynamic computational graph that makes experimentation easier. JAX is a Google framework that is more math-friendly and favored for work outside of convolutional models and transformers. [Chart: share of implementations by repository creation date, 2017–2020.] Huge models, large companies and massive training costs dominate the hottest area of AI today: NLP.
Language models: welcome to the Billion Parameter Club. [Chart: language model parameter counts, 2018 through 2020 onwards, ranging from 66M to 175B parameters.] Note: the number of parameters indicates how many different coefficients the algorithm optimizes during the training process. Empirical scaling laws of neural language models show smooth power-law relationships, which means that as model performance increases, the model size and amount of computation have to increase more rapidly. Bigger models, datasets and compute budgets clearly drive performance. Tuning billions of model parameters costs millions of dollars: based on variables released by Google et al., you're paying circa $1 per 1,000 parameters, which means OpenAI's 175B-parameter GPT-3 could have cost tens of millions to train. Experts suggest the likely budget was $10M. A sparse transformer-based machine translation model has 600B parameters: to achieve the needed quality improvements in machine translation, Google's final model trained for the equivalent of 22 TPU v3 core-years, or 5 days non-stop with 2,048 cores. Without major new research breakthroughs, dropping the ImageNet error rate from 11.5% to 1% would require over one hundred billion billion dollars! Many practitioners feel that progress in mature areas of ML is stagnant.
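The "smooth power-law relationships" mentioned in the scaling-law passage are typically written in the literature in a form like the following, where N is the parameter count and the constant N_c and exponent α_N are fitted empirically; this is a general sketch of the functional form, not a formula taken from this report:

```latex
% Test loss as a function of model size N, with data and compute not limiting:
L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}
```

Because α_N is small, each constant-factor reduction in loss requires a much larger multiplicative increase in N, which is the report's point about costs growing faster than performance.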
We're rapidly approaching outrageous computational, economic, and environmental costs to gain incrementally smaller improvements in model performance. This has implications for problems where training data samples are expensive to generate, and likely confers an advantage on large companies entering new domains with supervised learning-based models: a larger model needs less data than a smaller peer to achieve the same performance. Google made use of their large language models to deliver higher-quality translations for languages with limited amounts of training data, for example Hausa and Uzbek, highlighting the benefits of transfer learning: low-resource languages with limited training data are a beneficiary of large models. Since 2012, the amount of compute needed to train a neural network to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months; even as deep learning consumes more data, it continues to get more efficient. [Chart: training efficiency factor — two distinct eras of compute in training AI systems.] PolyAI, a London-based conversational AI company, open-sourced their ConveRT model (a pre-trained contextual re-ranker based on transformers).
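The efficiency claim above — compute needed for a fixed ImageNet result halving every 16 months — implies a large cumulative gain, which a quick back-of-the-envelope check makes explicit (the function name is ours, purely illustrative):

```python
# Cumulative training-efficiency gain under a fixed doubling period:
# required compute falls by a factor of 2 ** (months / doubling_period).
def efficiency_factor(months: float, doubling_period: float = 16.0) -> float:
    """Factor by which required training compute has dropped after `months`."""
    return 2.0 ** (months / doubling_period)

# 2012 to 2020 is roughly 96 months, i.e. about a 64x reduction:
print(efficiency_factor(96))  # 64.0
```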
2020-10-12
177 pages




5 stars
The State of AI in Insurance — Point of View 2020. Contents (sections 1–6): Introduction; About This Survey; Insurance…
2020-10-11
20 pages




5 stars
Sri Lanka at a glance: policies and initiatives; industry outlook and resilience; infrastructure and investment; Sri Lanka's skills profile. Sri Lanka IT-BPM industry: state of the industry 2019/20. The Sri Lankan IT-BPM industry has performed well over the years and is growing at an accelerating pace. The pandemic has further sped up this growth, paving the way for greater acceptance of digital technologies at the individual, enterprise, industry and national levels. Sri Lanka has demonstrated that it has the potential, and above all the support of government and the private sector, to take the country's IT landscape to the next level, thereby consolidating its position as an IT hub not only in the region but in the global market.
2020-10-10
56 pages




5 stars