Global AI trends report
Key legal issues for 2025

Global artificial intelligence (AI) solutions

Introduction

Editors

Giangiacomo Olivi
Partner, Europe Co-Head of Intellectual Property, Data and Technology, Milan
D +39 02 726 268

Simon Elliott
Partner, Head of Data Privacy, Cybersecurity and AI for UK, Ireland and Middle East, London
D +44 20 7246

As we enter 2025, we reflect on a period marking a paradigm shift in the adoption of artificial intelligence (AI). Major tech companies have poured more than US$150 billion into AI capital expenditure,1 the overall AI market has pushed past US$184 billion2 and there are reports acknowledging nearly 300 AI use cases across various industries.3 Some organizations have evidently reaped these benefits, as valuations soar alongside the AI market's expansion.4 2024 unequivocally evidenced the ability of AI, and its transformative potential, to capture the focus of the market.

The market is already crystallizing into its next phase. Deployment of AI is no longer a concept or trial. It has the potential to contribute US$15.7 trillion to the global economy by 2030,5 and major tech companies are already predicted to spend up to US$250 billion on AI infrastructure in 2025 alone.6 In the coming years, we expect to see business models increasingly shift to being AI-driven at the same time as new global regulations emerge, such as those developing more robust protections to ensure safe and responsible AI development. There will also be strong emphasis on business leaders having sufficient knowledge of AI to effectively navigate this shifting landscape.

It is apparent that the green light is on for organizations to unlock AI's potential, and AI is forming central pillars in business strategy and investment decisions around the globe. However, it is imperative for businesses to be prepared before unlocking AI's potential. This involves staying informed about the developing issues and trends impacting the adoption of the technology, as well as preparing organizations' internal risk and operating structures.

Legal issues in AI you need to know about for the year ahead

We recently surveyed 450 business leaders and general counsel to assess where large organizations are in their AI adoption journey, and it was clear that, despite the AI hype, many are not at the stage of
fully understanding where the technology can be transformative and executing on targeted strategic deployment.

For instance, AI's interaction with intellectual property rights is one of the most challenging issues needing near-term resolution, and it highlights emerging trends, such as growing attention on copyright concerns regarding potential infringement arising from generative AI output and on what form appropriate licensing and partnership models should take. Considering this, organizations will need to consider safeguards and evaluate optimized protection strategies. This includes developing clear strategies on where licensing arrangements can protect or monetize content, and looking to protect self-developed AI technologies by exploring specialist patent applications.

The approach to procurement or licensing of AI technology from external vendors is an area that is becoming increasingly "front of mind". This is a key strategic decision for companies focusing their growth and transformation plans around enhanced AI capabilities. According to our Laws of AI Traction Report, seven in 10 business leaders view AI adoption and implementation as the key growth driver for their organization. As this market evolves, organizations interested in contracting for external AI technology must consider end-to-end procurement strategies and ensure compliance with current AI regulations while anticipating potential changes.

2025 is set to be another important year for organizations and leaders in terms of AI regulation and governance, which will see the initial provisions of the EU AI Act, a global benchmark on AI regulation, take effect. Our global team has leveraged their understanding of current trends and client challenges to provide insights on how the regulatory environment is influencing approaches
here.

Anyone with responsibility for their organization's legal or risk agenda will benefit from reviewing this report, which also covers emerging issues such as: an emerging global consensus around minimizing the risks of AI use; the increasing focus on privacy and security by design; how AI is pushing businesses towards self-governance frameworks founded on ethical considerations; and how courts are expected to tackle the issue of algorithmic bias.

We hope you find this report helpful and would be interested to hear how you and your legal teams are addressing these issues, as well as others not included in the report. If you would welcome a tailored discussion regarding your organization's approach to AI, please contact an appropriate person listed in the report or email us to arrange a meeting. For more detailed insights on how businesses are entering this new era of working with AI, please explore our Laws of AI Traction Report, available at .

Notably, 63% of business leaders currently do not have a formalized AI roadmap for high-impact AI integration. In a landscape where 74% of business leaders believe that AI is an important mechanism to protect their organization's revenue and bottom line, establishing robust building blocks from a governance perspective is essential to turbocharge AI strategy. While this varies by sector, the speed of AI adoption will depend on these building blocks to anticipate risks and help organizations close the gap between AI ambitions and the actions they take.

The growth of AI underscores the importance for businesses to position themselves to manage the associated risks of this evolving landscape. This report highlights what we at Dentons see as key legal and risk trends for AI in 2025.

Contents

AI regulation, governance and ethics
Data privacy and cybersecurity
AI projects and procurement
Employment and talent management
IP protection and enforcement
Disputes and managing liability
M&A and investments
Competition and antitrust

AI regulation, g
overnance and ethics

Regulatory cohesion starts to show the way forward

Chantal Bernier
Of Counsel, Co-chair Global Privacy & Cybersecurity Group, Ottawa
D +1 613 783

The global AI regulation landscape is fragmented and rapidly evolving. Earlier optimism that global policymakers would enhance cooperation and interoperability within the regulatory landscape now seems distant. Instead, we continue to see the policy process to regulate AI progress throughout the world at different stages and through different models, from policy statements to soft law to tabled or adopted legislation. However, through our support of global businesses, we see the beginnings of a common global direction emerging on how to minimize the risks of AI use and create the structures to address the core principles of safe and ethical AI development and use that are becoming the cornerstones of global AI regulations.

In order to develop these AI governance structures, businesses need to anticipate evolving legal requirements and regulatory approaches. Driven by this increasing cohesion, new governance models and strategies for AI have emerged in both the public and private sectors, offering valuable frameworks for other organizations to follow. For example, the European Commission's AI governance initiatives offer models from which companies can draw inspiration to avoid reinventing the wheel. Leading global technology companies increasingly provide a benchmark in their publicly available standards and principles. Globally, while there is a convergence around fundamental ethical principles and values, there remains a need to be cognizant of regional approaches to AI regulation and to adapt each organization's own framework accordingly. Understanding these diverse strategies is crucial for companies operating in multiple jurisdictions.

United States

Contributors

Todd D. Daubert
Partner, Washington
D +1 202 408

Peter Z. Stockburger
Office Managing Partner, San Diego
D +1 619 595

The Trump administration likely will reduce regulation, minimize international cooperation and eliminate current Executive Orders, with the goal of fostering innovation and US competitiveness. Plans may involve appointing an "AI czar" to
coordinate federal efforts, focusing on infrastructure development like data centers and semiconductor manufacturing. This deregulatory approach may be resisted by skeptics, including key advisors. States will likely continue adopting sector-specific AI regulations to address concerns about safety and ethics, and courts will likely address key issues in pending cases. A fragmented, patchwork landscape will likely need to be navigated in the near term.

Canada

Contributor

Chantal Bernier
Of Counsel, Co-chair Global Privacy & Cybersecurity Group, Ottawa
D +1 613 783

Canada's direction emerges from the proposed Artificial Intelligence and Data Act (AIDA) and the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. With an election looming, AIDA has an uncertain future. The Voluntary Code commits its signatories to Accountability; Safety; Fairness and Equity; Transparency; Human Oversight and Monitoring; and Validity and Robustness.

Latin America

Contributor

Juanita Acosta
Partner, Bogota
D +57 601 743

In Latin America, most countries have only soft law or equivalent instruments regarding the use of AI, except for Peru, which has implemented a regulation focused on principles and the promotion of AI usage. Further details may be expected shortly, as several countries, such as Chile, Colombia, Brazil, Mexico, Panama, Peru and Costa Rica, are submitting bills and legal initiatives to regulate AI, particularly to protect personal data and intellectual property. Latin America will continue to be a key region to watch in 2025.

Africa

Contributors

Shahid Sulaiman
Senior Partner, Cape Town
D +27 21 686

Davin Olen
Associate, Johannesburg
D +27 11 326

Efforts to regulate AI are emerging across Africa. Leaders across the continent include Mau
ritius, which has released an AI strategy, along with Kenya and Nigeria, which are both consulting with stakeholders to develop national AI strategies. In South Africa, stakeholder engagement has increased since the release of a draft AI policy framework for discussion. Further, South Africa's Patent Office has recently registered an AI as a patent inventor, contrasting with rejections of the same application elsewhere. This decision is based on the formative process for patent registrations in South Africa and provides an important incentive for AI development in the region.

United Kingdom

Contributor

Simon Elliott
Partner, Head of Data Privacy, Cybersecurity and AI for UK, Ireland and Middle East, London
D +44 20 7246

AI regulation in the United Kingdom finds itself in a challenging position. Based on a clear vision in the UK National AI Strategy to continue a
s a global leader in supporting the development and adoption of AI (and aiming to unlock the economic benefits in the digital economy and productivity), to date the UK has focused on a pro-innovation, light-touch approach. This approach is centered on placing responsibility on sectoral regulators to develop appropriate guidance and codes of practice, and on avoiding AI-specific legislation. "Guardrails" had previously been the watchword. The UK has also seen its opportunity to be a balance, or bridge, between the safety-focused approach of the EU and the less regulated approach of the US.

However, there is a focus on the need to acknowledge an increasing consensus on the potential harms and risks that can arise from insufficiently regulated AI, and to legislate accordingly. The direction of travel appears to be an intention to do so in a proportionate manner. Details of a proposed legislative approach focusing specifically on the "most powerful" AI models are expected to be published for consultation shortly. Proposed legislation is also likely to involve codifying requirements for leading AI labs to make models available for testing. This supports another key aspect of the UK's contribution to the global development and regulation of AI: positioning the UK AI Safety Institute as the global leader in undertaking and coordinating global research on the most important risks that AI presents to society, to enable the best-informed policy decisions to be made. This will likely continue to be a key focus, particularly considering an anticipated scaling back of its US counterpart.

European Union

Contributors

Giangiacomo Olivi
Partner, Europe Co-Head of Intellectual Property, Data and Technology, Milan
D +39 02 726 268

Chiara Bocchi
Counsel, Milan
D +39 02 726 269

Europe's regulatory strategy reflects its comm
itment to safeguarding fundamental rights, promoting trust in AI and shaping a global regulatory standard. The European Union is indeed at the forefront of global efforts to regulate artificial intelligence with its landmark AI Act. The AI Act has been welcomed as the world's first comprehensive AI-specific legal framework, providing a legal definition of "AI system" and categorizing AI systems based on their potential risk to individuals and fundamental rights, focusing on the use of the technology rather than the technology per se.

Complementing the AI Act, the EU is advancing additional measures to address the legal and liability challenges associated with AI. The proposed AI Liability Directive seeks to modernize non-contractual civil liability rules, ensuring they are equipped to handle the unique complexities of AI systems. Furthermore, the recent Revised Product Liability Directive extends liability to encompass software, AI systems and digital services that influence product performance, such as navigation tools in autonomous vehicles, bridging critical gaps in consumer protection.

Driven by this immediate and comprehensive legislation, new AI governance models are being deployed throughout the EU and will likely gain further traction from the newly established EU AI Office, fostering the promotion of the EU approach beyond its borders. Many businesses, but not all, are turning to governance models designed for the EU AI Act as their benchmark for managing compliance with developing global regulation.

Asia-Pacific

Contributors

Michael Park
Partner, Melbourne
D +61 3 9194

Matt Hennessy
Partner, Melbourne
D +61 3 9194

In September 2024, the Australian government released a Voluntary AI Safety Standard comprising a number of AI guardrails to create
best practice guidance for the use of AI. The government has also proposed mandatory guardrails for AI in high-risk settings, which were subject to public consultation. It is possible Australia could enact legislation drawing on some of the concepts in the EU AI Act, but it currently remains unclear how the government will proceed. In May 2024, the Singapore government introduced the Model AI Governance Framework for Generative AI, which details best practice guidance on the responsible development, deployment and use of AI. China's Interim Measures for the Management of Generative AI Services commenced in 2023 and should continue to be observed as the region's first comprehensive binding regulation on generative AI.

Data privacy and cybersecurity

Privacy and security by design becoming the key cornerstones for effective AI risk management and digital resilience

Editors

Peter Z. Stockburger
Office Managing Partner, San Diego
D +1 619 595

Todd D. Daubert
Partner, Washington
D +1 202 408

A convergence of rapidly evolving technological developments is leading to an increased focus on privacy and security by design and on effective AI and data governance by companies and regulators around the world, as the practical impact of
AI on data privacy and security becomes clearer. The dynamic landscape of data privacy and security demands continuous adaptation from organizations and regulators, which privacy and security by design, combined with strong governance, help to achieve. The past year was marked by increased scrutiny of AI's impact on privacy, a heightened focus on protecting children's data, a need for businesses to adapt their models to comply with stricter data privacy laws and a growing practical risk arising from AI-enabled cyber threats.

Governments globally are enacting stricter data privacy regulations to protect personal information. Regulators are also scrutinizing the ethical implications of AI systems, prompting businesses to adopt privacy-preserving techniques like federated learning and differential privacy. Recent examples of AI-powered chatbots urging minors to engage in self-harm, suicide and violence against parents have led to intense scrutiny of whether these undesirable outcomes are the result of poor design choices or failures of governance.
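Differential privacy, one of the techniques mentioned above, is easiest to grasp through a small sketch. The following is an illustrative toy example only; the dp_count helper and the figures are hypothetical, not drawn from this report. It answers a counting query over personal data with calibrated random noise, so the published figure does not reveal whether any single individual is in the dataset.

```python
import numpy as np

def dp_count(values, threshold, epsilon, rng=None):
    """Differentially private count of values above a threshold.

    A counting query has sensitivity 1 (adding or removing one person
    changes the true count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy for this query.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(v > threshold for v in values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical HR dataset: publish an approximate count without
# exposing whether any one employee's salary is above the line.
salaries = [48_000, 52_000, 61_000, 75_000, 90_000]
noisy = dp_count(salaries, threshold=60_000, epsilon=0.5)
# Smaller epsilon means more noise and a stronger privacy guarantee.
```

The legal relevance is the explicit, auditable trade-off: epsilon is a documented dial between the accuracy of the released statistic and the privacy protection afforded to individuals, which is precisely the kind of demonstrable safeguard regulators look for.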
As privacy laws become more complex and intertwined with AI-specific regulations, and as litigation risks increase, businesses face greater challenges in complying with inconsistent requirements without unnecessarily hindering technological innovation.

Privacy by design

Privacy by design, the practice of embedding privacy features into products, services and processes from their inception, has become a cornerst
one for organizations prioritizing data protection, particularly in the AI age. By addressing privacy concerns early, businesses can ensure compliance, reduce risks and build consumer trust. Engaging in privacy by design can help to reduce the likelihood that personal information used for training AI models is inappropriately disclosed in the results produced by the models, or that personal data is incorporated into training data without appropriate consideration and mitigations. The widespread adoption of privacy by design signals a shift in attitudes: treating privacy as a strategic asset rather than a mere afterthought. Companies embracing privacy by design are also better able to demonstrate a proactive commitment to transparency and responsibility, meeting the expectations of regulators and consumers for ethical data handling and AI development.

Cyberattacks

Cyberattacks,
often supported by AI-powered tools, are more frequent and sophisticated, creating significant risks for organizations and governments worldwide. For example, Chinese state-linked hackers, known as "Salt Typhoon", infiltrated global telecommunications networks, compromising sensitive communications of senior officials. Integrating security by design is essential to withstanding these cyberattacks and enhancing digital resilience. Many organizations are more effectively integrating security into the design of their systems: adopting zero trust architecture to more effectively control and verify access to resources; using AI to detect threats in real time, automate responses and prevent attacks; moving more data to the cloud to take advantage of third-party expertise; and implementing extended detection and response to integrate data from multiple security products into a single system that provides a more holistic view of potential threats.

Robust and pragmatic governance structures and practices are critical for ensuring that security measures implemented by design continue to function as intended and remain updated with the latest intelligence and technology patches, and for helping organizations that have suffered a security incident demonstrate to regulators and plaintiffs that they had taken reasonable security measures.

Collaboration among stakeholders is critical to addressing these challenges. Governments, industry groups and businesses are increasingly working together to establish global standards for data privacy and security, harmonize approaches and promote cross-border cooperation. Increased harmonization and collaboration would simplify compliance and enhance cybersecurity resilience. In the face of rising cyber risks, stricter regulations and increasingly sophisticated technology, proper design and governance are a necessary foundation for balancing innovation with responsible risk management. Proactive, thoughtful and integrated approaches to data privacy and security will become increasingly important to efficiently navigate challenges and capture opportunities.

AI projects and procurement

The build vs buy dilemma continues

Editor

Michael Park
Partner, Melbourne
D +61 3 9194

As organizations around t
he world continue to experiment with AI solutions and progress towards implementing AI systems as part of their internal business processes and products, many are confronted with a stark choice: whether to "build" or "buy" an AI solution to meet their needs. Unfortunately, there is no easy or "one-size-fits-all" answer to this question.

On the one hand, many smaller or less technically sophisticated organizations lack the internal technical capability or resources to build or train their own AI solutions from scratch. Accordingly, these organizations typically seek to procure AI solutions from third-party providers. As part of these procurement activities, organizations need to grapple with new twists on a range of typical legal issues, including ownership of AI outputs, re-use of customers' inputs and data as training data for the supplier's other customers, and potential privacy and security concerns where personal information is used as an input, as well as with novel legal issues, such as liability for so-called hallucinations in the output of AI models and other potential performance issues.

On the other hand, larger or technically sophisticated organizations may have the internal capability to build their own tailored AI solutions. In some cases, the "build" option may start with a publicly available or open-source AI model that the organization deploys, refines and trains itself using its own proprietary datasets. Increasingly, the deployment of certain types of AI solutions can also require specialized computing hardware to achieve the best possible performance. As a result, organizations seeking to build and train their own AI solutions also have to consider whether to purchase and host the necessary computing hardware themselves or to obtain access to such hardware via third-party providers (in a manner similar to cloud computing). Consequently, most AI solution "builds" necessarily involve some element of "buy" as well.

For multinational organizations, the procurement and deployment of AI solutions on a global basis presents additional chal
lenges. The governments of various countries around the world are taking differing approaches to the regulation of AI, ranging from the more prescriptive approach found in the European Union's AI Act to the more targeted or risk-based approach adopted by countries such as Australia. In light of this evolving regulatory landscape, we are yet to see an emerging or settled consensus on what a "market" position is on various contractual terms for the supply of an AI solution. This could be impacting multinationals' "corporate agility", i.e. their ability to quickly adapt and respond to the opportunities offered by AI. Despite the challenges, we are seeing organizations endeavor to address the key contractual risks associated with AI-driven services in many template agreements. We anticipate that organizations will continue to confront these issues throughout 2025 in this fast-moving space.

Build vs buy dilemma in the legal industry: key issues to consider

Břetislav Šimral
Europe Insight & Intelligence Director, Prague
D +420 236 082

The debate around whether to build or buy generative AI solutions is also a pivotal consideration in the legal indus
try, where data confidentiality, workflow customization and cost-efficiency are critical. While in-house development offers long-term benefits in scalability, adaptability and control over sensitive processes, this approach is not without challenges, which can incentivize the outsourcing of the desired capabilities.

Size of firm

Smaller firms, for instance, may lack the technical expertise required to design and maintain such systems, and the initial investment in infrastructure and talent can be prohibitively high. For these firms, partnering with external vendors to design and build tailored solutions can bridge the gap, allowing them to leverage expert capabilities without the full burden of internal development. Moreover, smaller teams might struggle to allocate resources for training and change management, which are critical for successful adoption throughout the firm.

Cost considerations

While third-party AI solutions provide a rapid entry point and occasionally even better out-of-the-box integration, they come with significant recurring costs that can easily accumulate into substantial annual operational expenses for large firms. By contrast, in-house development involves reasonable capital investment and consumption-based operational costs, though these can be scaled predictably. Naturally, smaller firms may struggle with the initial investment and the resources needed to scale adoption effectively.

Data confidentiality

Many third-party tools operate in environments that may not meet the stringent privacy requirements of sensitive legal cases. In-house solutions provide greater control over data processing and ensure compliance with regulatory standards, protecting both client trust and organizational reputation. However, implementing secure, on-premises environments demands robust IT infrastructure, which smaller firms might lack.

Customization

Legal work often demands specific workflows that off-the-shelf tools cannot adequately address without costly modifications. Developing an internal solution allows organizations to tailor workflows and integrate complementary tools, ensuring the system evolves alongside their needs.

Scalability

Without the constraints of per-user licensing models, organizations can expand adoption across their workforce at minimal additional cost. This scalability is vital for embedding AI into daily workflows and maximizing its potential. Still, achieving widespread adoption requires investment in training and change management, which can be resource intensive. The rapid pace of technological advancement further supports the case for building.

Adaptability

Off-the-shelf solutions often lock firms into specific vendors, limiting flexibility as new capabilities emerge. An
internally developed platform can integrate cutting-edge AI models and adapt to evolving needs, ensuring long-term relevance. Building an in-house AI solution can also enhance differentiation and strategic positioning: proprietary platforms allow firms to stand out as leaders in legal innovation, attract top-tier talent and create new client-facing tools that drive revenue.

Smaller firms can mitigate the challenges of building internally by forming partnerships with external vendors or adopting phased approaches that gradually integrate in-house capabilities. This strategy enables access to expertise while distributing the workload and investment over time, making the process more manageable. Nonetheless, smaller firms may find it challenging to capitalize on these opportunities without dedicated resources and clear strategic alignment.

While the benefits of in-house development are significant, this approach is not universally suitable. Smaller legal teams or firms without the necessary technical capabilities may find off-the-shelf solutions more practical in the short term. As tooling becomes more accessible and costs decline, building internally will likely become a viable option for a broader range of organizations. Advancements such as low-code and no-code platforms, pre-trained AI models and modular infrastructure components are increasingly reducing technical barriers, enabling even smaller firms to explore customized solutions with minimal development expertise. For firms with the capacity to invest strategically, prioritizing internal innovation can unlock the full potential of generative AI, delivering transformative value to clients and stakeholders.

Employment and talent management

Planning for workforce transformation

Editors

Purvis Ghani
Partner, Global Chair Employment and Labor Prac
tice, London
D +44 20 7320

Elouisa Crichton
Partner, Glasgow
D +44 141 271

Employment and people practices are a key component of any organization's AI roadmap, and this will only increase in 2025. The workplace of the future will need to have the skills and resources to effectively implement and leverage the benefits of AI, and leaders need to consider how the technology may reshape their talent planning. A few new and continuing key trends we anticipate for 2025 include:

AI decision-making

Companies will need to manage potential legal risks and must carefully consider employment law and data protection implications across different jurisdictions, including AI bias and discrimination in a range of areas, such as decision-making processes in recruitment and performance evaluations, and the equality, diversity and inclusion impacts of using AI in interactions with employees and customers. Companies need to ensure that privacy notices are fit for purpose and future-proofed to address any automated processing, that policies reflect the process for decision-making, and that employers understand the need for human checks and balances and ownership of decisions. This can also be relevant in contentious scenarios, as it is key that people give evidence on decision-making.

Employees are using AI even where this is not led by the employer

Even where businesses do not have a proactive AI plan, staff are often experimenting with AI products themselves and engaging with them organically. There is a risk of inconsistent, inappropriate or unmonitored use of AI by staff. This could result in
commercially or personally sensitive information being processed on AI software which is not controlled by, or known to, the employer. This risk can also arise in recruitment, with candidates using AI during virtual interviews: employers should consider whether to permit this and design interviews with AI use in mind, or actively prohibit the use of AI and take steps to ensure it cannot be used to create an unfair advantage. Employers need to have updated policies and deliver training focusing on IT use, conduct and data protection policies and privacy notices, to ensure appropriate limits, guidance and safeguards are in place.

Talent planning and skills gap risk
The prominence of AI means that different skills are valued and needed by many employers. Employees who can get the best out of AI are valuable, and that may mean a change in recruitment, progression, development and training strategies at all levels. However, there is a growing risk that AI prominence results in employees missing out on core learning, with a risk of a skills gap forming. Companies need to understand what skills are needed and what constitutes appropriate use of AI, and how to factor this changing skills profile into performance management, recruitment and retention exercises.

IP protection and enforcement
Increasing regulatory scrutiny anticipated in 2025

Robyn Chatwood, Partner, Melbourne, D +61 3 9194

The rapid advancement of AI continues to raise complex questions about the applicability of intellectual property (IP) laws to AI and AI-generated works. The unprecedented pace of development of this technology is pushing enterprises towards self-governance frameworks founded on ethical considerations. IP remains one of the leading and most contentious issues in respect of AI governance. In 2025 and beyond, we expect to see governments across the world grappling with balancing strategies aimed at encouraging the development of AI and innovation while, at the same time, attempting to modernize IP and AI legal frameworks to account for AI.

Rights in input data used to train AI models and infringement of IP
A highly debated topic is whether use of copyright-protected materials to train AI models should be considered an infringement of the underlying copyright. Or should AI models be entitled to create new, derived content "informed" by the training data (as a real person may be, having consumed the same source information)? This continues to present a challenge to legislators worldwide. In 2025, we expect to see increased regulatory scrutiny of organizations that create or use AI technologies which have been trained using information or data protected by IP rights.
Regulators worldwide are now paying greater attention to balancing the benefits of AI against concerns about the protection of IP. By way of example, the Labour government elected in the UK in 2024 pledged to bring forward legislation tackling AI in 2025 and opened a consultation on the issue in December 2024. In the consultation, the government is seeking views on an extension of the express exception for text and data mining (TDM) to allow data mining for commercial purposes, coupled with the ability for rights holders to opt out, which would bring the UK more in line with the EU. The consultation runs until 25 February 2025.

The issue of IP infringement has taken center stage in global legislative discourse where AI models are trained on IP-protected data. While in some cases the right to scrape has been set out contractually between AI model providers and end users, many large enterprises have been sued in various countries in respect of unauthorized scraping of copyrighted works, raising nuanced questions, currently pending before the courts, around fair use and the democratized data mining of works in the public domain. Interestingly, many governments are leveraging AI to detect infringement and mitigate the risks.

In a judgment of the Hamburg Regional Court, it was held that even a machine-understandable (as opposed to merely machine-readable) disclaimer on a website specifically precluding scraping for the purpose of data mining would not preclude such mining where done for scientific research on content that was publicly available free of charge. This decision is not yet res judicata and remains subject to debate.

With AI now capable of generating images of unprecedented quality, a unique issue within the larger ambit of infringement has also come to the fore: deepfakes. Thus far, legislation has recognized impersonation as an offence under penal, privacy and information technology laws. However, in a landmark development, courts have recognized the personality of celebrities as a monetizable asset which is prejudiced by the emergence of deepfakes, alongside the disrepute caused to their individual personas.

Rights in output data: AI-generated works, AI inventions and other AI outputs, and infringement
In most countries, authorship of creative works and invention of new technology can only be attributed to humans, with rights procured by corporations via work-for-hire arrangements. A vital question is whether AI can be regarded as a legitimate author of the content it generates, or as an inventor in the case of patents, given the lack of legal personality of the AI itself. Pertinently, South Africa's Companies and Intellectual Property Commission was the first office globally to grant a patent application naming an AI as the inventor, a move that received considerable backlash from other countries. The Hong Kong government, however, has declared AI-generated works capable of copyright protection under the existing law. The US Patent Office has also issued a nuanced Inventorship Guidance providing a framework for patent examiners to assess the degree of human contribution required for an invention to qualify for patent protection, a move seeking to balance IP rights with the need to leverage emerging technology. In the UK as well, the law specifically permits copyright protection in "computer-generated works", though the broader question of originality as a precondition for IP protection remains ambiguous. In Europe, AI cannot be named as the inventor of a patent.

Additional contributors include: Joel Bock (US), Michael Franzinger (US), Sunita Kaur Chima (Malaysia), Jennifer Cass (UK), David Wagget (UK), Constantin Rehaag (Germany), Aliya Seitova (Kazakhstan), Jenni Rutter (New Zealand), Nadia Ormiston (New Zealand), Gne Haksever (New Zealand), Davin Olen (South Africa), Shahid Sulaiman (South Africa), Catherine Lee (Singapore), Andre Rahadian (Indonesia), Minh Tran (Vietnam), Linh Tran (Vietnam), Richard Keady (Hong Kong), Julian Ng (Hong Kong) and Dong-Hwan Kim (South Korea).

Training AI using personal data
or protected IP continues to provide a challenge to legislators worldwide.

Disputes and managing liability
The importance of proactive risk management

Peter Z. Stockburger, Office Managing Partner, San Diego, D +1 619 595
Craig Neilson, Partner, London, D +44 33 0222
Constantin Rehaag, Partner, Europe Co-Head of Intellectual Property, Data and Technology Group, Frankfurt, D +49 69 45 00 12

Globally, dispute and litigation trends surrounding AI are evolving rapidly as the technology becomes more pervasive across industries. In 2025, we will continue to see courts grappling with the novel challenges AI presents, from defining liability for AI-driven decisions to addressing algorithmic bias that disproportionately affects protected classes. National legislation applicable to the key areas outlined below has not been universally drafted to account for the challenges posed by AI, and this factor, coupled with the rapid pace of technological advancement, ensures that AI-related disputes will remain a dynamic and contentious area of law. However, some dispute resolution bodies now offer bespoke rules for AI and other technology-related disputes, to ensure that they are resolved as efficiently as possible with appropriate legal and technical expertise. 2024 saw key legislative initiatives, such as the EU AI Liability Directive. Businesses and policymakers alike will continue to be under growing pressure to anticipate and address these legal risks, emphasizing the need for robust governance, compliance frameworks and proactive risk management in the AI landscape. We anticipate the following will remain a focus for disputes relating to AI in 2025:

Data and data privacy
A prominent area of concern is data privacy, where lawsuits are increasingly focusing on the unauthorized use of personal data to train AI models. There are also growing concerns that the data utilized by AI systems is affected by unconscious bias in its processing or gathering. The litigation, regulatory and reputational risk may be particularly acute where AI (whether or not subject to human oversight) is used to make decisions or recommendations impacting consumers. Employers should exercise particular caution in using AI to make decisions regarding their employees: various jurisdictions have seen litigation regarding discriminatory outcomes resulting from the use of AI in that context.
Intellectual property
Various jurisdictions have seen a rise in intellectual property (IP) disputes as generative AI systems and their use of data challenge traditional notions of authorship and ownership under copyright law. Lawsuits continue to work their way through the courts over AI-generated content that allegedly incorporates copyrighted materials without proper licensing. The use of AI in this way increasingly raises the question of whether an AI model developer, trainer or user can be held liable where the AI makes use of IP-protected works in generating content. High-profile disputes, such as those involving news organizations and artists, are testing the limits of fair use and copyright infringement. This legal gray area is prompting calls for clearer legislative and judicial guidelines, at least in some jurisdictions. In Europe, many scholars and judges hold the opinion that the existing legal framework is sufficient to address copyright-related questions concerning AI, particularly regarding training, infringement and rights to the output. These topics have already attracted the attention of European law enforcement agencies.

Consumer protection
Consumer protection lawsuits are an emerging battleground. Claims often involve allegations of deceptive marketing of AI products or services, such as exaggerations about capabilities or failure to disclose risks. A false claim that a company is using AI to improve its services can constitute a misleading commercial practice, for which the company making the claim may be held liable. Litigation around autonomous vehicles exemplifies these issues, with lawsuits targeting both the safety and transparency of AI systems in life-critical applications. Additionally, the US Federal Trade Commission (FTC), for example, has warned companies against deploying AI tools that mislead consumers, further amplifying the potential for regulatory action. Initial decisions in Europe suggest that the user of an AI product may be primarily liable to their contractual partners, even if they did not develop the AI product themselves. As AI systems become embedded in more consumer-facing products, litigation related to product liability and algorithmic discrimination is expected to increase.

Cybersecurity
AI has significant potential to be used more widely to protect against the global threat of cyberattacks, for example by enhancing phishing protection and detecting insider threats. Equally, however, it represents a threat, with new technology enabling new and even harder to detect threat vectors. Standards of care owed by companies to their customers, suppliers and third parties are all likely to come under close scrutiny in this context, as victims of fraud look to recover against identifiable and creditworthy parties who have unwittingly become involved on the periphery of scams, rather than against the fraudsters themselves, who may be difficult or impossible to trace and against whom enforcement may be impracticable.

M&A and investments
Advancing AI capabilities through
M&A

Constantin Rehaag, Partner, Europe Co-Head of Intellectual Property, Data and Technology Group, Frankfurt, D +49 69 45 00 12
Arik Broadbent, Partner, Vancouver, D +1 604 648

The surge in AI adoption has significantly influenced corporate strategies, including in the realm of mergers and acquisitions (M&A). Growing M&A activity is focused on companies acquiring related technology and technical talent to rapidly prepare for the disruption that AI is creating. Companies are increasingly leveraging M&A to enhance their AI capabilities, aiming to stay competitive in a rapidly evolving technological landscape.

The role of AI in M&A
AI's integration into M&A transactions is multifaceted, encompassing the acquisition of AI technologies, skills and processes. According to our study, nearly two-thirds (64%) of business leaders plan to use M&A to bolster their AI capabilities within the next 12 months, with this figure rising to 70% over the next three years. Acquiring businesses with existing AI capabilities offers a relatively efficient way to onboard advanced technology and expertise, potentially leading to market expansion, enhanced agility and cost reductions.

However, the decision to pursue M&A for AI capabilities is not without challenges. The fast-paced and ever-changing AI landscape means there are significant gaps in the market, and uncertainty regarding which companies will ultimately rise to the top may compel organizations to consider alternative approaches. These alternatives include strategic partnerships with AI vendors and tech firms, taking minority stakes in AI organizations, or purchasing third-party AI solutions as a service.

AI use also requires a number of inputs that are seeing dramatic increases in demand, including the computing power needed to run AI models. Major chip manufacturers have seen significant increases in demand for the components required to run AI models. AI also requires increased power, which is forcing governments and companies to consider how AI development growth can be supported by adding to existing power sources and energy grids, including through renewed interest in nuclear power.

Regulatory considerations: the EU AI Act
The regulatory environment surrounding AI is becoming increasingly stringent, particularly with the introduction of the EU Artificial Intelligence Act (AI Act). This legislation, which recently came into force, imposes comprehensive
compliance requirements on providers, deployers, importers and distributors of AI systems. The AI Act categorizes AI systems based on their perceived risk, with certain high-risk AI systems subject to rigorous obligations, including human oversight, technical documentation and post-market monitoring.

Legal and compliance risks
Governments and regulatory organizations around the world have started developing legal principles and frameworks for the regulation of AI, with new regulations coming into effect on a regular basis. These regulations have the potential to impact AI transactions in two ways: (i) new opportunities to develop technology that adheres to the regulations, and (ii) new regulations that might negatively impact an AI company's service or strategy. Key themes of these regulations include human rights and equality, human oversight, transparency of AI use, sustainability and security.

The use of AI in M&A transactions also entails significant legal and compliance risks, particularly concerning copyright law. The ownership and licensing of the input, training data and output of AI systems are critical issues. The input and training data, which enable AI systems to learn and perform tasks, can be subject to copyright protection. The target company may have obtained these materials from various sources and, depending on the terms and conditions, may have limited rights to use, modify, share or transfer them. The output of AI systems, which may be similar or identical to the input or training data, can also be protected by copyright or other statutory provisions. If the target company lacks the necessary rights or licenses to use, exploit, distribute or transfer the output, it may face liability risks, including claims for infringement, damages and injunctions. These risks could extend to the buyer, who may assume the target company's liabilities post-acquisition.

In the context of M&A, identifying and categorizing AI systems and general-purpose AI models within the target company is crucial. The AI Act's tiered approach to regulation means that AI systems employing manipulative techniques or exploiting vulnerabilities are entirely prohibited, with non-compliance resulting in substantial fines. High-risk AI systems, such as those used in employment or education, are subject to stringent rules, while other AI systems posing limited or no risk may fall outside the AI Act's scope.

Due diligence and mitigation strategies
As executives and professional advisors improve their understanding of the value generators and risks of AI-related companies, the due diligence process and purchase agreement negotiations are expanding to capture AI-related concepts of data use and ownership, copyright development and forthcoming regulatory risk. This underscores the importance for AI-related companies of evaluating their advisors' expertise in a rapidly developing, specialized transactional marketplace. We also anticipate a significant increase in data owners enforcing their copyrights in data sets
used without the owner's consent or a license to do so.

Private equity's increasing involvement in AI M&A
Over the past three years, an estimated 30% of AI-related M&A transactions were completed by a financial acquiror.7 A number of factors support this level of private equity (PE) involvement. Artificial intelligence is poised to impact many traditional industries where PE funds hold ownership positions. The transformational possibilities of AI adoption in those industries can create significant efficiencies in operations, and operational efficiency improvement is a fundamental lever for PE funds to deliver returns to investors; it can also result in significant value creation for the AI companies in which these PE funds invest. Although there are some indications that the available dry powder held by PE funds decreased slightly in 2024, available cash for investments remains at or near all-time historical highs.

In conclusion, while M&A offers a strategic avenue for enhancing AI capabilities, it requires careful consideration of regulatory, legal and compliance risks. Companies must conduct comprehensive due diligence and consider alternative strategies to ensure a successful and compliant integration of AI technologies.

7. https://aventis-

Competition and antitrust
"Killer collaborations" to algorithmic collusion

Dr. Bertold Bär-Bouyssière, Partner, Brussels, D +32 2 552 2977, bertold.baer-

In 2025, several trends are anticipated in the realm of competition law enforcement as it relates to AI, including:

Continued scrutiny from global competition regulators and emergence of AI regulations
Aside from attempts to catch or call in "killer acquisitions", regulators increasingly scrutinize "killer collaborations" between tech giants and start-ups with foundational AI (large language) models, suspecting a risk that rivals are blocked from accessing critical new AI inputs (e.g. data, cloud infrastructure and GPUs) ("foreclosure"). Some regulators even try to assert jurisdiction over the hiring of "key personnel". After decades of politically neutral and methodologically consensual antitrust enforcement, regulators around the globe are increasingly subject to political pressure or beginning to deviate from orthodoxy to pursue industrial policy goals or protectionist objectives (e.g. "national champions"). Resources for classic ex-post
enforcement of abusive conduct being scarce, the EU has introduced a series of ex-ante regulations that include provisions on the competitive conduct of the companies in scope, in particular "gatekeepers" (DMA, DSA, AI Act, etc.). The designation of companies as gatekeepers, and other regulatory threshold features, is expected to trigger litigation. Data-rich companies with dominant positions or significant market power that resort to conduct such as discriminatory self-preferencing or biased targeted pricing, or that breach privacy or data rules, may become subject to ex-post enforcement even beyond the scope of the ex-ante regulations mentioned above.

Algorithmic collusion
Algorithmic collusion is a growing concern among regulators and lawmakers. Competition law historically distinguishes between unlawful collusion and lawful parallel conduct (bizarrely called "tacit collusion"). Adapting one's own prices to those of competitors based on independent intelligence is lawful, while a collusive understanding between competitors to align prices is unlawful. Algorithms that monitor and adjust prices push that distinction to its limits. The US Preventing Algorithmic Collusion Act of 2024 aims to address gaps in existing laws by banning the use of algorithms trained on non-public competitor data, imposing disclosure and auditing requirements, and establishing presumptions of illegal price-fixing in certain algorithmic contexts. US and EU regulators are scrutinizing cases where competitors use shared algorithms to align prices. US lawsuits such as those against RealPage and Yardi Systems involve allegations that algorithms were used to fix rental prices by analyzing and sharing non-public competitor data, with regulators claiming that algorithms enable or enforce a tacit agreement between competitors without explicit communication. The DOJ has emphasized that even tacit agreements facilitated by algorithms, such as adhering to pricing recommendations based on competitors' shared data, can violate antitrust rules. Less radical EU guidelines stipulate
that the shared use of algorithms relying on sensitive pricing information could be an "object" infringement, and that even algorithm providers could be held liable if their tools foreseeably facilitate collusion. Companies using advanced AI systems should proactively prevent them from independently developing collusive behaviors, which raises questions about liability in the absence of direct human contact. This has prompted calls for more proactive auditing and transparency measures to prevent inadvertent breaches ("looking under the hood").

Global AI team

Chantal Bernier, Of Counsel, Co-chair Global Privacy & Cybersecurity Group, Ottawa, D +1 613 783
Dr. Bertold Bär-Bouyssière, Partner, Brussels, D +32 2 552 2977, bertold.baer-
Henrietta Baker, Partner, Dubai, D +971 4 402
Juanita Acosta, Partner, Bogota, D +57 601 743
Chiara Bocchi, Counsel, Milan, D +39 02 726 269

Our full-service global AI team provides solutions to help you successfully implement AI technologies to support your organization's strategy, while navigating the complexity of existing and future regulations. With 75+ partners and fee earners advising in 80+ jurisdictions worldwide, our global AI team provides market-leading legal advice around the world. Our team comprises leading AI experts advising across all key areas. Visit Dentons AI: Global Solutions Hub for the latest legal insights, webinar recordings and regulatory overviews from around the world.

Simon Elliott, Partner, Head of Data Privacy, Cybersecurity and AI for UK, Ireland and Middle East, London, D +44 20 7246
Purvis Ghani, Partner, Global Chair Employment and Labor Practice, London, D +44 20 7320
Nusrat Hassan, Managing Partner, Mumbai, D +91 22 6625
Matt Hennessy, Partner, Melbourne, D +61 3 9194
Kuan Hon, Of Counsel, London, D +44 20 7320
Elouisa Crichton, Partner, Glasgow, D +44 141 271
Todd D. Daubert, Partner, Washington, D +1 202 408
Kagan Dora, Partner, Istanbul, D +90 212 329 30
Robyn Chatwood, Partner, Melbourne, D +61 3 9194
Arik Broadbent, Partner, Vancouver, D +1 604 648
Constantin Rehaag, Partner, Europe Co-Head of Intellectual Property, Data and Technology Group, Frankfurt, D +49 69 45 00 12
Giangiacomo Olivi, Partner, Europe Co-Head of Intellectual Property, Data and Technology, Milan, D +39 02 726 268
Davin Olen, Associate, Johannesburg, D +27 11 326
Hayley Miller, Partner, Auckland, D +64 9 915
Craig Neilson, Partner, London, D +44 33 0222
Michael Park, Partner, Melbourne, D +61 3 9194
Antonis Patrikios, Partner, Co-chair Global Privacy & Cybersecurity Group and Global TMT Sector Lead, London, D +44 20 7246
Gilbert Leong, Partner, Singapore, D +65 6885
Karol Laskowski, Partner, Europe Head of Technology, Media and Telecommunications, Warsaw, D +48 22 242 51
Zdeněk Kučera, Partner, Prague, D +420 236 082
Peter Z. Stockburger, Office Managing Partner, San Diego, D +1 619 595
Shahid Sulaiman, Senior Partner, Cape Town, D +27 21 686
Kirsten Thompson, Partner, Toronto, D +1 416 863
Ambuj Sonal, Partner, Mumbai, D +91 22 6625
Břetislav Šimral, Europe Insight & Intelligence Director, Prague, D +420 236 082

© 2025 Dentons. All rights reserved. Attorney Advertising. Dentons is a global legal practice providing client services worldwide through its member firms and affiliates. This website and its publications are not designed to provide legal or other advice and you should not take, or refrain from taking, action based on its content.

ABOUT DENTONS
Across over 80 countries, Dentons helps you grow, protect, operate and finance your organization by providing uniquely global and deeply local legal solutions. Polycentric, purpose-driven and committed to inclusion, diversity, equity and sustainability, we focus on what matters most to