Entrepreneurship Opportunities in the GPT Era
Yi Bo (宜博), founder & CEO of 宜创科技

Speaker introduction: founder & CEO of 宜创科技, TGO member, active in an AGI community of 1,800+ people.

Agenda:
1. Becoming an N-times "super individual" with AI
2. Entrepreneurship opportunities in the GPT era
3. AI First
4. GPT-5

Part 1 starts from what has already been built: 20+ business scenarios in which roughly 80% of routine CRM / HR / BP / SQL / BI work is handled by the LLM toolchain. LLMFarm orchestrates this with Chains and Flows: Step 1, Step 2 and Step 3 decompose a request on top of the LLM, generate SQL and call APIs. FeedMaas is the demo product: 150+ analysis questions answered directly from brokerage research reports, for example:
- Analyze listed banks' revenue and net profit attributable to shareholders in 2023Q1, their drivers, and the specifics on the income and cost sides.
- Analyze the 2023Q1 revenue of China's six large state-owned banks.
- Analyze the 2023Q1 performance of the joint-stock banks, including their financial position, revenue sources and profits.
- Analyze the performance of city and rural commercial banks, in particular whether the strength of high-quality regional banks has continued.
- Analyze banks' performance across net interest income, net fee income and other non-interest income.
- Analyze banks' asset expansion in 2023Q1.
- Analyze the deposit position of each bank.
- Analyze banks' loan structure and loan performance.
- Analyze listed banks' corporate and retail lending in 2023Q1.
- Analyze banks' deposit structure.
- Analyze the structural divergence in Chinese banking across bank types and deposit types.
- Analyze the continued narrowing of net interest margins since 2022.
- Analyze the decline in bank spreads in 2023Q1.
- Analyze banks in 2023 from three angles: the asset side, the cost side and interest margins.
- Analyze the 2023Q1 margin performance of the large state-owned banks (Bank of China, ICBC, China Construction Bank, Bank of Communications and Postal Savings Bank of China).
- Analyze the margins of the joint-stock banks and the advantages China Merchants Bank shows in this market.
- Analyze the margin performance of city commercial banks.
- Analyze banks' provision releases, credit costs and impairment charges.
- Analyze banks' asset quality, NPL ratio, provision coverage, risk-absorption capacity, allowance-to-loan ratio and credit cost.
- Analyze the 2023Q1 asset quality of China's large state-owned banks and Postal Savings Bank, and Bank of China's NPL ratio and provisions over the same period.
- Analyze the asset quality of China's joint-stock banks (CITIC Bank and China Merchants Bank in particular), mainly NPL ratio and provision coverage.
- Analyze the 2023Q1 asset quality of high-quality regional banks.
- Based on the banking sector's 2023Q1 performance, what investment advice would you give?
- Analyze listed banks' first-quarter results and loan deployment.
- Analyze the change in banks' first-quarter revenue mix and the contribution of net interest income.
- Analyze listed banks' loans as a share of total assets.
- Analyze banks' NPL ratios and the improvement in asset quality.
- Analyze banks' first-quarter results.
- Analyze the decline in bank margins, including its causes and the quarter-on-quarter decline by bank type; give an outlook on margins and identify banks that can find desirable assets to deploy, avoid irrational price competition, and have more room to improve liability costs.
- Analyze China's bank credit market, including loan growth, performance by bank type, and the mix of corporate and retail lending.
- Analyze banks' non-interest income, including listed banks' year-on-year non-interest income growth, growth of other non-interest income, growth of investment income, and the share of investment-related non-interest income.
- Analyze banks' asset quality, including listed banks' NPL ratio, provision coverage, the sector-wide NPL formation rate, and the digestion of tail risks in certain areas.
- Analyze banks' outlook, including margins, deposit costs, deployment of high-yield credit, fee income, NPL pressure, tail risks, high provisions, economic recovery and valuations.

A sample answer sketches 2023Q1 for listed banks: revenue up 1.4% and attributable net profit up 2.2% year on year, followed by a three-part revenue breakdown (-1.8%, -4.8%, +46.1%), most likely net interest income, fee income and other non-interest income respectively.
Each growth figure on that slide is computed the same way:

YoY growth = (2023Q1 value − 2022Q1 value) / 2022Q1 value × 100%

applied metric by metric (revenue, attributable net profit, net interest income, fee income, and so on); ratio-type metrics are instead computed within the quarter:

ratio = 2023Q1 numerator / 2023Q1 denominator × 100%
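As a quick check on the formula above, here is a minimal Python sketch that applies the same year-on-year calculation to a few line items; the metric names and figures are placeholders, not the deck's actual data.

```python
# Minimal sketch: year-on-year (YoY) growth as defined above.
# The figures below are illustrative placeholders, not real 2023Q1 bank data.

def yoy_growth(q1_2023: float, q1_2022: float) -> float:
    """(2023Q1 - 2022Q1) / 2022Q1 * 100%."""
    return (q1_2023 - q1_2022) / q1_2022 * 100.0

metrics = {
    # metric name: (2023Q1 value, 2022Q1 value), in arbitrary units
    "revenue": (1014.0, 1000.0),
    "net profit attributable to shareholders": (512.0, 501.0),
    "net interest income": (786.0, 800.0),
}

for name, (current, previous) in metrics.items():
    print(f"{name}: {yoy_growth(current, previous):+.1f}% YoY")
```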
Four ways to put an LLM to work on this kind of problem: 1. pre-training; 2. fine-tuning; 3. vector embedding (retrieval); 4. SQL generation through Chains and Flows.

LLMFarm's research-report pipeline turns PDF research reports into Excel tables and then into SQL-queryable data (PDF to Excel to SQL). A Chain/Flow breaks the job into Step 1, Step 2 and Step 3 and ends with generated SQL; Step 4 wraps the Chain as an API so downstream systems can call it. On top of this sits a KMS (knowledge management system): PDF, Word and HTML documents are ingested, and LLMFarm connects the KMS, the Chain and MySQL so the knowledge base can be served through a bot.
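The deck does not show LLMFarm's internals, so the following is only a rough sketch of the Step 1 to Step 3 idea under stated assumptions: a pluggable LLM callable turns a question into SQL, which is then run against a local SQLite table standing in for the MySQL store; the `ask_llm` stub and the `bank_metrics` schema are invented for illustration.

```python
# Minimal sketch of a question -> SQL -> answer chain (not LLMFarm's actual code).
# Assumptions: an injectable LLM callable, and a toy SQLite table standing in for MySQL.
import sqlite3
from typing import Callable

SCHEMA = "bank_metrics(bank TEXT, quarter TEXT, revenue REAL, net_profit REAL)"

def build_prompt(question: str) -> str:
    # Step 1/2: frame the question and the schema for the model.
    return (
        f"Table schema: {SCHEMA}\n"
        f"Question: {question}\n"
        "Return a single SQLite SELECT statement, nothing else."
    )

def answer(question: str, ask_llm: Callable[[str], str], conn: sqlite3.Connection):
    # Step 3: have the LLM produce SQL, then execute it.
    sql = ask_llm(build_prompt(question)).strip().rstrip(";")
    if not sql.lower().startswith("select"):  # crude guard against non-queries
        raise ValueError(f"Refusing to run non-SELECT SQL: {sql}")
    return conn.execute(sql).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(f"CREATE TABLE {SCHEMA}")
    conn.execute("INSERT INTO bank_metrics VALUES ('Bank A', '2023Q1', 1014.0, 512.0)")

    # A canned 'LLM' so the sketch runs offline; swap in a real model API call in practice.
    fake_llm = lambda prompt: "SELECT bank, revenue FROM bank_metrics WHERE quarter = '2023Q1'"
    print(answer("What was each bank's 2023Q1 revenue?", fake_llm, conn))
```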
The second use case is backend API generation. Given the interface requirements, the chain produces Java Spring Boot code in the standard controller / service / dao layering, with Response/Result wrappers, exception handling and exception logging; the deck cites 8-10 APIs at a time with roughly 95% of the code generated automatically. LLMFarm's Chain/Flow again runs Step 1 to Step 3, emitting controller, Service and DTO classes and pushing them to Git.
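The deck names the generated layers but shows no code, so here is a hedged sketch of how such a code-generation prompt could be assembled and its output written out per layer; the layer list, the `ask_llm` callable and the file layout are assumptions, not LLMFarm's actual implementation.

```python
# Sketch: prompting an LLM to emit Spring Boot controller/service/dao layers
# for one entity, then writing each layer to its own file. Illustrative only.
from pathlib import Path
from typing import Callable, Dict

LAYERS = ["Controller", "Service", "ServiceImpl", "DTO", "Dao"]

def codegen_prompt(entity: str, fields: Dict[str, str], layer: str) -> str:
    field_list = ", ".join(f"{jtype} {name}" for name, jtype in fields.items())
    return (
        f"Generate a Java Spring Boot {layer} class for entity {entity} "
        f"with fields ({field_list}). Wrap responses in a Result<T> object, "
        "handle exceptions, and log them. Return only the Java source."
    )

def generate_layers(entity: str, fields: Dict[str, str],
                    ask_llm: Callable[[str], str], out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    for layer in LAYERS:
        java_source = ask_llm(codegen_prompt(entity, fields, layer))
        (out_dir / f"{entity}{layer}.java").write_text(java_source)

if __name__ == "__main__":
    # Canned 'LLM' so the sketch runs offline; a real run would call a model API
    # and then commit the generated files to Git, as the deck describes.
    fake_llm = lambda prompt: f"// generated for prompt:\n// {prompt}\npublic class Stub {{}}\n"
    generate_layers("Customer", {"id": "Long", "name": "String"}, fake_llm, Path("generated"))
```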
The live demo follows the same pattern: Step 1, describe the requirement to the LLM; Step 2, assemble the Chain; Step 3, publish the Chain as a bot, in this case a ChatBI bot. Two further applications round out part 1: ChatBI for conversational analytics and an AI email assistant for Mail.

The LLM application stack the deck works through: Prompt; Plugin / Function Call; Prompt Workflow; RAG with a vector database (VectorDB); Agent; domain LLM via SFT; Prompt Ops and Fine-tuning Ops. The reference architecture keeps the LLM at the center of an Input / Calculate / Output loop with Memory, surrounded by a vector DB, external LLM APIs and SQL data access.
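The stack above names RAG over a vector database without showing code, so here is a self-contained sketch of just the retrieval step, using a toy in-memory store with bag-of-words cosine similarity; a production setup would swap in a real embedding model and vector database, neither of which the deck specifies.

```python
# Toy retrieval-augmented-generation (RAG) retrieval step: "embed" documents,
# "embed" the query, return the closest chunks to stuff into the LLM prompt.
import math
from collections import Counter
from typing import List

def embed(text: str) -> Counter:
    # Bag-of-words counts stand in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(count * b.get(word, 0) for word, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, chunks: List[str], k: int = 2) -> List[str]:
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

if __name__ == "__main__":
    knowledge_base = [
        "Net interest margins narrowed across listed banks in 2023Q1.",
        "Provision coverage at high-quality regional banks stayed elevated.",
        "The KMS ingests PDF, Word and HTML research reports.",
    ]
    context = retrieve("Why did bank margins fall in 2023Q1?", knowledge_base)
    prompt = "Answer using only this context:\n" + "\n".join(context)
    print(prompt)  # this prompt would then be sent to the LLM
```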
LLMFarm packages these pieces, document QA and ChatBI, into one platform, with domain models layered on top of the base LLM.

That closes part 1 (the N-times super individual with AI). Part 2 turns to entrepreneurship opportunities in the GPT era, starting from why large models matter at all: emergent abilities. The deck reproduces the table from Wei et al. (2022) listing the scale at which specific abilities emerge:

Ability | Train. FLOPs | Params. | Model | Reference
Few-shot prompting abilities | | | |
Addition/subtraction (3 digit) | 2.30E+22 | 13B | GPT-3 | Brown et al. (2020)
Addition/subtraction (4-5 digit) | 3.10E+23 | 175B | |
MMLU Benchmark (57 topic avg.) | 3.10E+23 | 175B | GPT-3 | Hendrycks et al. (2021a)
Toxicity classification (CivilComments) | 1.30E+22 | 7.1B | Gopher | Rae et al. (2021)
Truthfulness (TruthfulQA) | 5.00E+23 | 280B | |
MMLU Benchmark (26 topics) | 5.00E+23 | 280B | |
Grounded conceptual mappings | 3.1E+23 | 175B | GPT-3 | Patel & Pavlick (2022)
MMLU Benchmark (30 topics) | 5.00E+23 | 70B | Chinchilla | Hoffmann et al. (2022)
Word in Context (WiC) benchmark | 2.50E+24 | 540B | PaLM | Chowdhery et al. (2022)
Many BIG-Bench tasks (see Appendix E) | Many | Many | Many | BIG-Bench (2022)
Augmented prompting abilities | | | |
Instruction following (finetuning) | 1.30E+23 | 68B | FLAN | Wei et al. (2022a)
Scratchpad: 8-digit addition (finetuning) | 8.90E+19 | 40M | LaMDA | Nye et al. (2021)
Using open-book knowledge for fact checking | 1.30E+22 | 7.1B | Gopher | Rae et al. (2021)
Chain-of-thought: Math word problems | 1.30E+23 | 68B | LaMDA | Wei et al. (2022b)
Chain-of-thought: StrategyQA | 2.90E+23 | 62B | PaLM | Chowdhery et al. (2022)
Differentiable search index | 3.30E+22 | 11B | T5 | Tay et al. (2022b)
Self-consistency decoding | 1.30E+23 | 68B | LaMDA | Wang et al. (2022b)
Leveraging explanations in prompting | 5.00E+23 | 280B | Gopher | Lampinen et al. (2022)
Least-to-most prompting | 3.10E+23 | 175B | GPT-3 | Zhou et al. (2022)
Zero-shot chain-of-thought reasoning | 3.10E+23 | 175B | GPT-3 | Kojima et al. (2022)
Calibration via P(True) | 2.60E+23 | 52B | Anthropic | Kadavath et al. (2022)
Multilingual chain-of-thought reasoning | 2.90E+23 | 62B | PaLM | Shi et al. (2022)
Ask me anything prompting | 1.40E+22 | 6B | EleutherAI | Arora et al. (2022)
GPT-4 is reported to be on the order of 1.8 trillion parameters. Three routes to building an AI app on top of such models are compared: pre-training your own model, fine-tuning, and prompt engineering combined with domain know-how; the deck's argument is that for most AI apps the practical route is prompt engineering plus know-how rather than pre-training. The cost side makes the point: an 8-card H100 server is priced here at roughly 4.8 million RMB, which puts a 4,000-card cluster on the order of 2.4 billion, far beyond what a startup can spend on training from scratch.

On November 30, 2022, OpenAI released ChatGPT and put AGI on everyone's agenda.
The deck grounds the "why now" in the definition of emergence from Wei et al.:

"We will consider the following general definition of emergence, adapted from Steinhardt (2022) and rooted in a 1972 essay called 'More Is Different' by Nobel prize-winning physicist Philip Anderson (Anderson, 1972): Emergence is when quantitative changes in a system result in qualitative changes in behavior. In this paper, we will consider a focused definition of emergent abilities of large language models: An ability is emergent if it is not present in smaller models but is present in larger models."
(Emergent Abilities of Large Language Models, Wei et al., 2022)
Rough parameter scales at which specific abilities emerge: translation around 60B; math around 60B; in-context learning around 130B; chain-of-thought reasoning around 130B; knowledge combination around 530B; emotion perception around 530B.

A worked example of these abilities (math plus step-by-step reasoning) inside a business workflow, CPQ (configure-price-quote) quoting:
1. CPQ plan 1: 9,800 per year, 20 users included, each additional user 499 per year.
2. CPQ plan 2: 29,800 per year, 30 users included, each additional user 899 per year.
Request: 40 users on CPQ plan 2 for two years at 95% of list price.
- Base: 29,800 × 2 years = 59,600
- Extra users: 40 − 30 = 10; 10 × 899 × 2 years = 17,980
- Subtotal: 59,600 + 17,980 = 77,580
- Discounted total: 77,580 × 95% = 73,701
Answer: for Authing, 40 users on CPQ plan 2 at 95% of list comes to 73,701.
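The same quote can be checked deterministically. Below is a small sketch of the arithmetic the LLM is expected to reproduce; the parameter names are invented, and the plan figures and two-year term come from the example above.

```python
# Reproduces the CPQ quote from the worked example above.
def cpq_quote(base_price: float, included_users: int, extra_user_price: float,
              users: int, years: int, discount: float) -> float:
    extra_users = max(users - included_users, 0)
    subtotal = (base_price + extra_users * extra_user_price) * years
    return subtotal * discount

# Plan 2: 29,800/year with 30 users included, 899/year per additional user.
total = cpq_quote(base_price=29_800, included_users=30, extra_user_price=899,
                  users=40, years=2, discount=0.95)
print(total)  # 73701.0
```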
A second scenario in the same vein involves IBM, 50 users and CPQ plan 3. What makes this work is the bundle of LLM abilities just listed: math, translation, one-shot/few-shot learning and chain-of-thought (CoT).

On March 15, 2023, OpenAI released GPT-4. Among the headline improvements: a much longer context window, with prompt length growing from 4k to 8k and 32k tokens. GPT-4 can turn natural-language descriptions into TikZ and SVG drawings, and Microsoft's "Sparks of Artificial General Intelligence: Early experiments with GPT-4" reads those capabilities as early signs of AGI. GPT-4V added vision on September 25.
DALL-E 3 followed on October 3, and on September 25 OpenAI announced "ChatGPT can now see, hear, and speak."

Part 3: AI First. Large language models are not a new internet this time; they are a new industrial revolution. The deck's analogy is the renovation market: bare-shell apartments, fully fitted apartments, move-in-ready units, renovation crews, renovation training, renovation tools and full-package renovation services all coexist, and likewise the AI market spans business scenarios, regions, and the underlying energy and materials.

AI First versus GPT First. Representative products and techniques named here: ChatPDF, Character.AI and ChatBI, built on CoT, in-context learning, planning, AI agents, sandboxes and AutoGPT.
As of July 2023 the cited figures were roughly 15 million users and 17 million bots. The CPQ quoting scenario then returns as a live LLMFarm demo: the same two plans (9,800 per year with 20 users included and 499 per extra user; 29,800 per year with 30 users included and 899 per extra user), the same request of 40 users on plan 2 for two years at 95% of list, and LLMFarm walks through the same arithmetic to land on 73,701.
AI Agents: Auto-GPT and communicative agents. The age of agents has come. Echoing the four-quadrant map of the 2011 era, the deck draws a new one for agents. The future is coming, and every quadrant will give birth to a great company:
1. Human ask Agents
2. Agents ask Human
3. Agents ask Agents
4. Agents Meeting
Quadrant 1, Human ask Agents. The opportunities map onto two axes: number of fans (lots vs. low) and imitation difficulty (low vs. high). Personas range from entertainment stars, influencer stars, real famous people, game/cartoon IPs and historical figures to virtual friends, virtual partners, virtual pets, virtual NPCs, self-avatars and family avatars. Products already here: C.AI (Character.AI), Replika, Glow, Myshell, Chato, Xiaoice, Call Annie, CarynAI.
Quadrant 2, Agents ask Human (close to Yann LeCun's idea of a world model). Why should agents ask humans? The agent knows more, across more dimensions, than the user states up front; asking the human lets the interaction converge and fit a better result. Example: when you want to choose a phone, the agent can ask follow-up questions about the dimensions of phones that matter to you.
Quadrant 3, Agents ask Agents. Given a task, an asking agent (Agent A) and an answering agent (Agent B) exchange questions and answers automatically, absorb the repetitive back-and-forth, then summarize the exchange into a report, saving a lot of time on both sides. Examples: job interviews, VC due diligence, software requirements gathering, social dating.
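The deck stays at the concept level here, so the following is only a runnable sketch of the ask/answer loop between two agents; the roles, the stubbed reply functions and the fixed three-round budget are illustrative assumptions.

```python
# Sketch of the "Agents ask Agents" loop: an asking agent interviews an
# answering agent about a task, then summarizes the exchange into a report.
# The two reply functions are stubs standing in for real LLM-backed agents.
from typing import Callable, List, Tuple

Agent = Callable[[List[Tuple[str, str]]], str]  # transcript -> next message

def run_interview(task: str, asker: Agent, answerer: Agent, rounds: int = 3) -> str:
    transcript: List[Tuple[str, str]] = [("task", task)]
    for _ in range(rounds):
        question = asker(transcript)
        transcript.append(("asker", question))
        answer = answerer(transcript)
        transcript.append(("answerer", answer))
    # Summarize the exchange into a short report.
    lines = [f"Report on: {task}"]
    lines += [f"  {role}: {message}" for role, message in transcript[1:]]
    return "\n".join(lines)

if __name__ == "__main__":
    questions = iter(["What problem does the product solve?",
                      "Who pays for it?",
                      "What is the biggest risk?"])
    asker: Agent = lambda transcript: next(questions)
    answerer: Agent = lambda transcript: f"(stub answer to: {transcript[-1][1]})"
    print(run_interview("VC due diligence on a CPQ startup", asker, answerer))
```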
Quadrant 4, Agents Meeting. Make the agents form a team and collaborate. Examples: AI Town simulates people's daily lives, and ChatDev simulates waterfall software development. In the future we need to make more agents that carry P&P knowledge work together on one particular task, for example simulating a court where lawyers debate a particular case, or a group discussion of a certain advertising idea.
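As a companion to the two-agent loop above, here is a minimal sketch of the "agents meeting" idea: several role agents take turns contributing to one shared task and a chair closes the meeting. The roles and stub implementations are invented for illustration; real systems such as ChatDev or the AI Town simulation are far richer.

```python
# Sketch of an "Agents Meeting": role agents speak in turn on a shared topic,
# each seeing the minutes so far; a chair closes with a summary.
# All agents are stubs standing in for LLM-backed participants.
from typing import Callable, Dict, List

RoleAgent = Callable[[str, List[str]], str]  # (topic, minutes so far) -> statement

def hold_meeting(topic: str, agents: Dict[str, RoleAgent], turns: int = 1) -> List[str]:
    minutes: List[str] = [f"Topic: {topic}"]
    for _ in range(turns):
        for role, agent in agents.items():
            minutes.append(f"{role}: {agent(topic, minutes)}")
    return minutes

if __name__ == "__main__":
    agents: Dict[str, RoleAgent] = {
        "copywriter": lambda topic, m: f"Proposes a slogan for '{topic}'.",
        "designer":   lambda topic, m: "Suggests visuals matching the slogan above.",
        "lawyer":     lambda topic, m: "Flags claims that need legal review.",
        "chair":      lambda topic, m: f"Summarizes {len(m) - 1} contributions and assigns next steps.",
    }
    for line in hold_meeting("a new advertising idea", agents):
        print(line)
```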
Part 4: GPT-5. This time next year GPT-5 will have arrived; how much of what we are doing today will still be valuable?

GPT-5 is coming. GPT-4 is already around 1.8 trillion parameters; another 100x would approach the human brain's roughly 100 trillion parameters. Expect natively multimodal VLM-style models and ever longer context: GPT-4 at 64k, Claude at 100k, with 100M-token contexts on the horizon.

A personal view on what stays defensible:
1. The non-virtual: physical, hardware, real-world things.
2. Private and privacy-sensitive data.
3. Prompt craft ("chanting the prompt incantation").
4. Specialized / domain models.
The question to keep asking: which domains will not be folded away by GPT-5?

Be sincere and true, act without forcing, live in the present.

Thank you for participating. THANKS