Jamba: Training a Foundational LLM
Coming to Databricks Marketplace and External Model Serving
2024 AI21 Labs

Agenda
- Jamba
- Transformer vs. Mamba
- Advantages of the hybrid architecture
- LLM training
- What is the future: compound AI systems

Pioneers at the forefront of AI development
- Ori Goshen, Co-CEO & Co-Founder
- Prof. Yoav Shoham, Co-CEO & Co-Founder
- Prof. Amnon Shashua, Chairman

Our Journey
- Founded in 2017 as an NLP/ML research lab
- Wordtune (10M+ users) and specialized language models
- Focus: enterprise-ready AI systems with smaller language models (1B to 60B parameters)
- Major investors: Google, NVIDIA, Intel Capital

What are Large Language Models?
- Tokenization + Next Token Prediction = LLM Chat Application
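To make the slide's equation concrete, here is a minimal sketch of both ingredients: a naive whitespace tokenizer and next-token prediction from bigram counts. Production LLMs use learned subword tokenizers (BPE and friends) and a trained neural network in place of the counts; everything below, including the tiny corpus, is purely illustrative.

```python
# Toy versions of the two ingredients on the slide. Real LLMs use learned
# subword tokenizers and a trained network, not whitespace splits and counts.
from collections import Counter, defaultdict

corpus = "the model predicts the next token given the previous tokens"

# 1) Tokenization: turn text into discrete units (naive whitespace split here).
tokens = corpus.split()

# 2) Next-token prediction: estimate P(next | current) from bigram counts.
counts = defaultdict(Counter)
for cur, nxt in zip(tokens, tokens[1:]):
    counts[cur][nxt] += 1

def predict_next(token: str) -> str:
    """Most likely continuation of a token seen in the corpus."""
    return counts[token].most_common(1)[0][0]

print(predict_next("the"))  # -> "model" (first of the equally frequent options)
```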
What changed?

Timeline
- The field: RNNs (2007), Attention Is All You Need (2017), BERT (2018), GPT-3 (2020), ChatGPT 3.5 (2022), GPT-4 (2023)
- AI21: founded (2017), SenseBERT (2019), Jurassic (2020), MRKL (2022), Jurassic-2 (2023), Mamba (2023), Jamba (2024)

Jamba

Jamba architecture under the hood

Significant Advancements in LLMs
- Attention (2016)
- Mixture of Experts (2021)
- Mamba (2023)
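These advances mix sequence information in very different ways, which is what the hybrid architecture exploits. The sketch below (not AI21's code) contrasts causal attention, which must keep every past key and value, with a fixed linear recurrence standing in for a state-space layer. Real Mamba layers make the recurrence input-dependent ("selective") and add gating, so treat this purely as a memory-behavior illustration.

```python
# Schematic contrast between the two sequence mixers a hybrid interleaves.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 6, 4
x = rng.normal(size=(seq_len, d))  # token embeddings

# Attention: every step attends to all previous steps, so the whole
# key/value history (the KV cache) must be kept -> O(n) memory per layer.
def causal_attention(x: np.ndarray) -> np.ndarray:
    scores = x @ x.T / np.sqrt(x.shape[1])           # query-key similarities
    mask = np.tril(np.ones_like(scores, dtype=bool))
    scores = np.where(mask, scores, -np.inf)         # causal mask
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ x                               # values = x for brevity

# SSM-style recurrence: the past is compressed into a fixed-size state h,
# so generation needs O(1) memory per layer regardless of context length.
def linear_ssm(x: np.ndarray, decay: float = 0.9) -> np.ndarray:
    h = np.zeros(x.shape[1])
    out = []
    for x_t in x:                # single pass over the sequence
        h = decay * h + x_t      # h_t = A h_{t-1} + B x_t, with fixed A, B
        out.append(h.copy())
    return np.stack(out)

print(causal_attention(x).shape, linear_ssm(x).shape)  # (6, 4) (6, 4)
```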
Jamba architecture: how do you compare models?
- "Vanilla" Transformer: overall parameters (the model's "capacity", i.e. available parameters)
- Mixture of Experts: overall parameters + active parameters
- Jamba: overall parameters + active parameters + cache size

Gains
- Very long context window
- High throughput
- Maintains high quality
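The cache-size axis is the easiest to quantify. Below is back-of-the-envelope KV-cache arithmetic for a hypothetical 32-layer model; the only number taken from Jamba's design is the roughly one-in-eight attention-layer ratio AI21 describes, while the head counts, dimensions, and context length are invented for illustration, so consult the model card for real figures.

```python
# Back-of-the-envelope KV-cache arithmetic behind the slide's third axis.
# Configuration numbers are illustrative, not Jamba's published ones.
def kv_cache_bytes(attn_layers: int, kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_elem: int = 2) -> int:
    """Keys + values, kept for every attention layer and every position."""
    return 2 * attn_layers * kv_heads * head_dim * context_len * bytes_per_elem

ctx = 256_000  # long-context regime
dense  = kv_cache_bytes(attn_layers=32, kv_heads=8, head_dim=128, context_len=ctx)
hybrid = kv_cache_bytes(attn_layers=4,  kv_heads=8, head_dim=128, context_len=ctx)
# Mamba layers carry a fixed-size state instead of a per-token cache,
# so only the 4 attention layers (of 32) contribute in the hybrid case.
print(f"dense:  {dense / 2**30:.1f} GiB")   # ~31 GiB
print(f"hybrid: {hybrid / 2**30:.1f} GiB")  # ~3.9 GiB
```

Since Mamba layers keep a fixed-size state rather than a per-token cache, cutting attention layers from 32 to 4 cuts the cache roughly eightfold, which is the mechanism behind the long-context and throughput gains listed above.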
LLM 101: What does it take to build an LLM?
- Crafting the right model architecture
- Pretraining: learning languages & "common" knowledge
  - Objective: next token prediction
  - Evaluation: zero-shot and few-shot
- Alignment: following instructions, knowledge, chat, safety & security guardrails, task excellence (RL)
  - Evaluation: fine-tuned downstream tasks

Base Model-Pr