
Generative AI Full Course – Gemini Pro, OpenAI, Llama, Langchain, Pinecone, Vector Databases & More - Ep182

2025-07-11 11:56:20 | Source: ByteGenius
Okay, so this is the requirement. Just click on this "download model" button, and fill in some information: your first name, your last name, your date of birth, your email address, your country, and your organization if you are working with one. Then select the models you want and submit the request. After about 30 to 40 minutes they will accept the request and give you access. And if you are not getting access as of now, no need to worry; there is an alternative, and I will show you how we can use this model.

So what you can do, guys, is open the Llama page on the Meta website and apply for the permission there, everyone. Let me share the link as well. Guys, are you with me? Can you please confirm?

"I already have the access." Okay, if you already have access then no need to worry, you can access the model directly. One particular thing I need to mention: whenever you are applying for the request, it will ask for an email address. I believe you all have a Hugging Face account, yes or no? Maybe Hugging Face was discussed in a previous session. Make sure you use the same email address you used for Hugging Face, because these models are available on the Hugging Face website, and whenever you apply for access they will ask for this email address. All right.

Now let's discuss Llama 2 a little bit more, what they are telling us. If you scroll down a little, this is Llama 2, and they are saying that Llama 2 was trained on 40% more data than Llama 1. I already told you there was an earlier model called Llama 1; what they did in Llama 2 is train the model with 40% more data, and it has
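Once Meta approves the request tied to your Hugging Face email, the weights can be pulled from the gated Hugging Face repos. Here is a minimal sketch of that, assuming the `huggingface_hub` package is installed and you have logged in (e.g. via `huggingface-cli login`) with the account whose email matched the request; `download_llama2_file` is a hypothetical helper name, not part of any library.

```python
def llama2_repo_id(size: str, chat: bool = False) -> str:
    """Build the Hugging Face repo id for a Llama 2 variant, e.g. '7b' or '13b'."""
    suffix = "-chat-hf" if chat else "-hf"
    return f"meta-llama/Llama-2-{size}{suffix}"


def download_llama2_file(size: str, filename: str, chat: bool = False) -> str:
    """Download one file from a gated Llama 2 repo.

    Only succeeds after Meta has granted your Hugging Face account access.
    """
    # Imported here because huggingface_hub is an optional third-party dependency.
    from huggingface_hub import hf_hub_download

    return hf_hub_download(repo_id=llama2_repo_id(size, chat), filename=filename)


# Usage (after access is approved):
# path = download_llama2_file("7b", "config.json", chat=True)
```

Fetching one small file such as `config.json` is a cheap way to check whether your access has actually been granted before downloading the multi-gigabyte weight shards.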
double the context length.

"Can we download the model to our local drive as well and play with it?" Yes, you can do that; I will show you how we can download the model. All right.

Okay, now see, guys, Llama 2 actually has different variants. There is a 7 billion parameter variant, the 7B model; there is also a 13B model, which means the model has 13 billion parameters; and there is another model called 70B, which has 70 billion parameters. If you want to use these bigger models you need a machine with a good configuration. I'll tell you how we can use those as well, but first of all I will tell you how we can use the 13B model, and what the approach to access it would be. You can't directly load the actual model, because the full-precision model will never load on a low-configuration machine; you need a good amount of memory and a good GPU. Instead, I will show you one alternative, where we'll be using something called a quantized version of the model. Okay?

And now see, guys, this is the benchmark. On this dataset they have evaluated several different large language models. As you can see, this is the MPT 7B model, and its accuracy score is 26.8%. They also evaluated the Falcon 7 billion parameter model, and its accuracy score is 26.2%. And the Llama 2 7 billion model gets an accuracy score of 45.3%. Now see the accuracy improvement, guys; isn't it a good model? What do you feel? See, Llama 2 is claiming that this model is better than GPT-3.5 Turbo. Has anyone used the GPT-3.5 Turbo model before? Maybe you have. Yes, so they're claiming that this model is better than GPT-3.5 Turbo, and that's why they have also given the benchmark here. And see, this is Llama 2 13 billion and its accuracy, and they also evaluated MPT's 30 billion parameter model, and this is the accuracy it got. They also evaluated
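The reason the quantized version matters is simple arithmetic: the weights alone dominate memory. A rough back-of-envelope (my own illustration, ignoring activations and the KV cache) shows why full-precision 13B won't fit on a typical laptop while a 4-bit quantized build can:

```python
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate storage for the weights alone, in decimal gigabytes.

    params_billion * 1e9 parameters, each taking bits_per_param / 8 bytes.
    """
    return params_billion * 1e9 * bits_per_param / 8 / 1e9


for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
    fp16 = weight_memory_gb(params, 16)  # full half-precision weights
    q4 = weight_memory_gb(params, 4)     # 4-bit quantized weights
    print(f"Llama 2 {name}: ~{fp16:.0f} GB in fp16, ~{q4:.1f} GB at 4-bit")
```

So the 13B model needs on the order of 26 GB just for fp16 weights, but only about 6.5 GB once quantized to 4 bits, which is why it becomes usable on a modest machine.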
the Falcon 40 billion parameter model, which got an accuracy of 55.4%. Then this is the Llama 1 model and its accuracy, and finally the Llama 2 70 billion model, and see, this is the highest accuracy they got: 68.9%. That's how they evaluated on several different open-source datasets; you can see the dataset names here. All right.

And these are the partners and supporters of Llama 2, like Hugging Face, Nvidia, and Intel; they are also using these models. All right, now guys, did you apply for the permission here? Did you apply for the permission, everyone? If not, no need to worry; I will show you one alternative you can follow. Okay, no need to worry about it.

"Can we use Llama 2 for translation of code into Python? Fine-tuning on custom data?" Yes, you can do that. Llama 2 has different model variants, and I will tell you about them. In the future we'll also see how we can fine-tune the Llama 2 model; it is also possible on your custom data. It is even already included in our paid course. I think you know there is also a paid version of this course, and there we have already introduced the fine-tuning techniques as well. All right, so guys, so far is everything clear, everything fine? You can let me know. If you have any question you can ask me, otherwise I will continue with the session.

"What is the name of the course? Could you please give me the information?" "Can we run this model over Docker?" See, that will also be discussed, okay, in the paid course. There we'll show the deployment, we'll also integrate Docker, and we'll also integrate CI/CD, so everything will be discussed there. All right.

Now see, if you want to play with this Llama 2 model like your ChatGPT, there is another website where the model is hosted.
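Before we get to the hosted playground, here is a small sketch of what chatting with a local quantized Llama 2 looks like using the `llama-cpp-python` library. The GGUF filename in the usage comment is hypothetical (substitute whichever quantized build you downloaded), and `ask` is my own helper name; the prompt wrapper follows the `[INST]`/`<<SYS>>` template the Llama 2 chat models were trained on.

```python
def llama2_chat_prompt(user_msg: str,
                       system_msg: str = "You are a helpful assistant.") -> str:
    """Wrap one user turn in the Llama 2 chat prompt template."""
    return f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg} [/INST]"


def ask(model_path: str, question: str) -> str:
    """Run one question through a local quantized Llama 2 chat model."""
    # Optional third-party dependency: pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_ctx=2048)
    out = llm(llama2_chat_prompt(question), max_tokens=256)
    return out["choices"][0]["text"]


# Usage (once you have a quantized model file on disk):
# print(ask("llama-2-13b-chat.Q4_K_M.gguf",
#           "Explain vector databases in one line."))
```

Getting the chat template right matters: the chat-tuned variants were fine-tuned on exactly this format, and skipping the `[INST]` wrapping tends to produce noticeably worse answers.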

