Demo: Rapid prototyping with Gemma and Llama.cpp

Published 2024-04-01

All comments (20)
  • @banzai316
    Very cool, thank you! I like this format with demos. We are developers!
  • @ayoubachak2154
    I've used Gemma for a benchmark in a research project I'm working on, where I compared human results against AI. Gemma was the closest after BLOOM 176B, followed by models like Mistral Instruct 7B and Llama 34B; even the 2B version did pretty well. Great work, team 👏🏻
  • @flynnmc9748
    This is a fantastic format for a talk, insightful and engaging for a viewer!!!
  • @judevector
    Wow this is so cool 😎, developers changing the world
  • @zencephalon
    Good demo, nice tooling suggestions out of this
  • @arpitkumar4525
    What are the minimum system requirements for running a model locally?
  • @voidan
    How do you connect LM Studio to llama.cpp? You used a preset which was probably custom.
  • @johnkost2514
    Wrapped in the llamafile runtime it becomes an even better single file .. oh yes!
  • @parisneto
    CODE would be awesome, as well as knowing the SPEC of the notebook, as it's easy to buy a sub-$1K or a $5K+ machine at the Apple Store depending on so many factors…