Wednesday, September 11, 2024
Show HN: Tune LLaMa3.1 on Google Cloud TPUs https://ift.tt/73lSDsJ
Hey HN, we wanted to share our repo where we fine-tuned Llama 3.1 on Google TPUs. We're building AI infrastructure to fine-tune and serve LLMs on non-NVIDIA hardware (TPUs, Trainium, AMD GPUs).

The problem: right now, roughly 90% of LLM workloads run on NVIDIA GPUs, even though equally powerful and more cost-effective alternatives exist. For example, training and serving Llama 3.1 on Google TPUs is about 30% cheaper than on NVIDIA GPUs. But developer tooling for non-NVIDIA chipsets is lacking.

We felt this pain ourselves. We initially tried using PyTorch/XLA to train Llama 3.1 on TPUs, but it was rough: the XLA integration with PyTorch is clunky, libraries are missing (bitsandbytes didn't work), and Hugging Face errors were cryptic. So we took a different route and translated Llama 3.1 from PyTorch to JAX. Now it runs smoothly on TPUs!

We still have challenges ahead. For example, there is no good LoRA library in JAX yet (see the sketch at the end of this post), but this feels like the right path forward.

Here's a demo of our managed solution: https://ift.tt/XSFRUHN

Would love your thoughts on our repo and vision as we keep chugging along! https://ift.tt/L6OCeR7
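Since we called out the LoRA gap above, here is a minimal sketch of what a hand-rolled LoRA linear layer and training step might look like in pure JAX. This is illustrative only, not the code from our repo: the names (init_lora_params, lora_linear, train_step) and all shapes and hyperparameters are assumptions.

import jax
import jax.numpy as jnp

def init_lora_params(key, in_dim, out_dim, rank=8):
    # Frozen base weight W plus trainable low-rank factors A and B.
    k_w, k_a = jax.random.split(key)
    return {
        "w": jax.random.normal(k_w, (in_dim, out_dim)) * 0.02,  # frozen base weight
        "a": jax.random.normal(k_a, (in_dim, rank)) * 0.02,     # trainable factor A
        "b": jnp.zeros((rank, out_dim)),  # trainable factor B, zero-init so the delta starts at 0
    }

def lora_linear(params, x, alpha=16.0, rank=8):
    # y = x @ W + (alpha / rank) * x @ A @ B
    base = x @ jax.lax.stop_gradient(params["w"])  # never update the base weight
    delta = (x @ params["a"]) @ params["b"]
    return base + (alpha / rank) * delta

@jax.jit
def train_step(trainable, frozen, x, y):
    # One SGD step on the LoRA factors only; the base weight stays frozen.
    def loss_fn(tp):
        pred = lora_linear({**frozen, **tp}, x)
        return jnp.mean((pred - y) ** 2)
    grads = jax.grad(loss_fn)(trainable)
    return jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, trainable, grads)

Usage (again, shapes are made up for illustration):

key = jax.random.PRNGKey(0)
p = init_lora_params(key, in_dim=512, out_dim=512)
frozen, trainable = {"w": p["w"]}, {"a": p["a"], "b": p["b"]}
x, y = jnp.ones((4, 512)), jnp.zeros((4, 512))
trainable = train_step(trainable, frozen, x, y)

Scaling this up to a full Llama 3.1 checkpoint on a TPU pod also means sharding parameters and activations across devices (e.g. with jax.sharding.Mesh and NamedSharding), which is the kind of plumbing the repo aims to handle.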