Adventures in Training: Axolotl
I wanted to test out very basic training using a standard library across both Nvidia and Mac, and thought I’d give Axolotl a shot since it works (technically) on Mac. The…
When doing inference with Llama 3 Instruct on Text Generation Web UI, you can get pretty decent inference speeds up front on the M1 Mac Ultra, even with a…
I have two dual Nvidia 3090 Linux servers for inference and they’ve worked very well for running large language models. 48GB of VRAM will load models up to 70B at…
In this post I’ll be walking through setting up Text Generation Web UI for inference on GGUF models using llama.cpp for Mac. Future posts will go deeper into optimizing Text…
Coming from the world of Linux, Mac is a little different. The steps below are what I do to set up a Mac M1 Ultra for use as a…