Musings on AI

Mac Nvidia

Adventures in Training: Axolotl

Dracones April 24, 2024

I wanted to test out very basic training using a standard library across both Nvidia and Mac, and thought I’d give Axolotl a shot since it works (technically) on Mac. The…

Llama.cpp Mac Oobabooga

Llama 3 on Web UI

Dracones April 22, 2024

When doing inference with Llama 3 Instruct on Text Generation Web UI, up front you can get pretty decent inference speeds on the M1 Mac Ultra, even with a…

Llama.cpp Mac Nvidia

The Case for Mac: Power Usage

Dracones April 22, 2024

I have two dual Nvidia 3090 Linux servers for inference, and they’ve worked very well for running large language models. 48GB of VRAM will load models up to 70B at…

Mac Oobabooga

Text Generation Web UI (llama.cpp)

Dracones April 21, 2024

In this post I’ll be walking through setting up Text Generation Web UI for inference on GGUF models using llama.cpp on a Mac. Future posts will go deeper into optimizing Text…

Mac

Mac for a Linux Nerd

Dracones April 20, 2024

Coming from the world of Linux, Mac is a little different. The steps below are what I do to set up a Mac M1 Ultra for use as a…
