Tabitha ☢️[she/her]

  • 0 Posts
  • 15 Comments
Joined 2 years ago
Cake day: January 1st, 2023

  • The open-source LLMs and DiffusionBee already work well enough on Apple Silicon. I know most people have delusional ideas about “AI”, but being able to run StarCoder/Mixtral (or whatever model is hot in a year or two) at a bigger size or faster on an M4 would easily be a hot selling point. The local ones are getting better and the tooling is in its infancy (see the first sketch at the end of this comment for what running one locally looks like).

    To be clear, what I’m excited about is text/code autocompletion (think your phone’s predictive text, but orders of magnitude smarter and more powerful) that performs great, can use your code repo (or, for non-programming tasks, a folder of documents) for context (a feature called RAG, retrieval-augmented generation; see the second sketch below), and lets you chat with an LLM about your code locally, with none of it requiring your computer to be online or uploading everything to Microsoft.

    ATM most of this stuff is in its infancy, and very few tools are easy to install, are clear about what you’re supposed to install, are clear about what hardware you need, do more than one thing, or work well together.
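
    To make the first point concrete, here’s a minimal sketch of running a model locally, assuming llama-cpp-python (which uses Metal on Apple Silicon) and a quantized GGUF file you’ve already downloaded; the model path and prompt are placeholders:

    ```python
    # Minimal sketch: run a local GGUF model with llama-cpp-python,
    # which is Metal-accelerated on Apple Silicon. The file path is
    # hypothetical: point it at any quantized GGUF build of e.g.
    # Mixtral or StarCoder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mixtral-8x7b-instruct.Q4_K_M.gguf",  # placeholder path
        n_ctx=4096,       # context window size
        n_gpu_layers=-1,  # offload every layer to the GPU
    )

    out = llm("Write a Python function that reverses a string.", max_tokens=128)
    print(out["choices"][0]["text"])
    ```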
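
    And here’s a toy version of the RAG idea from the second paragraph: embed your documents, retrieve the ones closest to a question, and paste them into the model’s prompt. sentence-transformers and the embedding model name are assumptions on my part; real tools add chunking, a vector store, and so on:

    ```python
    # Toy RAG sketch: embed docs locally, rank them against a question by
    # cosine similarity, and build a context-stuffed prompt for a local LLM.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = [  # stand-ins for files from a code repo or documents folder
        "auth.py: validate_token() checks the JWT signature and expiry.",
        "db.py: get_user() loads a user row from SQLite by id.",
        "api.py: the /login route issues tokens via validate_token().",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local model
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    question = "Where are JWTs verified?"
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]

    scores = doc_vecs @ q_vec            # cosine similarity (unit vectors)
    top = [docs[i] for i in np.argsort(scores)[::-1][:2]]

    prompt = "Context:\n" + "\n".join(top) + f"\n\nQuestion: {question}\nAnswer:"
    print(prompt)  # feed this to the local model from the sketch above
    ```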