NSHipster is back again with a bunch of tips for running AI models on a Mac with Ollama. Also this:
If you wait for Apple to deliver on its promises, you’re going to miss out on the most important technological shift in a generation.
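For anyone who wants to try this before reading the full piece: the basic Ollama workflow on a Mac is just two commands. This is a minimal sketch, assuming Ollama is already installed (e.g. via `brew install ollama`) and the server is running; the model name and prompt are only illustrative.

```shell
# Download a model to the local library (run once per model)
ollama pull llama3

# Run an interactive session, or pass a one-off prompt as an argument
ollama run llama3 "Summarize this paragraph in one sentence."
```

Everything runs locally; after the initial pull, no network connection is needed.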
I’ve given up on Apple’s “AI” pursuits. And while a “commitment to privacy” is good PR (and a convenient excuse for Siri’s sad state), the fact that ChatGPT can both supply exactly what I need and do it in a friendly, familiar way leaves no room for a lady saying, “Here’s what I found on the web.”
Thank you. Your post led me to use Llama3 on the CLI, then Ollama WebUI, and finally LM Studio. My afternoon is shot, but I had fun and now have some locally running AI. I’m going to need a faster computer, though; my CPU and RAM were maxed out 😆
Thanks for sharing - nice piece that removes a lot of the “magic” ✨