Really nice FAQ-style format for today’s Stratechery update on DeepSeek. Even if you don’t read the whole thing, you can tell something big is happening. I downloaded a medium-sized R1 model last week to test with Ollama on my Mac. Very impressed.
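For anyone who wants to try the same thing, a minimal sketch of the Ollama workflow (the `deepseek-r1` model tags and sizes below reflect the Ollama library at the time of this thread; exact download sizes vary by quantization, and you need Ollama installed and running first):

```shell
# Download a mid-sized distilled R1 model from the Ollama library.
ollama pull deepseek-r1:14b

# Start an interactive chat session with it in the terminal.
ollama run deepseek-r1:14b
```

The same `pull`/`run` pattern works for the larger tags (`70b`, `671b`) mentioned below, RAM and disk permitting.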

Seth is a Perpetual Startup ⁂

@manton I’m tempted.

John Spurlock

@manton which one did you try?

I was just trying out the 70b version (43 gigs) over the weekend to stress test my new MacBook - really got the GPUs working, fans whirring

gonna try out the big one, the 671b version (404 gigs), later today

Devon Greene

@js Wow! Did you play with the smaller models? I’ve got 14b set up now, and I’m trying to decide if it’s overkill for summarizing webpages - it definitely takes its time with chat responses relative to the large online models I’ve used (this is my first Ollama experiment).

Manton Reece

@js I only tried the 14b one and it ran pretty fast. This is an M3 with 48 GB RAM, which feels like both a lot and not nearly enough for the huge models.
