Thinking about AGI. The big step that is missing is personal AI being able to learn when it answers a question. So if I use deep research and my AI goes off and spends 10 minutes researching an answer, all of that should be fed back into the model for later.

Mark Stosberg

@manton Sounds like the process used to train DeepSeek more cheaply.

The challenge, though, is knowing whether all that thinking resulted in a useful answer.

Gemini now has a "Remember" feature. I can tell it that I use Arch Linux, for example, and it will factor that into future answers. I just tell it "Remember that...".

So if it comes up with a good answer, at least for your own use, you could tell it to remember that answer.
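Under the hood, a feature like that could be as simple as keeping a list of remembered facts and prepending them to every future prompt. A rough sketch in Python (the function names and `call_model` placeholder are made up for illustration, not Gemini's actual API):

```python
# Hypothetical sketch of a "Remember" feature: store facts the user asks
# the assistant to keep, then prepend them to each future prompt as context.

memories: list[str] = []

def call_model(prompt: str) -> str:
    """Placeholder standing in for a real LLM API call."""
    return f"(model response to: {prompt!r})"

def remember(fact: str) -> None:
    """Save a fact the user explicitly told the assistant to remember."""
    memories.append(fact)

def answer(question: str) -> str:
    """Build a prompt that includes remembered facts, then ask the model."""
    context = "\n".join(f"- {m}" for m in memories)
    prompt = f"Known facts about the user:\n{context}\n\nQuestion: {question}"
    return call_model(prompt)

remember("The user runs Arch Linux.")
print(answer("How do I install Docker?"))  # the Arch Linux fact rides along
```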

Manton Reece

@markstos Yep, ChatGPT has a similar memory feature now. It’s close but I think still too limited compared to what it will eventually be. Perhaps a problem with AGI will actually be remembering useless info, just like humans!
