AI web search

If you haven’t been following the latest AI models closely, you may have missed what is happening with integrating web search results into answers. It used to be that you had two options:

  • Use the model’s built-in knowledge, usually with a training cut-off of a year ago. That was extremely fast, but it might hallucinate when it reached the limits of its knowledge.
  • Use “deep research” to let the AI gather info from the web and compile a comprehensive report. That took 5-10 minutes and the result was overkill most of the time.

Now it’s more streamlined. I’ve been using OpenAI’s o4-mini and it seems to work something like this:

  • Ask it a question that could benefit from searching the web to supplement the model’s built-in knowledge.
  • AI figures out a handful of queries for the web and feeds the search results back into its reasoning process.
  • In some cases it might use those results to run follow-up searches for more pages.
  • Then it uses everything it learned to produce the answer.
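The loop above can be sketched in code. This is purely a guess at the shape of the process, not how OpenAI actually implements it; `generate_queries`, `search_web`, and the stopping rule are all hypothetical stand-ins for model and search-API calls.

```python
# Hypothetical sketch of the agentic search loop described above.
# The stubs below stand in for model calls and a search API;
# none of them are real OpenAI functions.

def generate_queries(question, notes):
    # Stub: a real system would ask the model for follow-up queries
    # based on what it has gathered so far.
    return [] if notes else [question]

def search_web(query):
    # Stub: a real system would hit a search API and fetch pages.
    return [f"result for: {query}"]

def answer_with_search(question, max_rounds=3):
    notes = []
    for _ in range(max_rounds):
        queries = generate_queries(question, notes)
        if not queries:  # the model decides it has enough context
            break
        for q in queries:
            notes.extend(search_web(q))
    # Final step: reason over everything gathered to produce the answer.
    return f"answer to {question!r} grounded in {len(notes)} result(s)"
```

The key idea is that search happens inside the reasoning loop, possibly over several rounds, rather than as a single one-shot lookup.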

This process takes somewhere around 30 seconds. It’s great for asking questions about coding with recent frameworks, or really anything that changes often.

In a longer post about this, Simon Willison writes:

This turns out to be a huge deal. I’ve been throwing all kinds of questions at ChatGPT (in o3 or o4-mini mode) and getting back genuinely useful answers grounded in search results.

He also comments on the downside to replacing humans viewing web pages:

This also means that a bunch of the potential dark futures we’ve been predicting for the last couple of years are a whole lot more likely to become true. Why visit websites if you can get your answers directly from the chatbot instead?

The results are so good that I’m now asking AI for simple queries that Google would be equally good for. Using AI essentially automates the workflow of getting 10 links from Google, clicking on 3-4 of them, then skimming the web pages to get your answer.

I don’t know where all of this is going. It feels like a pretty big shift, though.

Jasraj Singh Hothi (‘Jas’)

hey Manton, I see a light-gray subtitle underneath the heading of this post. May I ask how you do that?

Jarrod Blundy

Are you switching specifically to o4-mini when you ask those kinds of questions? Or are you using o4-mini for everything? I’m not using them enough to keep straight what’s best for what, so I just use the default for everything.

Jarrod Blundy

@jasraj It’s the new(ish) post summary feature that you can write manually, or have AI generate a summary for you: help.micro.blog/t/post-su…

Manton Reece

@jasraj That’s the new “summary” feature for longer blog posts with titles. Check out this help page with some details. When set, the summary will show up in the Micro.blog timeline and also when cross-posting.

Manton Reece

@jarrod I’m using o4-mini about half the time. I switch back to 4o when I need a very quick answer or I want something I know doesn’t need any “reasoning”, like feedback about a draft post or a definition of something. I think eventually all of this will be automatic so no model switching.

Todor Vlaev

Someone is likely frantically looking into how to serve and monetize ads when the web search is used, hehe. Concerns aside, this is a great development for the usefulness of the chatbots.

Jasraj Singh Hothi (‘Jas’)

@jarrod thank you, both.
