Google’s new AI features and capabilities for Search

Posted by Edith MacLeod on 21 May, 2024
New AI-powered capabilities include AI-organized results pages, multi-step reasoning and planning, and upgrades to Lens and Circle to Search.

AI capabilities for Search.

While the rollout of AI Overviews generated the most attention for Search at Google’s developer conference I/O 2024, several other AI-powered search features were also outlined.

AI-organized results pages

Google Search will soon start using generative AI to create AI-organized results pages in some categories, custom-built for you. The use of generative AI for organizing and ranking search results is likely to have far-reaching impacts on publishers.

“When you’re looking for fresh ideas, it can take a lot of work to find inspiration and consider all your options. Soon, when you’re looking for ideas, Search will use generative AI to brainstorm with you and create an AI-organized results page that makes it easy to explore.”

“You’ll see helpful results categorized under unique, AI-generated headlines, featuring a wide range of perspectives and content types.”

Speaking at I/O 2024, Head of Search Liz Reid said this brought AI to the whole page. The Gemini model uncovers “the most interesting angles to explore, and organizes those results into helpful clusters”. 

The example of a query about restaurants in Dallas for an anniversary dinner shows clusters such as live music and historic charm - aspects you might not have thought of. The model also uses contextual factors like the time of year.

AI-organized results page.

The AI pulls everything together into “a dynamic whole page experience”.

The feature will be rolling out soon in the US in English for queries that suggest the searcher is looking for inspiration. It will open with dining and recipes, followed by movies, music, books, hotels, shopping and more.

Watch the I/O 2024 keynote on AI-organized results pages from Liz Reid at the 10:50 min mark.

Multi-step reasoning

Google Gemini’s multi-step reasoning capabilities allow you to ask complex questions in one go, rather than breaking them down into multiple searches.

“For example, maybe you’re looking for a new yoga or pilates studio, and you want one that’s popular with locals, conveniently located for your commute, and also offers a discount for new members. Soon, with just one search, you’ll be able to ask something like: find the best yoga or pilates studios in Boston and show me details on their intro offers, and walking time from Beacon Hill.”

Multi-step reasoning will be coming to AI Overviews in Search Labs soon.

Planning capabilities

You can get help creating plans with AI-powered planning capabilities directly in Search. For example, a query like “create a 3 day meal plan for a group that’s easy to prepare” would give you a wide range of recipes from across the web.

Planning capabilities.

You can make adjustments such as swapping a meal to a vegetarian dish, and export your meal plan to Docs or Gmail.

Meal and trip planning are available now in Search Labs, with customization capabilities and further categories including parties and workouts coming later in the year.

Visual search

This feature gives you the ability to record a video using Lens and ask questions with it, which for some scenarios is easier than trying to explain the problem.

Google gives the example of a record player that isn’t working properly because the needle is drifting.

Searching with video.

You’ll get an AI Overview with steps you can try, and resources to troubleshoot.

The feature will be available soon in Search Labs.

Adjusting AI Overviews

This feature provides three different options for your AI Overview and will be available soon in Search Labs.

Depending on who your audience is, or how well versed you already are in the topic, you might want to simplify the language or get more detail. The options are:

  • Original
  • Simpler
  • Break it down

Adjusting answer.

For more details on all the above capabilities and features, see Head of Search Liz Reid’s blog post on generative AI in Search.

Expanding results for Google Lens and Circle to Search

Separately, Google’s VP of Engineering Rajan Patel announced upgrades to Google Lens and Circle to Search, expanding the types of results these can provide to go beyond visual matches.

Writing on X, Patel said results would now include more links and facts from the Knowledge Graph, as well as AI Overviews. The upgrades bring the capability of AI into Search, allowing Google to better understand what’s shown in an image and surface more relevant results within that context.

He gave the example of searching with a photo of a landmark, where the results would now include a knowledge panel and web links as though you had searched for the name of the building itself.

Richer results for visual search.

Patel said broadening the results would give more sites the opportunity to show up and get clicks.

“What's great is that by broadening the types of results we're showing, it gives more websites the opportunity to show up and get clicks, if they've got the best info for that visual question. It also helps us answer a broader set of questions than we could in the past.”

He added that more updates were in the pipeline for visual query results, including helpful filters for different types of content, similar to what is already provided on Search for text queries.
