Google Ends Support For num=100 in SERPs

Editor’s note: Google added another corpse to the ever-increasing pile of useful things they’ve killed, and this one has pretty big implications for most SEO tools and how the data in your Search Console looks. Zak Kann wrote a great overview of what’s happening, why it’s happening, and what it means for SEOs.

There’s a possibility your Google Search Console (GSC) average position suddenly leapt up this week.

Google quietly removed the &num=100 parameter, the not-so-secret shortcut that allowed SEOs and tools to retrieve the top 100 search results in one go. 

This isn’t just a minor technical detail. For many SEO practitioners, rank trackers, and competitive intelligence platforms, it upends longstanding workflows, pricing models, and what data we can reliably trust.

What exactly changed

Until September 2025, appending &num=100 to a Google search URL meant you could load (or scrape) positions 1-100 on a single page instead of paging through results 10 at a time. 

That was a necessary efficiency hack for rank trackers and scrapers.
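
To make the change concrete, here is a minimal sketch of how a tracker's request pattern shifts. The q, num, and start parameters are Google's standard search URL parameters; everything else (function names, the example query) is illustrative only.

```python
# Illustrative sketch: request pattern before vs. after the num=100 removal.
from urllib.parse import urlencode

BASE = "https://www.google.com/search"

def serp_urls_before(query: str) -> list[str]:
    # One request used to cover positions 1-100.
    return [f"{BASE}?{urlencode({'q': query, 'num': 100})}"]

def serp_urls_after(query: str, depth: int = 100) -> list[str]:
    # Now the same depth requires paging 10 results at a time via start=.
    return [
        f"{BASE}?{urlencode({'q': query, 'start': offset})}"
        for offset in range(0, depth, 10)
    ]

print(len(serp_urls_before("rank tracking")))  # 1 request
print(len(serp_urls_after("rank tracking")))   # 10 requests
```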

Now?

Google is killing that hack in its latest effort to stop SERP scrapers, especially those feeding rival LLMs.

Attempts to use &num=100 now revert to the default behavior (10 results per page) or are ignored altogether. The change appears to have rolled out around September 10-12, inconsistently at first, then more broadly a few days later.

Google has made no formal announcement about this tweak, so most of what we know comes from Search Engine Roundtable coverage, vendor updates, and community analysis.

Why this change matters

For SEOs, the removal of &num=100 is painful for several reasons:

  • 10x more work for the same data
    Tools that were collecting the top 100 results in one request must now make 10 separate requests to get the same coverage. That means more bandwidth, more server load, heavier proxy or VPN usage, and a higher chance of CAPTCHAs or outright blocks. Everything needs to scale up.
  • Cost increases and infrastructure strain
    More requests = more infrastructure. Rank tracking platforms, scraping tools, and competitive intelligence suites are reporting higher expenses. Semrush explained that pulling deeper rankings is now “significantly more resource-intensive,” and some vendors are refreshing certain datasets every two days instead of daily. A rough sketch of how the request volume and cost scale appears after this list.
  • Reporting changes & metric volatility
    Because many tools aren’t tracking deeply anymore, data for deeper rankings is less reliable or updated less often. Google Search Console itself seems to show shifts: impressions dropping, average position rising. Brodie Clark and others argue this is less about real user behavior and more about bot/scraper data falling away.
  • Trust & data accuracy concerns
    Many SEOs rely on tools to see where they rank beyond page 1. When that visibility is reduced or made more expensive, decisions about content and strategy may be based on incomplete data.
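
To put rough numbers on the bullets above, here is a back-of-envelope sketch. The keyword count, refresh cadence, and per-request cost are invented for illustration and are not any vendor's actual pricing.

```python
# Back-of-envelope sketch of the request-volume increase for a rank tracker.
# All figures below are assumptions for illustration only.
KEYWORDS = 50_000             # keywords tracked across all clients
REFRESHES_PER_MONTH = 30      # daily refresh
COST_PER_1K_REQUESTS = 0.80   # assumed blended proxy / SERP-API cost in USD

def monthly_cost(requests_per_keyword: int) -> float:
    total_requests = KEYWORDS * REFRESHES_PER_MONTH * requests_per_keyword
    return total_requests / 1_000 * COST_PER_1K_REQUESTS

before = monthly_cost(1)    # one num=100 request covered positions 1-100
after = monthly_cost(10)    # ten paginated requests for the same depth
print(f"before: ${before:,.0f}/mo, after: ${after:,.0f}/mo ({after / before:.0f}x)")
```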

How the SEO ecosystem is reacting

A number of prominent tools and vendors have already started to adjust:

  • AccuRanker has announced it will no longer track the top 100 by default; many users will now see tracking limited to the top 20 results.
  • Semrush posted notices that although top-10 rankings are unaffected, deeper rank updates might be less frequent.
  • SeoClarity claims its users are seeing minimal disruption, likely because they already had infrastructure that didn’t rely on the 100-result parameter.
  • SEO API providers like DataForSEO flagged that costs would go up 10x to get all 100 results.

Community commentary has been vocal. Tim Soulo of Ahrefs warned that ranking data below the top 20 is “likely going away.”

Ryan Jones noted that scraping pressures from AI tools may have accelerated this: “People are scraping so much, so aggressively for AI that Google is fighting back.”

Broader implications beyond traditional SEO

While SEOs are the most directly affected, this tweak ripples out to adjacent industries:

  • Competitive intelligence and ad tech tools that monitor SERP rankings or ad placements will need to scale up effort and cost, or limit their scope.
  • Market research firms scraping SERPs for content gap analysis will face similar cost/performance trade-offs.
  • AI and data-driven platforms relying on SERP data for training may see pipelines slow down or get more expensive.
  • Agencies will need to explain to clients why rank reports look different and recalibrate expectations.

What you should do now

To adapt (and stay ahead), here are some practical steps:

  • Audit your rank tracking: Check which tools you use and see whether they’ve adjusted settings, limits, or pricing.
  • Adjust your reporting: Be transparent about what depth of ranking you’re showing (Top 20? Top 50?).
  • Focus on high-impact keywords: Prioritize those likely to move onto pages 1-2.
  • Watch vendor updates: Many will introduce workarounds such as better proxy networks, user-panel data, or adjusted pricing tiers.
  • Update benchmarks: Treat sudden “average position” improvements with caution; they may reflect data changes, not SEO wins. A sketch of one way to check follows this list.
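
If you want to sanity-check whether a sudden “average position” jump coincides with the mid-September rollout rather than a genuine ranking gain, a comparison like the sketch below against the Search Console API is one option. It assumes the google-api-python-client library, existing OAuth credentials in a `creds` variable, and a placeholder property URL and date windows.

```python
# Hedged sketch: compare GSC impression-weighted average position
# before vs. after the rollout window. SITE_URL and dates are placeholders.
from googleapiclient.discovery import build

SITE_URL = "https://www.example.com/"  # placeholder property

def weighted_avg_position(service, start_date: str, end_date: str) -> float:
    resp = service.searchanalytics().query(
        siteUrl=SITE_URL,
        body={
            "startDate": start_date,
            "endDate": end_date,
            "dimensions": ["query"],
            "rowLimit": 5000,
        },
    ).execute()
    rows = resp.get("rows", [])
    impressions = sum(r["impressions"] for r in rows)
    if not impressions:
        return 0.0
    # Weight each query's position by its impressions.
    return sum(r["position"] * r["impressions"] for r in rows) / impressions

# Usage (requires valid OAuth credentials in `creds`):
# service = build("searchconsole", "v1", credentials=creds)
# before = weighted_avg_position(service, "2025-08-25", "2025-09-07")
# after = weighted_avg_position(service, "2025-09-15", "2025-09-28")
# print(f"avg position before: {before:.1f}, after: {after:.1f}")
```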

Going forward

The removal of &num=100 from Google’s SERP queries represents a significant shift in how ranking data can be collected at scale.

It’s a signal that Google is closing loopholes that allowed easy, large-scale scraping, whether by bots or by tools.

For SEOs, it means focusing on what really matters: visibility in the positions that drive traffic and revenue. Tools will adapt, workflows will change, but the reminder is clear: our access to search data is always borrowed, never guaranteed.

Author

  • Zak Kann is an AI automation specialist who helps small and mid-sized businesses streamline operations with cutting-edge workflows. He is also the creator of Content Raptor, a content optimization SaaS that integrates with Google Search Console to uncover growth opportunities.