Google March 2026 Core Update, Crawl Limits & Gemini Referral Traffic: Weekly SEO Roundup
This week brought a cluster of meaningful developments across the search world. Google launched its first broad core update of 2026, its engineering team shed new light on how Googlebot fetches and processes pages, and fresh third-party data revealed that Gemini’s referral traffic has more than doubled in just two months. Here is a breakdown of what happened and why it matters for your SEO strategy.
Google’s March 2026 Core Update Is Now Live
Google confirmed the rollout of its March 2026 broad core update, marking the first such update of the year. Rollouts of this kind typically take up to two weeks to fully propagate across search results worldwide.
What We Know So Far
Google described this as a standard broad core update focused on surfacing content that is more relevant and satisfying across all website categories. It began rolling out just two days after the March spam update concluded, an update that wrapped up in under 20 hours.
The last broad core update before this was in December 2025, which finished on December 29. That means search rankings had not seen a major recalibration for roughly three months. A separate February 2026 update affected only Google Discover, leaving Search results unchanged since late December.
What SEOs Should Watch
Ranking fluctuations may continue appearing in waves throughout early April. Google advises waiting at least one full week after the rollout completes before drawing conclusions from Search Console performance data. When analysing changes, use a baseline period predating March 27 for accurate comparisons.
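To make that baseline comparison concrete, here is a minimal Python sketch of the kind of check you might run on a Search Console performance export. All dates and click counts below are hypothetical, and the window boundaries are assumptions based on the March 27 guidance above, not official thresholds:

```python
from datetime import date

def average_clicks(daily_clicks, start, end):
    """Mean daily clicks for dates in [start, end], inclusive."""
    values = [clicks for day, clicks in daily_clicks.items() if start <= day <= end]
    return sum(values) / len(values)

# Hypothetical daily click counts exported from Search Console.
daily_clicks = {
    date(2026, 3, 20): 1200,
    date(2026, 3, 21): 1180,
    date(2026, 3, 26): 1210,
    date(2026, 4, 10): 980,
    date(2026, 4, 11): 1010,
    date(2026, 4, 12): 990,
}

# Baseline window predates the March 27 rollout start; the comparison
# window begins well after the rollout is expected to complete.
baseline = average_clicks(daily_clicks, date(2026, 3, 1), date(2026, 3, 26))
post_update = average_clicks(daily_clicks, date(2026, 4, 10), date(2026, 4, 30))
change_pct = (post_update - baseline) / baseline * 100
print(f"Baseline avg: {baseline:.0f}, post-update avg: {post_update:.0f} ({change_pct:+.1f}%)")
```

The key design point is simply keeping the two windows on opposite sides of the rollout, so that mid-rollout volatility never contaminates either average.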
John Mueller of Google’s Search Relations team clarified that core updates and spam updates serve entirely different purposes. On why core updates roll out gradually rather than all at once, Mueller explained that multiple internal teams and systems contribute separate changes, each requiring its own staged deployment. That is why volatility tends to arrive in bursts rather than a single wave.
For detailed coverage of the March 2026 Spam Update and its record-breaking completion time, read our article: Google Tests AI Headlines & Completes March 2026 Spam Update in Record Time
Googlebot’s 2 MB Crawl Limit: What the Architecture Actually Looks Like
Google’s Gary Illyes published a detailed blog post this week revealing how Googlebot fits into Google’s broader, centralised crawling infrastructure. The post adds important technical context to the 2 MB crawl limit that Google began discussing earlier this year.
The Architecture Behind the Crawl
Illyes explained that Googlebot is just one client of a shared crawling platform used across multiple Google products, including Google Shopping and AdSense. Each product runs its own crawler under a different name but routes requests through the same underlying system. Crucially, HTTP request headers count toward the 2 MB byte limit, and external resources such as CSS and JavaScript files have their own separate byte counters.
The platform’s default fetch limit is 15 MB. Googlebot for Search overrides this downward to 2 MB. When that threshold is reached, the crawler does not discard the page. Instead, it stops fetching and passes the truncated content to Google’s indexing pipeline as though it were the complete file. Anything beyond the 2 MB mark is simply never indexed.
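The truncation behaviour described above can be sketched in a few lines of Python. This is a simplified model of what the post describes, not Google's actual code: headers are counted against the byte budget first, and the body is cut off once the combined total reaches the limit. The function name and example sizes are illustrative assumptions:

```python
TWO_MB = 2 * 1024 * 1024  # the Googlebot-for-Search fetch limit described above

def truncate_fetch(header_bytes: bytes, body_bytes: bytes, limit: int = TWO_MB) -> bytes:
    """Simulate the described truncation: headers consume the byte budget
    first, then the body is kept only up to the remaining allowance.
    Anything past the cutoff is never passed on to indexing."""
    remaining = limit - len(header_bytes)
    if remaining <= 0:
        return b""  # pathological case: headers alone exhaust the budget
    return body_bytes[:remaining]

# Hypothetical example: 1 KB of headers plus a 3 MB HTML body.
headers = b"X" * 1024
body = b"<html>" + b"a" * (3 * 1024 * 1024)
indexed = truncate_fetch(headers, body)
# Only (limit - header size) bytes of the body survive; the trailing
# megabyte or so of markup is silently dropped before indexing.
```

The practical takeaway matches the post: if critical content or structured data sits near the bottom of a very large HTML file, it may fall past the cutoff and never be indexed, so important elements belong early in the document.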
