In January 2026, something shifted in Google. Well-known SaaS companies started losing organic visibility. Not 5%. Not 10%. Drops of 29% to 49% in a matter of weeks.
Lily Ray, one of the most respected SEO experts specializing in Google core update recovery, documented the pattern. She analyzed multiple affected sites and found a common denominator: hundreds of self-promotional listicles where the company ranked itself number one. "Best project management tools," and guess who appears first. "Best CRM for small business," and the company that wrote the article crowns itself the winner.
The tactic worked. These companies appeared in Google's AI Overviews, in ChatGPT responses, in Perplexity. The problem is that Google eventually detects everything. And when it does, it does not warn you. It just turns down the volume.
The numbers nobody wanted to see
An 8-billion-dollar B2B company lost 49% of its organic visibility between January 21 and February 2. Its blog represented 77% of the site's visibility. It had 191 self-promotional listicles among 30,000 indexed articles.
A SaaS company lost 43%. Its guides folder, which represented 85% of the site's visibility, collapsed. 228 self-promotional listicles among 2,780 articles.
Another company lost 42%. Its tutorials folder dropped to levels not seen since 2021. It had 76 self-promotional listicles, 38 of them updated with "2026" in the title when the year was only four weeks old.
The pattern repeated across six more sites. All SaaS. All with blogs representing over 80% of their visibility. All with dozens or hundreds of articles where they crowned themselves the best in their own category.
Why this was predictable
Google has been publishing guidelines about helpful content for years. Their evaluation questions are clear: does the content provide original research? Does the title avoid being misleading? Is the information presented in a way that builds trust?
In fact, Google admitted this internally years ago. Documents from the DOJ antitrust trial (case 1:20-cv-03010-APM) revealed that engineer Eric Lehman said in a 2016 internal presentation: "We do not understand documents. We fake it." For years, Google relied on user clicks to evaluate content. That has changed. Its systems now measure contentEffort, an AI-based estimate of the real human labor invested in each page, and originalContentScore, which penalizes derivative or mass-produced content. Self-promotional listicles generated with AI score zero on both metrics. Google no longer needs to fake it to detect them.
An article where the company publishing it ranks itself first fails every one of those tests. There is no transparent methodology. There is no evidence of having tested the alternatives. There is no independent evaluation. Just a company telling Google and LLMs: "I am the best, trust me."
The trap is that it worked. ChatGPT does not have Google's sophistication for detecting manipulative content. When an LLM searches for "best CRM" and finds ten articles from ten companies saying they are the best, it cites the one with better schema, better structure and more backlinks. It does not verify whether the evaluation is honest.
But Google does. And when Google drops your visibility, LLMs that use RAG (retrieval-augmented generation) based on Google results also stop citing you. Glenn Gabe, another core update expert, confirmed that sites that dropped in organic search also lost presence in AI Overviews.
The alternative nobody wants to hear
The alternative to gray-area tactics is not sexy. It is not a hack. It is not a template you can replicate 200 times. It is infrastructure.
Infrastructure means implementing standards that search engines and AI models recognize as legitimate trust signals. Not because they manipulate the system, but because they give the system exactly what it needs to understand your content.
This is what I implemented on this blog in three months:
llms.txt at the domain root. A file that tells AI models who I am, what I write about and which pages matter. It does not manipulate anything. It just organizes information that already exists.
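A minimal sketch of what that file can look like, following the structure the llms.txt proposal suggests (the name and URLs below are placeholders, with example.com standing in for the real domain): a title, a short summary, then annotated links to the pages that matter.

```markdown
# Example Blog

> Personal blog about Generative Engine Optimization: structured data,
> content signals and how AI crawlers and agents consume a site.

## Key pages

- [What GEO is](https://example.com/what-is-geo/): the approach in one article
- [Knowledge graph setup](https://example.com/knowledge-graph/): how the JSON-LD is generated
```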
Automatic JSON-LD Knowledge Graph on every post. Five connected entities with persistent @id references: WebSite, Organization, Person, WebPage, BlogPosting. Topics are detected automatically and tagged with about, the tools a post discusses with mentions. Posts connect to each other through citation and relatedLink, and to their translations through workTranslation. ChatGPT audited it and scored it 9.1 out of 10.
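Stripped down to its skeleton, the graph looks roughly like this. URLs and names are placeholders and the real markup carries many more properties; the point is the persistent @id references that tie the five entities together.

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "WebSite",
      "@id": "https://example.com/#website",
      "publisher": { "@id": "https://example.com/#organization" }
    },
    { "@type": "Organization", "@id": "https://example.com/#organization", "name": "Example" },
    { "@type": "Person", "@id": "https://example.com/#author", "name": "Author Name" },
    {
      "@type": "WebPage",
      "@id": "https://example.com/post/#webpage",
      "isPartOf": { "@id": "https://example.com/#website" }
    },
    {
      "@type": "BlogPosting",
      "@id": "https://example.com/post/#article",
      "mainEntityOfPage": { "@id": "https://example.com/post/#webpage" },
      "author": { "@id": "https://example.com/#author" },
      "publisher": { "@id": "https://example.com/#organization" },
      "about": [{ "@type": "Thing", "name": "Generative Engine Optimization" }],
      "mentions": [{ "@type": "SoftwareApplication", "name": "Claude" }],
      "citation": [{ "@id": "https://example.com/another-post/#article" }],
      "workTranslation": { "@id": "https://example.com/es/post/#article" }
    }
  ]
}
```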
Content Signals in robots.txt. One line that tells AI crawlers: do not train on my content, yes show it in search results, yes use it as input for responses. The difference between giving your work away for free and setting the rules for how it gets used.
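In robots.txt that is literally one directive. A sketch using the published Content Signals syntax, with the three values described above (adjust them to your own policy):

```text
User-agent: *
Content-Signal: ai-train=no, search=yes, ai-input=yes
Allow: /
```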
Markdown for Agents. When an AI agent requests a page with the Accept: text/markdown header, my server returns clean Markdown instead of HTML. Agents process Markdown more efficiently.
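The logic fits in a few lines. A minimal PHP sketch of that content negotiation, assuming pre-rendered .md files in a markdown/ folder; the paths and file layout are illustrative, not my exact setup.

```php
<?php
// Serve a pre-rendered Markdown version when the agent asks for it.
$accept = $_SERVER['HTTP_ACCEPT'] ?? '';

if (stripos($accept, 'text/markdown') !== false) {
    // Illustrative layout: one .md file per post slug.
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH) ?: '/';
    $slug = basename($path);
    $file = __DIR__ . '/markdown/' . $slug . '.md';

    if (is_file($file)) {
        header('Content-Type: text/markdown; charset=utf-8');
        header('Vary: Accept'); // caches must key on the Accept header
        readfile($file);
        exit;
    }
}
// Otherwise fall through to the normal HTML rendering.
```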
Agent Skills discovery. A JSON file that describes the tools available on my site so agents can discover them automatically.
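There is no single settled format for this yet, so the sketch below is just the shape of the idea: the file name, fields and endpoints are placeholders, not my actual file.

```json
{
  "name": "example-blog",
  "description": "Tools this site exposes to AI agents",
  "skills": [
    {
      "id": "schema-audit",
      "description": "Returns the JSON-LD graph for a given post URL",
      "endpoint": "https://example.com/api/schema-audit?url={post_url}"
    },
    {
      "id": "post-index",
      "description": "Lists published posts with titles, topics and canonical URLs",
      "endpoint": "https://example.com/api/posts"
    }
  ]
}
```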
None of this is a hack. Everything is a public, documented standard that anyone can implement. And unlike a self-promotional listicle, none of these standards will get you penalized in a core update.
The difference between building on sand and building on rock
The companies that lost visibility built on tactics. They found something that worked, scaled it to hundreds of pages and crossed their fingers hoping Google would not notice.
What I do with this blog is different not because I am smarter. It is different because I build infrastructure that aligns with what Google and LLMs want long term: specific content with real experience, structured for machines that reason.
Every post I write has automatic schema with entities detected from the content. Every article is connected to others through knowledge relationships, not just navigation links. Every tool I build has a real purpose for the visitor, not just for crawlers.
I built this without knowing PHP, using Claude as my development partner. You do not need to be a developer to implement infrastructure. You need to understand what machines need to trust your content.
What you can do today
If you have a blog with listicles where your company appears first, do not delete them yet. But stop creating more. Each new one is technical debt that could cost you 40% of your visibility when Google detects it.
Instead, start with infrastructure. Create an llms.txt for your domain. Implement JSON-LD with persistent @id references. Add Content Signals to your robots.txt. Write content based on real experience that an AI model can cite without feeling like it is repeating propaganda.
The 22 Generative Engine Optimization technical decisions I implemented are public. Each one is documented with code, data and results.
Gray-area tactics work until they do not. Infrastructure does not have an expiration date.