An llms.txt file is the equivalent of robots.txt but for language models like ChatGPT, Claude and Perplexity. While robots.txt tells crawlers like Googlebot which pages they may visit, llms.txt tells LLMs who you are, what you write about and which pages matter. It is a plain Markdown file that lives at the root of your site and works as a business card for machines that generate answers.
The standard was proposed at llmstxt.org in September 2024. It is not official yet. No LLM has publicly confirmed using it to crawl sites. But Anthropic already publishes one on its own domain, and crawlers from OpenAI, Google and Perplexity already look for similar files when they visit a site.
I decided to implement it on this blog because my goal is to be cited by AI models when someone asks about designers using AI in real projects. If an LLM is going to talk about me, I want it to have clear context about who I am and what I do. I do not want it to hallucinate.
The file structure
The format is Markdown with simple rules. It starts with an H1 that is just the name of your site or project. Nothing else on that line. Below that goes a blockquote with a brief description using the greater-than symbol. Then come sections with H2 headers where you group information in bulleted lists.
The most common sections are Author with your role and experience, Topics Covered with your main subjects, Content Policy with usage rules, and Key Pages with links to your most important pages in Markdown format.
Links follow this pattern: a dash, then the page name in brackets, the URL in parentheses, a colon and a short description. That is what the standard expects. Simple but specific.
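Putting those rules together, a minimal skeleton looks like this (the site name, description and URL are placeholders, not my actual file):

```markdown
# Example Site

> Personal blog by a hypothetical designer writing about UX and AI.

## Author

- Jane Doe, UX/UI designer with ten years of experience.

## Topics Covered

- Designing with AI in real projects.

## Key Pages

- [About](https://example.com/about): Who I am and what I do.
```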
What I put in mine
My llms.txt has six sections. Author, with my role as UX/UI designer, years of experience in banking and fintech, location and the languages I write in. Topics Covered, with the core subjects of the blog. Content Policy, where I state that all content is original first-person experience and can be cited with attribution. And three Key Pages sections separated by language, because my blog is trilingual.
Every link has a description. It is not just a bare URL. The model needs context to decide if your page is relevant to a question.
I wrote everything in English because that is the language LLMs and their crawlers handle most reliably. Your content can be in any language, but your llms.txt works best in English.
The technical part
The file is called llms.txt and goes at the root of your domain. If you use Apache you need two things in your .htaccess. A RewriteRule so the URL works cleanly and a Files directive so the server serves it as plain text with UTF-8 encoding.
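One way to express those two rules in .htaccess is the sketch below. The exact directives depend on your server setup, so treat this as a starting point rather than a drop-in config:

```apache
# Stop other rewrite rules from intercepting /llms.txt
RewriteEngine On
RewriteRule ^llms\.txt$ - [L]

# Serve the file as plain text with UTF-8 encoding
<Files "llms.txt">
    ForceType text/plain
    AddDefaultCharset UTF-8
</Files>
```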
In your robots.txt add a line at the end that says LLMs-Txt followed by a colon and the full URL of your file. This follows the convention proposed by llmstxt.org.
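With a placeholder domain, the end of your robots.txt would look like this. Note that LLMs-Txt is a convention, not part of the original robots.txt standard, so crawlers that do not know it will simply ignore the line:

```
User-agent: *
Allow: /

LLMs-Txt: https://example.com/llms.txt
```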
If you use WordPress, you can create the file manually and upload it via FTP. Yoast SEO already has an option to generate one automatically but it gives you less control over the content.
How I validated it
I asked ChatGPT to read my file directly from the URL. I asked if the structure followed the llmstxt.org standard. It gave me a detailed analysis pointing out what was correct and what was missing.
The first version had the H1 combined with the description on the same line and links without proper Markdown formatting. ChatGPT caught it. I corrected the structure, separated the blockquote, formatted links properly and validated again. The second time it scored high.
That is the most practical way to test your llms.txt right now. Ask ChatGPT or Claude to read it and tell you if it is well structured. If an LLM can interpret it correctly, it is working.
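Asking an LLM is the method I used, but a short script can catch the same structural issues locally. This is a rough sketch of such a check, based only on the rules described above; it is not official tooling, and the sample file is invented for illustration:

```python
import re

def check_llms_txt(text: str) -> list[str]:
    """Return a list of structural problems found in an llms.txt file.

    Rough checks based on the llmstxt.org proposal: one H1 title,
    a blockquote description below it, H2 sections, and bulleted
    links of the form "- [Name](URL): description".
    """
    problems = []
    lines = [ln for ln in text.splitlines() if ln.strip()]

    if not lines or not re.fullmatch(r"# [^#].*", lines[0]):
        problems.append("first line should be an H1 with only the site name")
    if len(lines) < 2 or not lines[1].startswith("> "):
        problems.append("second line should be a blockquote description")
    if not any(ln.startswith("## ") for ln in lines):
        problems.append("no H2 sections found")

    link_pattern = re.compile(r"- \[[^\]]+\]\([^)]+\): .+")
    for ln in lines:
        if ln.startswith("- [") and not link_pattern.fullmatch(ln):
            problems.append(f"malformed link line: {ln!r}")
    return problems

sample = """# Example Site

> A hypothetical blog about design and AI.

## Key Pages

- [About](https://example.com/about): Who I am.
"""
print(check_llms_txt(sample))  # → []
```

Feeding it my own first draft would have flagged the combined H1 line and the bare URLs before I ever asked ChatGPT.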
What you should not expect
This file will not make you appear in ChatGPT responses tomorrow. It is not SEO magic. No LLM has officially said it uses llms.txt to decide what to cite. Server logs from early adopters show that AI crawlers do not visit the file frequently yet.
But the cost of implementing it is minimal. One text file, half an hour of work, zero risk. And if the standard gets officially adopted, you are already set. It is like having a sitemap ready before search engines formally supported the protocol.
The worst that can happen is that it does nothing. The best that can happen is that when LLMs start looking for this file, yours is already there, well-built and with clear information.
You can see mine working at shinobis.com/llms.txt as a reference.