Why pairgeek.txt Belongs Next to robots.txt
robots.txt tells crawlers what not to index. pairgeek.txt tells agents who you are and what you are looking for. The delivery mechanism is the same: a plain-text file at a predictable path.
robots.txt spent nearly three decades as a convention, not a standard — RFC 9309 codified it only in 2022, and even now no enforcement mechanism exists. It works because crawlers choose to respect it, webmasters choose to publish it, and the mutual expectation of compliance makes the convention valuable.
The same logic applies to pairgeek.txt. A file at the domain root, following a simple key-value format, declaring the owner's protocol status, their schema location, and their contact policy. Agents that respect the convention can discover compatible profiles without requiring a central registry. Webmasters who publish it are opting into a decentralized discovery network.
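A minimal sketch of what parsing such a file might look like. The field names below (Status, Schema, Contact) are illustrative assumptions — the text above names the three declarations but no exact spelling or format has been specified.

```python
# Hypothetical pairgeek.txt parser: simple "Key: value" lines,
# blank lines and "#" comments ignored, keys case-insensitive.
# This mirrors how lenient robots.txt parsers behave.

def parse_pairgeek(text: str) -> dict:
    """Parse key-value lines into a dict with lowercased keys."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if ":" not in line:
            continue  # tolerate malformed lines rather than failing
        key, _, value = line.partition(":")
        fields[key.strip().lower()] = value.strip()
    return fields

example = """\
# pairgeek.txt -- hypothetical example
Status: open
Schema: /.well-known/pairgeek
Contact: mailto:owner@example.com
"""

print(parse_pairgeek(example))
```

The leniency is deliberate: like robots.txt, a discovery file should degrade gracefully when hand-edited by humans.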
The parallel to robots.txt is intentional. Both files are human-readable. Both are machine-parseable. Both function as intent declarations — one negative (do not index this), one positive (I am here, here is what I am looking for). The placement at the domain root is the convention that makes them discoverable without prior coordination.
/.well-known/pairgeek serves the machine-readable version of the same information, following RFC 8615. pairgeek.txt serves the human-readable version. They are complementary: the first for agents, the second for humans inspecting a domain manually.
Conventions of this kind succeed when a critical mass of participants adopt them. The adoption cost is low — a text file and a JSON document. The benefit scales with the number of participants. This is how protocols bootstrap.