This is DEE. In 2002, my parents founded MEACHEAL — a mid-tier Chinese women's apparel brand that's been running for over two decades. Pattern cutting, supplier negotiations, distribution deals — I used to think software had almost nothing to do with this world.
Until last year, when I spent a month systematically running sourcing queries against ChatGPT and Claude. I asked the kinds of questions a small DTC founder would ask: "Find me a factory in Guangdong that makes mid-tier cotton knitwear with MOQ under 500." I logged every response.
The pattern was consistent. Factories that didn't exist. Factories that had closed years ago. Middlemen marketing themselves as manufacturers. Generic-sounding names that couldn't be verified against any official registry.
That was the moment I realized something obvious in hindsight: AI agents have a data problem, and nowhere is it more severe than in Chinese manufacturing — an industry that makes more than a third of the world's apparel but is almost invisible to every frontier model currently in production.
Over the past year I've been building MRC Data, an MCP server that takes data that has always been scattered across public filings, certification bodies, and government registries, but completely invisible to AI agents, and integrates it into a structure agents can query and reason about. Every AI agent that makes decisions based on industry knowledge needs a clean data source: LinkedIn for employment, Crunchbase for startups, Bloomberg Terminal for finance.
For Chinese apparel — arguably the most complex manufacturing ecosystem on Earth — the data has always been there: mandatory disclosures from global listed brands, public databases from international certification bodies (OEKO-TEX, WRAP, ZDHC, SA8000), annual reports from listed Chinese apparel manufacturers, MIIT and Customs registries, GSXT credit records. The problem isn't that the data doesn't exist. It's that nobody has integrated it into a structure AI agents can retrieve, trust, and reason about.
Here are the five things I learned in the process that weren't obvious before I started.
1. AI agents are the new consumers of industry data
For two decades, industry data was built for human analysts. A buyer opens a database, reads supplier profiles, cross-references against certification bodies, flies over to verify, negotiates. Every layer of decision-making is gated by human judgment.
Agents don't work that way.
When an AI agent helps plan a DTC brand launch, it doesn't just search — it recommends. "You should work with Factory A in Humen, which has the capacity and certifications you need. Expect a 45-day lead time." Behind that one sentence sit multiple layers of inference, each citing the one before it.
The moment agents move from retrieval to recommendation, two things break about traditional industry data: first, human judgment exits the fact-checking loop; second, the data source you pick isn't just a knowledge base — it's the substrate of every decision your agent makes downstream.
2. Self-reported data becomes poison when AI consumes it
On every B2B platform I know of, the supply-side data is almost entirely self-reported. Factories fill in their own capacity, their own certifications, their own list of brand clients. There's usually no third-party check before it goes live.
For human buyers, this is suspicious but navigable. You visit. You sample. You ask three pointed questions to separate the real factories from the resellers. Self-reported data is a starting point, not a conclusion.
For agents, self-reported data is structurally dangerous. Consider a simple claim: "BSCI certified." For a human buyer, this phrase triggers a reflex — ask for the certificate number, check expiration, verify on the BSCI portal. For an agent, "BSCI certified" is just a true statement in its context window. It gets used. It gets cited. It influences the next tool call.
And because reasoning compounds, a single unverified claim cascades through a conversation. The agent recommends a factory, then calculates a timeline, then drafts a compliance memo — each step assuming the first claim was ground truth.
The fix isn't to ban self-reported data. It's to make verification status part of the data model itself. Every fact an agent receives should arrive with a traceable source and a confidence tier. If a claim hasn't been independently verified, the agent should know, and factor that uncertainty into its reasoning.
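As a sketch of what that might look like: a minimal fact record that carries a traceable source and a confidence tier alongside the claim itself. The field names and tier labels here are my own illustration, not MRC Data's actual schema:

```python
from dataclasses import dataclass

# Confidence tiers, ordered from weakest to strongest evidence (illustrative labels).
TIERS = ["self_reported", "document_checked", "independently_verified"]

@dataclass
class Fact:
    claim: str   # e.g. "BSCI certified"
    source: str  # where the claim came from
    tier: str    # one of TIERS

def usable_as_ground_truth(fact: Fact) -> bool:
    """An agent should only treat independently verified facts as ground truth."""
    return fact.tier == "independently_verified"

cert = Fact("BSCI certified", "factory self-registration form", "self_reported")
print(usable_as_ground_truth(cert))  # False: the agent must hedge this claim
```

The point isn't the three specific tiers; it's that verification status travels with the fact, so downstream reasoning can never silently promote a self-reported claim.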
3. MCP is the protocol that makes this solvable
If you haven't come across the Model Context Protocol yet: MCP is a protocol introduced by Anthropic in late 2024 that standardizes how AI agents call tools and retrieve structured data. A server exposes typed tools (functions with JSON schemas); an agent discovers and calls them; responses come back as structured facts, not free-form text.
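To make "typed tools" concrete, here is roughly the shape of a tool definition and its structured response, written in plain Python dicts rather than a real MCP SDK. The tool name, fields, and registration ID are hypothetical:

```python
import json

# A tool as a server would advertise it: a name, a description,
# and a JSON Schema describing the expected arguments.
TOOL_DEFINITION = {
    "name": "lookup_factory",
    "description": "Return verified facts about a factory by registration ID.",
    "inputSchema": {
        "type": "object",
        "properties": {"registration_id": {"type": "string"}},
        "required": ["registration_id"],
    },
}

def lookup_factory(registration_id: str) -> dict:
    """Handler the server runs when an agent calls the tool.
    Returns structured facts, not free-form text."""
    return {
        "registration_id": registration_id,
        "certifications": ["OEKO-TEX"],
        "verified_dims": "5/7",
    }

# An agent discovers the tool from its schema, then calls it with valid arguments.
response = lookup_factory("91440101XXXXXXXXXX")
print(json.dumps(response, indent=2))
```

Because the arguments and the response are both typed, the agent can validate what it sends and reason over exactly what comes back.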
Why does this matter for industry data?
RAG is still the right tool for unstructured knowledge (long-form content, documentation, blog posts, entire books), and it's still evolving rapidly. But for structured data that an agent needs to directly reason about — a factory's capacity, certification status, disclosure consistency — RAG isn't the optimal primitive. You can't do boolean reasoning over a verified_dims field embedded in a vector, and you can't have the agent ask "is this factory's disclosure consistent with the SEC filing?" and get back an actionable yes/no.
MCP isn't a replacement for RAG — it's a parallel primitive. RAG pulls context from a sea of text; MCP's structured tools return facts the agent can reason about directly. That's the shift: industry data providers now have a universal interface to plug into any agent that speaks MCP — Claude Desktop, Cursor, Windsurf, Cline, and a growing list. Agents get structured facts, not guesses.
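A toy example of the kind of boolean check this enables, with hypothetical field names and numbers: the agent compares a factory's self-declared capacity against the capacity implied by a public filing and gets back an actionable yes/no it can branch on.

```python
# A structured tool response (hypothetical fields): the factory's
# self-declared capacity vs. the capacity implied by a brand client's filing.
record = {
    "factory": "Factory A",
    "declared_capacity_units_month": 200_000,
    "filing_capacity_units_month": 60_000,
}

def disclosure_consistent(rec: dict, tolerance: float = 0.25) -> bool:
    """Boolean check an agent can branch on: do the two capacity
    figures agree within a relative tolerance?"""
    declared = rec["declared_capacity_units_month"]
    filed = rec["filing_capacity_units_month"]
    return abs(declared - filed) / max(declared, filed) <= tolerance

print(disclosure_consistent(record))  # False: flag this supplier for review
```

You can't run that comparison over a vector embedding; you can run it over a structured field in one line.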
4. Apparel is one of the hardest industries to start with
Chinese apparel manufacturing employs tens of millions of people. Supplier claims intersect eight certification systems (BSCI, OEKO-TEX, WRAP, SA8000, GOTS, GRS, Bluesign, ZDHC), each with its own verification portal. Factories with nearly identical marketing can have quality outputs that differ by an order of magnitude. Export compliance now depends on three overlapping regulatory regimes — the US UFLPA, the EU CSDDD, and the EU Forced Labor Regulation — that aren't uniformly interpreted.
This is a hard first market. That's partly why I picked it.
I grew up in the apparel world. I know how many genuinely excellent factories exist in this industry — craft is tight, compliance is clean, they've served the most demanding brands — but most of them don't show up on Google, don't buy B2B ads, and rarely speak in English-language media. Their value lives in a handful of veteran buyers' contact lists. That's why AI agents can't find them today. It's not that good factories don't exist. It's that the industry has no registry an AI can read.
If a data source can be built that AI agents can trust for Chinese apparel, the same template works for every other manufacturing category. Start with the hardest real-world problem you have domain authority in. Solve that. Generalize later.
So the real proposition here isn't "Chinese apparel"; it's "how to take any vertical industry's fragmented data and turn it into a layer AI agents can trust." Apparel is just my first proof-of-domain because it's hard enough: tens of millions of workers, eight certification systems, three overlapping export regimes. If it works for apparel, it works for cosmetics, food, electronics, and components.
What MEACHEAL Research Center is doing is defining the methodology for a new category — verifiable, AI-native vertical industry data — built on 4 source layers and 7 verification layers. MRC Data is the first product of that methodology.
5. The moat isn't volume. It's verification.
The instinct when building an industry data product is to race for volume. More factories, more records, a bigger number on the landing page.
Volume isn't the moat. Directories with millions of entries already exist. They also happen to be the source of most of the hallucinations I was trying to fix in the first place.
The real moat is verification.
Every record in MRC Data runs through a multi-layer verification pipeline before it's tagged verified:

- cross-brand disclosure check
- declared vs. disclosed capacity
- fabric spec vs. lab test
- status check across 8+ certification systems
- market-access readiness (UFLPA / CSDDD / JIS / KC)
- business registration and penalty records
- brand-supplier integrity check
Every record includes a verified_dims: "X/Y" field in its response. With proper prompt guidance, the agent can use this field to decide how much to trust each record, skip low-verification entries, and avoid treating unchecked claims as ground truth.
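Assuming verified_dims reads as "verified dimensions out of total checked", an agent-side filter might look like this sketch (the threshold and record shapes are illustrative):

```python
def verification_ratio(verified_dims: str) -> float:
    """Parse an 'X/Y' verified_dims value into a ratio in [0, 1]."""
    verified, total = (int(n) for n in verified_dims.split("/"))
    return verified / total

records = [
    {"factory": "A", "verified_dims": "7/7"},
    {"factory": "B", "verified_dims": "2/7"},
    {"factory": "C", "verified_dims": "5/7"},
]

# Skip low-verification entries before they ever reach the agent's reasoning.
trusted = [r for r in records if verification_ratio(r["verified_dims"]) >= 0.7]
print([r["factory"] for r in trusted])  # → ['A', 'C']
```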
This is the ethical design principle I keep coming back to for AI-consumable industry data: don't lie, don't hide uncertainty, and make your verification surface area part of the data model. The moat isn't the records. It's that each one is honest about what it claims.
Closing
You don't need to care about apparel. But if you're building AI agents, it's worth asking: where does the industry data in your domain come from? If the answer is "scraped B2B platforms" or "public documents + embeddings" — that's a problem worth solving, regardless of which industry you're in.
The next generation of agents won't be bottlenecked by the model. They'll be bottlenecked by the data they can trust.