Two years of daily Claude + ChatGPT. They've seen probably a million tokens of my writing. Every response still opens with "Certainly!" or "Great question!" and closes with "In conclusion…".
Nobody writes like that. The model has no idea who you are; you're just another session.
So I built chatlectify. Point it at your exported chat history (Claude / ChatGPT / Gemini JSON, or a folder of your own writing: blog posts, emails, notes). It outputs a SKILL.md + system_prompt.txt that makes the model write like you.
How it works
- Extracts ~20 stylometric features from your messages: sentence-length distribution, contraction rate, bullet usage, hedge words, typo rate, punctuation histograms, question-vs-imperative ratio, top sentence starters
- Picks a stratified sample of your messages across length buckets as exemplars
- One LLM call distills it all into a portable style file
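To make the feature-extraction step concrete, here is a minimal sketch of how a few of those stylometric features could be computed. This is illustrative only, not chatlectify's actual implementation; the function name, hedge list, and regexes are my own assumptions.

```python
# Illustrative sketch of stylometric feature extraction (NOT the real
# chatlectify code): computes a handful of the features listed above
# over a list of your messages.
import re
from collections import Counter

# Assumed patterns/word lists -- the real tool's are unknown.
CONTRACTIONS = re.compile(r"\b\w+'(?:s|t|re|ve|ll|d|m)\b", re.IGNORECASE)
HEDGES = {"maybe", "probably", "perhaps", "might", "arguably", "kinda"}

def extract_features(messages):
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s for m in messages
                 for s in re.split(r"(?<=[.!?])\s+", m) if s]
    words = [w for m in messages for w in m.split()]
    lengths = [len(s.split()) for s in sentences]
    starters = Counter(s.split()[0].lower() for s in sentences if s.split())
    return {
        "sentence_len_mean": sum(lengths) / len(lengths),
        "contraction_rate": len(CONTRACTIONS.findall(" ".join(messages)))
                            / max(len(words), 1),
        "hedge_rate": sum(w.lower().strip(".,!?") in HEDGES for w in words)
                      / max(len(words), 1),
        "question_ratio": sum(s.rstrip().endswith("?") for s in sentences)
                          / len(sentences),
        "top_starters": starters.most_common(5),
    }
```

Each feature is a single scalar or a small ranked list, so the full profile stays compact enough to hand to one LLM call.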
Privacy
Runs locally. Exactly one outbound LLM call goes to your configured model: the synth step that writes the style file. That call includes your feature summary and ~40 exemplar messages (the stratified sample). Nothing else leaves your machine. No telemetry, no cloud backend, no account.
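The stratified sample mentioned above could look something like this: bucket messages by word count, then draw evenly from each bucket so short quips and long explanations are both represented. Again a sketch under assumptions; the bucket boundaries and function name are hypothetical, not chatlectify's.

```python
# Hypothetical sketch of the stratified-sampling step: pick ~k exemplar
# messages spread across length buckets instead of a uniform random draw,
# which would be dominated by whatever length you write most often.
import random

# Word-count ranges (assumed, not the tool's actual buckets).
BUCKETS = [(0, 15), (15, 60), (60, 200), (200, float("inf"))]

def stratified_sample(messages, k=40, seed=0):
    rng = random.Random(seed)  # seeded for reproducible output
    buckets = [[] for _ in BUCKETS]
    for m in messages:
        n = len(m.split())
        for i, (lo, hi) in enumerate(BUCKETS):
            if lo <= n < hi:
                buckets[i].append(m)
                break
    per_bucket = max(1, k // len(BUCKETS))
    sample = []
    for b in buckets:
        # A sparse bucket contributes everything it has, never padding.
        sample.extend(rng.sample(b, min(per_bucket, len(b))))
    return sample[:k]
```

The point of stratifying is that your two-word replies and your long rants carry different style signals, and the synth call should see both.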
Usage
```
pip install chatlectify
chatlectify all ./conversations.json --out-dir ./my_skill
```
Drop the folder into ~/.claude/skills/ or paste system_prompt.txt into any model that takes one.
Repo: https://github.com/0x1Adi/chatlectify
Curious what people think. Also: which export formats should I add next? Slack, iMessage, email, Discord, Obsidian vault?