🔢 Token Counter
Estimate how many tokens your text uses across major LLM tokenizers. Runs entirely in your browser — no text is sent anywhere.
GPT-4 / GPT-3.5: cl100k_base (tiktoken approximation)
Claude 3.5 / Claude 4: Anthropic tokenizer (approximation)
Context window reference
| Model | Context | % Used | Remaining |
|---|---|---|---|
| GPT-3.5-turbo | 16,385 | 0% | 16,385 |
| GPT-4 | 8,192 | 0% | 8,192 |
| GPT-4 Turbo | 128,000 | 0% | 128,000 |
| Claude 3.5 Sonnet | 200,000 | 0% | 200,000 |
How token counting works
This tool uses a rule-based approximation of the cl100k_base tokenizer (used by GPT-4 and GPT-3.5-turbo) and a similar heuristic for Claude's tokenizer. The actual token count from the official API may differ by ±5–10% for most text.
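The tool's exact rules are not shown, but an estimator in the same spirit can be sketched by blending the two common rules of thumb (about 4 characters per token and about 0.75 words per token). This is a hypothetical heuristic for illustration, not tiktoken and not the tool's actual implementation:

```python
import re

def estimate_tokens(text: str) -> int:
    """Rough cl100k_base-style estimate: average a character-based
    guess and a word-based guess. Heuristic only, not a tokenizer."""
    if not text:
        return 0
    chars = len(text)
    words = len(re.findall(r"\S+", text))
    char_estimate = chars / 4      # ~4 characters per token in English
    word_estimate = words / 0.75   # ~0.75 words per token
    return round((char_estimate + word_estimate) / 2)
```

Averaging the two guesses damps the error on inputs where one ratio breaks down, e.g. long words or dense punctuation; the official API count remains the only exact answer.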
Token counts matter because LLMs have a fixed context window: if a request exceeds the limit, the API typically rejects it with an error, and some chat interfaces instead silently truncate the oldest messages.
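The budget check this implies can be sketched as follows (`context_budget` is a hypothetical helper, not part of the tool; the limit values come from the model's documentation):

```python
def context_budget(prompt_tokens: int, limit: int,
                   reserved_output: int = 0) -> tuple[bool, int]:
    """Return (fits, remaining) for a prompt against a model's context
    window. `reserved_output` holds back room for the completion, since
    prompt and completion share the same window."""
    remaining = limit - prompt_tokens - reserved_output
    return remaining >= 0, max(remaining, 0)

# e.g. a 7,000-token prompt against an 8,192-token window,
# reserving 1,000 tokens for the reply:
fits, remaining = context_budget(7000, 8192, reserved_output=1000)
```

Reserving output room matters in practice: a prompt that "fits" with zero tokens to spare leaves the model nothing to generate with.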
Note: Tokens are not characters. In English, 1 token ≈ 4 characters or about ¾ of a word. Code and non-English text typically use more tokens per character.
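The two ratios above translate directly into quick back-of-the-envelope converters (rule-of-thumb arithmetic only, valid for typical English prose):

```python
def tokens_from_words(words: int) -> int:
    # 1 token ~ 0.75 words, so tokens ~ words / 0.75
    return round(words / 0.75)

def tokens_from_chars(chars: int) -> int:
    # 1 token ~ 4 characters, so tokens ~ chars / 4
    return round(chars / 4)

# e.g. a 750-word English document is roughly 1,000 tokens,
# as is a 4,000-character one.
```

For code or non-English text, expect these figures to undercount, sometimes substantially.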