Introduction
I first encountered GPT back when it could barely hold a coherent paragraph together. That was roughly six years ago, long before “AI assistant” was a product category. Since then I’ve used these tools daily across university, my job, and personal projects: first ChatGPT, then GitHub Copilot the moment it launched, and every meaningful iteration in between.
You might be wondering why I switched to Claude. The switch came gradually. ChatGPT’s sycophancy became harder to ignore: the way it agrees, softens, and validates when what you actually need is pushback. Meanwhile, I kept hearing consistent praise for Claude on X and Reddit. Since I had access to a GitHub Copilot Education subscription, I tried it properly, and the difference was clear enough to make me want to go deeper.
This post is about how I’ve configured Claude to work for me in the cleanest, most frictionless way possible. I want to share my setup, the rationale behind it, and how it has improved my workflow as an AI developer and data scientist.
My Claude Configuration
I want to state upfront that, to me, simple is better than complex. I don’t have a huge number of customizations, but the ones I do have are carefully chosen to maximize utility while minimizing friction.
The Architecture
Since I work from different machines, I wanted a configuration that was portable and easy to set up. I also wanted to keep my customizations organized and separate from the default Claude configuration. For this reason, I created a custom configuration that lives in a GitHub repository. This lets me version-control my settings, share them across machines, and easily update them as needed.
Let’s define a simple repository structure, which will be symlinked into ~/.claude/ via an install script. The key files:
dotfiles/claude/
├── CLAUDE.md # Global instructions for Claude
├── RTK.md # RTK documentation (imported by CLAUDE.md)
├── settings.json # Model, hooks, plugins, status line
├── statusline.sh # Custom status bar showing token usage
├── hooks/
│ └── rtk-rewrite.sh # PreToolUse hook for token savings
├── plugins/
│ ├── marketplaces/ # Plugin registries (symlinked)
│ ├── installed_plugins.json
│ └── known_marketplaces.json
└── install_claude.sh # One-command setup
RTK: The Token Killer Hook
This is the most impactful part of my setup. RTK (Rust Token Killer) is a CLI proxy that filters and compresses command output before it hits Claude’s context window. The integration is completely transparent — I type git status and the hook silently rewrites it to rtk git status.
Claude Code’s hook system fires a PreToolUse event before every Bash command. My hook script intercepts it:
# Read the hook input from stdin
INPUT=$(cat)
CMD=$(echo "$INPUT" | jq -r '.tool_input.command // empty')
# Ask rtk if this command should be rewritten
REWRITTEN=$(rtk rewrite "$CMD" 2>/dev/null) || exit 0
# Empty output means no rewrite: pass the command through untouched
[ -n "$REWRITTEN" ] || exit 0
# rtk has a rewrite: update the command and auto-approve it
ORIGINAL_INPUT=$(echo "$INPUT" | jq -c '.tool_input')
UPDATED_INPUT=$(echo "$ORIGINAL_INPUT" | jq --arg cmd "$REWRITTEN" '.command = $cmd')
jq -n --argjson updated "$UPDATED_INPUT" '{
  "hookSpecificOutput": {
    "hookEventName": "PreToolUse",
    "permissionDecision": "allow",
    "permissionDecisionReason": "RTK auto-rewrite",
    "updatedInput": $updated
  }
}'
The key design decisions:
- All rewrite logic lives in Rust (rtk rewrite). The shell hook is just a thin JSON bridge. Adding new command rewrites means updating the Rust registry, not touching this script.
- Graceful degradation. If rtk or jq aren’t installed, it warns once per session and passes commands through unmodified. Nothing breaks.
- Version guard. The hook requires rtk >= 0.23.0 (when rtk rewrite was added). Older versions get a polite warning instead of cryptic errors.
- Auto-approval. Rewritten commands get permissionDecision: "allow", so there’s zero friction: no extra confirmation prompts.
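For completeness, hooks like this are registered in settings.json. A minimal sketch of the wiring, following Claude Code’s hook configuration format (the script path matches the repo layout above; verify the exact schema against the official docs):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/rtk-rewrite.sh"
          }
        ]
      }
    ]
  }
}
```

The matcher restricts the hook to Bash tool calls, so other tools (file edits, web fetches) never pay the cost of spawning the script.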
You can check your savings with rtk gain:
$ rtk gain
Total tokens saved: 847,293 (68% reduction)
Run rtk discover to analyze your Claude Code history and surface commands you're running unoptimized; it'll tell you exactly what you're leaving on the table.
The Status Line
I wrote a custom status line that shows real-time token usage:
❯ ~/Github/dotfiles ◆ Claude ctx: 22% (45k / 200k)
It’s a small shell script that reads JSON from stdin and extracts the token counts — input, output, cache creation, and cache read tokens all summed together. I deliberately avoided a jq dependency here (unlike the RTK hook) since the status line runs constantly and I wanted it as lightweight as possible — just sed and shell arithmetic.
#!/bin/sh
input=$(cat)
# Extract values from input JSON using sed (no jq dependency)
# POSIX character classes ([[:space:]]) instead of GNU sed's \s keep this
# portable to BSD sed on macOS
cwd=$(printf "%s" "$input" | sed -n 's/.*"current_dir"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' | head -1)
if [ -z "$cwd" ]; then
    cwd=$(printf "%s" "$input" | sed -n 's/.*"cwd"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' | head -1)
fi
model=$(printf "%s" "$input" | sed -n 's/.*"display_name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' | head -1)
if [ -z "$model" ]; then
    model="Claude"
fi
ctx_size=$(printf "%s" "$input" | sed -n 's/.*"context_window_size"[[:space:]]*:[[:space:]]*\([0-9]*\).*/\1/p' | head -1)
# Extract token counts from current_usage
input_tok=$(printf "%s" "$input" | sed -n 's/.*"input_tokens"[[:space:]]*:[[:space:]]*\([0-9]*\).*/\1/p' | head -1)
output_tok=$(printf "%s" "$input" | sed -n 's/.*"output_tokens"[[:space:]]*:[[:space:]]*\([0-9]*\).*/\1/p' | head -1)
cache_create=$(printf "%s" "$input" | sed -n 's/.*"cache_creation_input_tokens"[[:space:]]*:[[:space:]]*\([0-9]*\).*/\1/p' | head -1)
cache_read=$(printf "%s" "$input" | sed -n 's/.*"cache_read_input_tokens"[[:space:]]*:[[:space:]]*\([0-9]*\).*/\1/p' | head -1)
# Replace home directory path with ~
home="${HOME}"
if [ -n "$cwd" ] && [ -n "$home" ]; then
    case "$cwd" in
        "$home"*) display_cwd="~${cwd#$home}" ;;
        *) display_cwd="$cwd" ;;
    esac
else
    display_cwd="$cwd"
fi
# Format and display output
if [ -n "$ctx_size" ] && [ -n "$input_tok" ]; then
    used_tokens=$(( ${input_tok:-0} + ${output_tok:-0} + ${cache_create:-0} + ${cache_read:-0} ))
    max_k=$(( ctx_size / 1000 ))
    if [ "$used_tokens" -ge 1000 ]; then
        used_fmt="$(( used_tokens / 1000 ))k"
    else
        used_fmt="$used_tokens"
    fi
    pct=$(( used_tokens * 100 / ctx_size ))
    printf "❯ %s ◆ %s ctx: %s%% (%s / %dk)" "$display_cwd" "$model" "$pct" "$used_fmt" "$max_k"
else
    printf "❯ %s ◆ %s" "$display_cwd" "$model"
fi
This helps me know when I’m approaching the context limit and should start a new conversation.
CLAUDE.md: Keeping Instructions Minimal
My global CLAUDE.md is 7 lines long. It just sets the tone and gives a few key instructions. In this case, I import the RTK documentation so Claude can use the rtk gain and rtk discover commands when needed. The idea is to keep the main instructions focused and pull in detailed docs only when relevant.
Although the global file is short, the CLAUDE.md files I spend the most time on are the project-specific ones, since that is where I put the detailed instructions about the project that Claude cannot infer from the codebase itself. Let’s be honest: LLMs are great at understanding code, but they don’t have the same intuition about project structure and conventions that a human developer would. So I use the project-specific CLAUDE.md to fill in those gaps and guide Claude’s behavior in a way that’s tailored to each codebase and its customer requirements.
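To make the “minimal global file” idea concrete, here is a sketch in that spirit. The instructions themselves are invented (mine differ), but the @-import on the last line is Claude Code’s mechanism for pulling a file like RTK.md into context:

```markdown
Be direct. Prefer the simplest solution that works.
Push back when you disagree; don't validate bad ideas.
When unsure about an API, say so instead of guessing.

@RTK.md
```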
Plugins: Less is More
I use two plugins: simplifier and context7. That’s it.
Simplifier reviews recently changed code for clarity and consistency. Useful as a final pass before committing. Context7 pulls up-to-date library documentation directly into the context, so Claude isn’t hallucinating API signatures from stale training data.
I’ve deliberately avoided stacking more plugins, skills, and MCP servers on top. Every addition has a cost: more context consumed, more latency, more things that can silently fail or interfere with each other. The tools that are always on should earn their place. If a plugin isn’t improving most sessions, it’s just noise.
Cross-Machine Sync
One command sets everything up on a new machine:
./install_claude.sh
Here’s the full script:
#!/bin/bash
set -e
DOTFILES_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CLAUDE_SRC="$DOTFILES_DIR/claude"
CLAUDE_DST="$HOME/.claude"
echo "Setting up Claude Code dotfiles..."
if [ ! -d "$CLAUDE_SRC" ]; then
    echo "✗ Error: claude directory not found at $CLAUDE_SRC"
    exit 1
fi
mkdir -p "$CLAUDE_DST"
# Symlink individual files
files=(CLAUDE.md RTK.md settings.json statusline.sh)
for f in "${files[@]}"; do
    src="$CLAUDE_SRC/$f"
    dst="$CLAUDE_DST/$f"
    if [ ! -f "$src" ]; then
        echo "✗ Warning: source file not found: $src"
        continue
    fi
    if [ -f "$dst" ] && [ ! -L "$dst" ]; then
        mv "$dst" "${dst}.bak"
        echo " → backed up existing file to ${dst}.bak"
    fi
    ln -sf "$src" "$dst"
    echo "✓ $dst → $src"
done
# Symlink directories
for d in hooks; do
    src="$CLAUDE_SRC/$d"
    dst="$CLAUDE_DST/$d"
    if [ -d "$dst" ] && [ ! -L "$dst" ]; then
        mv "$dst" "${dst}.bak"
        echo " → backed up existing $d to ${dst}.bak"
    fi
    ln -sfn "$src" "$dst"
    echo "✓ $dst → $src"
done
# Set up plugins directory — symlink only marketplaces/, copy+patch JSON files
PLUGINS_SRC="$CLAUDE_SRC/plugins"
PLUGINS_DST="$CLAUDE_DST/plugins"
# If plugins is currently a symlink (old setup), remove it
if [ -L "$PLUGINS_DST" ]; then
    rm "$PLUGINS_DST"
    echo " → removed old plugins symlink"
fi
mkdir -p "$PLUGINS_DST"
# Symlink marketplaces/
ln -sf "$PLUGINS_SRC/marketplaces" "$PLUGINS_DST/marketplaces"
echo "✓ $PLUGINS_DST/marketplaces → $PLUGINS_SRC/marketplaces"
# Copy JSON templates, replacing __HOME__ with actual $HOME
for json_file in installed_plugins.json known_marketplaces.json; do
    src="$PLUGINS_SRC/$json_file"
    dst="$PLUGINS_DST/$json_file"
    if [ ! -f "$dst" ] || [[ " $* " == *" --force "* ]]; then
        sed "s|__HOME__|$HOME|g" "$src" > "$dst"
        echo "✓ installed $json_file (paths set to $HOME)"
    else
        echo " → skipped $json_file (already exists, run with --force to overwrite)"
    fi
done
# Check dependencies
echo ""
MISSING_DEPS=()
command -v rtk &>/dev/null || MISSING_DEPS+=("rtk — https://github.com/rtk-ai/rtk#installation")
command -v jq &>/dev/null || MISSING_DEPS+=("jq — brew install jq")
command -v npx &>/dev/null || MISSING_DEPS+=("npx — brew install node")
if [ ${#MISSING_DEPS[@]} -gt 0 ]; then
    echo "⚠ Dependencies missing:"
    for dep in "${MISSING_DEPS[@]}"; do echo " • $dep"; done
fi
echo "✓ Setup complete! Verify with:"
echo " ls -la ~/.claude/CLAUDE.md ~/.claude/hooks ~/.claude/plugins"
A few design decisions worth explaining:
- Plugin JSON is copied, not symlinked. Files like installed_plugins.json contain absolute paths that differ per machine. The repo stores a template with a __HOME__ placeholder; the script patches it on copy. It skips existing files to preserve live plugin state; use --force to reset.
- Existing files get backed up. If a real file (not a symlink) is already at a target path, it’s moved to .bak before being replaced. Nothing is silently overwritten.
- Dependency check at the end. RTK, jq, and npx are all required by different parts of the setup. The script reports what’s missing after installing, so you know exactly what to grab.
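The __HOME__ patching is easy to see in isolation. A minimal sketch with an invented one-line template (the real files hold plugin registry paths):

```shell
#!/bin/sh
# A template line as it would be stored in the repo (the path is invented).
template='{"installPath": "__HOME__/.claude/plugins/cache"}'

# The installer's substitution: every __HOME__ becomes the real $HOME.
# '|' as the sed delimiter avoids clashing with '/' in the path.
patched=$(printf '%s' "$template" | sed "s|__HOME__|$HOME|g")

printf '%s\n' "$patched"
```

The same file committed once to the repo therefore produces machine-correct absolute paths on every machine it is installed on.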
Once symlinks are in place, staying in sync across machines is just git pull — no need to re-run the installer.
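The reason git pull is enough comes down to how symlinks work: a link always reflects the current content of its target. A tiny self-contained demonstration (temp paths, invented file contents):

```shell
#!/bin/sh
# Show that updating a symlink's target updates what readers of the link see.
tmp=$(mktemp -d)

echo "v1" > "$tmp/CLAUDE.md"         # file inside the "repo"
ln -s "$tmp/CLAUDE.md" "$tmp/link"   # the ~/.claude-style symlink
echo "v2" > "$tmp/CLAUDE.md"         # simulate a git pull changing the file

linked=$(cat "$tmp/link")            # readers of the link see the new content
printf '%s\n' "$linked"              # prints: v2

rm -rf "$tmp"
```

The only exception in this setup is the plugin JSON, which is copied rather than linked, so it needs a re-run with --force to pick up template changes.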