# Troubleshooting
Common issues and their solutions.
## Installation Issues

### "Command not found: devlog"

**Cause:** the binary is not on your system `PATH`.
**Solutions:**

- Use the full path:

  ```bash
  ./target/release/devlog
  ```

- Install to the system:

  ```bash
  cargo install --path .
  ```

- Add Cargo's bin directory to your `PATH`:

  ```bash
  export PATH="$HOME/.cargo/bin:$PATH"
  ```
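As a quick sanity check, the sketch below reports whether a command resolves on the current `PATH`. `devlog` is the binary name from this guide; the helper itself is generic and illustrative:

```shell
#!/bin/sh
# Sketch: report whether a command resolves on the current PATH.
on_path() {
  command -v "$1" >/dev/null 2>&1
}

if on_path devlog; then
  echo "devlog found at: $(command -v devlog)"
else
  echo "devlog not on PATH; try: export PATH=\"\$HOME/.cargo/bin:\$PATH\""
fi
```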
### Build Fails with "linker error"

**Cause:** missing system build dependencies.
**Solutions:**

```bash
# Ubuntu/Debian
sudo apt-get install build-essential libssl-dev pkg-config

# macOS
xcode-select --install

# Fedora/RHEL
sudo dnf install gcc openssl-devel
```
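The per-platform commands above can be wrapped in a small helper that picks the right one. A sketch: the package lists are the ones from this guide, while the platform keys (`ubuntu`, `fedora`, `macos`, …) are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: map a platform name to the dependency-install command
# listed in this guide. Platform keys are illustrative.
deps_cmd() {
  case "$1" in
    ubuntu|debian) echo "sudo apt-get install build-essential libssl-dev pkg-config" ;;
    fedora|rhel)   echo "sudo dnf install gcc openssl-devel" ;;
    macos)         echo "xcode-select --install" ;;
    *)             echo "unknown platform: $1" >&2; return 1 ;;
  esac
}

deps_cmd ubuntu
```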
## LLM Connection Issues

### "Failed to connect to Ollama"
1. Check that Ollama is running:

   ```bash
   curl http://localhost:11434/api/tags
   ```

2. Start Ollama if it is not:

   ```bash
   ollama serve
   ```

3. Check your firewall: ensure port 11434 is not blocked.
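The checks above can be folded into a single probe. A minimal sketch, assuming `curl` is available; the port and endpoint are the Ollama defaults mentioned above:

```shell
#!/bin/sh
# Sketch: probe an HTTP endpoint with a short timeout; succeeds
# only if the server answers. 11434 is Ollama's default port.
reachable() {
  curl -fsS --max-time 2 "$1" >/dev/null 2>&1
}

if reachable "http://localhost:11434/api/tags"; then
  echo "Ollama is up"
else
  echo "Ollama unreachable; try: ollama serve"
fi
```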
"Model not found"
Pull the model:
ollama pull llama3.2
List available models:
ollama list
"Connection refused to llama.cpp"
Check server is running:
curl http://localhost:8080/health
Start llama.cpp server:
./server -m models/your-model.gguf -c 2048
## Git Repository Issues

### "No commits found in range"
Check that the tags exist:

```bash
git tag -l
```

Use commit hashes instead of tags:

```bash
devlog --from abc1234 --to def5678
```

Check that you are inside a git repository:

```bash
git status
```
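If tags do exist, a range can be derived from the two most recent ones automatically. A sketch, assuming the tags sort meaningfully under git's built-in version sort; `--from`/`--to` are the flags from this guide:

```shell
#!/bin/sh
# Sketch: print the two highest version tags as a "from to" pair,
# using git's version sort. Fails if fewer than two tags exist.
latest_range() {
  repo=${1:-.}
  tags=$(git -C "$repo" tag -l --sort=-v:refname | head -n 2)
  set -- $tags
  [ $# -eq 2 ] || return 1
  echo "$2 $1"   # older tag first
}

# Usage idea: feed the pair into devlog's --from/--to flags.
```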
"Failed to open repository"
Ensure you are in a git directory:

```bash
cd /path/to/git/repo
devlog --repo .
```

Check that git is initialized and has commits:

```bash
git log --oneline
```
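Both checks can be combined: `git rev-parse --is-inside-work-tree` is the standard way to test whether a directory sits inside a work tree. A sketch:

```shell
#!/bin/sh
# Sketch: verify a directory is inside a git work tree
# before pointing devlog at it.
in_git_repo() {
  git -C "${1:-.}" rev-parse --is-inside-work-tree >/dev/null 2>&1
}

if in_git_repo .; then
  echo "ok: git repository detected"
else
  echo "not a git repository; run git init or cd into one" >&2
fi
```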
## API Key Issues

### "OpenAI API key not found"

Set the environment variable:

```bash
export OPENAI_API_KEY="sk-..."
```
Or omit the variable and let the command prompt for the key (not recommended):

```bash
devlog --llm openai --llm-model gpt-4
# Will prompt for the key
```
"Anthropic authentication failed"
Set API key:
export ANTHROPIC_API_KEY="sk-ant-..."
Verify key is correct:
echo $ANTHROPIC_API_KEY
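To verify a key is present without echoing the secret itself, checking that the variable is non-empty and has the expected prefix is enough. A sketch; the `sk-`/`sk-ant-` prefixes match the examples above:

```shell
#!/bin/sh
# Sketch: confirm an API-key env var is set and carries the
# expected prefix, without printing the key itself.
key_set() {
  val=$(eval "printf '%s' \"\${$1}\"")   # read the var named by $1
  case "$val" in
    "$2"*) return 0 ;;
  esac
  return 1
}

key_set ANTHROPIC_API_KEY "sk-ant-" \
  && echo "Anthropic key looks set" \
  || echo "ANTHROPIC_API_KEY missing or malformed"
```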
## Performance Issues

### "Analysis is very slow"
Use a smaller model:

```bash
# Instead of llama3.2:70b, use the default smaller model:
ollama pull llama3.2
devlog --llm ollama --llm-model llama3.2
```

Limit the commit range:

```bash
devlog --limit 50   # Instead of the full history
```

Use plain mode for quick results:

```bash
devlog --from v1.0.0 --to v2.0.0   # No LLM
```
"Out of memory"
Increase system memory
Use llama.cpp with quantized model:
# Download smaller quantized model (Q4 instead of F16)
Process in smaller batches:
# Split into smaller ranges
devlog --from v1.0.0 --to v1.5.0
devlog --from v1.5.0 --to v2.0.0
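The split above generalizes to any list of milestone tags: run once per consecutive pair. A sketch; `--from`/`--to` are the flags from this guide, and the runner takes the command as a parameter, so it can be exercised with a stub:

```shell
#!/bin/sh
# Sketch: invoke "$cmd from to" for each consecutive pair of tags,
# e.g. v1.0.0->v1.5.0, then v1.5.0->v2.0.0.
run_ranges() {
  cmd=$1; shift
  prev=$1; shift
  for tag in "$@"; do
    "$cmd" "$prev" "$tag" || return 1
    prev=$tag
  done
}

# Each batch becomes one devlog invocation:
devlog_batch() { devlog --from "$1" --to "$2"; }
# run_ranges devlog_batch v1.0.0 v1.5.0 v2.0.0
```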
## Output Issues

### "No output generated"

Check for errors:

```bash
devlog --from v1.0.0 --to v2.0.0 2>&1 | tee debug.log
```

Enable debug logging:

```bash
export RUST_LOG="devlog=debug"
devlog --from v1.0.0 --to v2.0.0
```
"Output is garbled"
Specify output file:
devlog --output CHANGELOG.md
Check terminal encoding: Ensure UTF-8 support
## Privacy & Security

### "Dry-run shows sensitive data"

Use a stricter privacy mode:

```bash
devlog --llm openai --privacy-level strict --dry-run
```

Or use only local LLMs for sensitive code:

```bash
devlog --llm ollama --diff-analysis
```
"Data sanitization too aggressive"
Use moderate mode:
devlog --llm openai --privacy-level moderate
Or use relaxed with local LLM:
devlog --llm ollama --privacy-level relaxed
## Getting Help

Still having issues?

- Documentation: GitLab Pages
- GitLab Issues: https://gitlab.com/aice/devlog/-/issues
- Discussions: GitLab Discussions
When reporting issues, include:

- Operating system and version
- Rust version (`rustc --version`)
- Devlog version (`devlog --version`)
- Full error message
- The command you ran