Best Practices

Recommendations for getting the most out of Devlog.


Commit Message Best Practices

Use Conventional Commits

Follow the Conventional Commits specification for best results:

<type>(<scope>): <description>

[optional body]

[optional footer]

Good Examples:

feat(auth): add JWT token validation
fix(api): correct rate limiting logic
docs(readme): update installation instructions

Avoid:

updated stuff
misc changes
WIP
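The format above can be enforced automatically with a commit-msg hook. The sketch below is one way to do it with grep; the type list and regex are illustrative, not something Devlog requires:

```shell
# check_commit_msg MESSAGE: return 0 if the first line follows the
# <type>(<scope>): <description> convention. Type list is illustrative.
check_commit_msg() {
    printf '%s\n' "$1" | head -n1 |
        grep -Eq '^(feat|fix|docs|style|refactor|perf|test|build|ci|chore)(\([a-z0-9-]+\))?!?: .+'
}

# In a .git/hooks/commit-msg hook you would call:
#   check_commit_msg "$(head -n1 "$1")" || exit 1
```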

Be Descriptive

Write clear, descriptive commit messages that explain what and why:

feat(auth): add JWT token validation

Implements JWT token validation middleware to secure API endpoints.
Uses RS256 algorithm for signature verification.

Closes #123

Using Devlog Effectively

Choose the Right Mode

Plain Mode - Quick changelog without AI:

devlog -f v1.0.0 -t v2.0.0

Use when: Speed is priority, simple overview needed

AI-Enhanced - Improved summaries (diff analysis auto-enabled):

devlog -f v1.0.0 -t v2.0.0 --llm ollama:llama3.2

Use when: Better readability needed, commit messages are unclear

AI-Enhanced without diffs - Summaries only, no diff analysis:

devlog -f v1.0.0 -t v2.0.0 --llm ollama:llama3.2 --no-diff

Use when: Faster generation needed, or diff content should not be sent to the LLM

Feature-Grouped - High-level overview:

devlog -f v1.0.0 -t v2.0.0 --llm ollama:llama3.2 --group-features

Use when: Presenting to stakeholders, creating release notes

Optimize Performance

Limit Scope:

# Instead of full history
devlog --llm ollama:llama3.2

# Use reasonable ranges
devlog -f v1.0.0 -t v2.0.0 --llm ollama:llama3.2

Use Smaller Models:

devlog --llm ollama:llama3.2  # small 3B model, faster

Cache Results:

# Save to file for reuse
devlog -f v1.0.0 -t v2.0.0 --llm ollama:llama3.2 -o changelog.md
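One way to avoid regenerating the same range twice is a small wrapper that skips the call when the output file already exists. This is a sketch; the function name and skip-if-exists policy are ours, not Devlog's:

```shell
# cached_devlog FROM TO OUTFILE: run devlog only when OUTFILE is missing,
# so repeated invocations for a released range reuse the saved file.
cached_devlog() {
    from="$1"; to="$2"; out="$3"
    if [ -f "$out" ]; then
        echo "reusing cached $out"
        return 0
    fi
    devlog -f "$from" -t "$to" --llm ollama:llama3.2 -o "$out"
}
```

For example, `cached_devlog v1.0.0 v2.0.0 changelog-v2.0.0.md` generates the file once and is a no-op afterwards.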

Privacy Recommendations

For Open Source Projects

Use local LLMs to protect contributor privacy:

devlog --llm ollama:llama3.2

For Private/Enterprise Code

Never use cloud LLMs with diff analysis:

# Good - local only
devlog --llm ollama:llama3.2

# Bad - cloud provider (diff analysis with cloud LLMs fails by design)
devlog --llm openai:gpt-4-turbo

For commit message summaries (no code):

# Use strict mode (default)
devlog --llm openai:gpt-4-turbo --privacy strict

# Or moderate if file paths are not sensitive
devlog --llm openai:gpt-4-turbo --privacy moderate

Always Verify with Dry-Run

Before using cloud providers:

devlog --llm openai:gpt-4-turbo --dry-run

CI/CD Best Practices

Pin Versions

# Don't use latest
- cargo install devlog  # Bad

# Pin to specific version or commit
- cargo install --git https://gitlab.com/aice/devlog.git --rev abc1234  # Good

Use -y

Skip interactive prompts in CI:

devlog --llm ollama:llama3.2 -y -o CHANGELOG.md

Cache Dependencies

Speed up CI builds by caching Ollama models:

- name: Cache Ollama models
  uses: actions/cache@v3
  with:
    path: ~/.ollama
    key: ollama-llama3.2

Output Organization

Maintain CHANGELOG.md

Keep a versioned changelog file:

CHANGELOG.md
├── ## [Unreleased]
├── ## [2.0.0] - 2024-01-15
├── ## [1.0.0] - 2023-12-01
└── ...
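New sections can be spliced in mechanically: generate the release notes to a temporary file, then put them above the existing entries. The helper below is a sketch assuming the layout above, with the newest version heading first:

```shell
# prepend_release NOTES CHANGELOG: insert the new release section at the
# top of the changelog, keeping older entries below. Assumes NOTES already
# starts with its own "## [x.y.z] - date" heading.
prepend_release() {
    notes="$1"; changelog="$2"
    tmp=$(mktemp)
    cat "$notes" > "$tmp"
    if [ -f "$changelog" ]; then
        echo "" >> "$tmp"
        cat "$changelog" >> "$tmp"
    fi
    mv "$tmp" "$changelog"
}
```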

Separate Release Notes

Create per-release files:

releases/
├── v2.0.0.md
├── v1.0.0.md
└── ...

Use Consistent Format

Stick to one format across your project:

# Always use the same flags
devlog -f "$PREV_TAG" -t "$NEW_TAG" \
       --llm ollama:llama3.2 \
       -o RELEASE_NOTES.md
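To keep the flags identical everywhere, they can live in one shared function that every script and CI job calls. A sketch, with a function name of our choosing:

```shell
# gen_changelog FROM TO OUT: single definition of the project's standard
# devlog flags, so every caller produces the same format.
gen_changelog() {
    devlog -f "$1" -t "$2" \
           --llm ollama:llama3.2 \
           -o "$3"
}
```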

Model Selection

For Speed

devlog --llm ollama:llama3.2  # Small, fast

For Quality

devlog --llm ollama:codellama  # Better code understanding

For Privacy

devlog --llm llamacpp  # Maximum control, fully local

Common Pitfalls

Don't Over-Analyze

Analyzing thousands of commits is slow and often unnecessary:

# Bad - entire history
devlog --llm ollama:llama3.2

# Good - reasonable range
devlog -f v1.0.0 -t v2.0.0 --llm ollama:llama3.2

Don't Skip Testing

Always test generated changelogs before publishing:

# Generate to file
devlog -f v1.0.0 -t v2.0.0 --llm ollama:llama3.2 -o DRAFT.md

# Review
cat DRAFT.md

# Publish if good
mv DRAFT.md CHANGELOG.md

Don't Ignore Privacy

Never send sensitive code to cloud providers:

# If code is sensitive, use local LLM only
devlog --llm ollama:llama3.2

Documentation

Keep README Updated

Document your changelog generation process:

## Generating Changelog

We use Devlog to generate changelogs:

\`\`\`bash
devlog -f v1.0.0 -t v2.0.0 --llm ollama:llama3.2 -o CHANGELOG.md
\`\`\`

Add Pre-Release Scripts

Create scripts for common tasks:

#!/bin/bash
# scripts/generate-changelog.sh
set -euo pipefail
PREV_TAG=$(git describe --tags --abbrev=0)
devlog -f "$PREV_TAG" -t HEAD \
       --llm ollama:llama3.2 \
       -o RELEASE_NOTES.md

Version Control

Commit Generated Changelogs

git add CHANGELOG.md
git commit -m "docs: update changelog for v2.0.0"

Tag Releases Properly

Use semantic versioning:

git tag -a v2.0.0 -m "Release v2.0.0"
git push origin v2.0.0
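The next tag can be derived from the previous one instead of typed by hand. The helper below is a sketch using plain parameter expansion; the bump rules are the standard semver ones, and the function name is ours:

```shell
# next_version TAG LEVEL: bump a vMAJOR.MINOR.PATCH tag, where LEVEL is
# major, minor, or patch. Pure string manipulation, no git required.
next_version() {
    ver="${1#v}"
    major="${ver%%.*}"
    rest="${ver#*.}"
    minor="${rest%%.*}"
    patch="${rest#*.}"
    case "$2" in
        major) echo "v$((major + 1)).0.0" ;;
        minor) echo "v$major.$((minor + 1)).0" ;;
        patch) echo "v$major.$minor.$((patch + 1))" ;;
    esac
}

next_version v2.0.0 minor  # prints v2.1.0
```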
