feat: update README to reflect model pricing display, enhanced context compaction, and new interactive/JSON-based configuration.
--- a/README.md
+++ b/README.md
@@ -24,7 +24,8 @@ This eliminates:
 - **Interactive REPL**: Chat with an AI model to edit files, manage directories, and execute commands
 - **Slash Commands**: `/setup`, `/help`, `/exit`, `/clear`, `/status`, `/compact`
 - **Token Tracking**: Real-time token usage and cost per response, plus session totals
-- **Context Compaction**: Automatic conversation history compression when approaching context limits
+- **Model Pricing Display**: Shows current model pricing from OpenRouter in the header
+- **Context Compaction**: Automatic conversation history compression when approaching context limits, including stale tool result compaction
 - **Comprehensive Toolset**: 15 tools for file operations, editing, directory management, and command execution
 - **AOT-Ready**: Native AOT compilation for ~12 MB binaries with no .NET runtime dependency
 - **Rich CLI**: Beautiful terminal output using Spectre.Console with tables, rules, and colored text
@@ -40,13 +41,11 @@ This eliminates:
 ## Quick Start
 
 ```bash
-# Option 1: Set environment variables
-export ANCHOR_API_KEY=your_key_here
-export ANCHOR_MODEL=qwen3.5-27b # optional, default: gpt-4o
-
+# Run the application
 dotnet run --project AnchorCli
 
-# Option 2: Use interactive setup
+# First time? The app will prompt you to run /setup
+# Or run it explicitly:
 dotnet run --project AnchorCli
 /setup
 ```
@@ -60,20 +59,11 @@ dotnet publish AnchorCli -r linux-x64 -c Release
 
 The resulting binary is ~12 MB, has no .NET runtime dependency, and starts instantly.
 
-## Environment Variables
-
-| Variable | Default | Description |
-|---|---|---|
-| `ANCHOR_API_KEY` | *(required)* | API key for the LLM provider |
-| `ANCHOR_ENDPOINT` | `https://openrouter.ai/api/v1` | Any OpenAI-compatible endpoint |
-| `ANCHOR_MODEL` | `gpt-4o` | Model name |
-| `ANCHOR_MAX_TOKENS` | `4096` | Max response tokens |
-
 ## Slash Commands
 
 | Command | Description |
 |---|---|
-| `/setup` | Run interactive TUI to configure API key, model, endpoint |
+| `/setup` | Run interactive TUI to configure API key and model (also accessible via `anchor setup` subcommand) |
 | `/help` | Show available tools and commands |
 | `/exit` | Exit the application |
 | `/clear` | Clear the conversation history |
@@ -113,7 +103,7 @@ The resulting binary is ~12 MB, has no .NET runtime dependency, and starts insta
 ```
 AnchorCli/
 ├── Program.cs           # Entry point + REPL loop + AI client setup
-├── AnchorConfig.cs      # Environment variable configuration
+├── AnchorConfig.cs      # JSON file-based configuration (%APPDATA%\anchor\config.json)
 ├── ContextCompactor.cs  # Conversation history compression
 ├── AppJsonContext.cs    # Source-generated JSON context (AOT)
 ├── Hashline/
@@ -137,7 +127,7 @@ AnchorCli/
 
 ## How It Works
 
-1. **Setup**: Configure API credentials via environment variables or `/setup` command
+1. **Setup**: Configure API credentials via the `/setup` command (or `anchor setup` subcommand)
 2. **REPL Loop**: You interact with the AI through a conversational interface
 3. **Tool Calling**: The AI can call any of the available tools to read/edit files, manage directories, or execute commands
 4. **Hashline Validation**: All file edits are validated using the Hashline technique to ensure precision
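The commit replaces environment-variable configuration with a JSON file read by `AnchorConfig.cs`. The diff does not show the file's schema; a plausible sketch of `config.json`, with field names inferred from the removed `ANCHOR_*` variables (all hypothetical):

```json
{
  "ApiKey": "your_key_here",
  "Endpoint": "https://openrouter.ai/api/v1",
  "Model": "gpt-4o",
  "MaxTokens": 4096
}
```

The `/setup` TUI would populate this file, so users no longer need to export variables before each session.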