feat: introduce slash commands, token tracking, context compaction, grep_recursive tool, and update README with new features and project structure.

2026-03-04 08:24:14 +01:00
parent 3ceb0e4884
commit 218c9cebb6
2 changed files with 45 additions and 10 deletions

@@ -22,11 +22,15 @@ This eliminates:
## Features
- **Interactive REPL**: Chat with an AI model to edit files, manage directories, and execute commands
-- **Comprehensive Toolset**: 12+ tools for file operations, editing, directory management, and command execution
+- **Slash Commands**: `/setup`, `/help`, `/exit`, `/clear`, `/status`, `/compact`
+- **Token Tracking**: Real-time token usage and cost per response, plus session totals
+- **Context Compaction**: Automatic conversation history compression when approaching context limits
+- **Comprehensive Toolset**: 15 tools for file operations, editing, directory management, and command execution
- **AOT-Ready**: Native AOT compilation for ~12 MB binaries with no .NET runtime dependency
- **Rich CLI**: Beautiful terminal output using Spectre.Console with tables, rules, and colored text
- **Streaming Responses**: Real-time AI response streaming in the terminal
- **OpenAI-Compatible**: Works with any OpenAI-compatible API (OpenAI, Ollama, Cerebras, Groq, OpenRouter, etc.)
- **Ctrl+C Support**: Cancel in-progress responses without exiting
## Requirements
@@ -36,11 +40,15 @@ This eliminates:
## Quick Start
```bash
# Option 1: Set environment variables
export ANCHOR_API_KEY=your_key_here
export ANCHOR_MODEL=qwen3.5-27b # optional, default: gpt-4o
export ANCHOR_ENDPOINT=https://api.openai.com/v1 # optional
dotnet run --project AnchorCli
# Option 2: Use interactive setup
dotnet run --project AnchorCli
/setup
```
## Native AOT Build
@@ -57,15 +65,27 @@ The resulting binary is ~12 MB, has no .NET runtime dependency, and starts instantly
| Variable | Default | Description |
|---|---|---|
| `ANCHOR_API_KEY` | *(required)* | API key for the LLM provider |
-| `ANCHOR_ENDPOINT` | `https://api.openai.com/v1` | Any OpenAI-compatible endpoint |
+| `ANCHOR_ENDPOINT` | `https://openrouter.ai/api/v1` | Any OpenAI-compatible endpoint |
| `ANCHOR_MODEL` | `gpt-4o` | Model name |
| `ANCHOR_MAX_TOKENS` | `4096` | Max response tokens |
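To make the variable resolution concrete, here is a minimal sketch of how a client might read the table above, using the defaults shown (the OpenRouter endpoint row is the post-commit default). AnchorCli itself is C#, so this Python is illustrative only, not the actual `AnchorConfig.cs` logic:

```python
import os

# Illustrative resolution of the ANCHOR_* variables with the table's defaults.
def load_config(env=os.environ):
    api_key = env.get("ANCHOR_API_KEY")
    if not api_key:
        raise RuntimeError("ANCHOR_API_KEY is required (or run /setup)")
    return {
        "api_key": api_key,
        "endpoint": env.get("ANCHOR_ENDPOINT", "https://openrouter.ai/api/v1"),
        "model": env.get("ANCHOR_MODEL", "gpt-4o"),
        "max_tokens": int(env.get("ANCHOR_MAX_TOKENS", "4096")),
    }
```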
## Slash Commands
| Command | Description |
|---|---|
| `/setup` | Run interactive TUI to configure API key, model, endpoint |
| `/help` | Show available tools and commands |
| `/exit` | Exit the application |
| `/clear` | Clear the conversation history |
| `/status` | Show session token usage and cost |
| `/compact` | Manually trigger context compaction |
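The routing behind these commands can be pictured as a simple prefix dispatch: input starting with `/` is looked up in a handler table, anything else goes to the model. The sketch below is hypothetical Python, not AnchorCli's actual `Commands/` classes:

```python
# Hypothetical slash-command dispatcher; handler names are illustrative.
def make_dispatcher(handlers):
    def dispatch(line):
        if not line.startswith("/"):
            return None  # plain text is sent to the model, not a command
        cmd = line.split()[0]
        handler = handlers.get(cmd)
        if handler is None:
            return f"Unknown command: {cmd}"
        return handler()
    return dispatch

dispatch = make_dispatcher({
    "/help": lambda: "tools and commands...",
    "/clear": lambda: "history cleared",
})
```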
## Available Tools
**File Operations:**
- `read_file` - Read a file (or a window) with Hashline-tagged lines
- `grep_file` - Search a file by regex — results are pre-tagged for immediate editing
- `grep_recursive` - Search for a regex pattern across all files in a directory tree
- `find_files` - Search for files matching glob patterns
- `get_file_info` - Get detailed file information (size, permissions, etc.)
@@ -93,25 +113,37 @@ The resulting binary is ~12 MB, has no .NET runtime dependency, and starts instantly
```
AnchorCli/
├── Program.cs # Entry point + REPL loop + AI client setup
-├── Config/AppConfig.cs # Environment variable configuration
+├── AnchorConfig.cs # Environment variable configuration
+├── ContextCompactor.cs # Conversation history compression
-├── AppJsonContext.cs # Source-generated JSON context (AOT)
├── Hashline/
│ ├── HashlineEncoder.cs # Adler-8 + position-seed hashing
│ └── HashlineValidator.cs # Anchor resolution + validation
├── Tools/
-│   ├── FileTools.cs # read_file, grep_file, find_files, get_file_info
+│   ├── FileTools.cs # read_file, grep_file, grep_recursive, find_files, get_file_info
│ ├── EditTools.cs # replace_lines, insert_after, delete_range, create/delete/rename/copy/append
│ ├── DirTools.cs # list_dir, create_dir, rename_dir, delete_dir
│ └── CommandTool.cs # execute_command
├── Json/AppJsonContext.cs # Source-generated JSON context (AOT)
├── Commands/
│ ├── ExitCommand.cs # /exit command
│ ├── HelpCommand.cs # /help command
│ ├── ClearCommand.cs # /clear command
│ ├── StatusCommand.cs # /status command
│ └── CompactCommand.cs # /compact command
├── OpenRouter/
│ └── PricingProvider.cs # Fetch model pricing from OpenRouter
└── SetupTui.cs # Interactive setup TUI
```
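The idea behind `ContextCompactor.cs` can be sketched in a few lines: when the estimated token count of the conversation nears a limit, older turns are collapsed into a single summary message and only recent turns are kept verbatim. The threshold, the characters-per-token heuristic, and the summarizer hook below are all assumptions for illustration, not the real implementation:

```python
# Crude heuristic: roughly 4 characters per token.
def estimate_tokens(messages):
    return sum(len(m["content"]) for m in messages) // 4

# Collapse older messages into one summary entry once over the limit.
def compact(messages, limit, keep_recent=4, summarize=None):
    if estimate_tokens(messages) <= limit:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize(old) if summarize else f"[summary of {len(old)} earlier messages]"
    return [{"role": "system", "content": summary}] + recent
```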
## How It Works
-1. **Setup**: The AI client is configured with your API credentials and model preferences
+1. **Setup**: Configure API credentials via environment variables or `/setup` command
2. **REPL Loop**: You interact with the AI through a conversational interface
3. **Tool Calling**: The AI can call any of the available tools to read/edit files, manage directories, or execute commands
4. **Hashline Validation**: All file edits are validated using the Hashline technique to ensure precision
-5. **Safe Execution**: Commands require explicit user approval before running
+5. **Token Tracking**: Responses show token usage and cost; session totals are maintained
+6. **Context Compaction**: When approaching context limits, conversation history is automatically compressed
+7. **Safe Execution**: Commands require explicit user approval before running
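The token-tracking step boils down to simple arithmetic: per-response cost is usage counts times per-token prices (e.g. as fetched from OpenRouter by `PricingProvider`), accumulated into session totals. The sketch below is illustrative Python with made-up prices, not AnchorCli's actual code:

```python
# Illustrative session accounting: cost per response plus running totals.
class SessionTracker:
    def __init__(self, prompt_price, completion_price):
        self.prompt_price = prompt_price          # USD per prompt token
        self.completion_price = completion_price  # USD per completion token
        self.total_tokens = 0
        self.total_cost = 0.0

    def record(self, prompt_tokens, completion_tokens):
        cost = (prompt_tokens * self.prompt_price
                + completion_tokens * self.completion_price)
        self.total_tokens += prompt_tokens + completion_tokens
        self.total_cost += cost
        return cost
```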
## Supported Models