feat: add design and requirements documents for AI sidebar enhancements
**`.kiro/specs/ai-sidebar-enhancements/design.md`** (new file, 432 lines)
# Design Document: AI Sidebar Enhancements

## Overview

This design document outlines the technical approach for enhancing the AI sidebar module with streaming responses, an improved UI, conversation management commands, persistence features, and reasoning mode controls. The enhancements build upon the existing GTK4-based architecture using the Ollama Python SDK.

The current implementation uses:

- GTK4 for the UI, with gtk4-layer-shell for Wayland integration
- The Ollama Python SDK for LLM interactions
- JSON-based conversation persistence via ConversationManager
- Threading for async operations, with GLib.idle_add for UI updates

## Architecture

### Current Architecture Overview

```
┌─────────────────────────────────────────────────────┐
│               SidebarWindow (GTK4)                  │
│  ┌───────────────────────────────────────────────┐  │
│  │ Header (Title + Model Label)                  │  │
│  ├───────────────────────────────────────────────┤  │
│  │ ScrolledWindow                                │  │
│  │  └─ Message List (Gtk.Box, vertical)          │  │
│  ├───────────────────────────────────────────────┤  │
│  │ Input Box (Entry + Send Button)               │  │
│  └───────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────┘
              │                        │
              ▼                        ▼
  ┌──────────────────┐      ┌──────────────────┐
  │ ConversationMgr  │      │ OllamaClient     │
  │  - Load/Save     │      │  - chat()        │
  │  - Messages      │      │  - stream_chat() │
  └──────────────────┘      └──────────────────┘
```
### Enhanced Architecture

The enhancements will introduce:

1. **CommandProcessor**: New component to parse and execute slash commands
2. **StreamingHandler**: Manages token streaming and UI updates
3. **ConversationArchive**: Extends ConversationManager for multi-conversation management
4. **ReasoningController**: Manages reasoning mode state and formatting
5. **Enhanced Input Widget**: Multi-line text view replacing the single-line entry

## Components and Interfaces

### 1. Streaming Response Display

#### StreamingHandler Class

```python
class StreamingHandler:
    """Manages streaming response display with token-by-token updates."""

    def __init__(self, message_widget: Gtk.Label, scroller: Gtk.ScrolledWindow):
        self._widget = message_widget
        self._scroller = scroller
        self._buffer = ""
        self._is_streaming = False

    def start_stream(self) -> None:
        """Initialize streaming state."""

    def append_token(self, token: str) -> None:
        """Add the token to the buffer and update the UI via GLib.idle_add."""

    def finish_stream(self) -> str:
        """Finalize streaming and return the complete content."""
```
#### Integration Points

- Modify `_request_response()` to use `ollama_client.stream_chat()` instead of `chat()`
- Use `GLib.idle_add` to schedule UI updates for each token on the main thread
- Create the message widget before streaming starts, then update its label text progressively
- Maintain smooth scrolling by calling `_scroll_to_bottom()` periodically (not per token)

#### Technical Considerations

- Token updates must occur on the GTK main thread via `GLib.idle_add`
- Buffer tokens to reduce UI update frequency (e.g., every 3-5 tokens or 50ms)
- Handle stream interruption and error states gracefully
- Show a visual indicator (e.g., a cursor or "..." suffix) during active streaming

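The token-buffering policy above can be sketched as a small standalone class. This is a minimal, illustrative sketch: `TokenBuffer` and its parameters are not names from the existing codebase, and in the real sidebar the `flush_cb` would schedule a label update through `GLib.idle_add` rather than being called directly.

```python
import time
from typing import Callable


class TokenBuffer:
    """Accumulate streamed tokens and flush them in batches.

    Flushes when either `max_tokens` tokens have accumulated or
    `max_interval` seconds have passed since the last flush, mirroring
    the "every 3-5 tokens or 50ms" guideline above.
    """

    def __init__(self, flush_cb: Callable[[str], None],
                 max_tokens: int = 5, max_interval: float = 0.05):
        self._flush_cb = flush_cb
        self._max_tokens = max_tokens
        self._max_interval = max_interval
        self._pending: list[str] = []
        self._last_flush = time.monotonic()

    def append(self, token: str) -> None:
        self._pending.append(token)
        # Flush when enough tokens accumulated or the interval elapsed.
        if (len(self._pending) >= self._max_tokens
                or time.monotonic() - self._last_flush >= self._max_interval):
            self.flush()

    def flush(self) -> None:
        if self._pending:
            self._flush_cb("".join(self._pending))
            self._pending.clear()
        self._last_flush = time.monotonic()
```

A final `flush()` call at stream end delivers any remaining tokens, so no text is lost when the stream finishes mid-batch.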
### 2. Improved Text Input Field

#### TextView Widget Replacement

Replace `Gtk.Entry` with a `Gtk.TextView` wrapped in a `Gtk.ScrolledWindow`:

```python
# Current: Gtk.Entry (single line)
self._entry = Gtk.Entry()

# Enhanced: Gtk.TextView (multi-line)
self._text_view = Gtk.TextView()
self._text_buffer = self._text_view.get_buffer()
text_scroller = Gtk.ScrolledWindow()
text_scroller.set_child(self._text_view)
text_scroller.set_min_content_height(40)
text_scroller.set_max_content_height(200)
```

#### Features

- Automatic text wrapping with `set_wrap_mode(Gtk.WrapMode.WORD_CHAR)`
- Dynamic height expansion up to the max height (200px), then scrolling
- Shift+Enter for new lines, Enter alone to submit
- Placeholder text using CSS or the empty buffer state
- Maintain focus behavior with proper event controllers

#### Key Bindings

- **Enter**: Submit the message (unless Shift is held)
- **Shift+Enter**: Insert a newline
- **Ctrl+A**: Select all text

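The Enter vs Shift+Enter behavior would live in a `Gtk.EventControllerKey` "key-pressed" handler. The helper below isolates the testable decision; the string key names are illustrative stand-ins (real GTK code compares `Gdk.KEY_Return`/`Gdk.KEY_KP_Enter` keyvals and checks `Gdk.ModifierType.SHIFT_MASK` on the event state):

```python
def should_submit(key_name: str, shift_held: bool) -> bool:
    """Return True when the key press should submit the message.

    Plain Enter (or keypad Enter) submits; Shift+Enter falls through so
    the TextView inserts a newline; every other key is ignored here.
    """
    if key_name not in ("Return", "KP_Enter"):
        return False
    return not shift_held  # Shift+Enter inserts a newline instead
```

In the handler, returning `True` from "key-pressed" consumes the event (submit); returning `False` lets the TextView handle it (newline insertion).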
### 3. Conversation Management Commands

#### CommandProcessor Class

```python
class CommandProcessor:
    """Parses and executes slash commands."""

    COMMANDS = {
        "/new": "start_new_conversation",
        "/clear": "start_new_conversation",  # Alias for /new
        "/models": "list_models",
        "/model": "switch_model",
        "/resume": "resume_conversation",
        "/list": "list_conversations",
    }

    def is_command(self, text: str) -> bool:
        """Check if the text starts with a command."""

    def execute(self, text: str) -> CommandResult:
        """Parse and execute the command, returning the result."""
```

#### Command Implementations

**`/new` and `/clear`**
- Save the current conversation with a timestamp-based ID
- Reset the conversation manager to a new default conversation
- Clear the message list UI
- Show a confirmation message

**`/models`**
- Query `ollama_client.list_models()`
- Display a formatted list in the message area
- Highlight the current model

**`/model <name>`**
- Validate the model name against available models
- Update the `_current_model` attribute
- Update the model label in the header
- Show a confirmation message

**`/list`**
- Scan the conversation storage directory
- Display conversations with ID, timestamp, and message count
- Format as a selectable list

**`/resume <id>`**
- Load the specified conversation via ConversationManager
- Clear and repopulate the message list
- Update the window title/header with the conversation ID

#### UI Integration

- Check for commands in `_on_submit()` before processing input as a user message
- Display command results as system messages (distinct styling)
- Provide command help via a `/help` command
- Support tab completion for commands (future enhancement)

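The parse-and-dispatch flow described above can be sketched end to end. This is a simplified, illustrative version: the handler bodies and messages are assumptions (the real handlers would call into SidebarWindow, ConversationManager, and the Ollama client), and only a subset of the command table is shown.

```python
from dataclasses import dataclass


@dataclass
class CommandResult:
    success: bool
    message: str


class CommandProcessor:
    """Simplified sketch: dispatch slash commands to handler methods."""

    def __init__(self):
        self._commands = {
            "/new": self._cmd_new,
            "/clear": self._cmd_new,  # alias for /new
            "/model": self._cmd_model,
        }

    def is_command(self, text: str) -> bool:
        return text.strip().startswith("/")

    def execute(self, text: str) -> CommandResult:
        name, *args = text.strip().split()
        handler = self._commands.get(name)
        if handler is None:
            return CommandResult(False, f"Unknown command: {name}. Try /help")
        return handler(args)

    def _cmd_new(self, args):
        # Real handler: archive current conversation, reset UI.
        return CommandResult(True, "Started a new conversation")

    def _cmd_model(self, args):
        if not args:
            return CommandResult(False, "Usage: /model <name>")
        # Real handler: validate against list_models() first.
        return CommandResult(True, f"Switched model to {args[0]}")
```

`_on_submit()` would call `is_command()` first and route command text to `execute()`, rendering the returned `CommandResult.message` as a system message.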
### 4. Conversation Persistence and Resume

#### ConversationArchive Extension

Extend ConversationManager with multi-conversation capabilities:

```python
class ConversationArchive:
    """Manages multiple conversation files."""

    def __init__(self, storage_dir: Path):
        self._storage_dir = storage_dir

    def list_conversations(self) -> List[ConversationMetadata]:
        """Return metadata for all saved conversations."""

    def archive_conversation(self, conversation_id: str) -> str:
        """Save the conversation with a timestamp-based archive ID."""

    def load_conversation(self, archive_id: str) -> ConversationState:
        """Load an archived conversation by ID."""

    def generate_archive_id(self) -> str:
        """Create a unique ID: YYYYMMDD_HHMMSS_<short-hash>"""
```

#### File Naming Convention

- Active conversation: `default.json`
- Archived conversations: `archive_YYYYMMDD_HHMMSS_<hash>.json`
- Metadata includes: id, created_at, updated_at, message_count, first_message_preview

#### Workflow

**New conversation (`/new` or `/clear`):**

1. User types `/new` or `/clear`
2. The current conversation is saved as an archive file
3. A new ConversationManager instance is created with the "default" ID
4. The UI is cleared and reset
5. A confirmation message shows the archive ID

**Listing (`/list`):**

1. User types `/list`
2. The system scans the storage directory for archive files
3. A formatted list with metadata is displayed

**Resuming (`/resume <id>`):**

1. User types `/resume <id>`
2. ConversationManager loads the specified archive
3. The UI is repopulated with the conversation history
4. The user can continue the conversation

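`generate_archive_id()` can be sketched as below. The random suffix is an assumption: the design only says `<short-hash>` without specifying its input, and a random value is one simple way to keep IDs unique for archives created within the same second.

```python
import time
import uuid


def generate_archive_id() -> str:
    """Create an archive ID in the YYYYMMDD_HHMMSS_<short-hash> format."""
    stamp = time.strftime("%Y%m%d_%H%M%S", time.localtime())
    # Random 8-hex-char suffix keeps IDs unique even within the same second.
    return f"{stamp}_{uuid.uuid4().hex[:8]}"
```

The resulting ID slots directly into the `archive_YYYYMMDD_HHMMSS_<hash>.json` naming convention above.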
### 5. Reasoning Mode Toggle

#### ReasoningController Class

```python
class ReasoningController:
    """Manages reasoning mode state and API parameters."""

    def __init__(self):
        self._enabled = False
        self._preference_file = Path.home() / ".config" / "aisidebar" / "preferences.json"

    def is_enabled(self) -> bool:
        """Check if reasoning mode is active."""

    def toggle(self) -> bool:
        """Toggle reasoning mode and persist the preference."""

    def get_chat_options(self) -> dict:
        """Return Ollama API options for reasoning mode."""
```

#### UI Components

Add a toggle button to the header area:

```python
self._reasoning_toggle = Gtk.ToggleButton(label="🧠 Reasoning")
self._reasoning_toggle.connect("toggled", self._on_reasoning_toggled)
```

#### Ollama Integration

When reasoning mode is enabled, pass additional options to Ollama:

```python
# Standard mode
ollama.chat(model=model, messages=messages)

# Reasoning mode (model-dependent)
ollama.chat(
    model=model,
    messages=messages,
    options={
        "temperature": 0.7,
        # Model-specific reasoning parameters
    },
)
```

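`get_chat_options()` can be sketched as a pure function that builds the keyword arguments for the chat call. The `think` flag is an assumption here, not a guaranteed SDK parameter: which knob enables reasoning is model-dependent, as the section above notes, so treat it as a placeholder for whatever the chosen model actually requires.

```python
def get_chat_options(reasoning_enabled: bool) -> dict:
    """Build keyword options for the chat call.

    Returns a dict suitable for splatting into the chat call, e.g.
    ollama.chat(model=model, messages=messages, **get_chat_options(True)).
    """
    result = {"options": {"temperature": 0.7}}
    if reasoning_enabled:
        # Hypothetical, model-dependent reasoning flag.
        result["think"] = True
    return result
```

Keeping this logic in one place means both the streaming and non-streaming paths stay in sync when reasoning is toggled.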
#### Message Formatting

When reasoning is enabled and the model supports it:

- Display the thinking process in a distinct style (italic, gray text)
- Separate reasoning from the final answer with a visual divider
- Use an expandable/collapsible section for reasoning (optional)

#### Persistence

- Save the reasoning preference to `~/.config/aisidebar/preferences.json`
- Load the preference on startup
- Apply it to all new conversations

## Data Models

### ConversationMetadata

```python
@dataclass
class ConversationMetadata:
    """Metadata for conversation list display."""
    archive_id: str
    created_at: str
    updated_at: str
    message_count: int
    preview: str  # First 50 chars of the first user message
```

### CommandResult

```python
@dataclass
class CommandResult:
    """Result of command execution."""
    success: bool
    message: str
    data: dict | None = None
```

### PreferencesState

```python
@dataclass
class PreferencesState:
    """User preferences for sidebar behavior."""
    reasoning_enabled: bool = False
    default_model: str | None = None
    theme: str = "default"
```

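Deriving the `preview` field can be sketched as a small helper. The design only says "first 50 chars of the first user message"; collapsing internal newlines and appending an ellipsis when truncated are assumptions added for cleaner single-line list display.

```python
def make_preview(first_user_message: str, limit: int = 50) -> str:
    """Derive ConversationMetadata.preview from the first user message.

    Collapses whitespace/newlines so the preview fits on one list row;
    truncation marker is an assumption, not part of the spec.
    """
    text = " ".join(first_user_message.split())
    return text if len(text) <= limit else text[:limit] + "…"
```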
## Error Handling

### Streaming Errors

- **Connection Lost**: Display the partial response plus an error message; allow retry
- **Model Unavailable**: Fall back to non-streaming mode with an error notice
- **Stream Timeout**: Cancel after 60s, show a timeout message

### Command Errors

- **Invalid Command**: Point to the available commands via `/help`
- **Invalid Arguments**: Display the command's usage syntax
- **File Not Found**: Handle missing conversation archives gracefully
- **Permission Errors**: Show a clear error message for storage access issues

### Conversation Loading Errors

- **Corrupted JSON**: Log the error, skip the file, continue with other conversations
- **Missing Files**: Remove from the list, show a warning
- **Version Mismatch**: Attempt migration or show an incompatibility notice

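The stream-timeout policy can be sketched as a generator wrapper around the token stream. This is a minimal sketch: in the sidebar, the partial text already rendered would be kept and an error message appended, and the injectable `clock` exists only to make the deadline testable.

```python
import time
from typing import Iterable, Iterator


class StreamTimeout(Exception):
    """Raised when the stream exceeds the allowed duration."""


def with_timeout(tokens: Iterable[str], limit_s: float,
                 clock=time.monotonic) -> Iterator[str]:
    """Yield tokens, aborting once the overall time limit is exceeded."""
    start = clock()
    for token in tokens:
        if clock() - start > limit_s:
            raise StreamTimeout(f"stream exceeded {limit_s}s")
        yield token
```

The caller catches `StreamTimeout`, keeps the tokens received so far, and renders the timeout notice, which matches the "display partial response + error message" behavior above.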
## Testing Strategy

### Unit Tests

1. **StreamingHandler**
   - Token buffering logic
   - Thread-safe UI updates
   - Stream completion handling

2. **CommandProcessor**
   - Command parsing (valid/invalid formats)
   - Each command execution path
   - Error handling for malformed commands

3. **ConversationArchive**
   - Archive ID generation uniqueness
   - List/load/save operations
   - File system error handling

4. **ReasoningController**
   - Toggle state management
   - Preference persistence
   - API option generation

### Integration Tests

1. **End-to-End Streaming**
   - Mock the Ollama stream response
   - Verify UI updates occur
   - Check final message persistence

2. **Command Workflows**
   - `/new` → archive → `/list` → `/resume` flow
   - Model switching with an active conversation
   - Command execution during streaming (edge case)

3. **Multi-line Input**
   - Text wrapping behavior
   - Submit vs newline key handling
   - Height expansion limits

### Manual Testing Checklist

- [ ] Streamed responses display smoothly without flicker
- [ ] Multi-line input expands and wraps correctly
- [ ] All commands execute successfully
- [ ] Conversation archives persist across restarts
- [ ] Resume loads the correct conversation history
- [ ] Reasoning toggle affects model behavior
- [ ] UI remains responsive during streaming
- [ ] Error states display helpful messages

## Implementation Notes

### GTK4 Threading Considerations

- All UI updates must occur on the main thread via `GLib.idle_add()`
- Use worker threads for Ollama API calls to prevent UI blocking
- Use `GLib.PRIORITY_DEFAULT` for normal updates and `GLib.PRIORITY_HIGH` for critical UI state

### Performance Optimizations

- Buffer tokens (3-5 at a time) to reduce `GLib.idle_add` overhead
- Limit scroll updates to every 100ms during streaming
- Cache conversation metadata to avoid repeated file I/O
- Lazy-load conversation content only when resuming

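The 100ms scroll-update limit can be implemented with a small throttle. This is an illustrative sketch (the `Throttle` name and injectable `clock` are not from the codebase); in the sidebar, the wrapped action would be `_scroll_to_bottom()`.

```python
import time
from typing import Callable


class Throttle:
    """Invoke an action at most once per interval (e.g. scroll every 100ms)."""

    def __init__(self, action: Callable[[], None], interval: float = 0.1,
                 clock=time.monotonic):
        self._action = action
        self._interval = interval
        self._clock = clock
        self._last = float("-inf")  # so the first call always fires

    def __call__(self) -> None:
        now = self._clock()
        if now - self._last >= self._interval:
            self._action()
            self._last = now
```

Calling the throttle once per token batch keeps the view pinned to the bottom without forcing a relayout on every token.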
### Backward Compatibility

- The existing `default.json` conversation file remains compatible
- New archive files use a distinct naming pattern
- The preferences file is optional; defaults work without it
- Graceful degradation if gtk4-layer-shell is unavailable

### Future Enhancements

- Command history with up/down arrow navigation
- Conversation search functionality
- Export conversations to Markdown
- Custom keyboard shortcuts
- Syntax highlighting for code in messages
- Image/file attachment support
**`.kiro/specs/ai-sidebar-enhancements/requirements.md`** (new file, 73 lines)
# Requirements Document

## Introduction

This document outlines the requirements for enhancing the AI sidebar module for Ignis. The AI sidebar is a working MVP that provides an interface for interacting with LLM models. The enhancements focus on improving the user experience through streaming responses, a better UI layout, conversation management commands, conversation persistence, and model reasoning controls.

## Glossary

- **AI Sidebar**: The Ignis module located at /home/pinj/.config/ignis/modules/aisidebar that provides an interface for AI interactions
- **LLM**: Large Language Model, the AI system that generates responses
- **Streaming**: The progressive display of response text as tokens are generated, rather than showing the complete response at once
- **Conversation Session**: A continuous exchange of messages between the user and the LLM
- **Reasoning Mode**: An optional LLM feature that shows the model's thinking process before providing the final answer

## Requirements

### Requirement 1: Streaming Response Display

**User Story:** As a user, I want to see AI responses appear progressively as they are generated, so that I can start reading the response immediately and understand that the system is actively processing my request.

#### Acceptance Criteria

1. WHEN the LLM begins generating a response, THE AI Sidebar SHALL display each token as it is received from the model
2. WHILE the response is being generated, THE AI Sidebar SHALL append new tokens to the existing response text in real time
3. THE AI Sidebar SHALL maintain smooth visual rendering without flickering during token streaming
4. WHEN the response generation is complete, THE AI Sidebar SHALL indicate that streaming has finished

### Requirement 2: Improved Text Input Field

**User Story:** As a user, I want the text input field to expand and wrap text naturally, so that I can compose longer messages comfortably and see my full input clearly.

#### Acceptance Criteria

1. WHEN the user types text that exceeds the width of the input field, THE AI Sidebar SHALL automatically wrap the text to the next line
2. WHILE the user is typing, THE AI Sidebar SHALL expand the input field height to accommodate multiple lines of text
3. THE AI Sidebar SHALL provide a visually comfortable text input area with appropriate padding and spacing
4. THE AI Sidebar SHALL maintain input field usability across different message lengths

### Requirement 3: Conversation Management Commands

**User Story:** As a user, I want to use slash commands to manage conversations and models, so that I can quickly perform common actions without navigating through menus.

#### Acceptance Criteria

1. WHEN the user enters "/new", THE AI Sidebar SHALL save the current conversation and start a fresh conversation session
2. WHEN the user enters "/clear", THE AI Sidebar SHALL save the current conversation and start a fresh conversation session
3. WHEN the user enters a command to list models, THE AI Sidebar SHALL display all available LLM models
4. WHEN the user enters a command to switch models, THE AI Sidebar SHALL change the active LLM model for subsequent messages
5. THE AI Sidebar SHALL recognize slash commands at the beginning of user input and execute them instead of sending them as messages

### Requirement 4: Conversation Persistence and Resume

**User Story:** As a user, I want to save and reopen previous conversations, so that I can continue past discussions or review my conversation history.

#### Acceptance Criteria

1. WHEN the user starts a new conversation using "/new" or "/clear", THE AI Sidebar SHALL save the current conversation to a log file with a unique identifier
2. THE AI Sidebar SHALL store conversation log files in a persistent location accessible across sessions
3. WHEN the user enters a resume command, THE AI Sidebar SHALL display a list of saved conversations with identifiers or timestamps
4. WHEN the user selects a saved conversation to resume, THE AI Sidebar SHALL load the conversation history and allow continuation
5. THE AI Sidebar SHALL preserve all message content, timestamps, and model information in saved conversations

### Requirement 5: Reasoning Mode Toggle

**User Story:** As a user, I want to enable or disable the model's reasoning output, so that I can choose whether to see the thinking process or just the final answer based on my needs.

#### Acceptance Criteria

1. THE AI Sidebar SHALL provide a toggle button or control to enable reasoning mode
2. WHEN reasoning mode is enabled, THE AI Sidebar SHALL request and display the model's thinking process before the final answer
3. WHEN reasoning mode is disabled, THE AI Sidebar SHALL request and display only the final answer without intermediate reasoning
4. THE AI Sidebar SHALL persist the reasoning mode preference across conversation sessions
5. THE AI Sidebar SHALL visually distinguish reasoning content from final answer content when reasoning mode is enabled
**`.kiro/specs/ai-sidebar-enhancements/tasks.md`** (new file, 114 lines)
# Implementation Plan

- [ ] 1. Implement streaming response infrastructure
  - Create a StreamingHandler class in a new file `streaming_handler.py` with token buffering, UI update methods, and stream state management
  - Add a `_handle_stream_token()` method to SidebarWindow that uses GLib.idle_add for thread-safe UI updates
  - Implement token buffering logic (accumulate 3-5 tokens before a UI update) to reduce overhead
  - _Requirements: 1.1, 1.2, 1.3, 1.4_

- [ ] 2. Integrate streaming into SidebarWindow
  - Modify `_request_response()` to use `ollama_client.stream_chat()` instead of the blocking `chat()`
  - Update the worker thread to iterate over the stream and call `_handle_stream_token()` for each chunk
  - Add a streaming state indicator (visual feedback during generation)
  - Handle stream errors and interruptions gracefully with try-except blocks
  - _Requirements: 1.1, 1.2, 1.3, 1.4_

- [ ] 3. Replace the single-line Entry with a multi-line TextView
  - Replace `Gtk.Entry` with a `Gtk.TextView` wrapped in a `Gtk.ScrolledWindow` in `_build_ui()`
  - Configure the text view with word wrapping, min height 40px, max height 200px
  - Implement a key event controller to handle Enter (submit) vs Shift+Enter (newline)
  - Add placeholder text handling for the empty buffer state
  - Update `_on_submit()` to extract text from the TextView buffer instead of the Entry
  - _Requirements: 2.1, 2.2, 2.3, 2.4_

- [ ] 4. Create the command processing system
  - Create `command_processor.py` with a CommandProcessor class
  - Implement command parsing logic with `is_command()` and `execute()` methods
  - Define a CommandResult dataclass for structured command responses
  - Add a command registry dictionary mapping command strings to handler methods
  - _Requirements: 3.1, 3.2, 3.3, 3.4, 3.5_

- [ ] 5. Implement conversation management commands
- [ ] 5.1 Implement the `/new` and `/clear` commands
  - Add a `_cmd_new_conversation()` method to save the current conversation and reset to a fresh state
  - Clear the message list UI and show a confirmation message
  - _Requirements: 3.1, 3.2_

- [ ] 5.2 Implement the `/models` command
  - Add a `_cmd_list_models()` method to query and display available models
  - Format the model list with the current model highlighted
  - _Requirements: 3.3_

- [ ] 5.3 Implement the `/model` command
  - Add a `_cmd_switch_model()` method to validate and switch the active model
  - Update the model label in the header UI
  - _Requirements: 3.4_

- [ ] 5.4 Integrate CommandProcessor into SidebarWindow
  - Add a CommandProcessor instance to SidebarWindow initialization
  - Modify `_on_submit()` to check for commands before processing input as a user message
  - Display command results as system messages with distinct styling
  - _Requirements: 3.5_

- [ ] 6. Implement the conversation archive system
- [ ] 6.1 Create the ConversationArchive class
  - Create `conversation_archive.py` with a ConversationArchive class
  - Implement `list_conversations()` to scan the storage directory and return metadata
  - Implement `archive_conversation()` to save with a timestamp-based ID format
  - Implement `generate_archive_id()` using the YYYYMMDD_HHMMSS_hash pattern
  - Define a ConversationMetadata dataclass
  - _Requirements: 4.1, 4.2_

- [ ] 6.2 Implement conversation loading
  - Add a `load_conversation()` method to ConversationArchive
  - Handle JSON parsing errors and missing files gracefully
  - Return a ConversationState compatible with the existing ConversationManager
  - _Requirements: 4.4_

- [ ] 6.3 Implement the `/list` and `/resume` commands
  - Add `_cmd_list_conversations()` to display archived conversations with metadata
  - Add `_cmd_resume_conversation()` to load and display the selected conversation
  - Update SidebarWindow to repopulate the message list from the loaded conversation
  - _Requirements: 4.3, 4.4, 4.5_

- [ ] 7. Implement the reasoning mode toggle
- [ ] 7.1 Create the ReasoningController class
  - Create `reasoning_controller.py` with a ReasoningController class
  - Implement preference persistence to `~/.config/aisidebar/preferences.json`
  - Add `toggle()`, `is_enabled()`, and `get_chat_options()` methods
  - Define a PreferencesState dataclass
  - _Requirements: 5.4_

- [ ] 7.2 Add the reasoning toggle UI
  - Add a ToggleButton to the header area in `_build_ui()`
  - Connect the toggle signal to an `_on_reasoning_toggled()` callback
  - Update the button state from the persisted preference on startup
  - _Requirements: 5.1_

- [ ] 7.3 Integrate reasoning mode with Ollama calls
  - Modify `_request_response()` to include reasoning options when enabled
  - Pass model-specific parameters via `get_chat_options()`
  - Handle both streaming and non-streaming modes with reasoning
  - _Requirements: 5.2, 5.3_

- [ ] 7.4 Implement reasoning content formatting
  - Add visual distinction for reasoning content (italic, gray text, or an expandable section)
  - Separate reasoning from the final answer with a visual divider
  - Update message rendering to handle reasoning metadata
  - _Requirements: 5.5_

- [ ] 8. Add error handling and edge cases
  - Implement stream timeout handling (60s limit) with cancellation
  - Add connection error recovery for streaming failures
  - Handle command execution during active streaming
  - Add validation for conversation archive file corruption
  - Implement graceful degradation for a missing preferences file
  - _Requirements: 1.4, 3.5, 4.3, 4.4_

- [ ] 9. Polish and integration
  - Add CSS styling for system messages, reasoning content, and the streaming indicator
  - Implement a `/help` command to display available commands
  - Add visual feedback for command execution (loading states)
  - Ensure all UI updates maintain smooth scrolling behavior
  - Test keyboard focus management across all new widgets
  - _Requirements: 1.3, 2.3, 3.5, 5.5_