niri-ai-sidebar/.kiro/specs/ai-sidebar-enhancements/tasks.md
Melvin Ragusa 239242e2fc refactor(aisidebar): restructure project and implement reasoning mode toggle
- Reorganize project structure and file locations
- Add ReasoningController to manage model selection and reasoning mode
- Update design and requirements for reasoning mode toggle
- Implement model switching between Qwen3-4B-Instruct and Qwen3-4B-Thinking models
- Remove deprecated files and consolidate project layout
- Add new steering and specification documentation
- Clean up and remove unnecessary files and directories
- Prepare for enhanced AI sidebar functionality with more flexible model handling
2025-10-26 09:10:31 +01:00

# Implementation Plan
- [x] 1. Implement streaming response infrastructure
  - Create StreamingHandler class in new file `streaming_handler.py` with token buffering, UI update methods, and stream state management
  - Add `_handle_stream_token()` method to SidebarWindow that uses GLib.idle_add for thread-safe UI updates
  - Implement token buffering logic (accumulate 3-5 tokens before each UI update) to reduce overhead
  - _Requirements: 1.1, 1.2, 1.3, 1.4_
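The buffering idea above can be sketched independently of GTK. This is a minimal sketch, not the actual StreamingHandler: `TokenBuffer` and its `flush_callback`/`threshold` names are illustrative, and in the app the callback would wrap `GLib.idle_add` to reach the main loop safely.

```python
class TokenBuffer:
    """Accumulate a few tokens per UI update to cut main-loop round-trips."""

    def __init__(self, flush_callback, threshold=4):
        # In SidebarWindow this would be e.g.
        #   lambda text: GLib.idle_add(self._append_text, text)
        self.flush_callback = flush_callback
        self.threshold = threshold  # flush every 3-5 tokens, per the task
        self._pending = []

    def push(self, token):
        self._pending.append(token)
        if len(self._pending) >= self.threshold:
            self.flush()

    def flush(self):
        """Emit any buffered tokens as one chunk; safe to call when empty."""
        if self._pending:
            self.flush_callback("".join(self._pending))
            self._pending.clear()
```

A final `flush()` after the stream ends delivers any leftover tokens below the threshold.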
- [x] 2. Integrate streaming into SidebarWindow
  - Modify `_request_response()` to use `ollama_client.stream_chat()` instead of blocking `chat()`
  - Update worker thread to iterate over stream and call `_handle_stream_token()` for each chunk
  - Add streaming state indicator (visual feedback during generation)
  - Handle stream errors and interruptions gracefully with try-except blocks
  - _Requirements: 1.1, 1.2, 1.3, 1.4_
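The worker-thread shape described above, as a rough sketch: `stream` stands in for the iterator returned by `ollama_client.stream_chat(...)`, and the three callbacks are hypothetical names for the hooks that in the app would marshal back to GTK via `GLib.idle_add`.

```python
import threading


def run_stream_worker(stream, on_token, on_error, on_done):
    """Consume a token stream off the main thread, reporting via callbacks."""

    def worker():
        try:
            for chunk in stream:
                on_token(chunk)  # app: hand off to _handle_stream_token()
        except Exception as exc:
            # Stream interruptions and connection drops surface here instead
            # of crashing the thread silently.
            on_error(str(exc))
        finally:
            on_done()  # always signal completion so the UI can re-enable input

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```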
- [x] 3. Replace single-line Entry with multi-line TextView
  - Replace `Gtk.Entry` with `Gtk.TextView` wrapped in `Gtk.ScrolledWindow` in `_build_ui()`
  - Configure text view with word wrapping, min height 40px, max height 200px
  - Implement key event controller to handle Enter (submit) vs Shift+Enter (newline)
  - Add placeholder text handling for empty buffer state
  - Update `_on_submit()` to extract text from TextView buffer instead of Entry
  - _Requirements: 2.1, 2.2, 2.3, 2.4_
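The Enter-vs-Shift+Enter decision can be factored into a pure function so it is testable without GTK. The key names mirror GDK's `Return`/`KP_Enter`; in the real key-pressed handler one would compare against `Gdk.KEY_Return` and check `Gdk.ModifierType.SHIFT_MASK`, so treat this as a sketch of the logic only.

```python
# GDK has both a main-keyboard Return and a keypad Enter.
ENTER_KEYS = {"Return", "KP_Enter"}


def should_submit(key_name, shift_pressed):
    """Plain Enter submits the message; Shift+Enter inserts a newline instead."""
    return key_name in ENTER_KEYS and not shift_pressed
```

Returning `True` from the controller callback consumes the key press so the TextView does not also insert a newline on submit.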
- [x] 4. Create command processing system
  - Create `command_processor.py` with CommandProcessor class
  - Implement command parsing logic with `is_command()` and `execute()` methods
  - Define CommandResult dataclass for structured command responses
  - Add command registry dictionary mapping command strings to handler methods
  - _Requirements: 3.1, 3.2, 3.3, 3.4, 3.5_
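A minimal sketch of the registry pattern named above. The `CommandResult` field names (`success`, `message`) are assumptions — the spec only requires a structured result — and only `/help` is registered here; the real registry would map every command from tasks 5-6 to its handler method.

```python
from dataclasses import dataclass


@dataclass
class CommandResult:
    success: bool
    message: str


class CommandProcessor:
    def __init__(self):
        # Registry dictionary: command string -> bound handler method.
        self._registry = {"/help": self._cmd_help}

    def is_command(self, text):
        """Anything starting with '/' is treated as a command, not chat."""
        return text.strip().startswith("/")

    def execute(self, text):
        parts = text.strip().split()
        handler = self._registry.get(parts[0])
        if handler is None:
            return CommandResult(False, f"Unknown command: {parts[0]}")
        return handler(parts[1:])  # pass remaining words as arguments

    def _cmd_help(self, args):
        return CommandResult(True, "Available: " + ", ".join(sorted(self._registry)))
```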
- [x] 5. Implement conversation management commands
  - [x] 5.1 Implement `/new` and `/clear` commands
    - Add `_cmd_new_conversation()` method to save current conversation and reset to fresh state
    - Clear message list UI and show confirmation message
    - _Requirements: 3.1, 3.2_
  - [x] 5.2 Implement `/models` command
    - Add `_cmd_list_models()` method to query and display available models
    - Format model list with current model highlighted
    - _Requirements: 3.3_
  - [x] 5.3 Implement `/model` command
    - Add `_cmd_switch_model()` method to validate and switch active model
    - Update model label in header UI
    - _Requirements: 3.4_
  - [x] 5.4 Integrate CommandProcessor into SidebarWindow
    - Add CommandProcessor instance to SidebarWindow initialization
    - Modify `_on_submit()` to check for commands before processing as user message
    - Display command results as system messages with distinct styling
    - _Requirements: 3.5_
- [x] 6. Implement conversation archive system
  - [x] 6.1 Create ConversationArchive class
    - Create `conversation_archive.py` with ConversationArchive class
    - Implement `list_conversations()` to scan storage directory and return metadata
    - Implement `archive_conversation()` to save with timestamp-based ID format
    - Implement `generate_archive_id()` using YYYYMMDD_HHMMSS_hash pattern
    - Define ConversationMetadata dataclass
    - _Requirements: 4.1, 4.2_
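One way to realise the `YYYYMMDD_HHMMSS_hash` pattern from 6.1, sketched under the assumption that the hash component is derived from the conversation's first message (the spec does not say what is hashed); an eight-hex-character digest is collision-resistant enough for a per-user archive directory.

```python
import hashlib
from datetime import datetime


def generate_archive_id(first_message, now=None):
    """Build a timestamp-based archive ID like 20251026_091031_a1b2c3d4."""
    now = now or datetime.now()
    # Short content hash keeps IDs unique even for same-second archives.
    digest = hashlib.sha256(first_message.encode("utf-8")).hexdigest()[:8]
    return f"{now:%Y%m%d_%H%M%S}_{digest}"
```

Passing `now` explicitly keeps the function deterministic for testing; the app would call it with the default.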
  - [x] 6.2 Implement conversation loading
    - Add `load_conversation()` method to ConversationArchive
    - Handle JSON parsing errors and missing files gracefully
    - Return ConversationState compatible with existing ConversationManager
    - _Requirements: 4.4_
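The defensive-loading part of 6.2 can be sketched as below; the ConversationState construction is elided, and returning `None` for both missing and corrupt files is one reasonable contract (the caller then shows a friendly message rather than a traceback).

```python
import json
from pathlib import Path


def load_conversation_data(path):
    """Read an archived conversation; return None on missing or corrupt files."""
    try:
        return json.loads(Path(path).read_text(encoding="utf-8"))
    except (FileNotFoundError, json.JSONDecodeError):
        return None
```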
  - [x] 6.3 Implement `/list` and `/resume` commands
    - Add `_cmd_list_conversations()` to display archived conversations with metadata
    - Add `_cmd_resume_conversation()` to load and display selected conversation
    - Update SidebarWindow to repopulate message list from loaded conversation
    - _Requirements: 4.3, 4.4, 4.5_
- [x] 7. Implement reasoning mode toggle
  - [x] 7.1 Create ReasoningController class
    - Create `reasoning_controller.py` with ReasoningController class
    - Implement preference persistence to `~/.config/aisidebar/preferences.json`
    - Add `toggle()`, `is_enabled()`, and `get_chat_options()` methods
    - Define PreferencesState dataclass
    - _Requirements: 5.4_
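A sketch of the persistence half of 7.1, writing through on every `toggle()` so the state survives restarts. The config path matches the one named in the task; the `reasoning_enabled` JSON key is an assumption, and `get_chat_options()` (where the model-specific parameters from 7.3 would be assembled) is omitted.

```python
import json
from pathlib import Path


class ReasoningController:
    def __init__(self, config_path="~/.config/aisidebar/preferences.json"):
        self._path = Path(config_path).expanduser()
        # Missing or corrupt preferences fall back to reasoning disabled.
        self._enabled = self._load().get("reasoning_enabled", False)

    def _load(self):
        try:
            return json.loads(self._path.read_text(encoding="utf-8"))
        except (FileNotFoundError, json.JSONDecodeError):
            return {}

    def is_enabled(self):
        return self._enabled

    def toggle(self):
        """Flip the mode and persist immediately; return the new state."""
        self._enabled = not self._enabled
        self._path.parent.mkdir(parents=True, exist_ok=True)
        self._path.write_text(
            json.dumps({"reasoning_enabled": self._enabled}), encoding="utf-8"
        )
        return self._enabled
```

On startup (7.2), the toggle button's state is seeded from `is_enabled()` before connecting its signal, so restoring the preference does not fire the callback.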
  - [x] 7.2 Add reasoning toggle UI
    - Add ToggleButton to header area in `_build_ui()`
    - Connect toggle signal to `_on_reasoning_toggled()` callback
    - Update button state from persisted preference on startup
    - _Requirements: 5.1_
  - [x] 7.3 Integrate reasoning mode with Ollama calls
    - Modify `_request_response()` to include reasoning options when enabled
    - Pass model-specific parameters via `get_chat_options()`
    - Handle both streaming and non-streaming modes with reasoning
    - _Requirements: 5.2, 5.3_
  - [x] 7.4 Implement reasoning content formatting
    - Add visual distinction for reasoning content (italic, gray text, or expandable section)
    - Separate reasoning from final answer with visual divider
    - Update message rendering to handle reasoning metadata
    - _Requirements: 5.5_
- [x] 8. Implement graceful Ollama unavailability handling
  - [x] 8.1 Update OllamaClient initialization
    - Modify `__init__()` to never raise exceptions during initialization
    - Add connection check that sets internal availability flag
    - Update `list_models()` to return empty list instead of raising on connection failure
    - Update `chat()` and `stream_chat()` to return error messages instead of raising
    - _Requirements: 6.1, 6.3, 6.5_
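The never-raise contract from 8.1, sketched with a pluggable `probe` callable standing in for the HTTP request the real client would make to the Ollama API (so the logic runs without a server):

```python
class SafeOllamaClient:
    """Client whose constructor and queries degrade instead of raising."""

    def __init__(self, probe):
        self.available = False
        try:
            # Connection check sets the internal availability flag.
            self.available = bool(probe())
        except Exception:
            self.available = False  # never propagate from __init__

    def list_models(self):
        if not self.available:
            return []  # empty list instead of raising on connection failure
        return self._fetch_models()

    def _fetch_models(self):
        raise NotImplementedError  # real HTTP call elided in this sketch
```

`chat()` and `stream_chat()` would follow the same pattern, yielding an error-message string when the flag is down.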
  - [x] 8.2 Create OllamaAvailabilityMonitor
    - Create `ollama_monitor.py` with OllamaAvailabilityMonitor class
    - Implement periodic availability checking using GLib.timeout_add (30s interval)
    - Add callback mechanism to notify UI of state changes
    - Ensure checks are non-blocking and don't impact UI responsiveness
    - _Requirements: 6.4_
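The state-change notification logic of 8.2, decoupled from GLib so it can be exercised directly. In the app, `check_once` would be registered with `GLib.timeout_add(30_000, monitor.check_once)`, and GLib keeps a timeout scheduled only while its callback returns `True`; the class name and `probe`/`on_change` parameters here are illustrative.

```python
class AvailabilityMonitor:
    """Poll an availability probe and report only state *transitions*."""

    def __init__(self, probe, on_change):
        self._probe = probe        # cheap, non-blocking availability check
        self._on_change = on_change
        self._last = None          # unknown until the first check

    def check_once(self):
        try:
            state = bool(self._probe())
        except Exception:
            state = False          # a failing probe means "unavailable"
        if state != self._last:    # notify the UI only when state flips
            self._last = state
            self._on_change(state)
        return True                # keep the GLib timeout alive
```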
  - [x] 8.3 Update SidebarWindow for Ollama unavailability
    - Initialize OllamaAvailabilityMonitor in SidebarWindow
    - Display "Ollama not running" status message when unavailable at startup
    - Update model label to show connection status
    - Disable input field when Ollama unavailable, show helpful message
    - Add callback to re-enable features when Ollama becomes available
    - _Requirements: 6.1, 6.2, 6.4_
  - [x] 8.4 Add user-friendly error messages
    - Display clear instructions when user tries to chat without Ollama
    - Show notification when Ollama connection is restored
    - Update all command handlers to check Ollama availability
    - Provide actionable error messages (e.g., "Start Ollama with: ollama serve")
    - _Requirements: 6.2, 6.3_
- [ ] 9. Add error handling and edge cases
  - Implement stream timeout handling (60s limit) with cancellation
  - Add connection error recovery for streaming failures
  - Handle command execution during active streaming
  - Add validation for conversation archive file corruption
  - Implement graceful degradation for missing preferences file
  - _Requirements: 1.4, 3.5, 4.3, 4.4_
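One possible shape for the 60s stream limit above: wrap the token iterator and stop consuming once a deadline passes. The injectable `clock` parameter is a testing convenience, not part of any planned API; a chunk that arrives after the deadline is dropped and the stream is abandoned.

```python
import time


def iter_with_timeout(stream, limit_s=60.0, clock=time.monotonic):
    """Yield chunks from `stream` until `limit_s` seconds have elapsed."""
    deadline = clock() + limit_s
    for chunk in stream:
        if clock() > deadline:
            return  # cancel: stop pulling from the stream entirely
        yield chunk
```

The consuming worker then treats early exhaustion of the wrapped iterator as a cancelled stream and surfaces a timeout message to the user.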
- [ ] 10. Polish and integration
  - Add CSS styling for system messages, reasoning content, and streaming indicator
  - Implement `/help` command to display available commands
  - Add visual feedback for command execution (loading states)
  - Ensure all UI updates maintain smooth scrolling behavior
  - Test keyboard focus management across all new widgets
  - Add status indicator in UI showing Ollama connection state
  - _Requirements: 1.3, 2.3, 3.5, 5.5, 6.2_