# Implementation Plan
- 1. Implement streaming response infrastructure
  - Create StreamingHandler class in new file `streaming_handler.py` with token buffering, UI update methods, and stream state management
  - Add `_handle_stream_token()` method to SidebarWindow that uses `GLib.idle_add` for thread-safe UI updates
  - Implement token buffering logic (accumulate 3-5 tokens before UI update) to reduce overhead; see the sketch below
  - Requirements: 1.1, 1.2, 1.3, 1.4
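
A minimal sketch of what StreamingHandler could look like, assuming a GTK 4 / PyGObject app; the `on_text` callback name and the 4-token flush threshold are illustrative, not fixed by this plan:

```python
# streaming_handler.py -- minimal sketch, not a final interface
from gi.repository import GLib


class StreamingHandler:
    """Buffers streamed tokens and flushes them to the UI on the GTK main loop."""

    def __init__(self, on_text, buffer_size=4):
        self.on_text = on_text          # UI callback that appends text to the current message
        self.buffer_size = buffer_size  # flush after this many tokens (3-5 per the plan)
        self._buffer = []
        self.active = False

    def start(self):
        self._buffer.clear()
        self.active = True

    def handle_token(self, token: str):
        """Called from the worker thread for each streamed token."""
        self._buffer.append(token)
        if len(self._buffer) >= self.buffer_size:
            self.flush()

    def flush(self):
        if self._buffer:
            text = "".join(self._buffer)
            self._buffer.clear()
            # GLib.idle_add schedules the callback on the main thread,
            # keeping all widget mutation off the worker thread
            GLib.idle_add(self.on_text, text)

    def finish(self):
        self.flush()
        self.active = False
```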
- 2. Integrate streaming into SidebarWindow
  - Modify `_request_response()` to use `ollama_client.stream_chat()` instead of blocking `chat()`
  - Update worker thread to iterate over the stream and call `_handle_stream_token()` for each chunk, as sketched below
  - Add streaming state indicator (visual feedback during generation)
  - Handle stream errors and interruptions gracefully with try-except blocks
  - Requirements: 1.1, 1.2, 1.3, 1.4
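
How the worker thread could drive the handler (a sketch; `stream_chat()` yielding plain text chunks and the `_show_error()` helper are assumptions about this codebase):

```python
# Sketch of the streaming worker inside SidebarWindow
import threading

from gi.repository import GLib


def _request_response(self, messages):
    def worker():
        self.streaming_handler.start()
        try:
            for chunk in self.ollama_client.stream_chat(messages):
                self.streaming_handler.handle_token(chunk)
        except Exception as exc:
            # Surface stream errors in the UI instead of letting the thread die silently
            GLib.idle_add(self._show_error, str(exc))
        finally:
            self.streaming_handler.finish()

    threading.Thread(target=worker, daemon=True).start()
```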
- 3. Replace single-line Entry with multi-line TextView
  - Replace `Gtk.Entry` with `Gtk.TextView` wrapped in `Gtk.ScrolledWindow` in `_build_ui()`
  - Configure text view with word wrapping, min height 40px, max height 200px
  - Implement key event controller to handle Enter (submit) vs Shift+Enter (newline); see the sketch below
  - Add placeholder text handling for empty buffer state
  - Update `_on_submit()` to extract text from the TextView buffer instead of the Entry
  - Requirements: 2.1, 2.2, 2.3, 2.4
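
One way to wire the Enter/Shift+Enter behavior with a GTK 4 key controller (widget and handler names are illustrative):

```python
# Sketch of the multi-line input and its key handling (GTK 4)
from gi.repository import Gdk, Gtk


def _build_input_area(self):
    self.input_view = Gtk.TextView(wrap_mode=Gtk.WrapMode.WORD_CHAR)
    scrolled = Gtk.ScrolledWindow(min_content_height=40, max_content_height=200)
    scrolled.set_child(self.input_view)

    controller = Gtk.EventControllerKey()
    controller.connect("key-pressed", self._on_key_pressed)
    self.input_view.add_controller(controller)
    return scrolled


def _on_key_pressed(self, controller, keyval, keycode, state):
    if keyval in (Gdk.KEY_Return, Gdk.KEY_KP_Enter):
        if state & Gdk.ModifierType.SHIFT_MASK:
            return False  # let Shift+Enter fall through and insert a newline
        self._on_submit()
        return True       # consume plain Enter so no newline is inserted
    return False
```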
- 4. Create command processing system
  - Create `command_processor.py` with CommandProcessor class
  - Implement command parsing logic with `is_command()` and `execute()` methods
  - Define CommandResult dataclass for structured command responses
  - Add command registry dictionary mapping command strings to handler methods (see the skeleton below)
  - Requirements: 3.1, 3.2, 3.3, 3.4, 3.5
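
A skeleton for the command system (the CommandResult fields and handler signatures are assumptions; the handlers themselves are filled in by tasks 5.1-5.3 and 6.3):

```python
# command_processor.py -- skeleton
from dataclasses import dataclass


@dataclass
class CommandResult:
    success: bool
    message: str  # shown to the user as a system message


class CommandProcessor:
    def __init__(self, sidebar):
        self.sidebar = sidebar
        # Registry: command string -> handler method
        self.commands = {
            "/new": self._cmd_new_conversation,
            "/clear": self._cmd_new_conversation,
            "/models": self._cmd_list_models,
            "/model": self._cmd_switch_model,
            "/list": self._cmd_list_conversations,
            "/resume": self._cmd_resume_conversation,
            "/help": self._cmd_help,
        }

    def is_command(self, text: str) -> bool:
        return text.strip().startswith("/")

    def execute(self, text: str) -> CommandResult:
        parts = text.strip().split(maxsplit=1)
        handler = self.commands.get(parts[0])
        if handler is None:
            return CommandResult(False, f"Unknown command: {parts[0]}")
        return handler(parts[1] if len(parts) > 1 else "")

    def _cmd_help(self, args: str) -> CommandResult:
        return CommandResult(True, "Available commands: " + ", ".join(sorted(self.commands)))

    # Stubs -- implemented by later tasks
    def _cmd_new_conversation(self, args): ...
    def _cmd_list_models(self, args): ...
    def _cmd_switch_model(self, args): ...
    def _cmd_list_conversations(self, args): ...
    def _cmd_resume_conversation(self, args): ...
```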
- 5. Implement conversation management commands
- 5.1 Implement `/new` and `/clear` commands
  - Add `_cmd_new_conversation()` method to save current conversation and reset to fresh state
  - Clear message list UI and show confirmation message
  - Requirements: 3.1, 3.2
- 5.2 Implement `/models` command
  - Add `_cmd_list_models()` method to query and display available models
  - Format model list with current model highlighted
  - Requirements: 3.3
- 5.3 Implement `/model` command
  - Add `_cmd_switch_model()` method to validate and switch active model
  - Update model label in header UI
  - Requirements: 3.4
- 5.4 Integrate CommandProcessor into SidebarWindow
  - Add CommandProcessor instance to SidebarWindow initialization
  - Modify `_on_submit()` to check for commands before processing as user message; see the sketch below
  - Display command results as system messages with distinct styling
  - Requirements: 3.5
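
The `_on_submit()` check could look like this (a sketch; `_append_system_message()` and `_send_user_message()` are illustrative helper names):

```python
# Sketch of command routing in SidebarWindow._on_submit()
def _on_submit(self):
    buffer = self.input_view.get_buffer()
    text = buffer.get_text(buffer.get_start_iter(), buffer.get_end_iter(), False).strip()
    if not text:
        return
    buffer.set_text("")

    if self.command_processor.is_command(text):
        result = self.command_processor.execute(text)
        self._append_system_message(result.message)  # rendered with distinct styling
    else:
        self._send_user_message(text)
```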
- 6. Implement conversation archive system
- 6.1 Create ConversationArchive class
  - Create `conversation_archive.py` with ConversationArchive class
  - Implement `list_conversations()` to scan storage directory and return metadata
  - Implement `archive_conversation()` to save with timestamp-based ID format
  - Implement `generate_archive_id()` using YYYYMMDD_HHMMSS_hash pattern; see the sketch below
  - Define ConversationMetadata dataclass
  - Requirements: 4.1, 4.2
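
A sketch of the ID scheme (hashing the first user message is an assumption; any stable content hash fits the YYYYMMDD_HHMMSS_hash pattern):

```python
# Sketch of generate_archive_id()
import hashlib
from datetime import datetime


def generate_archive_id(first_message: str) -> str:
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    digest = hashlib.sha256(first_message.encode("utf-8")).hexdigest()[:8]
    return f"{timestamp}_{digest}"

# e.g. generate_archive_id("hello") -> "20250101_093042_2cf24dba"
```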
- 6.2 Implement conversation loading
  - Add `load_conversation()` method to ConversationArchive
  - Handle JSON parsing errors and missing files gracefully, as sketched below
  - Return ConversationState compatible with existing ConversationManager
  - Requirements: 4.4
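
Defensive loading might look like this (a sketch; `ConversationState(**data)` assumes the archive stores the dataclass fields as a flat JSON object, and `self.storage_dir` / `self.logger` are assumed attributes):

```python
# Sketch of ConversationArchive.load_conversation()
import json


def load_conversation(self, archive_id: str):
    path = self.storage_dir / f"{archive_id}.json"
    try:
        with path.open(encoding="utf-8") as f:
            data = json.load(f)
        return ConversationState(**data)
    except FileNotFoundError:
        return None  # caller reports "conversation not found"
    except (json.JSONDecodeError, TypeError) as exc:
        # Corrupt or incompatible archive: fail soft rather than crash
        self.logger.warning("Could not load %s: %s", path, exc)
        return None
```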
- 6.3 Implement `/list` and `/resume` commands
  - Add `_cmd_list_conversations()` to display archived conversations with metadata
  - Add `_cmd_resume_conversation()` to load and display selected conversation
  - Update SidebarWindow to repopulate message list from loaded conversation
  - Requirements: 4.3, 4.4, 4.5
- 7. Implement reasoning mode toggle
- 7.1 Create ReasoningController class
  - Create `reasoning_controller.py` with ReasoningController class
  - Implement preference persistence to `~/.config/aisidebar/preferences.json`
  - Add `toggle()`, `is_enabled()`, and `get_chat_options()` methods (sketched below)
  - Define PreferencesState dataclass
  - Requirements: 5.4
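
A minimal sketch of the controller (the PreferencesState fields and the contents of the chat-options dict are assumptions; in particular, whether the Ollama call accepts a `think` flag depends on the installed client version):

```python
# reasoning_controller.py -- minimal sketch
import json
from dataclasses import asdict, dataclass
from pathlib import Path

PREFS_PATH = Path.home() / ".config" / "aisidebar" / "preferences.json"


@dataclass
class PreferencesState:
    reasoning_enabled: bool = False


class ReasoningController:
    def __init__(self):
        self.state = self._load()

    def _load(self) -> PreferencesState:
        try:
            return PreferencesState(**json.loads(PREFS_PATH.read_text()))
        except (FileNotFoundError, json.JSONDecodeError, TypeError):
            return PreferencesState()  # missing/corrupt file: fall back to defaults

    def _save(self):
        PREFS_PATH.parent.mkdir(parents=True, exist_ok=True)
        PREFS_PATH.write_text(json.dumps(asdict(self.state)))

    def toggle(self) -> bool:
        self.state.reasoning_enabled = not self.state.reasoning_enabled
        self._save()
        return self.state.reasoning_enabled

    def is_enabled(self) -> bool:
        return self.state.reasoning_enabled

    def get_chat_options(self) -> dict:
        # Extra kwargs merged into the chat call when reasoning is on;
        # the "think" key is an assumption about the Ollama client
        return {"think": True} if self.state.reasoning_enabled else {}
```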
- 7.2 Add reasoning toggle UI
  - Add ToggleButton to header area in `_build_ui()`
  - Connect toggle signal to `_on_reasoning_toggled()` callback
  - Update button state from persisted preference on startup
  - Requirements: 5.1
- 7.3 Integrate reasoning mode with Ollama calls
  - Modify `_request_response()` to include reasoning options when enabled
  - Pass model-specific parameters via `get_chat_options()`, as sketched below
  - Handle both streaming and non-streaming modes with reasoning
  - Requirements: 5.2, 5.3
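
The wiring itself stays small (a sketch; whether OllamaClient forwards extra kwargs to the underlying chat call, and the `streaming_enabled` flag, are assumptions):

```python
# Sketch inside _request_response(): merge reasoning options into the call
options = self.reasoning_controller.get_chat_options()
if self.streaming_enabled:
    stream = self.ollama_client.stream_chat(messages, **options)
else:
    reply = self.ollama_client.chat(messages, **options)
```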
- 7.4 Implement reasoning content formatting
  - Add visual distinction for reasoning content (italic, gray text, or expandable section)
  - Separate reasoning from final answer with visual divider
  - Update message rendering to handle reasoning metadata
  - Requirements: 5.5
- 8. Implement graceful Ollama unavailability handling
- 8.1 Update OllamaClient initialization
  - Modify `__init__()` to never raise exceptions during initialization
  - Add connection check that sets internal availability flag; see the sketch below
  - Update `list_models()` to return empty list instead of raising on connection failure
  - Update `chat()` and `stream_chat()` to return error messages instead of raising
  - Requirements: 6.1, 6.3, 6.5
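
A sketch of the non-raising client (using `ollama.Client().list()` as the availability probe, and the shape of its response, are assumptions about the installed `ollama` package):

```python
# Sketch of a never-raising OllamaClient
import ollama


class OllamaClient:
    def __init__(self, host="http://localhost:11434"):
        self._client = ollama.Client(host=host)
        self.available = self.check_connection()  # __init__ itself never raises

    def check_connection(self) -> bool:
        try:
            self._client.list()
            return True
        except Exception:
            return False

    def list_models(self) -> list:
        try:
            return [m.model for m in self._client.list().models]
        except Exception:
            self.available = False
            return []  # empty list instead of an exception
```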
- 8.2 Create OllamaAvailabilityMonitor
  - Create `ollama_monitor.py` with OllamaAvailabilityMonitor class
  - Implement periodic availability checking using `GLib.timeout_add` (30s interval), as sketched below
  - Add callback mechanism to notify UI of state changes
  - Ensure checks are non-blocking and don't impact UI responsiveness
  - Requirements: 6.4
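
A sketch of the monitor; the probe runs in a short-lived worker thread so the GLib timeout callback never blocks the main loop (`check_connection()` is the availability probe from task 8.1):

```python
# ollama_monitor.py -- sketch
import threading

from gi.repository import GLib


class OllamaAvailabilityMonitor:
    def __init__(self, client, on_change, interval_s=30):
        self.client = client
        self.on_change = on_change  # called on the main loop with True/False
        self._last_state = None
        GLib.timeout_add_seconds(interval_s, self._schedule_check)

    def _schedule_check(self):
        threading.Thread(target=self._check, daemon=True).start()
        return True  # returning True keeps the GLib timeout active

    def _check(self):
        available = self.client.check_connection()
        if available != self._last_state:
            self._last_state = available
            # Notify the UI on the main thread only when the state flips
            GLib.idle_add(self.on_change, available)
```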
- 8.3 Update SidebarWindow for Ollama unavailability
  - Initialize OllamaAvailabilityMonitor in SidebarWindow
  - Display "Ollama not running" status message when unavailable at startup
  - Update model label to show connection status
  - Disable input field when Ollama unavailable, show helpful message
  - Add callback to re-enable features when Ollama becomes available
  - Requirements: 6.1, 6.2, 6.4
- 8.4 Add user-friendly error messages
  - Display clear instructions when user tries to chat without Ollama
  - Show notification when Ollama connection is restored
  - Update all command handlers to check Ollama availability
  - Provide actionable error messages (e.g., "Start Ollama with: ollama serve")
  - Requirements: 6.2, 6.3
- 9. Add error handling and edge cases
  - Implement stream timeout handling (60s limit) with cancellation; see the sketch below
  - Add connection error recovery for streaming failures
  - Handle command execution during active streaming
  - Add validation for conversation archive file corruption
  - Implement graceful degradation for missing preferences file
  - Requirements: 1.4, 3.5, 4.3, 4.4
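
One simple way to enforce the 60s limit is a deadline check in the streaming loop (a sketch; chunk-granularity cancellation means a fully stalled connection is only detected when the iterator yields or raises):

```python
# Sketch of stream timeout handling in the worker loop
import time

STREAM_TIMEOUT_S = 60


def _stream_with_timeout(self, messages):
    deadline = time.monotonic() + STREAM_TIMEOUT_S
    for chunk in self.ollama_client.stream_chat(messages):
        if time.monotonic() > deadline:
            self.streaming_handler.finish()
            raise TimeoutError("Response generation exceeded 60s; stream cancelled")
        self.streaming_handler.handle_token(chunk)
```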
- 10. Polish and integration
  - Add CSS styling for system messages, reasoning content, and streaming indicator
  - Implement `/help` command to display available commands
  - Add visual feedback for command execution (loading states)
  - Ensure all UI updates maintain smooth scrolling behavior
  - Test keyboard focus management across all new widgets
  - Add status indicator in UI showing Ollama connection state
  - Requirements: 1.3, 2.3, 3.5, 5.5, 6.2