# Requirements Document

## Introduction

This document outlines the requirements for enhancing the AI sidebar module for Ignis. The AI sidebar is a working MVP that provides an interface for interacting with LLMs. The enhancements focus on improving the user experience through streaming responses, a better UI layout, conversation management commands, conversation persistence, and model reasoning controls.

## Glossary

- **AI Sidebar**: The Ignis module located at `/home/pinj/.config/ignis/modules/aisidebar` that provides an interface for AI interactions
- **LLM**: Large Language Model - the AI system that generates responses
- **Streaming**: The progressive display of response text as tokens are generated, rather than showing the complete response at once
- **Conversation Session**: A continuous exchange of messages between the user and the LLM
- **Reasoning Mode**: An optional LLM feature that shows the model's thinking process before providing the final answer

## Requirements

### Requirement 1: Streaming Response Display

**User Story:** As a user, I want to see AI responses appear progressively as they are generated, so that I can start reading the response immediately and understand that the system is actively processing my request.

#### Acceptance Criteria

1. WHEN the LLM begins generating a response, THE AI Sidebar SHALL display each token as it is received from the model
2. WHILE the response is being generated, THE AI Sidebar SHALL append new tokens to the existing response text in real time
3. THE AI Sidebar SHALL maintain smooth visual rendering without flickering during token streaming
4. WHEN response generation is complete, THE AI Sidebar SHALL indicate that streaming has finished

### Requirement 2: Improved Text Input Field

**User Story:** As a user, I want the text input field to expand and wrap text naturally, so that I can compose longer messages comfortably and see my full input clearly.

#### Acceptance Criteria
1. WHEN the user types text that exceeds the width of the input field, THE AI Sidebar SHALL automatically wrap the text to the next line
2. WHILE the user is typing, THE AI Sidebar SHALL expand the input field height to accommodate multiple lines of text
3. THE AI Sidebar SHALL provide a visually comfortable text input area with appropriate padding and spacing
4. THE AI Sidebar SHALL maintain input field usability across different message lengths

### Requirement 3: Conversation Management Commands

**User Story:** As a user, I want to use slash commands to manage conversations and models, so that I can quickly perform common actions without navigating through menus.

#### Acceptance Criteria

1. WHEN the user enters "/new", THE AI Sidebar SHALL save the current conversation and start a fresh conversation session
2. WHEN the user enters "/clear", THE AI Sidebar SHALL save the current conversation and start a fresh conversation session
3. WHEN the user enters a command to list models, THE AI Sidebar SHALL display all available LLM models
4. WHEN the user enters a command to switch models, THE AI Sidebar SHALL change the active LLM model for subsequent messages
5. THE AI Sidebar SHALL recognize slash commands at the beginning of user input and execute them instead of sending them as messages

### Requirement 4: Conversation Persistence and Resume

**User Story:** As a user, I want to save and reopen previous conversations, so that I can continue past discussions or review my conversation history.

#### Acceptance Criteria

1. WHEN the user starts a new conversation using "/new" or "/clear", THE AI Sidebar SHALL save the current conversation to a log file with a unique identifier
2. THE AI Sidebar SHALL store conversation log files in a persistent location accessible across sessions
3. WHEN the user enters a resume command, THE AI Sidebar SHALL display a list of saved conversations with identifiers or timestamps
4. WHEN the user selects a saved conversation to resume, THE AI Sidebar SHALL load the conversation history and allow continuation
5. THE AI Sidebar SHALL preserve all message content, timestamps, and model information in saved conversations

### Requirement 5: Reasoning Mode Toggle

**User Story:** As a user, I want to enable or disable the model's reasoning output, so that I can choose whether to see the thinking process or just the final answer based on my needs.

#### Acceptance Criteria

1. THE AI Sidebar SHALL provide a toggle button or control to enable reasoning mode
2. WHEN reasoning mode is enabled, THE AI Sidebar SHALL request and display the model's thinking process before the final answer
3. WHEN reasoning mode is disabled, THE AI Sidebar SHALL request and display only the final answer without intermediate reasoning
4. THE AI Sidebar SHALL persist the reasoning mode preference across conversation sessions
5. THE AI Sidebar SHALL visually distinguish reasoning content from final answer content when reasoning mode is enabled

### Requirement 6: Graceful Ollama Unavailability Handling

**User Story:** As a user, I want the AI Sidebar to start and function even when Ollama is not running, so that Ignis can launch successfully and I can start Ollama when I'm ready to use the AI features.

#### Acceptance Criteria

1. WHEN Ollama is not running at startup, THE AI Sidebar SHALL initialize successfully without blocking Ignis startup
2. WHEN Ollama is unavailable, THE AI Sidebar SHALL display a clear message instructing the user to start Ollama
3. WHEN the user attempts to send a message while Ollama is unavailable, THE AI Sidebar SHALL display a helpful error message instead of crashing
4. WHEN Ollama becomes available after startup, THE AI Sidebar SHALL detect the availability and enable chat functionality without requiring a restart
5. THE AI Sidebar SHALL handle Ollama connection failures gracefully during model listing, switching, and chat operations
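The non-blocking availability check in Requirement 6 could be implemented along these lines. This is a minimal sketch, not part of the requirements: it assumes Ollama's default HTTP endpoint at `127.0.0.1:11434` and its `/api/tags` model-listing route, and the function names are illustrative.

```python
import json
import urllib.error
import urllib.request

# Assumption: Ollama's default local endpoint; "/api/tags" is its
# model-listing route.
OLLAMA_URL = "http://127.0.0.1:11434"


def is_ollama_available(base_url: str = OLLAMA_URL, timeout: float = 1.0) -> bool:
    """Return True if Ollama responds, False otherwise; never raises.

    A non-raising check lets the sidebar initialize even when Ollama is
    down (criterion 1) and be re-polled later to detect recovery
    (criterion 4) without requiring a restart.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def list_models(base_url: str = OLLAMA_URL, timeout: float = 5.0) -> list:
    """Return available model names, or an empty list when Ollama is down.

    Returning an empty list instead of propagating the connection error
    keeps model listing from crashing the sidebar (criterion 5); the UI
    can show a "start Ollama" message when the list is empty (criterion 2).
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []
```

Polling `is_ollama_available()` on a timer (or before each send) is one simple way to satisfy criterion 4; the same try/except pattern would wrap the chat and model-switch calls.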