# Requirements Document
## Introduction
This document outlines the requirements for enhancing the AI sidebar module for Ignis. The AI sidebar is a working MVP that provides an interface for interacting with LLM models. The enhancements focus on improving user experience through streaming responses, better UI layout, conversation management commands, conversation persistence, and model reasoning controls.
## Glossary
- **AI Sidebar**: The Ignis module located at /home/pinj/.config/ignis/modules/aisidebar that provides an interface for AI interactions
- **LLM**: Large Language Model, the AI system that generates responses
- **Streaming**: The progressive display of response text as tokens are generated, rather than showing the complete response at once
- **Conversation Session**: A continuous exchange of messages between the user and the LLM
- **Reasoning Mode**: An optional LLM feature that shows the model's thinking process before providing the final answer
## Requirements
### Requirement 1: Streaming Response Display

**User Story:** As a user, I want to see AI responses appear progressively as they are generated, so that I can start reading the response immediately and understand that the system is actively processing my request.

#### Acceptance Criteria
- WHEN the LLM begins generating a response, THE AI Sidebar SHALL display each token as it is received from the model
- WHILE the response is being generated, THE AI Sidebar SHALL append new tokens to the existing response text in real-time
- THE AI Sidebar SHALL maintain smooth visual rendering without flickering during token streaming
- WHEN the response generation is complete, THE AI Sidebar SHALL indicate that streaming has finished
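At its core, the streaming behavior reduces to an append loop over a token iterator. A minimal sketch, assuming a callback-based UI update (the `stream_response` helper and its callbacks are illustrative, not part of the existing module):

```python
from typing import Callable, Iterable

def stream_response(
    tokens: Iterable[str],
    on_update: Callable[[str], None],
    on_done: Callable[[str], None],
) -> None:
    """Accumulate tokens and push the growing text to the UI as each arrives."""
    buffer: list = []
    for token in tokens:
        buffer.append(token)
        on_update("".join(buffer))  # e.g. set the response label's text
    on_done("".join(buffer))        # signal that streaming has finished
```

In the real module the `tokens` iterable would come from the Ollama streaming API, and `on_update` would need to be marshalled onto the GTK main loop (for example via `GLib.idle_add`) to keep rendering smooth and flicker-free.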
### Requirement 2: Improved Text Input Field

**User Story:** As a user, I want the text input field to expand and wrap text naturally, so that I can compose longer messages comfortably and see my full input clearly.

#### Acceptance Criteria
- WHEN the user types text that exceeds the width of the input field, THE AI Sidebar SHALL automatically wrap the text to the next line
- WHILE the user is typing, THE AI Sidebar SHALL expand the input field height to accommodate multiple lines of text
- THE AI Sidebar SHALL provide a visually comfortable text input area with appropriate padding and spacing
- THE AI Sidebar SHALL maintain input field usability across different message lengths
### Requirement 3: Conversation Management Commands

**User Story:** As a user, I want to use slash commands to manage conversations and models, so that I can quickly perform common actions without navigating through menus.

#### Acceptance Criteria
- WHEN the user enters "/new", THE AI Sidebar SHALL save the current conversation and start a fresh conversation session
- WHEN the user enters "/clear", THE AI Sidebar SHALL save the current conversation and start a fresh conversation session
- WHEN the user enters a command to list models, THE AI Sidebar SHALL display all available LLM models
- WHEN the user enters a command to switch models, THE AI Sidebar SHALL change the active LLM model for subsequent messages
- THE AI Sidebar SHALL recognize slash commands at the beginning of user input and execute them instead of sending them as messages
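Recognizing commands at the start of input can be kept separate from executing them. A minimal parsing sketch (the `parse_command` helper is hypothetical; the document names only `/new` and `/clear`, so other command names are left unspecified):

```python
from typing import List, Optional, Tuple

def parse_command(text: str) -> Optional[Tuple[str, List[str]]]:
    """Return (name, args) when the input starts with a slash command, else None."""
    stripped = text.strip()
    if not stripped.startswith("/"):
        return None            # ordinary message: send it to the model
    parts = stripped[1:].split()
    if not parts:
        return None            # a lone "/" is not a command
    return parts[0], parts[1:]
```

A dispatcher would then map the returned name to a handler (save-and-reset for `/new` and `/clear`, model listing, model switching) and fall through to normal message sending when `None` is returned.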
### Requirement 4: Conversation Persistence and Resume

**User Story:** As a user, I want to save and reopen previous conversations, so that I can continue past discussions or review my conversation history.

#### Acceptance Criteria
- WHEN the user starts a new conversation using "/new" or "/clear", THE AI Sidebar SHALL save the current conversation to a log file with a unique identifier
- THE AI Sidebar SHALL store conversation log files in a persistent location accessible across sessions
- WHEN the user enters a resume command, THE AI Sidebar SHALL display a list of saved conversations with identifiers or timestamps
- WHEN the user selects a saved conversation to resume, THE AI Sidebar SHALL load the conversation history and allow continuation
- THE AI Sidebar SHALL preserve all message content, timestamps, and model information in saved conversations
### Requirement 5: Reasoning Mode Toggle

**User Story:** As a user, I want to enable or disable the model's reasoning output, so that I can choose whether to see the thinking process or just the final answer based on my needs.

#### Acceptance Criteria
- THE AI Sidebar SHALL provide a toggle button or control to enable reasoning mode
- WHEN reasoning mode is enabled, THE AI Sidebar SHALL request and display the model's thinking process before the final answer
- WHEN reasoning mode is disabled, THE AI Sidebar SHALL request and display only the final answer without intermediate reasoning
- THE AI Sidebar SHALL persist the reasoning mode preference across conversation sessions
- THE AI Sidebar SHALL visually distinguish reasoning content from final answer content when reasoning mode is enabled
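Visually distinguishing reasoning from the final answer depends on how the model delimits its thinking. A sketch assuming the `<think>…</think>` convention used by some reasoning models served through Ollama (both the delimiter and the `split_reasoning` helper are assumptions):

```python
def split_reasoning(text: str):
    """Split a response into (reasoning, answer).

    Assumes reasoning is delimited by <think>...</think>; responses without
    the delimiters are treated as answer-only.
    """
    start = text.find("<think>")
    end = text.find("</think>")
    if start == -1 or end == -1:
        return "", text.strip()
    reasoning = text[start + len("<think>"):end].strip()
    answer = text[end + len("</think>"):].strip()
    return reasoning, answer
```

The sidebar could render the first element in a collapsed or dimmed style and the second as the normal answer, and skip requesting reasoning entirely when the toggle is off.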
### Requirement 6: Graceful Ollama Unavailability Handling

**User Story:** As a user, I want the AI Sidebar to start and function even when Ollama is not running, so that Ignis can launch successfully and I can start Ollama when I'm ready to use the AI features.

#### Acceptance Criteria
- WHEN Ollama is not running at startup, THE AI Sidebar SHALL initialize successfully without blocking Ignis startup
- WHEN Ollama is unavailable, THE AI Sidebar SHALL display a clear message instructing the user to start Ollama
- WHEN the user attempts to send a message while Ollama is unavailable, THE AI Sidebar SHALL display a helpful error message instead of crashing
- WHEN Ollama becomes available after startup, THE AI Sidebar SHALL detect the availability and enable chat functionality without requiring a restart
- THE AI Sidebar SHALL handle Ollama connection failures gracefully during model listing, switching, and chat operations
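Non-blocking startup plus later detection can be handled by a cheap HTTP probe polled from a background thread. A sketch assuming Ollama's default endpoint `http://localhost:11434` (the monitor API shown here is illustrative, not the module's actual implementation):

```python
import threading
import urllib.error
import urllib.request

def check_ollama(base_url: str = "http://localhost:11434", timeout: float = 1.0) -> bool:
    """Return True when the Ollama server responds; never raises."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError, ValueError):
        return False

def start_availability_monitor(on_change, interval: float = 5.0) -> threading.Event:
    """Poll Ollama on a daemon thread so the sidebar never blocks startup."""
    stop = threading.Event()

    def loop() -> None:
        last = None
        while not stop.wait(interval):
            available = check_ollama()
            if available != last:      # only notify on state transitions
                last = available
                on_change(available)   # e.g. enable chat or show a "start Ollama" hint

    threading.Thread(target=loop, daemon=True).start()
    return stop
```

Because the probe swallows connection errors and runs off the main thread, a missing Ollama process degrades to a status message rather than a crash, and the transition callback re-enables chat the moment the server comes up.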