feat(aisidebar): implement Ollama availability handling and graceful startup

- Add comprehensive Ollama connection error handling strategy
- Implement OllamaClient with non-blocking initialization and connection checks
- Create OllamaAvailabilityMonitor for periodic Ollama connection tracking
- Update design and requirements to support graceful Ollama unavailability
- Add new project structure for AI sidebar module with initial implementation
- Enhance error handling to prevent application crashes when Ollama is not running
- Prepare for future improvements in AI sidebar interaction and resilience
Author: Melvin Ragusa
Date: 2025-10-25 22:28:54 +02:00
Parent: 935b800221
Commit: 58bd935af0
11 changed files with 895 additions and 11 deletions

aisidebar/README.md (new file, +124 lines)

@@ -0,0 +1,124 @@
## AI Sidebar for Ignis
A sleek AI chat sidebar for your Ignis desktop: it slides in from the left side and chats with local models via Ollama.
### Features
- **Slide-in Animation**: Smoothly slides in from the left side (opposite of QuickCenter)
- **Ollama Integration**: Chat with local AI models via Ollama
- **Conversation Persistence**: Your conversations are automatically saved and restored
- **Material Design 3**: Matches your existing Ignis theme perfectly
- **Keyboard Toggle**: Bind a key to toggle the sidebar visibility
### How to Use
#### Open/Close the Sidebar
You can toggle the sidebar using:
1. **Python/Script**: Call `window_manager.toggle_window("AISidebar")` (see the sketch after this list)
2. **Keyboard Shortcut**: Add a binding in your window manager config
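For the Python/script route, a minimal sketch that uses the same `WindowManager` API the sidebar module itself relies on:
```python
# Minimal sketch: toggles the sidebar via the WindowManager API
# that aisidebar.py also uses internally.
from ignis.window_manager import WindowManager

window_manager = WindowManager.get_default()
window_manager.toggle_window("AISidebar")  # "AISidebar" is the window namespace
```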
#### Setting up a Keyboard Shortcut
For **Niri**, add this to your `~/.config/niri/config.kdl`:
```kdl
binds {
// ... your other bindings
// Toggle AI Sidebar with Super+A (or any key you prefer)
Mod+A { spawn "ignis" "run" "ignis.window_manager.WindowManager.get_default().toggle_window('AISidebar')"; }
}
```
For **Hyprland**, add this to your `~/.config/hypr/hyprland.conf`:
```conf
# Toggle AI Sidebar with Super+A
bind = SUPER, A, exec, ignis run "ignis.window_manager.WindowManager.get_default().toggle_window('AISidebar')"
```
For **Sway**, add this to your `~/.config/sway/config`:
```
# Toggle AI Sidebar with Super+A
bindsym $mod+A exec ignis run "ignis.window_manager.WindowManager.get_default().toggle_window('AISidebar')"
```
### Requirements
- **Ignis** desktop environment
- **Python 3.10+**
- **Ollama** with at least one model installed
- **ollama Python package** (optional): the bundled client talks to Ollama's HTTP API directly, so `pip install ollama` is not strictly required
### Configuration
The sidebar will automatically:
- Detect your default Ollama model
- Store conversations in `~/.config/ignis/modules/aisidebar/data/conversations/` (see the sketch below)
- Apply your current Ignis theme colors
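If you want to inspect or script the stored history, the `ConversationManager` added in this commit can be used directly. A minimal sketch, assuming `~/.config/ignis/modules` is on the Python path so the module is importable as `aisidebar`:
```python
# Minimal sketch: reads and extends the persisted transcript.
# Assumes the aisidebar module is importable (an illustration, not part of the module).
from aisidebar.conversation_manager import ConversationManager

manager = ConversationManager()             # defaults to data/conversations/default.json
manager.append_message("user", "Hello!")    # written to disk atomically on every append
print(manager.chat_messages)                # history formatted for the Ollama chat API
```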
### Customization
#### Change Width
Edit `aisidebar.py` line 19:
```python
self.content_box.width_request = 400 # Change to desired width
```
#### Change Animation Speed
Edit `aisidebar.py` line 24:
```python
transition_duration=300, # Change to desired milliseconds
```
#### Custom CSS Styling
Edit `~/.config/ignis/styles/aisidebar.scss` to customize:
- Colors (uses Material Design 3 color tokens)
- Border radius
- Padding/margins
- Message bubble styling
### Troubleshooting
**Sidebar doesn't appear:**
- Restart Ignis: `ignis reload`
- Check Ollama is running: `curl http://127.0.0.1:11434/api/tags`
- Check console for errors: `ignis`
**No AI responses:**
- Ensure Ollama is running
- The bundled `ollama_client.py` talks to Ollama's HTTP API directly, so no extra Python package is required
- Check that you have at least one model installed: `ollama list` (or use the Python check below)
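You can also probe Ollama from Python with the client shipped in this commit; a minimal sketch, again assuming `aisidebar` is importable:
```python
# Minimal sketch: checks Ollama availability via the bundled OllamaClient.
from aisidebar.ollama_client import OllamaClient

client = OllamaClient()  # defaults to http://localhost:11434
if client.is_available:
    print("Ollama is reachable; models:", client.list_models())
else:
    print("Ollama is not reachable on localhost:11434")
```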
**CSS not applying:**
- Restart Ignis: `ignis reload`
- Check SCSS compilation: Look for errors in Ignis console output
### Architecture
```
~/.config/ignis/modules/aisidebar/
├── __init__.py # Module exports
├── aisidebar.py # Main RevealerWindow class
├── chat_widget.py # Chat UI widget
├── ollama_client.py # Ollama API wrapper
├── conversation_manager.py # Conversation persistence
└── data/
└── conversations/ # Saved conversations (auto-created)
```
### Visual Design
The AI Sidebar follows the same visual language as QuickCenter:
- Material Design 3 color system
- 20px border radius on container
- Surface elevation with shadows
- Smooth slide-in transitions
- Translucent overlay backdrop
Clicking outside the sidebar will close it (same as QuickCenter behavior).

aisidebar/__init__.py (new file, +3 lines)

@@ -0,0 +1,3 @@
from .aisidebar import AISidebar
__all__ = ["AISidebar"]

aisidebar/aisidebar.py (new file, +73 lines)

@@ -0,0 +1,73 @@
from ignis import widgets
from ignis.window_manager import WindowManager
from ignis.services.niri import NiriService
from .chat_widget import ChatWidget
window_manager = WindowManager.get_default()
class AISidebar(widgets.RevealerWindow):
"""AI Chat Sidebar that slides in from the left side"""
def __init__(self):
# Create chat interface
self.chat_widget = ChatWidget()
# Content box - 400px wide to match QuickCenter
self.content_box = widgets.Box(
vertical=True,
spacing=0,
hexpand=False,
css_classes=["ai-sidebar"],
child=[self.chat_widget],
)
self.content_box.width_request = 400
self.content_box.set_halign("start") # Align to left side
# Revealer for slide animation
revealer = widgets.Revealer(
child=self.content_box,
transition_duration=300,
transition_type="slide_right", # Slide in from left
halign="start", # Align revealer to left
)
# Close button overlay (click outside to close)
close_button = widgets.Button(
vexpand=True,
hexpand=True,
can_focus=False,
on_click=lambda x: window_manager.close_window("AISidebar"),
)
main_overlay = widgets.Overlay(
css_classes=["popup-close"],
child=close_button,
overlays=[revealer],
)
super().__init__(
revealer=revealer,
child=main_overlay,
css_classes=["popup-close"],
hide_on_close=True,
visible=False,
namespace="AISidebar",
popup=True,
layer="overlay", # Same as QuickCenter
kb_mode="exclusive", # Same as QuickCenter
anchor=["left", "right", "top", "bottom"], # Anchor to ALL edges like QuickCenter
)
self.window_manager = window_manager
self.revealer = revealer
self.niri = NiriService.get_default()
self.connect("notify::visible", self._toggle_revealer)
def _toggle_revealer(self, *_):
"""Toggle revealer when window visibility changes"""
self.revealer.reveal_child = self.visible
if self.visible:
# Focus on input when opened
self.chat_widget.focus_input()

aisidebar/chat_widget.py (new file, +192 lines)

@@ -0,0 +1,192 @@
import threading
from ignis import widgets, app
from gi.repository import GLib
from .ollama_client import OllamaClient
from .conversation_manager import ConversationManager
class ChatWidget(widgets.Box):
"""Chat interface widget with Ollama integration"""
def __init__(self):
self._conversation_manager = ConversationManager()
self._ollama_client = OllamaClient()
self._current_model = self._ollama_client.default_model
# Header with title and model
header_title = widgets.Label(
label="AI Sidebar",
halign="start",
css_classes=["title-2"],
)
model_name = self._current_model or "No local model detected"
self._model_label = widgets.Label(
label=f"Model: {model_name}",
halign="start",
css_classes=["dim-label"],
)
header_box = widgets.Box(
vertical=True,
spacing=4,
child=[header_title, self._model_label],
)
# Message list
self._message_list = widgets.Box(
vertical=True,
spacing=8,
hexpand=True,
vexpand=True,
valign="start",
)
# Scrolled window for messages
self._scroller = widgets.Scroll(
hexpand=True,
vexpand=True,
min_content_height=300,
child=self._message_list,
)
# Input entry
self._entry = widgets.Entry(
hexpand=True,
placeholder_text="Ask a question…",
on_accept=lambda x: self._on_submit(),
)
# Send button
self._send_button = widgets.Button(
label="Send",
on_click=lambda x: self._on_submit(),
)
# Input box
input_box = widgets.Box(
spacing=8,
hexpand=True,
child=[self._entry, self._send_button],
)
# Main container
super().__init__(
vertical=True,
spacing=12,
hexpand=True,
vexpand=True,
child=[header_box, self._scroller, input_box],
css_classes=["ai-sidebar-content"],
)
# Set margins
self.set_margin_top(16)
self.set_margin_bottom(16)
self.set_margin_start(16)
self.set_margin_end(16)
# Load initial messages
self._populate_initial_messages()
def _populate_initial_messages(self):
"""Load conversation history"""
for message in self._conversation_manager.messages:
self._append_message(message["role"], message["content"], persist=False)
if not self._conversation_manager.messages:
self._append_message(
"assistant",
"Welcome! Ask a question to start a conversation.",
persist=True,
)
def _append_message(self, role: str, content: str, *, persist: bool = True):
"""Add a message bubble to the chat"""
label_prefix = "You" if role == "user" else "Assistant"
message_label = widgets.Label(
label=f"{label_prefix}: {content}",
halign="start",
xalign=0.0,
wrap=True,
wrap_mode="word_char", # Fixed: use underscore not hyphen
justify="left",
)
self._message_list.append(message_label)
self._scroll_to_bottom()
if persist and self._conversation_manager:
self._conversation_manager.append_message(role, content)
def _scroll_to_bottom(self):
"""Scroll to the latest message"""
def _scroll():
adjustment = self._scroller.get_vadjustment()
if adjustment:
adjustment.set_value(adjustment.get_upper() - adjustment.get_page_size())
return False
GLib.idle_add(_scroll)
def _set_input_enabled(self, enabled: bool):
"""Enable/disable input controls"""
self._entry.set_sensitive(enabled)
self._send_button.set_sensitive(enabled)
def _on_submit(self):
"""Handle message submission"""
text = self._entry.text.strip()
if not text:
return
self._entry.text = ""
self._append_message("user", text, persist=True)
self._request_response()
def _request_response(self):
"""Request AI response in background thread"""
model = self._current_model or self._ollama_client.default_model
if not model:
self._append_message(
"assistant",
"No Ollama models are available. Install a model to continue.",
persist=True,
)
return
history = self._conversation_manager.chat_messages
self._set_input_enabled(False)
def _worker(messages):
response = self._ollama_client.chat(model=model, messages=list(messages))
GLib.idle_add(self._handle_response, response, priority=GLib.PRIORITY_DEFAULT)
thread = threading.Thread(target=_worker, args=(history,), daemon=True)
thread.start()
def _handle_response(self, response):
"""Handle AI response"""
self._set_input_enabled(True)
if not response:
self._append_message(
"assistant",
"The model returned an empty response.",
persist=True,
)
return False
role = response.get("role", "assistant")
content = response.get("content") or ""
if not content:
content = "[No content received from Ollama]"
self._append_message(role, content, persist=True)
return False
def focus_input(self):
"""Focus the input entry"""
self._entry.grab_focus()

aisidebar/conversation_manager.py (new file, +173 lines)

@@ -0,0 +1,173 @@
"""Conversation state management and persistence helpers."""
from __future__ import annotations
import json
import os
import tempfile
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import ClassVar, Dict, Iterable, List, MutableMapping
DEFAULT_CONVERSATION_ID = "default"
@dataclass
class ConversationState:
"""In-memory representation of a conversation transcript."""
conversation_id: str
created_at: str
updated_at: str
messages: List[Dict[str, str]] = field(default_factory=list)
class ConversationManager:
"""Load and persist conversation transcripts as JSON files."""
VALID_ROLES: ClassVar[set[str]] = {"system", "user", "assistant"}
def __init__(
self,
storage_dir: str | Path | None = None,
conversation_id: str | None = None,
) -> None:
module_root = Path(__file__).resolve().parent
default_storage = module_root / "data" / "conversations"
self._storage_dir = Path(storage_dir) if storage_dir else default_storage
self._storage_dir.mkdir(parents=True, exist_ok=True)
self._conversation_id = conversation_id or DEFAULT_CONVERSATION_ID
self._path = self._storage_dir / f"{self._conversation_id}.json"
self._state = self._load_state()
# ------------------------------------------------------------------ properties
@property
def conversation_id(self) -> str:
return self._state.conversation_id
@property
def messages(self) -> List[Dict[str, str]]:
return list(self._state.messages)
@property
def chat_messages(self) -> List[Dict[str, str]]:
"""Return messages formatted for the Ollama chat API."""
return [
{"role": msg["role"], "content": msg["content"]}
for msg in self._state.messages
]
# ------------------------------------------------------------------ public API
def append_message(self, role: str, content: str) -> Dict[str, str]:
"""Append a new message and persist the updated transcript."""
normalized_role = role.lower()
if normalized_role not in self.VALID_ROLES:
raise ValueError(f"Invalid role '{role}'. Expected one of {self.VALID_ROLES}.")
timestamp = datetime.now(timezone.utc).isoformat()
message = {
"role": normalized_role,
"content": content,
"timestamp": timestamp,
}
self._state.messages.append(message)
self._state.updated_at = timestamp
self._write_state()
return message
def replace_messages(self, messages: Iterable[Dict[str, str]]) -> None:
"""Replace the transcript contents. Useful for loading fixtures."""
normalized: List[Dict[str, str]] = []
for item in messages:
role = item.get("role", "").lower()
content = item.get("content", "")
if role not in self.VALID_ROLES:
continue
normalized.append(
{
"role": role,
"content": content,
"timestamp": item.get("timestamp")
or datetime.now(timezone.utc).isoformat(),
}
)
now = datetime.now(timezone.utc).isoformat()
self._state.messages = normalized
self._state.created_at = self._state.created_at or now
self._state.updated_at = now
self._write_state()
# ------------------------------------------------------------------ persistence
def _load_state(self) -> ConversationState:
"""Load the transcript from disk or create a fresh default."""
if self._path.exists():
try:
with self._path.open("r", encoding="utf-8") as fh:
payload = json.load(fh)
return self._state_from_payload(payload)
except (json.JSONDecodeError, OSError):
pass
timestamp = datetime.now(timezone.utc).isoformat()
return ConversationState(
conversation_id=self._conversation_id,
created_at=timestamp,
updated_at=timestamp,
messages=[],
)
def _state_from_payload(self, payload: MutableMapping[str, object]) -> ConversationState:
"""Normalize persisted data into ConversationState instances."""
conversation_id = str(payload.get("id") or self._conversation_id)
created_at = str(payload.get("created_at") or datetime.now(timezone.utc).isoformat())
updated_at = str(payload.get("updated_at") or created_at)
messages_payload = payload.get("messages", [])
messages: List[Dict[str, str]] = []
if isinstance(messages_payload, list):
for item in messages_payload:
if not isinstance(item, dict):
continue
role = str(item.get("role", "")).lower()
content = str(item.get("content", ""))
if role not in self.VALID_ROLES:
continue
timestamp = str(
item.get("timestamp") or datetime.now(timezone.utc).isoformat()
)
messages.append({"role": role, "content": content, "timestamp": timestamp})
return ConversationState(
conversation_id=conversation_id,
created_at=created_at,
updated_at=updated_at,
messages=messages,
)
def _write_state(self) -> None:
"""Persist the conversation state atomically."""
payload = {
"id": self._state.conversation_id,
"created_at": self._state.created_at,
"updated_at": self._state.updated_at,
"messages": self._state.messages,
}
with tempfile.NamedTemporaryFile(
"w",
encoding="utf-8",
dir=self._storage_dir,
delete=False,
prefix=f"{self._conversation_id}.",
suffix=".tmp",
) as tmp_file:
json.dump(payload, tmp_file, indent=2, ensure_ascii=False)
tmp_file.flush()
os.fsync(tmp_file.fileno())
os.replace(tmp_file.name, self._path)

File diff suppressed because one or more lines are too long

aisidebar/ollama_client.py (new file, +130 lines)

@@ -0,0 +1,130 @@
"""Client utilities for interacting with the Ollama API via direct HTTP calls."""
from __future__ import annotations
import json
from typing import Any, Dict, Iterable, Iterator
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
class OllamaClientError(RuntimeError):
"""Base exception raised when Ollama operations fail."""
class OllamaUnavailableError(OllamaClientError):
"""Raised when the Ollama server is not available."""
class OllamaClient:
"""HTTP client for interacting with Ollama's REST API."""
def __init__(self, host: str | None = None) -> None:
self._host = host or "http://localhost:11434"
self._cached_models: list[str] | None = None
# ------------------------------------------------------------------ helpers
@property
def is_available(self) -> bool:
"""Check if Ollama server is reachable."""
try:
req = Request(f"{self._host}/api/tags", method="GET")
with urlopen(req, timeout=2) as response:
return response.status == 200
except (URLError, HTTPError, TimeoutError):
return False
@property
def default_model(self) -> str | None:
"""Get the first available model."""
models = self.list_models()
return models[0] if models else None
def list_models(self, force_refresh: bool = False) -> list[str]:
"""Return the available model names, caching the result for quick reuse."""
if self._cached_models is not None and not force_refresh:
return list(self._cached_models)
try:
req = Request(f"{self._host}/api/tags", method="GET")
with urlopen(req, timeout=5) as response:
data = json.loads(response.read().decode())
except (URLError, HTTPError, TimeoutError) as exc:
raise OllamaClientError(f"Failed to list models: {exc}") from exc
models: list[str] = []
for item in data.get("models", []):
name = item.get("name") or item.get("model")
if name:
models.append(name)
self._cached_models = models
return list(models)
# ------------------------------------------------------------------ chat APIs
def chat(
self,
*,
model: str,
messages: Iterable[Dict[str, str]],
) -> dict[str, str] | None:
"""Execute a blocking chat call against Ollama."""
payload = {
"model": model,
"messages": list(messages),
"stream": False,
}
try:
req = Request(
f"{self._host}/api/chat",
data=json.dumps(payload).encode("utf-8"),
headers={"Content-Type": "application/json"},
method="POST",
)
with urlopen(req, timeout=120) as response:
result = json.loads(response.read().decode())
except (URLError, HTTPError, TimeoutError) as exc:
return {
"role": "assistant",
"content": f"Unable to reach Ollama: {exc}",
}
# Parse the response
message = result.get("message")
if not message:
return {"role": "assistant", "content": ""}
role = message.get("role", "assistant")
content = message.get("content", "")
return {"role": role, "content": content}
def stream_chat(
self, *, model: str, messages: Iterable[Dict[str, str]]
) -> Iterator[dict[str, Any]]:
"""Placeholder for streaming API - not yet implemented."""
raise NotImplementedError("Streaming chat is not yet implemented")
# ------------------------------------------------------------------ internals
def _make_request(
self, endpoint: str, method: str = "GET", data: dict | None = None
) -> dict:
"""Make an HTTP request to the Ollama API."""
url = f"{self._host}{endpoint}"
if data:
req = Request(
url,
data=json.dumps(data).encode("utf-8"),
headers={"Content-Type": "application/json"},
method=method,
)
else:
req = Request(url, method=method)
try:
with urlopen(req, timeout=30) as response:
return json.loads(response.read().decode())
except (URLError, HTTPError) as exc:
raise OllamaClientError(f"Request failed: {exc}") from exc