Administration
Admin Panel Overview
The Admin panel at /admin provides comprehensive system management. It is organized into the following sections accessible from the left sidebar:
| Section | Description |
|---|---|
| General | MCP endpoint, server identity, network interfaces, platform info |
| System Health | Service status, uptime, document counts |
| Embedding | Configure embedding provider and model |
| Reranking | Configure search result reranking |
| GPU Fleet | GPU detection and utilization (if available) |
| OCR | OCR engine settings and language packs |
| Local AI | Local model management (Ollama, Transformers.js) |
| AI Keys | API key management for cloud providers |
| Workers | Background worker pool configuration |
| Redis Cache | Redis connection and cache statistics |
| Cache Manager | Application cache controls |
| Filing Types | Define and manage filing type classifications |
| Jobs | View and manage processing jobs |
| Action Log | System event audit trail |
| Drafts | Manage saved draft documents |
Backup & Restore
Database Location
The primary database is an SQLite file at `prisma/data/sound-suite.db`. The vector database (LanceDB) is stored at `data/lancedb/`.
Manual Backup
```bash
# Back up the SQLite database
cp prisma/data/sound-suite.db prisma/data/sound-suite.db.bak

# Back up the LanceDB vectors
cp -r data/lancedb/ data/lancedb-backup/

# Or use the built-in backup command
npm run db:backup
```
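Overwriting a single `.bak` file keeps only one restore point. A timestamped variant of the manual backup is a small extension; this is a sketch, and the `backup_suite` function name and the `backups/` destination directory are illustrative, not part of Sound Suite:

```shell
# Copy the SQLite file and the LanceDB directory into a timestamped
# folder so older backups are kept. backup_suite and backups/ are
# illustrative names, not built-in Sound Suite conventions.
backup_suite() {
  dest="backups/$(date +%Y%m%d-%H%M%S)"
  mkdir -p "$dest"
  cp prisma/data/sound-suite.db "$dest/sound-suite.db" || return 1
  cp -r data/lancedb "$dest/lancedb" || return 1
  echo "backup written to $dest"
}
```

Run it from the project root and prune old entries under `backups/` as disk space requires.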
Automated Backups
Sound Suite can create automatic backups on a schedule. When 7-Zip is installed, backups are compressed automatically. The backup hook runs after every 20 commits when using the development workflow.
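If you also want an OS-level schedule that runs independently of the application, a cron entry can invoke the built-in backup command. This is an example only: the 02:00 run time, the `/opt/sound-suite` install path, and the log file are placeholders to adapt to your deployment:

```
# m h dom mon dow  command
0 2 * * * cd /opt/sound-suite && npm run db:backup >> logs/backup.log 2>&1
```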
Restoring from Backup
```bash
# Restore the SQLite database
cp prisma/data/sound-suite.db.bak prisma/data/sound-suite.db

# Restore the LanceDB vectors (remove the live copy first so the
# backup is not nested inside the existing directory)
rm -rf data/lancedb
cp -r data/lancedb-backup data/lancedb

# Or use the built-in restore command
npm run db:restore
```
What's Included in Backups
- SQLite database — Cases, documents, job logs, configuration, action logs
- LanceDB vectors — All embedding vectors and chunk data
- Exhibit images — Extracted exhibit images from `public/exhibits/`
Health Monitoring
System Health Page
Navigate to Admin > System Health to see:
- Uptime — OS uptime and process uptime
- Record counts — Total cases, documents, and action logs
- Service status for each component:
  - File Watcher — File monitoring service
  - Job Queue — Document processing queue
  - MCP Server — MCP tool server
  - Redis — Cache server (shows memory usage and key count)
- Documents by Status — Breakdown of discovered vs. indexed documents
Health API
Programmatically check health:
```bash
curl -s http://localhost:3000/api/health | jq .
```
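For scripting (deploy checks, external monitoring), the endpoint can be wrapped in a function that fails the shell when the service is not healthy. This is a sketch: it assumes the `/api/health` response contains a top-level `"status": "ok"` field, so adjust the pattern to the actual response shape, and `check_health` is an illustrative name:

```shell
# Return 0 when the health endpoint responds and reports "ok".
# The "status" field is an assumed response shape; adjust the grep
# pattern to match what /api/health actually returns.
check_health() {
  body=$(curl -sf "${1:-http://localhost:3000/api/health}") || return 1
  printf '%s' "$body" | grep -q '"status" *: *"ok"'
}
```

Usage: `check_health && echo up || echo down`.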
Service Management
Control services using the built-in scripts:
```bash
# Start all services
npm run svc:start

# Stop all services
npm run svc:stop

# Restart all services
npm run svc:restart

# Check health
npm run svc:health
```
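After a restart, services may take a few seconds to come back, so scripts that immediately query them can race the startup. A small polling helper avoids that; `wait_healthy` is an illustrative name, and the attempt count and 1-second interval are arbitrary choices:

```shell
# Retry a command (e.g. `npm run svc:health`) until it succeeds,
# up to a fixed number of attempts with a 1-second pause between tries.
wait_healthy() {
  attempts="$1"; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "not healthy after $attempts attempt(s)" >&2
  return 1
}
```

Example: `npm run svc:restart && wait_healthy 10 npm run svc:health`.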
Platform Info
The Admin > General page displays:
- Server Identity — Hostname, port, primary IP, process uptime
- Network Interfaces — All IPv4 and IPv6 addresses (internal/external)
- Platform — OS, architecture, Node.js version, CPU count, total/free memory
Performance Tuning
Embedding Performance
- Use Ollama with GPU for fastest local embeddings
- Increase `JOB_CONCURRENCY` if your machine has sufficient RAM (4+ GB free)
- Use smaller embedding models (e.g., `all-MiniLM-L6-v2` at 384 dimensions) for faster indexing
- Use larger models (e.g., `qwen3-embedding:0.6b` at 1024 dimensions) for better search quality
Search Performance
- Enable Redis for caching frequently accessed chunks and search results
- Adjust `CHUNK_SIZE` — smaller chunks (256 tokens) give more precise results; larger chunks (1024 tokens) give more context
- Use reranking to improve result quality (Admin > Reranking)
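Both tuning knobs above are set as environment variables. A `.env`-style sketch with illustrative starting values (not recommendations; whether your deployment reads these from a `.env` file or the process environment depends on your setup):

```shell
# Illustrative tuning values; adjust to your hardware and corpus.
JOB_CONCURRENCY=4   # parallel document jobs; raise only with ample free RAM
CHUNK_SIZE=512      # tokens per chunk: 256 favors precision, 1024 favors context
```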
OCR Performance
- OCR is the slowest part of the pipeline — disable it if your documents are all digital PDFs
- Adjust the OCR density threshold in Processing Settings — higher thresholds mean fewer pages get OCR'd
- For large volumes of scanned documents, consider increasing worker count
Memory Management
- SQLite is lightweight and requires minimal RAM
- LanceDB loads embedding indexes into memory during search — ensure sufficient free RAM for your vector count
- Cache Manager (Admin > Cache Manager) lets you clear application caches if memory is tight
- Monitor memory usage on the Admin > General page
Troubleshooting
Common Issues
Documents stuck in QUEUED status
- Check that the Job Queue service is running (Admin > System Health)
- Verify Processing Settings concurrency is > 0
- Check the Jobs page (Admin > Jobs) for error details
- Restart services: `npm run svc:restart`
OCR not working
- Verify OCR is enabled in Processing Settings
- Check that `tesseract.js` was installed correctly: `npm ls tesseract.js`
- On Linux, ensure system libraries are installed (libpng, libjpeg)
- Check the `logs/` directory for detailed error messages
MCP Server not responding
- Verify the MCP server is running (Admin > System Health)
- Check the MCP endpoint URL on Admin > General
- Test with curl: `curl -s http://localhost:3000/api/mcp/execute -H "Content-Type: application/json" -d '{"tool":"list_tools","params":{}}'`
- If using auth, ensure your client includes the correct authorization header
Search returning no results
- Verify documents have been indexed (check status on dashboard)
- Confirm the embedding provider is configured and running
- Try Direct Search first to rule out AI provider issues
- Check the Vectors page to confirm chunks exist in the database
High memory usage
- Reduce `JOB_CONCURRENCY` to process fewer documents simultaneously
- Clear caches via Admin > Cache Manager
- Restart the application to release memory: `npm run svc:restart`
- Consider using a smaller embedding model
Logs
Application logs are stored in the `logs/` directory:
- `logs/app.log` — General application logs
- `logs/error.log` — Error-only logs
- `logs/access.log` — HTTP request logs
The Action Log in the admin panel (Admin > Action Log) provides a searchable audit trail of system events.
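When triaging an issue from the command line, `logs/error.log` is usually the fastest place to start. A small helper that prints the most recent entries; `show_recent_errors` is an illustrative name, not a Sound Suite command:

```shell
# Print the last 20 lines of an error log (default: logs/error.log).
show_recent_errors() {
  log="${1:-logs/error.log}"
  if [ -f "$log" ]; then
    tail -n 20 "$log"
  else
    echo "no log file at $log" >&2
    return 1
  fi
}
```

Pass a different path to inspect `logs/app.log` or `logs/access.log` the same way.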