Troubleshooting
Installation Problems
Compilation Fails with llama.cpp Errors
Problem: The build fails while compiling the llama.cpp dependencies.
Solutions:
Make sure you have the necessary compilation tools:
```shell
# Ubuntu/Debian
sudo apt-get install build-essential cmake

# macOS
xcode-select --install

# Fedora
sudo dnf install gcc-c++ cmake
```

Check the Go version (requires Go 1.20+):
```shell
go version
```

Clean and recompile:
```shell
make clean
make build
```
Missing GPU Support
Problem: GPU acceleration doesn’t work even though you have a compatible GPU.
Solutions:
NVIDIA (CUDA):
```shell
# Check CUDA installation
nvidia-smi
nvcc --version

# Make sure CUDA toolkit is installed
# Ubuntu/Debian
sudo apt-get install nvidia-cuda-toolkit
```

AMD (ROCm):
```shell
# Check ROCm installation
rocm-smi

# Install ROCm if missing
# Follow instructions at https://rocm.docs.amd.com/
```

Apple Silicon (Metal): Metal should work automatically on macOS with Apple Silicon. Make sure you’re running a native ARM64 build.
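One quick way to confirm a native run (assumes a POSIX shell; `uname -m` reports the machine architecture of the current process):

```shell
# Print the machine architecture of the current shell
# "arm64" means native Apple Silicon; "x86_64" means Intel or Rosetta 2
arch="$(uname -m)"
echo "architecture: $arch"
```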
Runtime Problems
Out of Memory (OOM) Errors
Problem: The server crashes with memory errors when processing embeddings.
Solutions:
Reduce GPU layers:
```shell
# Use fewer GPU layers
--gguf-gpu-layers 16  # instead of 32

# Or disable GPU completely
--gguf-gpu-layers 0
```

Use a smaller model:
- Switch from `nomic-embed-text-v1.5` to `all-MiniLM-L6-v2`
- Use a more quantized version (Q4_K_M instead of Q8_0)
Reduce batch size (if applicable):
```shell
--gguf-batch-size 256  # default is usually higher
```
Model Won’t Load
Problem: The server won’t start with “model not found” errors or similar.
Solutions:
Check file path and permissions:
```shell
ls -lh ./model.gguf
chmod +r ./model.gguf
```

Verify the model file is not corrupted:
```shell
# Check that the file size matches the expected size
ls -lh ./model.gguf

# Re-download if necessary
wget https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF/resolve/main/nomic-embed-text-v1.5.Q4_K_M.gguf
```

Use an absolute path:
```shell
--gguf-model-path /complete/path/to/model.gguf
```
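If you are unsure what the absolute path is, `realpath` resolves it (a generic shell sketch, not a feature of the server; the `$PWD` fallback covers systems where `realpath` is missing):

```shell
# Resolve a relative model path to the absolute form --gguf-model-path expects
model_path="$(realpath ./model.gguf 2>/dev/null || echo "$PWD/model.gguf")"
echo "$model_path"
```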
Slow Performance
Problem: Embedding generation or search is slower than expected.
Solutions:
Enable GPU acceleration:
```shell
--gguf-gpu-layers 32
```

Increase thread count:
```shell
--gguf-threads 8  # adjust according to your CPU cores
```

Use a faster model:
`all-MiniLM-L6-v2` is significantly faster than `nomic-embed-text-v1.5`.
Check thermal throttling:
```shell
# Monitor CPU/GPU temperatures
# NVIDIA
nvidia-smi -l 1

# CPU sensors (Linux)
sensors
```
Database Problems
Database Connection Fails
Problem: Cannot connect to SurrealDB (embedded or external).
Solutions:
For embedded database:
```shell
# Check file permissions
ls -la ./remembrances.db

# Make sure directory exists and has write permissions
mkdir -p ./data
chmod 755 ./data
--db-path ./data/remembrances.db
```

For external SurrealDB:
```shell
# Verify SurrealDB is running
curl http://localhost:8000/health

# Check connection parameters
--surrealdb-url ws://localhost:8000
--surrealdb-user root
--surrealdb-pass root
```
Database Corruption
Problem: Database errors or inconsistent data after a crash.
Solutions:
Backup and recreate:
```shell
# Backup existing data
cp ./remembrances.db ./remembrances.db.backup

# Remove corrupted database
rm ./remembrances.db

# Restart - will create new database
./remembrances-mcp --gguf-model-path ./model.gguf
```

Run with debug logging to identify issues:
```shell
--log-level debug
```
MCP Connection Problems
Claude Desktop Won’t Connect
Problem: Claude Desktop doesn’t recognize or connect to Remembrances MCP.
Solutions:
Check configuration file location:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Linux: `~/.config/claude/claude_desktop_config.json`
Verify JSON syntax:
```shell
# Validate JSON
cat ~/.config/claude/claude_desktop_config.json | python -m json.tool
```

Use absolute paths in the configuration:
```json
{
  "mcpServers": {
    "remembrances": {
      "command": "/usr/local/bin/remembrances-mcp",
      "args": [
        "--gguf-model-path",
        "/home/user/models/nomic-embed-text-v1.5.Q4_K_M.gguf"
      ]
    }
  }
}
```

Restart Claude Desktop after configuration changes.
MCP Streamable HTTP / HTTP API Problems
Problem: Cannot connect via MCP Streamable HTTP (MCP tools) or the HTTP JSON API.
Solutions:
Check if port is in use:
```shell
# Check port availability
lsof -i :3000  # MCP Streamable HTTP default
lsof -i :8080  # HTTP default
```

Use a different port:
```shell
--mcp-http --mcp-http-addr ":3001"
--http --http-addr ":8081"
```

Check firewall configuration:
```shell
# Allow port (Linux with ufw)
sudo ufw allow 8080/tcp
```
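To confirm whether anything is actually listening once the server is started, a portable probe with `nc` works on most systems (the ports below are the defaults mentioned above; adjust them if you changed the addresses):

```shell
# Probe the default ports; prints one line per port either way
for port in 3000 8080; do
  if nc -z localhost "$port" 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: nothing listening"
  fi
done
```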
Embedding Problems
Inconsistent Search Results
Problem: Search results vary or don’t match expected content.
Solutions:
Ensure consistent embedding model - don’t mix embeddings from different models
Verify embedding dimensions match:
- `nomic-embed-text-v1.5`: 768 dimensions
- `all-MiniLM-L6-v2`: 384 dimensions
Re-index after model change:
You may need to re-generate embeddings for all content if you change models.
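A sketch of the kind of sanity check that catches a dimension mismatch early (model names and sizes come from the list above; `check_dims` is a hypothetical helper, not part of the server):

```python
# Known output dimensions for the models discussed above
EXPECTED_DIMS = {
    "nomic-embed-text-v1.5": 768,
    "all-MiniLM-L6-v2": 384,
}

def check_dims(model_name: str, embedding: list) -> None:
    """Raise if an embedding's length doesn't match the model's known size."""
    expected = EXPECTED_DIMS.get(model_name)
    if expected is not None and len(embedding) != expected:
        raise ValueError(
            f"{model_name} should produce {expected}-dim vectors, "
            f"got {len(embedding)}: re-index after switching models"
        )

# A 384-dim vector is valid for all-MiniLM-L6-v2 but not nomic-embed-text-v1.5
check_dims("all-MiniLM-L6-v2", [0.0] * 384)
```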
Embeddings Not Generated
Problem: Content is stored but embeddings are empty or missing.
Solutions:
Check embedder configuration:
```shell
# Verify model is specified
--gguf-model-path ./model.gguf
# Or
--ollama-model nomic-embed-text
# Or
--openai-key sk-xxx
```

Enable debug logging:
```shell
--log-level debug
```
Getting Help
If you’re still experiencing problems:
Check logs with debug mode:
```shell
--log-level debug
```

Search existing issues in GitHub Issues
Open a new issue with:
- Operating system and version
- Go version (`go version`)
- GPU type (if applicable)
- Complete error message
- Steps to reproduce
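A small script can collect most of those details in one go (each command degrades gracefully when the tool isn't installed):

```shell
# Print the environment details requested in a bug report
echo "OS:  $(uname -srm)"
echo "Go:  $(go version 2>/dev/null || echo 'not installed')"
echo "GPU: $(nvidia-smi --query-gpu=name --format=csv,noheader 2>/dev/null || echo 'n/a')"
```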
See Also
- Getting Started - Installation guide
- Configuration - Configuration options
- GGUF Models - Model selection and optimization