Vault QA (Basic)
How to chat with your entire vault.
Vault QA is a powerful feature in Copilot for Obsidian that allows you to interact with your entire vault using natural language queries. It uses RAG (Retrieval Augmented Generation) to understand your questions and retrieve relevant information from your notes.
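The retrieval step can be illustrated with a toy sketch. This is not Copilot's actual pipeline (which chunks notes and uses a real embedding model); here, simple word-count vectors stand in for embeddings so the idea is self-contained:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, notes: dict, k: int = 2) -> list:
    # Rank notes by similarity to the query and return the top-k titles.
    q = embed(query)
    ranked = sorted(notes, key=lambda t: cosine(q, embed(notes[t])), reverse=True)
    return ranked[:k]

notes = {
    "Spain trip": "ideas for traveling in spain next summer",
    "RAG notes": "retrieval augmented generation retrieves relevant notes",
    "Groceries": "milk eggs bread",
}
print(retrieve("what did I note about retrieval augmented generation?", notes, k=1))
```

The retrieved note text (not just the title) is what gets passed to the chat model as context for the answer.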
How to Use Vault QA (Basic)
- Make sure you have a working embedding model and chat model in your Copilot settings!
- Activate Vault QA Mode: In the Copilot interface, choose "Vault QA (Basic)" to activate this mode. Note that it may trigger indexing of your vault, which can take a while for large vaults. Indexing uses your embedding model, so it may generate costs if you're using a paid embedding provider. If you have a large vault, use the "Count total tokens" Copilot command to estimate the cost first.
- Ask Questions: Once indexing has completed and you are in Vault QA mode, simply type your questions or queries into the chat input. Copilot searches your vault locally and passes the relevant parts to the chat model to generate answers.
- Receive Cited Responses: The AI will respond with relevant information and include citations to the specific notes where the information was found. This allows you to easily track the sources within your vault.
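The cost estimate mentioned in the indexing step above is simple arithmetic: the token count from "Count total tokens" times your provider's per-token rate. A minimal sketch (the $0.02 per 1M tokens figure is only an example; check your provider's current pricing):

```python
def estimate_indexing_cost(total_tokens: int, price_per_million_tokens: float) -> float:
    # Indexing embeds every token once, so cost scales linearly with vault size.
    return total_tokens / 1_000_000 * price_per_million_tokens

# Example: a 2-million-token vault at a hypothetical $0.02 per 1M tokens.
print(f"${estimate_indexing_cost(2_000_000, 0.02):.2f}")
```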
Vault QA (Basic) Settings
To customize your Vault QA experience, you can adjust several settings:
- Auto-Index Strategy: This determines when your vault is indexed for searching. Options include:
- NEVER: Index only when you manually run the "Index vault for QA" command.
- ON STARTUP: Refresh the index each time you load or reload the plugin.
- ON MODE SWITCH (Recommended): Update the index when switching to Vault QA mode.
- Embedding Model: Choose the model used for creating embeddings (vector representations) of your notes. Options may include OpenAI embedding models, Google or Cohere models, or local models like those from Ollama. As with the chat model, you can use any third-party embedding model as long as it exposes an OpenAI-compatible API.
- Requests per Second: Adjust this if you're experiencing rate limiting from your embedding provider. Default is 10.
- Indexing Exclusions: Specify folders, tags, or note name patterns you want to exclude from indexing. This helps manage large vaults, or keeps certain information private so it is never sent to the LLM.
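The "Requests per Second" setting above is a classic client-side rate limit. A minimal sketch of how such a limiter works (`embed_batch` is a hypothetical call to your embedding provider, not a real plugin function):

```python
import time

class RateLimiter:
    """Spaces out calls so no more than `rps` of them start per second."""

    def __init__(self, rps: float):
        self.min_interval = 1.0 / rps
        self.last_call = 0.0

    def wait(self):
        # Sleep just long enough to keep min_interval between call starts.
        now = time.monotonic()
        sleep_for = self.last_call + self.min_interval - now
        if sleep_for > 0:
            time.sleep(sleep_for)
        self.last_call = time.monotonic()

limiter = RateLimiter(rps=10)  # the plugin's default of 10 requests/second
start = time.monotonic()
for _ in range(3):
    limiter.wait()
    # embed_batch(chunk)  # hypothetical provider call would go here
print(f"3 calls took at least {time.monotonic() - start:.2f}s")
```

Lowering the requests-per-second value increases `min_interval`, which slows indexing but avoids HTTP 429 responses from stricter providers.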
Tips for Effective Use of Vault QA (Basic)
- Vault QA is best for specific questions. Be aware of its limitations: because it relies on retrieval, questions like "give me an overview of my vault" won't retrieve anything, since there's nothing in the query to match against. Instead, ask questions like "what did I learn about x?" or "what are some ideas I jotted down about y?"
- Use the citations provided to verify information in your original notes.
- Regularly update your index, especially after adding new notes or making significant changes to your vault.
- Experiment with different chat and embedding models to find the best balance of speed and accuracy for your needs. OpenAI's gpt-4o and text-embedding-3-small are a good default combo to start with.
Important Considerations
- Indexing large vaults can take time and may incur costs if using paid embedding services. Nowadays embedding costs are usually low, but you should check the pricing of your selected embedding model.
- Local embedding models (like those from Ollama) can offer privacy and cost benefits but may be slower or less accurate than some cloud services.
- Check out Copilot Plus mode if Vault QA (Basic) is not working well for you. It uses a more advanced RAG system with agentic tool use that can handle more complex queries and provide more accurate answers.