Vector operations allow you to store and retrieve vector embeddings from vector databases like Qdrant. These actions are essential for building semantic search, recommendation systems, and Retrieval-Augmented Generation (RAG) applications.
storevector
Stores vector embeddings in a vector database along with associated metadata.
Integration ID
The vector database integration to store vectors in.
YAML Key: integrationID | Type: string | Required: Yes
Vectors
The vector data to store, typically as a JSON array of floats.
YAML Key: vectors | Type: string | Required: Yes
Use {{ .generate_embedding }} to reference vectors generated by a previous action, such as an AI embedding model.
Fields
Metadata fields to store alongside the vector. These fields can be used for filtering during retrieval.
YAML Key: fields | Type: map | Required: Yes
Options
Additional storage options such as the collection name.
YAML Key: options | Type: map | Required: No
Common options:
collection — The collection name to store vectors in
Example
actions:
  store_embedding:
    type: storevector
    config:
      integrationID: my_qdrant
      vectors: "{{ .generate_embedding }}"
      fields:
        content: "{{ param \"text\" }}"
        source: "user_input"
        timestamp: "{{ now }}"
      options:
        collection: documents
    next: response.success
    fail: response.error
fetchvectors
Retrieves vectors from a vector database for similarity search. Returns the most similar vectors to the provided query vector.
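The ranking itself happens inside the vector database, but it can help to see what a similarity search computes. The sketch below is illustrative only, not ServFlow or Qdrant code: it scores stored vectors against a query vector with cosine similarity (one common metric; databases like Qdrant also support dot product and Euclidean distance) and returns the top matches.

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def fetch_similar(query, stored, limit=10):
    # Score every stored (id, vector) pair against the query,
    # then return the `limit` best matches, highest score first.
    scored = [(vid, cosine_similarity(query, vec)) for vid, vec in stored]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:limit]

stored = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.7, 0.7])]
print(fetch_similar([1.0, 0.1], stored, limit=2))
```

In practice the database performs this ranking with an approximate index rather than a linear scan, which is why fetchvectors stays fast over large collections.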
Integration ID
The vector database integration to query.
YAML Key: integrationID | Type: string | Required: Yes
Vector
The query vector to search for similar vectors. This is typically an embedding of the search query.
YAML Key: vector | Type: string | Required: No
Options
Query options such as collection name and result limit.
YAML Key: options | Type: map | Required: No
Common options:
collection — The collection name to search in
limit — Maximum number of results to return
Example
actions:
  search_similar:
    type: fetchvectors
    config:
      integrationID: my_qdrant
      vector: "{{ .query_embedding }}"
      options:
        collection: documents
        limit: 10
    next: response.results
    fail: response.error
Common Patterns
RAG Pipeline
A typical RAG (Retrieval-Augmented Generation) workflow combines vector search with an AI agent:
actions:
  embed_query:
    type: agent
    config:
      integrationID: my_openai
      userPrompt: "Generate an embedding for: {{ param \"question\" }}"
    next: action.search_docs
  search_docs:
    type: fetchvectors
    config:
      integrationID: my_qdrant
      vector: "{{ .embed_query }}"
      options:
        collection: knowledge_base
        limit: 5
    next: action.generate_answer
  generate_answer:
    type: agent
    config:
      integrationID: my_openai
      systemPrompt: |
        Answer the user's question using only the provided context.
        Context: {{ .search_docs }}
      userPrompt: "{{ param \"question\" }}"
    next: response.answer
Document Ingestion
Store documents with their embeddings for later retrieval:
actions:
  generate_embedding:
    type: agent
    config:
      integrationID: my_openai
      userPrompt: "Generate an embedding for: {{ param \"content\" }}"
    next: action.store_document
  store_document:
    type: storevector
    config:
      integrationID: my_qdrant
      vectors: "{{ .generate_embedding }}"
      fields:
        content: "{{ param \"content\" }}"
        title: "{{ param \"title\" }}"
        category: "{{ param \"category\" }}"
      options:
        collection: documents
    next: response.stored
Next Steps
AI Agents: Use AI agents to generate embeddings and process search results.
Data Operations: Combine vector search with traditional database queries.
Actions Overview: Learn the fundamentals of ServFlow actions.
Configuration Reference: Explore all ServFlow configuration options.