A previous post discussed writing SQL queries to create embeddings and perform vector search over the shredded JSON descriptions produced by drone image analysis, along with their associated vectors, and then using the built-in operators to query the objects tied to those vectors. This article covers creating an agent in Azure AI Search that consolidates vector search over local vectors and those in the SQL database. The agent acts as a wrapper for an LLM deployed to Azure OpenAI, and the LLM is used to send queries to an agentic retrieval pipeline.
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    AzureOpenAIVectorizerParameters,
    KnowledgeAgent,
    KnowledgeAgentAzureOpenAIModel,
    KnowledgeAgentRequestLimits,
    KnowledgeAgentTargetIndex,
)

# Define a knowledge agent that targets the search index and wraps the
# Azure OpenAI chat model used for query planning.
agent = KnowledgeAgent(
    name=agent_name,
    target_indexes=[
        KnowledgeAgentTargetIndex(
            index_name=index_name,
            default_include_reference_source_data=True,
            default_reranker_threshold=2.5,
        )
    ],
    models=[
        KnowledgeAgentAzureOpenAIModel(
            azure_open_ai_parameters=AzureOpenAIVectorizerParameters(
                resource_url=azure_openai_endpoint,
                deployment_name=azure_openai_gpt_deployment,
                model_name=azure_openai_gpt_model,
            )
        )
    ],
    request_limits=KnowledgeAgentRequestLimits(
        max_output_size=agent_max_output_tokens
    ),
)

index_client = SearchIndexClient(endpoint=endpoint, credential=credential)
index_client.create_or_update_agent(agent)
The constants used above take values such as:
AZURE_OPENAI_ENDPOINT=https://<openai-resource-name>.openai.azure.com
AZURE_OPENAI_GPT_DEPLOYMENT=gpt-4o-mini
AZURE_SEARCH_ENDPOINT=https://<search-resource-name>.search.windows.net
AZURE_SEARCH_INDEX_NAME=agentic-retrieval-drone-images
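These values are best read from environment variables rather than hard-coded, so resource names stay out of source control. A minimal sketch, assuming the variable names mirror the constants above (the defaults here are placeholders, not real endpoints):

```python
import os

# Pull each setting from the environment, falling back to a placeholder
# so the script can still be imported without configuration.
AZURE_OPENAI_ENDPOINT = os.environ.get(
    "AZURE_OPENAI_ENDPOINT", "https://<openai-resource-name>.openai.azure.com")
AZURE_OPENAI_GPT_DEPLOYMENT = os.environ.get(
    "AZURE_OPENAI_GPT_DEPLOYMENT", "gpt-4o-mini")
AZURE_SEARCH_ENDPOINT = os.environ.get(
    "AZURE_SEARCH_ENDPOINT", "https://<search-resource-name>.search.windows.net")
AZURE_SEARCH_INDEX_NAME = os.environ.get(
    "AZURE_SEARCH_INDEX_NAME", "agentic-retrieval-drone-images")
```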
The agent is then used as follows:
from azure.search.documents.agent import KnowledgeAgentRetrievalClient
from azure.search.documents.agent.models import (
    KnowledgeAgentRetrievalRequest,
    KnowledgeAgentMessage,
    KnowledgeAgentMessageTextContent,
    KnowledgeAgentIndexParams,
)

agent_client = KnowledgeAgentRetrievalClient(
    endpoint=AZURE_SEARCH_ENDPOINT, agent_name=AZURE_SEARCH_AGENT, credential=azure_credential
)

messages.append({
    "role": "user",
    "content": """
    How do the landmarks detailed in the object detection output compare in proximity to those found near high population density?
    """
})

# Send the conversation (minus any system messages) to the agent's
# retrieval pipeline against the target index.
retrieval_result = agent_client.retrieve(
    retrieval_request=KnowledgeAgentRetrievalRequest(
        messages=[
            KnowledgeAgentMessage(
                role=msg["role"],
                content=[KnowledgeAgentMessageTextContent(text=msg["content"])],
            )
            for msg in messages if msg["role"] != "system"
        ],
        target_index_params=[
            KnowledgeAgentIndexParams(
                index_name=index_name,
                reranker_threshold=3,
                include_reference_source_data=True,
            )
        ],
    )
)

# Append the grounded answer back onto the conversation.
messages.append({
    "role": "assistant",
    "content": retrieval_result.response[0].content[0].text
})