I want user queries to go through a matrix/query step prior to the vDB being selected to execute the query.
If it's a question on certain topics, it should go to a specific vDB.
We are looking at this path as we may also have certain fine-tuned models answer specific questions rather than trying to train on a broader data set.
Is this on your roadmap, or something being considered?
Hey @CJM and welcome to the forums!
I don’t quite understand your question. I’d guess you’re looking to add some kind of query categorization/understanding step? Or maybe you’re trying to do direct question-lookup/matching?
Vectara does support trying to find the closest question to a user’s question by changing the query semantics to “response” in the API, as described on Question-Answer Matching System | Vectara Docs.
Maybe you can clarify what you mean by “a matrix/query step?” Or provide a bit more of an example of your use case?
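For reference, the “response” semantics mentioned above would be set on the query request. The sketch below only builds the request body; the field names (`numResults`, `corpusKey`, `semantics`) and the corpus/customer IDs are my assumptions from the linked docs, so please verify them against the current API reference before using this.

```python
import json

def build_response_query(question: str, customer_id: int, corpus_id: int,
                         num_results: int = 5) -> dict:
    """Build a Vectara-style query body that matches the user's question
    against stored *answers* via "RESPONSE" semantics (field names assumed)."""
    return {
        "query": [{
            "query": question,
            "numResults": num_results,
            "corpusKey": [{"customerId": customer_id, "corpusId": corpus_id}],
            "semantics": "RESPONSE",  # assumed enum value; see the docs linked above
        }]
    }

payload = build_response_query("How do refunds work?", 123456, 1)
print(json.dumps(payload, indent=2))
```

You would then POST this body to the query endpoint with your API key; the payload builder itself is just plain Python.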
Hey - I have read the Q&A doc linked and it doesn’t work for my use case.
I want user questions to be sent to a router (transistor/gate) that looks at each question and determines which vDB should be used to answer it. I cannot have vDB 1 be used to answer certain questions. I need 4+ different vDBs that can be selected in a routing step before an answer is returned.
We have tested dropping all the data into a single vDB and the results are horrid. When a vDB is specific to one topic, it works well.
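The routing step described above could be sketched roughly like this. The topic labels, keywords, and vDB names are hypothetical placeholders; in practice the classifier might be an embedding model or a small LLM rather than keyword overlap.

```python
import re

# Hypothetical mapping from topic keywords to the vDB that should answer.
TOPIC_KEYWORDS = {
    "billing_vdb": {"invoice", "refund", "pricing", "payment"},
    "support_vdb": {"error", "crash", "install", "bug"},
    "product_vdb": {"feature", "roadmap", "release", "integration"},
    "legal_vdb":   {"contract", "license", "gdpr", "privacy"},
}

def route_query(question: str, default: str = "product_vdb") -> str:
    """Pick the vDB whose keyword set best overlaps the question's words."""
    tokens = set(re.findall(r"[a-z]+", question.lower()))
    scores = {vdb: len(tokens & kws) for vdb, kws in TOPIC_KEYWORDS.items()}
    best_vdb, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_vdb if best_score > 0 else default

print(route_query("How do I get a refund on my invoice?"))  # billing_vdb
```

Once the router picks a vDB, the original question is forwarded unchanged to that store, so only one embedding source is ever searched per question.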
@shane is it possible to control the embedding meta description? We have seen increases in accuracy doing this on other platforms.
I see the write-up on scoring below, but I am worried that boosting certain sources (embedding areas) will skew results, since keywords/terminology can be the same across sources. Hence we are looking for a router that directs the AI to a specific embedding source.
Let me know if you have a Calendly link and I can book a time to chat and go through the data we have and the flow we are looking for in Figma.