Hi,
I began testing Vectara using the sample app, and I noticed that while the summary I received for my search “what is ML” was correct, the first reference was not related to “Machine Learning” at all. The engine confused “ML” as in machine learning with “ml” as in milliliter and based the summary on that.
My question: how can that be avoided?
Thanks numix,
This is a great example. In this case Vectara’s retrieval engine returned results with both interpretations: “ML as milliliter” and “ML as machine learning.” The summarizer might then decide that one interpretation is better than the other, or it might try to incorporate both.
It’s a tricky problem to solve. What’s the best way to handle an ambiguous query like this? Maybe the summarizer could respond with a follow-up question when it needs to know how you’d prefer to disambiguate. For example, in this case it could prompt you with, “Did you mean ML as in milliliter, or ML as in machine learning?” and you could follow up with your answer. Would you find this useful? I’m also open to any other ideas you might have.
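In the meantime, something like this could be approximated on the client side. Here is a minimal sketch, assuming you already have the text snippets returned by retrieval: it checks whether an ambiguous acronym in the query matches more than one known expansion across the top results, and if so, builds a clarifying follow-up question. The function names and the hard-coded expansion list are hypothetical, not part of any Vectara API.

```python
# Hypothetical client-side disambiguation sketch (NOT a Vectara API feature).
# Idea: if the top retrieval results mix two senses of an acronym,
# ask the user a follow-up question before summarizing.

def find_expansions(acronym, snippets, known_expansions):
    """Return the known expansions of `acronym` that appear in the snippets."""
    found = set()
    for snippet in snippets:
        text = snippet.lower()
        for expansion in known_expansions:
            if expansion in text:
                found.add(expansion)
    return found

def disambiguation_prompt(acronym, expansions):
    """Build a follow-up question when results mix interpretations."""
    options = " or ".join(f'"{e}"' for e in sorted(expansions))
    return f"Did you mean {acronym} as in {options}?"

# Example: snippets like the ones retrieval might return for "what is ML"
snippets = [
    "Machine learning (ML) is a field of artificial intelligence...",
    "Add 250 milliliters of water to the solution...",
]
expansions = find_expansions("ML", snippets, {"machine learning", "milliliter"})
if len(expansions) > 1:
    print(disambiguation_prompt("ML", expansions))
```

A real version would need a less brittle way to detect the competing senses (the hard-coded expansion list here is just for illustration), but the flow is the same: detect the ambiguity, ask, then re-query or re-summarize with the user’s answer.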
Thanks ofermend,
I actually thought of that. Is there any way to implement it using Vectara’s API? Could there be a parameter that triggers a follow-up question when a query could refer to several different things?
Thanks
@numix - if you are using vectara-answer, unfortunately that is not implemented yet. Would you mind opening an issue for it, so we can prioritize it there?