Optum Guide Bot
Business Opportunity
Provide instant, accurate information and assist agents with troubleshooting or finding specific information in Knowledge Central (KC) and other knowledge resources. Advocates can ask questions and receive answers based on predefined templates and content.
Solution / Approach
- To address this problem, documents are retrieved automatically from KC and stored securely in Azure Blob storage.
- A batch service runs daily to check for the latest updates and fetch newly added or updated documents.
- The Bot Framework pulls content from microsites and extracts documents from content management systems such as KC and AEM. LangChain, a Python library, generates the word embeddings and vectors used for language-related lookups.
- These vectors are stored in a vector database that is indexed into Azure Cognitive Search, and custom semantic-search logic retrieves and displays the results. The architecture is scalable and can be evaluated for cost; there are plans to migrate the embeddings and vector database to Vespa services.
- The solution includes an admin interface for non-sharable/proprietary needs and uses microservices/functions for training and feedback loops. Agents can quickly and efficiently retrieve the information they need from KC, reducing response times and potential errors.
- The solution can be customized as needed and can serve as a central KC vector-database repository.
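The daily batch sync described above can be sketched in a few lines. This is an illustration only: the document listing and the Blob-side inventory are stand-ins for the real KC and Azure Blob APIs, and the field names (`id`, `modified`) are assumptions.

```python
from datetime import datetime, timezone

def sync_updated_documents(kc_docs, blob_index, last_run):
    """Return ids of KC documents that are new or modified since last_run.

    kc_docs:    list of dicts with 'id' and 'modified' (datetime) keys,
                as a hypothetical KC listing call might return them.
    blob_index: set of document ids already present in Blob storage.
    last_run:   datetime of the previous batch execution.
    """
    to_fetch = []
    for doc in kc_docs:
        is_new = doc["id"] not in blob_index
        is_updated = doc["modified"] > last_run
        if is_new or is_updated:
            to_fetch.append(doc["id"])
    return to_fetch

# Example: one doc updated after the last run, one brand new, one unchanged.
last_run = datetime(2024, 1, 1, tzinfo=timezone.utc)
kc_docs = [
    {"id": "kc-001", "modified": datetime(2024, 1, 2, tzinfo=timezone.utc)},
    {"id": "kc-002", "modified": datetime(2023, 12, 1, tzinfo=timezone.utc)},
    {"id": "kc-003", "modified": datetime(2023, 12, 1, tzinfo=timezone.utc)},
]
blob_index = {"kc-001", "kc-002"}
print(sync_updated_documents(kc_docs, blob_index, last_run))  # ['kc-001', 'kc-003']
```

In the deployed batch service this delta list would drive the fetch-and-store step into Blob storage before re-indexing.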
Data Indexing
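A minimal sketch of the indexing and semantic-lookup flow. The `embed` function here is a toy bag-of-words stand-in for the real LangChain / Azure OpenAI embedding model, and `VectorIndex` stands in for the vector database indexed into Azure Cognitive Search; only the shape of the flow (embed, store, rank by cosine similarity) reflects the actual design.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: lowercase bag-of-words counts.
    In production this would call the LangChain / Azure OpenAI embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class VectorIndex:
    """Stand-in for the vector database behind Azure Cognitive Search."""
    def __init__(self):
        self.docs = {}

    def add(self, doc_id, text):
        self.docs[doc_id] = embed(text)

    def search(self, query, top_k=1):
        qv = embed(query)
        ranked = sorted(self.docs,
                        key=lambda d: cosine(qv, self.docs[d]),
                        reverse=True)
        return ranked[:top_k]

index = VectorIndex()
index.add("kc-claims", "how to file a claim appeal")
index.add("kc-enroll", "member enrollment and eligibility checks")
print(index.search("claim appeal process"))  # ['kc-claims']
```

The real pipeline additionally chunks documents before embedding and persists vectors so the Cognitive Search index can be rebuilt incrementally by the batch service.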
Architecture
Key Metrics
- Retrieval time: the time taken to retrieve documents from KC; tracked to identify performance bottlenecks.
- Accuracy rate: the accuracy of retrieved results, ensuring agents can quickly and easily find the information they need.
- User satisfaction: how satisfied users are with the solution, used to confirm the product meets user needs and to identify areas for improvement.
- Cost savings: the savings achieved by automating the retrieval process, helping organizations optimize their resources.
- Scalability: the solution's ability to scale up or down with demand, so it can serve small, medium, or large organizations.
Tech Stack
- Python
- Azure Cognitive Services
- Azure Functions
- Azure Form Recognizer
- Azure Translator
- Azure OpenAI
- Jenkins
- Docker
- Git
- Terraform
- Azure Cosmos & PowerBI
Resource Links
Feedback
We appreciate your feedback! Please share any suggestions or improvements for this product by clicking the following link: