Provider Agent Assistant BOT BETA
Business Opportunity
Retrieving PDF/HTML/Doc files from Knowledge Central (KC) for M&R, C&S, and E&I provider services through daily batch processes can be time-consuming and inefficient. Additionally, locating the correct information in KC can be challenging for agents who need specific answers, which can lead to delayed response times and potential errors.
Link to Canvas Dashboard: Optum Guide BOT
Solution
To address this problem, a solution was developed that automates the retrieval of documents from KC and stores them securely in Azure Blob storage. A batch service triggers daily to check for updates and fetch newly added or modified documents, while the Bot Framework pulls content from microsites and extracts documents from content management systems such as KC and AEM. LangChain, a Python library, generates word embeddings for language-related lookups; the resulting vectors are stored in a Vector Database that is indexed into Azure Cognitive Search, and custom semantic-search logic retrieves and displays the results. The architecture is scalable and can be evaluated for cost, with plans to migrate to Vespa services for embeddings and the Vector Database. An admin interface supports non-sharable/proprietary needs, and microservices/functions drive the training and feedback loops. Together, these components let agents quickly and reliably retrieve the information they need from KC, reducing response times and errors. The solution can be customized as needed and can serve as a central Vector DB repository for KC.
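As a rough illustration of the daily batch check, the job can compare each document's last-modified timestamp in KC's catalog against the previous run's watermark and fetch only new or changed documents. The function and data names below are hypothetical stand-ins, not the production service:

```python
from datetime import datetime, timezone

def find_updated_docs(catalog, last_run):
    """Return IDs of documents added or modified since the previous batch run.

    `catalog` is a hypothetical listing of (doc_id, modified_at) pairs,
    standing in for whatever document-listing API KC exposes.
    """
    return [doc_id for doc_id, modified_at in catalog if modified_at > last_run]

last_run = datetime(2023, 5, 1, tzinfo=timezone.utc)
catalog = [
    ("policy-guide", datetime(2023, 4, 20, tzinfo=timezone.utc)),  # unchanged
    ("claims-faq", datetime(2023, 5, 2, tzinfo=timezone.utc)),     # updated
]
print(find_updated_docs(catalog, last_run))  # ['claims-faq']
```

In the real pipeline, the fetched files would then be uploaded to Azure Blob storage and the watermark advanced to the current run time.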
- Data Indexing:
  - Documents are retrieved from KC automatically and stored securely in Blob storage on Azure cloud.
  - A batch service triggers every day to check for the latest updates and fetch newly added or modified documents.
  - The Bot Framework pulls content from microsites and extracts documents from content management systems such as KC and AEM.
  - LangChain, a Python library, generates word embeddings and vectors for language-related lookups.
  - The vectors are stored in a Vector Database, which is indexed into Azure Cognitive Search; custom semantic-search logic retrieves the results and displays them.
  - The architecture is scalable and can be evaluated for cost; there are plans to migrate to Vespa services for embeddings and the Vector Database.
  - An admin interface covers non-sharable/proprietary needs, and microservices/functions support training and feedback loops.
  - The solution can be customized as needed and can serve as a central Vector DB repository for KC.
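The embed-index-search flow above can be sketched in a self-contained way. The toy bag-of-words embedder below stands in for the real LangChain/Azure OpenAI embeddings, and the in-memory store stands in for the Vector Database behind Azure Cognitive Search; all names and documents are illustrative:

```python
import math

# Toy fixed vocabulary built from the sample documents below.
VOCAB = sorted({w for t in [
    "how to submit a claim for provider services",
    "provider enrollment and credentialing steps",
] for w in t.split()})

def toy_embed(text: str) -> list[float]:
    # Stand-in for a real embedding model: a normalized
    # bag-of-words vector over the fixed vocabulary.
    counts = [text.lower().split().count(w) for w in VOCAB]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

class ToyVectorStore:
    """In-memory stand-in for the Vector DB indexed into Azure Cognitive Search."""
    def __init__(self):
        self.docs: list[tuple[str, list[float]]] = []

    def add(self, doc_id: str, text: str) -> None:
        self.docs.append((doc_id, toy_embed(text)))

    def search(self, query: str, k: int = 3) -> list[str]:
        # Cosine similarity (vectors are already unit-normalized).
        q = toy_embed(query)
        scored = [(sum(a * b for a, b in zip(q, vec)), doc_id)
                  for doc_id, vec in self.docs]
        scored.sort(reverse=True)
        return [doc_id for _, doc_id in scored[:k]]

store = ToyVectorStore()
store.add("claims-faq", "how to submit a claim for provider services")
store.add("enrollment", "provider enrollment and credentialing steps")
print(store.search("submit a claim", k=1))  # ['claims-faq']
```

The production system replaces each stand-in with a managed service: real embeddings from Azure OpenAI via LangChain, persistence in the Vector Database, and ranking through Azure Cognitive Search plus the custom semantic-search logic.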
Product Values
- Efficiency: The solution automates the retrieval of documents from KC, improving efficiency and reducing delays in response times.
- Accuracy: The solution utilizes custom logic for semantic search, improving the accuracy of retrieved results.
- Scalability: The solution scales with demand and can be evaluated for cost, making it suitable for small, medium, or large organizations.
- Security: The solution stores retrieved documents securely in Blob storage on Azure cloud, ensuring data privacy and security.
- Customization: The solution, including its user interface, can be customized to fit specific needs.
Key Metrics
- Retrieval time: This measures the time it takes to retrieve documents from KC, helping to identify areas for improvement.
- Accuracy rate: This measures the accuracy of retrieved results, ensuring that agents can quickly and easily find the information they need.
- User satisfaction: This measures the satisfaction level of users with the solution, helping to identify areas for improvement and ensure that the product meets user needs.
- Cost savings: This measures the cost savings achieved by automating the retrieval process, helping organizations to optimize their resources.
- Scalability: This measures the ability of the solution to scale up or down based on demand, ensuring that it can meet the needs of small, medium, or large organizations.
Tech Stack
- Python
- Azure Cognitive Services
- Azure Functions
- Azure Form Recognizer
- Azure Translator
- Azure OpenAI
- Jenkins
- Docker
- Git
- Terraform
- Azure Cosmos & PowerBI
Architecture Diagram
Release Notes
- Agent Assistant Bot BETA supports knowledge repository content only; it uses natural language processing (NLP) to interpret user input and provide automated responses to common queries.
- It runs in a non-production environment and uses the basic tier of Azure OpenAI, so bot response times will be slower than the production bot.
- For the knowledge repository, we are using the staging environment, and the URL is Knowledge Central (uhc.com).
- Limitations: The bot is not able to read data from PPT and XLSX documents.
- We do not store any transcript data.
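Given the PPT/XLSX limitation above, an ingest-time guard can skip unsupported files before they reach the indexing pipeline. This is a sketch only; the exact extension sets are assumptions based on the formats named in this document (PDF/HTML/Docs supported, PPT/XLSX not):

```python
from pathlib import Path

# Assumed format lists: PDF/HTML/Doc files are retrieved from KC,
# while the bot cannot read PPT or XLSX documents.
SUPPORTED = {".pdf", ".html", ".htm", ".doc", ".docx"}
UNSUPPORTED = {".ppt", ".pptx", ".xlsx"}

def is_ingestible(filename: str) -> bool:
    """Return True if the file type is one the bot can read."""
    ext = Path(filename).suffix.lower()
    if ext in UNSUPPORTED:
        return False
    return ext in SUPPORTED

print(is_ingestible("benefits_guide.pdf"))  # True
print(is_ingestible("rate_sheet.xlsx"))     # False
```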