Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Amazon Bedrock Knowledge Bases is a fully managed capability that empowers developers to create highly accurate, low-latency, secure, and customizable generative AI applications cost-effectively. It connects FMs to a company’s internal data using Retrieval Augmented Generation (RAG), which helps FMs deliver more relevant, accurate, and customized responses.
In this post, we detail two announcements related to Amazon Bedrock Knowledge Bases:
- Support for custom connectors and ingestion of streaming data.
- Support for reranking models.
Support for custom connectors and ingestion of streaming data
Today, we announced support for custom connectors and ingestion of streaming data in Amazon Bedrock Knowledge Bases. Developers can now efficiently and cost-effectively ingest, update, or delete data directly using a single API call, without the need to perform a full sync with the data source periodically or after every change.

Customers are increasingly developing RAG-based generative AI applications for use cases such as chatbots and enterprise search, but they face challenges in keeping the data in their knowledge bases up to date so that end users always have access to the latest information. The current process of data synchronization is time-consuming, requiring a full sync every time data is added or removed. Customers also face challenges in integrating data from unsupported sources, such as Google Drive or Quip, into their knowledge bases. Typically, to make this data available in Amazon Bedrock Knowledge Bases, they must first move it to a supported source, such as Amazon Simple Storage Service (Amazon S3), and then start the ingestion process. This extra step not only creates additional overhead but also delays making the data accessible for querying. Similarly, customers who want to use streaming data (for example, news feeds or Internet of Things (IoT) sensor data) face delays in real-time data availability because the data must first land in a supported data source before ingestion. As customers scale up their data, these inefficiencies and delays can become significant operational bottlenecks and increase costs.

With these challenges in mind, a more efficient and cost-effective way to ingest and manage data from various sources is needed to keep the knowledge base up to date and available for querying in near real time. With support for custom connectors and ingestion of streaming data, customers can now use direct APIs to efficiently add, check the status of, and delete data, without the need to list and sync the entire dataset.
How it works
Custom connectors and ingestion of streaming data can be accessed using the Amazon Bedrock console or the AWS SDK. Four document-level APIs are available, shown below with HTTP examples; a Python sketch that exercises all four follows the list.
- Add Document
The Add Document API is used to add new files to the knowledge base without having to perform a full sync afterward. Customers can add content by specifying the Amazon S3 location of the document, by passing the text content inline, or by providing it as a Base64-encoded string. For example:

PUT /knowledgebases/KB12345678/datasources/DS12345678/documents HTTP/1.1
Content-type: application/json

{
    "documents": [{
        "content": {
            "dataSourceType": "CUSTOM",
            "custom": {
                "customDocumentIdentifier": {
                    "id": "MyDocument"
                },
                "inlineContent": {
                    "textContent": {
                        "data": "Hello world!"
                    },
                    "type": "TEXT"
                },
                "sourceType": "IN_LINE"
            }
        }
    }]
}
- Delete Document
The Delete Document API is used to delete data from the knowledge base without needing to perform a full sync afterward. For example:

POST /knowledgebases/KB12345678/datasources/DS12345678/documents/deleteDocuments/ HTTP/1.1
Content-type: application/json

{
    "documentIdentifiers": [{
        "custom": {
            "id": "MyDocument"
        },
        "dataSourceType": "CUSTOM"
    }]
}
- List Documents
The List Documents API returns a list of documents that match the criteria specified in the request parameters. For example:

POST /knowledgebases/KB12345678/datasources/DS12345678/documents/ HTTP/1.1
Content-type: application/json

{
    "maxResults": 10
}
- Get Document
The Get Document API returns information about the documents that match the criteria specified in the request parameters. For example:

POST /knowledgebases/KB12345678/datasources/DS12345678/documents/getDocuments/ HTTP/1.1
Content-type: application/json

{
    "documentIdentifiers": [{
        "custom": {
            "id": "MyDocument"
        },
        "dataSourceType": "CUSTOM"
    }]
}
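The same operations are exposed through the AWS SDKs. Below is a minimal sketch using the AWS SDK for Python (Boto3) and its bedrock-agent client; the knowledge base ID, data source ID, and document ID are placeholders, and the method names and request shapes, which mirror the HTTP examples above, should be verified against the Boto3 documentation for your SDK version.

import boto3

# Build-time Knowledge Bases operations live on the bedrock-agent client.
bedrock_agent = boto3.client("bedrock-agent", region_name="us-west-2")

KB_ID = "KB12345678"  # placeholder knowledge base ID
DS_ID = "DS12345678"  # placeholder custom data source ID

# Add (ingest) a document with inline text content -- no full sync required.
bedrock_agent.ingest_knowledge_base_documents(
    knowledgeBaseId=KB_ID,
    dataSourceId=DS_ID,
    documents=[{
        "content": {
            "dataSourceType": "CUSTOM",
            "custom": {
                "customDocumentIdentifier": {"id": "MyDocument"},
                "sourceType": "IN_LINE",
                "inlineContent": {
                    "type": "TEXT",
                    "textContent": {"data": "Hello world!"},
                },
            },
        }
    }],
)

# Check the ingestion status of specific documents.
docs = bedrock_agent.get_knowledge_base_documents(
    knowledgeBaseId=KB_ID,
    dataSourceId=DS_ID,
    documentIdentifiers=[{
        "dataSourceType": "CUSTOM",
        "custom": {"id": "MyDocument"},
    }],
)
for detail in docs["documentDetails"]:
    print(detail["identifier"], detail["status"])

# List documents currently in the data source.
listing = bedrock_agent.list_knowledge_base_documents(
    knowledgeBaseId=KB_ID, dataSourceId=DS_ID, maxResults=10
)

# Delete a document -- again, no full sync required.
bedrock_agent.delete_knowledge_base_documents(
    knowledgeBaseId=KB_ID,
    dataSourceId=DS_ID,
    documentIdentifiers=[{
        "dataSourceType": "CUSTOM",
        "custom": {"id": "MyDocument"},
    }],
)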
Now available
Support for custom connectors and ingestion of streaming data in Amazon Bedrock Knowledge Bases is available today in all AWS Regions where Amazon Bedrock Knowledge Bases is available. Check the Region list for details and future updates. To learn more about Amazon Bedrock Knowledge Bases, visit the Amazon Bedrock product page. For pricing details, review the Amazon Bedrock pricing page.
Send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS contacts, and engage with the generative AI builder community at community.aws.
Support for reranking models
Today, we also announced the new Rerank API in Amazon Bedrock, which gives developers a way to use reranking models to enhance the performance of their RAG-based applications by improving the relevance and accuracy of responses. Semantic search, supported by vector embeddings, maps documents and queries into a high-dimensional vector space in which texts with related meanings lie close together, so it can return relevant items even when they share no words with the query. Semantic search is used in RAG applications because the relevance of retrieved documents to a user’s query plays a critical role in providing accurate responses, and RAG applications retrieve a range of relevant documents from the vector store.
However, semantic search has limitations in prioritizing the most suitable documents based on user preferences or query context, especially when the user query is complex, ambiguous, or involves nuanced context. This can lead to retrieving documents that are only partially relevant to the user’s question. It can also cause citations to be attributed to the wrong sources, eroding trust and transparency in the RAG-based application. Addressing these limitations requires ranking that better understands user intent and context, along with improved source credibility assessment and citation practices that confirm the reliability and transparency of generated responses.
Advanced reranking models address these challenges by prioritizing the most relevant content from a knowledge base for a given query and its additional context, so that foundation models receive the most relevant content, which leads to more accurate and contextually appropriate responses. Reranking models can also reduce response generation costs by prioritizing the information that is sent to the generation model.
How it works
At launch, we’re supporting the Amazon Rerank 1.0 and Cohere Rerank 3.5 reranking models. For this walkthrough, I use the Amazon Rerank 1.0 model, starting by requesting access to it.
Once access has been granted, I create a knowledge base using the existing Amazon Bedrock Knowledge Bases console experience (an API-based process is also available). The knowledge base contains two data sources: a music playlist and a list of films.
As soon as the knowledge base has been created, I edit its service role to attach a policy that allows the bedrock:Rerank action, shown below. The Rerank API takes the user query as input along with the list of documents to be reranked, and returns that list reordered by relevance.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel"
            ],
            "Resource": [
                "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0"
            ]
        },
        {
            "Sid": "Statement2",
            "Effect": "Allow",
            "Action": [
                "bedrock:Rerank"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
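With this policy in place, the Rerank API can also be invoked directly. The following is a minimal sketch using Boto3’s bedrock-agent-runtime client; the query and documents are made-up examples, and the request shape should be checked against the current Boto3 documentation.

import boto3

# Runtime operations such as Rerank live on the bedrock-agent-runtime client.
runtime = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

documents = [
    "A curated playlist of upbeat summer songs.",
    "A list of classic science-fiction films.",
    "Maintenance schedule for the office HVAC system.",
]

response = runtime.rerank(
    queries=[{"type": "TEXT", "textQuery": {"text": "movies about space"}}],
    sources=[
        {
            "type": "INLINE",
            "inlineDocumentSource": {
                "type": "TEXT",
                "textDocument": {"text": doc},
            },
        }
        for doc in documents
    ],
    rerankingConfiguration={
        "type": "BEDROCK_RERANKING_MODEL",
        "bedrockRerankingConfiguration": {
            "modelConfiguration": {
                "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0"
            },
            "numberOfResults": 3,
        },
    },
)

# Each result carries the index of the source document and a relevance score.
for result in response["results"]:
    print(result["index"], result["relevanceScore"])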
The last step is to sync the data sources to index their contents for searching. A sync can take anywhere from a few minutes to a few hours.
The knowledge base is now ready for use. The RetrieveAndGenerate API reranks the results retrieved from the vector store based on their relevance to the query.
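As a rough illustration, here is a hedged Boto3 sketch of a RetrieveAndGenerate call with a reranking configuration; the knowledge base ID and model ARNs are placeholders, and the configuration keys should be verified against the current API reference.

import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

response = runtime.retrieve_and_generate(
    input={"text": "Recommend a film about space exploration."},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB12345678",  # placeholder
            "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
            "retrievalConfiguration": {
                "vectorSearchConfiguration": {
                    "numberOfResults": 10,
                    # Rerank the retrieved chunks before generation.
                    "rerankingConfiguration": {
                        "type": "BEDROCK_RERANKING_MODEL",
                        "bedrockRerankingConfiguration": {
                            "modelConfiguration": {
                                "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0"
                            },
                            "numberOfRerankedResults": 5,
                        },
                    },
                }
            },
        },
    },
)

print(response["output"]["text"])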
For contrast, I ran the same query against the same data in a separate account that doesn’t use the Rerank API. The results there aren’t reranked by their relevance to the query, which can affect performance and compromise the accuracy of the responses.
Now available
The Rerank API in Amazon Bedrock is available today in the following AWS Regions: US West (Oregon), Canada (Central), Europe (Frankfurt), and Asia Pacific (Tokyo). Check the Region list for details and future updates. The Rerank API can also be used independently to rerank documents even if you are not using Amazon Bedrock Knowledge Bases. To learn more about Amazon Bedrock Knowledge Bases, visit the Amazon Bedrock product page. For pricing details, review the Amazon Bedrock pricing page.
Send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS contacts, and engage with the generative AI builder community at community.aws.
– Veliswa.