Cloud Computing

Amazon Kendra: The Complete Guide to AWS Enterprise Search

Amazon Kendra replaces keyword-based enterprise search with ML-powered natural language understanding — extracting precise answers from your documents with confidence scores, source citations, and automatic access control enforcement. This guide covers index architecture, three result types, 14+ data source connectors with ACL sync, the Kendra Retriever API for RAG with Bedrock, GenAI Index, pricing, security, and a comparison with Azure AI Search.

Service Deep Dive
25 min read

What Is Amazon Kendra?

According to McKinsey research, the average enterprise employee spends 3.6 hours per day searching for information — navigating disconnected systems, scanning irrelevant keyword results, and digging through document repositories that grow larger every quarter. However, traditional keyword-based search fails because users do not always know the exact terms documents use. Specifically, they type “what is our vacation policy” and get zero results because the HR document says “paid time off” instead. Amazon Kendra solves this fundamental problem with ML-powered natural language search.

Amazon Kendra is a fully managed intelligent enterprise search service from Amazon Web Services powered by machine learning. Unlike traditional keyword search engines, Amazon Kendra uses natural language processing to understand the intent behind user queries and deliver highly relevant answers — not just a ranked list of potentially matching documents. Consequently, users can ask questions in plain English (“How long is maternity leave?” or “How do I configure my VPN?”), and Kendra extracts and returns precise answers with confidence scores and source citations.

Importantly, Amazon Kendra connects to your existing data sources — S3, SharePoint, Confluence, Salesforce, ServiceNow, OneDrive, Google Drive, and dozens more — through pre-built connectors that crawl and index content automatically on configurable schedules while respecting existing access controls from each source system. Consequently, employees only see search results from documents they are authorized to access, maintaining governance without requiring a separate permission layer.

Amazon Kendra Enterprise Impact

  • 40-60% search time reduction
  • 30-50% fewer support tickets
  • 14+ native data source connectors

Moreover, Amazon Kendra plays a critical role in the generative AI landscape as the retrieval layer for Retrieval-Augmented Generation (RAG) architectures. The Kendra Retriever API provides high-accuracy semantic retrieval that feeds relevant document passages to Amazon Bedrock foundation models, grounding AI responses in your actual enterprise content and sharply reducing hallucination risk. This RAG pattern — Kendra for retrieval, Bedrock for generation — has become the standard architecture for enterprise generative AI applications in 2026.

Supported Document Formats

Additionally, Amazon Kendra supports a wide variety of document formats including PDFs, HTML pages, Word documents, PowerPoint presentations, plain text files, and FAQ lists in CSV format. The service processes and indexes these diverse formats automatically, extracting text, structure, and metadata without requiring custom parsing logic or document format conversion pipelines. Consequently, this format flexibility means organizations can index their entire document ecosystem — from formal policy documents to internal wiki pages to customer-facing knowledge articles — through a single, unified, permission-aware search interface that respects access controls across every connected source.

Furthermore, Amazon Kendra integrates natively with the broader AWS AI ecosystem — Amazon Lex for conversational chatbot interfaces, Amazon Comprehend for text analysis, Amazon Q Business for enterprise knowledge assistants, and AWS Lambda for custom processing logic. This deep integration means you can build complete intelligent knowledge systems entirely within AWS, from document indexing through natural language search to generative AI-powered answers.

Key Takeaway

Amazon Kendra replaces keyword-based enterprise search with ML-powered natural language understanding — extracting precise answers from your documents, respecting access controls, and serving as the retrieval layer for RAG-based generative AI applications. If your employees spend hours searching for information across disconnected systems, Kendra is the fastest path to intelligent enterprise search on AWS.


How Amazon Kendra Works

Essentially, Amazon Kendra operates by indexing your enterprise content from connected data sources, then applying machine learning models to understand the meaning behind user queries and match them to the most relevant information in your index.

Amazon Kendra Index Architecture

At the core of every Kendra deployment is an index — a searchable repository of your enterprise content. Simply create an index, connect data sources, and Kendra crawls, processes, and indexes your documents automatically. Importantly, the index stores not just the raw text but also metadata, document structure, access control lists, and semantic representations that enable intelligent retrieval.

Under the hood, Kendra’s indexing process applies multiple ML models to each document: text extraction (handling PDFs, HTML, PPT, Word, and other formats), entity recognition, key phrase identification, and semantic embedding generation. This multi-model processing ensures that when users search, Kendra can match on meaning — not just surface-level keyword overlap. Ultimately, the result is a search experience that understands “Who approves overtime?” matches a document titled “Management Authorization Procedures” even though no keywords overlap — a level of semantic understanding that keyword search simply cannot achieve.

Currently, Amazon Kendra offers two index editions for different scale requirements:

  • Developer Edition: Supports up to 10,000 documents and 4,000 queries per day. Designed for development, testing, and small-scale deployments. Runs on a single availability zone.
  • Enterprise Edition: Supports up to 100,000 documents (expandable to 10+ million with additional storage units) and 8,000 queries per day (expandable to 800,000+). Production-ready with three availability zones for high availability.

Additionally, Amazon Kendra has introduced the GenAI Index — a new index type specifically designed for retrieval-augmented generation and intelligent search. The GenAI Index leverages advanced semantic models and the latest information retrieval technologies to help enterprises build digital assistants and intelligent search experiences more efficiently. This index is optimized for the Kendra Retriever API, which feeds high-accuracy retrieval results to foundation models in Bedrock.
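As a rough sketch, an index can also be created programmatically with boto3 rather than through the console. The index name, description, and IAM role ARN below are illustrative placeholders, not values from this article:

```python
# Sketch: assembling a create_index request with boto3.
# All names and the IAM role ARN are placeholders.

def build_index_request(name: str, role_arn: str,
                        edition: str = "DEVELOPER_EDITION") -> dict:
    """Parameters for kendra.create_index; use ENTERPRISE_EDITION for multi-AZ."""
    return {
        "Name": name,
        "Edition": edition,
        "RoleArn": role_arn,
        "Description": "Evaluation index for enterprise documents",
    }

# Against a live account (create_index is asynchronous; poll
# describe_index until Status becomes ACTIVE):
#   import boto3
#   kendra = boto3.client("kendra", region_name="us-east-1")
#   index_id = kendra.create_index(**build_index_request(
#       "my-eval-index",
#       "arn:aws:iam::123456789012:role/KendraIndexRole"))["Id"]
```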

Three Types of Amazon Kendra Results

Unlike traditional search engines that return a simple ranked list of documents, Amazon Kendra returns three distinct types of results for every query — each serving a different user need and providing progressively more detail:

  • Factoid answers: Direct answers extracted from documents with high confidence. For a question like “How long is maternity leave?”, Kendra might return “14 weeks” — extracted directly from your HR policy document with the source cited. This is the highest-value result type, delivering instant answers without requiring the user to open and read any document.
  • Document excerpts: Relevant passages from documents with the answer highlighted in context. For broader questions like “How do I configure my VPN?”, Kendra extracts the most relevant paragraphs and highlights the key information, giving the user enough context to understand the answer without reading the entire document.
  • Document rankings: A traditional ranked list of relevant documents as a fallback when specific answers cannot be extracted. This ensures users always receive useful results, even for ambiguous, complex, or highly specialized queries where automatic answer extraction is not confident enough.

Consequently, Kendra delivers a layered search experience — precise answers when possible, relevant excerpts when the question is broader, and document links when neither applies. This multi-tier approach significantly outperforms traditional keyword search, where users must open and read multiple documents to find what they need. Furthermore, each result includes a confidence score that your application can use to determine how to present results — showing factoid answers prominently when confidence is high, or falling back to document lists when the query is ambiguous.
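The confidence-driven presentation logic described above might be sketched as follows. The tier names and the choice of which ScoreConfidence values count as “confident” are our own illustrative policy, not part of the Kendra API:

```python
# Sketch: mapping a Query API result item to a display tier.
# The tier names and thresholds are an illustrative policy, not a Kendra feature.

CONFIDENT = {"VERY_HIGH", "HIGH"}  # ScoreConfidence values shown prominently

def presentation_tier(item: dict) -> str:
    """Decide how to present one result item from response['ResultItems']."""
    rtype = item.get("Type")  # 'ANSWER', 'QUESTION_ANSWER', or 'DOCUMENT'
    confidence = item.get("ScoreAttributes", {}).get(
        "ScoreConfidence", "NOT_AVAILABLE")
    if rtype in ("ANSWER", "QUESTION_ANSWER") and confidence in CONFIDENT:
        return "featured-answer"   # factoid / FAQ answer shown at the top
    if rtype == "ANSWER":
        return "excerpt"           # low-confidence answer shown as a passage
    return "document-link"         # fall back to a ranked document result
```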

Data Source Connectors for Amazon Kendra

Currently, Amazon Kendra provides 14+ native data source connectors that index content from your existing repositories without requiring data migration:

  • Cloud storage: Amazon S3, OneDrive, Google Drive, Box
  • Collaboration platforms: SharePoint, Confluence, Slack
  • Business applications: Salesforce, ServiceNow, Jira
  • Databases: Amazon RDS, custom JDBC-compatible databases
  • Web content: Web crawler for indexing website content
  • Custom sources: Custom connector SDK for proprietary data sources

Importantly, each connector automatically syncs access control lists (ACLs) from the source system. Consequently, when a user searches, Kendra filters results based on the user’s identity and group memberships — ensuring they only see documents they are authorized to access. Ultimately, this ACL synchronization eliminates the need to rebuild permission models and ensures search results remain compliant with your existing access policies.
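A minimal sketch of an identity-aware query using the Query API's UserContext parameter; the index ID, user ID, and group names are placeholders:

```python
# Sketch: building an ACL-filtered query. Identity values are placeholders.

def build_acl_query(index_id: str, query_text: str,
                    user_id: str, groups: list[str]) -> dict:
    """Parameters for kendra.query with user-context filtering enabled."""
    return {
        "IndexId": index_id,
        "QueryText": query_text,
        # Kendra intersects this identity with the ACLs synced from each
        # data source, so the user only sees documents they may read.
        "UserContext": {"UserId": user_id, "Groups": groups},
    }

# Against a live index:
#   import boto3
#   kendra = boto3.client("kendra")
#   results = kendra.query(**build_acl_query(
#       "YOUR_INDEX_ID", "what is our vacation policy",
#       "jdoe@example.com", ["hr-all-employees"]))
```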


Core Amazon Kendra Features

Beyond the index and connector infrastructure, several capabilities make Amazon Kendra particularly powerful for enterprise deployment. These features work together to deliver search experiences that continuously improve over time — learning from user behavior, adapting to your organization’s vocabulary, and providing fine-grained control over result relevance:

Natural Language Query Understanding
ML models understand the intent behind queries, not just keywords. Ask “Who can approve travel expenses over $5,000?” and Kendra finds the approval matrix — even if the document never uses that exact phrasing.
Kendra Retriever API for RAG
Optimized API for retrieval-augmented generation workflows. Provides high-accuracy semantic retrieval with passage-level granularity optimized for feeding into Bedrock foundation models — eliminating the need to build custom retrieval systems.
FAQ Matching
Specialized model that pinpoints the closest question from your curated FAQ lists and returns the corresponding answer. Upload FAQ files in CSV format and Kendra automatically handles question-matching logic.
Incremental Learning
ML models continuously optimize search results based on end-user search patterns and feedback. Popular, high-quality documents automatically rise in rankings as usage data accumulates — improving accuracy over time without manual tuning.
Relevance Tuning
Boost search results based on document attributes — data source authority, author, freshness, department, or custom metadata. Ensure that the most authoritative and current documents appear first for each query type.
Custom Synonyms
Extend Kendra’s understanding of your business vocabulary. When users search for “HSA,” Kendra automatically includes results referencing “Health Savings Account.” Define synonym groups to bridge the gap between how employees ask and how documents describe.
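Custom synonyms are supplied as a thesaurus in the Solr synonym file format, uploaded to S3 and registered with the index. A minimal sketch, with example terms; the bucket, key, and role ARN in the usage note are placeholders:

```python
# Sketch: generating a Solr-format synonym file for a Kendra thesaurus.
# The terms are examples; S3 and IAM values below are placeholders.

def solr_synonyms(groups: list[list[str]]) -> str:
    """Render synonym groups, one comma-separated group per line."""
    return "\n".join(", ".join(group) for group in groups) + "\n"

synonym_file = solr_synonyms([
    ["HSA", "Health Savings Account"],
    ["PTO", "paid time off", "vacation days"],
])

# Upload synonym_file to S3, then register it:
#   kendra.create_thesaurus(
#       IndexId="YOUR_INDEX_ID", Name="company-terms",
#       RoleArn="arn:aws:iam::123456789012:role/KendraThesaurusRole",
#       SourceS3Path={"Bucket": "my-bucket",
#                     "Key": "thesaurus/company-terms.txt"})
```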

Amazon Kendra and Generative AI

The most significant evolution of Amazon Kendra in recent years is its role as the retrieval layer in generative AI architectures. Specifically, the Kendra + Bedrock RAG pattern has become the standard architecture for enterprise generative AI applications, and understanding how it works is essential for any organization deploying AI assistants grounded in enterprise content.

Kendra Retriever API for RAG Workflows

The Kendra Retriever API is specifically optimized for RAG workflows. Unlike the standard Kendra Query API that returns full document excerpts, the Retriever API returns passage-level results with optimized granularity — precisely sized chunks of content that maximize the quality of the RAG payload sent to foundation models. This eliminates the need to build custom chunking, embedding, and retrieval logic from scratch — which is one of the most technically challenging and time-consuming aspects of RAG implementation that teams typically spend weeks developing and tuning.

The complete RAG workflow operates as follows:

  1. A user asks a natural language question through your application interface.
  2. The Kendra Retriever API semantically searches your enterprise content and retrieves the most relevant document passages.
  3. These passages are assembled into a context payload and sent to a foundation model on Amazon Bedrock (such as Anthropic Claude, Meta Llama, or Amazon Nova).
  4. The foundation model generates a natural language answer grounded in the retrieved passages, citing specific sources.
  5. The answer is returned to the user with source citations from your original documents.

Importantly, this architecture provides the natural language fluency of LLMs while keeping answers grounded in your actual enterprise documents — dramatically reducing hallucination risk compared to letting an LLM generate answers solely from its training data.
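The workflow above can be sketched in code. The index ID and Bedrock model ID are placeholder assumptions, and the prompt template is our own illustration rather than an AWS-prescribed format:

```python
# Sketch: assembling a RAG prompt from Kendra Retrieve API passages.
# Index ID, model ID, and the prompt wording are illustrative placeholders.

def build_rag_prompt(question: str, passages: list[dict]) -> str:
    """Join retrieved passages into a grounded prompt with source markers."""
    context = "\n\n".join(
        f"[{p['DocumentTitle']}] {p['Content']}" for p in passages)
    return ("Answer using only the context below and cite sources.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")

# Against live services:
#   import boto3
#   kendra = boto3.client("kendra")
#   bedrock = boto3.client("bedrock-runtime")
#   retrieved = kendra.retrieve(IndexId="YOUR_INDEX_ID", QueryText=question)
#   prompt = build_rag_prompt(question, [
#       {"DocumentTitle": r["DocumentTitle"], "Content": r["Content"]}
#       for r in retrieved["ResultItems"]])
#   reply = bedrock.converse(
#       modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
#       messages=[{"role": "user", "content": [{"text": prompt}]}])
```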

Access Control in Amazon Kendra RAG Workflows

Furthermore, because Kendra respects document-level access controls, the RAG responses only reference documents the requesting user is authorized to see, maintaining security at the retrieval layer. This is a critical architectural advantage over generic vector databases that lack built-in access control — with Kendra, the same ACL synchronization that governs direct search also applies to RAG retrieval, ensuring that generative AI responses never inadvertently leak information from documents the requesting user is not authorized to access — a compliance requirement in regulated industries.

Additionally, the GenAI Index introduced in recent Kendra updates is purpose-built for this RAG pattern. It uses advanced semantic models and the latest information retrieval technologies to deliver higher retrieval accuracy than the standard Kendra index — specifically optimized for the passage-level granularity that foundation models require. For organizations building enterprise AI assistants, the GenAI Index + Kendra Retriever API + Bedrock foundation model stack represents the production-ready RAG architecture on AWS.

Need Intelligent Enterprise Search?
Our AWS team designs and deploys Kendra-powered search and RAG architectures for enterprise workloads


Amazon Kendra Pricing Model

Unlike per-request services, Amazon Kendra uses subscription-based pricing with monthly fees for index capacity. Rather than listing specific dollar amounts that change over time, here is how the cost structure works:

Understanding Amazon Kendra Cost Dimensions

  • Index hourly rate: Charged per hour of active index uptime. The rate differs between Developer and Enterprise editions, with Enterprise costing more but providing multi-AZ availability, higher document limits, and higher query throughput.
  • Document storage units: Capacity beyond the base document limit can be added through storage units. Each unit increases the maximum number of indexable documents and adds incremental monthly cost.
  • Query capacity units: Query throughput beyond the base limit can be added through query capacity units. Each unit increases the maximum queries per day.
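A worked sketch of how these dimensions combine into a monthly bill. All rates below are hypothetical placeholders, not actual AWS prices; check the Kendra pricing page for current figures:

```python
# Sketch: estimating monthly Kendra cost from the dimensions above.
# ALL RATES ARE HYPOTHETICAL PLACEHOLDERS, not real AWS prices.

HOURLY_RATE = 1.40        # placeholder: base index $/hour
STORAGE_UNIT_RATE = 0.70  # placeholder: extra storage unit $/hour
QUERY_UNIT_RATE = 0.70    # placeholder: extra query capacity unit $/hour

def monthly_cost(extra_storage_units: int, extra_query_units: int,
                 hours: float = 730.0) -> float:
    """The index bills continuously, so cost scales with uptime hours,
    not with how many queries were actually run."""
    hourly = (HOURLY_RATE
              + extra_storage_units * STORAGE_UNIT_RATE
              + extra_query_units * QUERY_UNIT_RATE)
    return hourly * hours
```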

Connector and Free Tier Costs

  • Connector sync: Data source connectors run on scheduled sync intervals. More frequent syncing means higher connector runtime costs, though the per-sync cost is typically modest relative to index charges.
  • Free tier: Developer Edition includes 750 hours free for the first 30 days — sufficient for evaluation and prototyping. There is no free tier for Enterprise Edition.
Kendra Is Not Cheap

Amazon Kendra has a significant baseline cost compared to per-request AWS AI services. The index runs continuously and bills hourly — even during periods with zero queries. Therefore, Kendra makes the most financial sense for organizations with large document repositories where search quality directly impacts productivity. For smaller use cases with limited document volumes, consider Amazon OpenSearch or Amazon Bedrock Knowledge Bases as more cost-effective alternatives. For current pricing by edition and region, see the official Kendra pricing page.

Cost Optimization Strategy

Start with Developer Edition for evaluation and prototyping — the 30-day free tier provides enough time to validate search quality with your actual documents. Use metadata attributes and document filtering to reduce index size by excluding irrelevant content. Schedule connector syncs at appropriate intervals (hourly for fast-changing content, daily for stable repositories) to minimize connector runtime costs. Additionally, consider a hybrid architecture: Kendra for natural language Q&A over knowledge documents, and OpenSearch for high-volume application search — each optimized for its strength.


Amazon Kendra Security and Compliance

Since Kendra indexes your most sensitive enterprise content — HR policies, financial reports, legal contracts, customer data, technical documentation — security is paramount for any production deployment.

Specifically, all index data is encrypted at rest using AWS KMS with support for customer-managed CMKs, giving you full control over encryption keys. Furthermore, all API communications are encrypted in transit using TLS 1.2+. Moreover, Kendra operates within your VPC and supports PrivateLink for private connectivity, ensuring that document data never traverses the public internet during indexing or query processing.

Additionally, Amazon Kendra is SOC 1/2/3, ISO 27001, HIPAA eligible, PCI DSS compliant, and FedRAMP authorized — making it suitable for regulated industries including healthcare, financial services, and government. Importantly, the automatic ACL synchronization from connected data sources ensures that search results respect your existing access control policies without requiring duplicate permission management or manual security configuration.

Furthermore, Kendra provides document-level access filtering based on user identity and group memberships. Consequently, when a user submits a search query, Kendra automatically filters results to show only documents the user is authorized to access — even if those documents span multiple data sources with different permission models. This enterprise-grade access control is what differentiates Kendra from simply feeding documents into a generic search engine or LLM, where access control is typically an afterthought rather than an architectural foundation.

Moreover, all Kendra API calls are logged in AWS CloudTrail, providing a complete audit trail for compliance reporting and security investigations. Specifically, organizations can track who searched for what, when queries were submitted, which documents were returned in results, and whether users clicked through to specific documents — essential for demonstrating compliance with data access policies in regulated environments.


What’s New in Amazon Kendra

Amazon Kendra has received significant updates over the past two years, with the generative AI integration and GenAI Index being the most transformative changes:

2023
RAG Integration with Bedrock
Amazon Kendra introduced the Retriever API, enabling seamless integration with Amazon Bedrock for retrieval-augmented generation workflows. This established Kendra as the recommended retrieval layer for enterprise generative AI on AWS.
2024
GenAI Index Launch
A new index type purpose-built for RAG and intelligent search, leveraging advanced semantic models for higher retrieval accuracy. Optimized passage-level granularity eliminates the need for custom chunking logic in RAG implementations.
2025
Expanded Connector Ecosystem
Additional native connectors for enterprise data sources including Google Drive, Box, and custom JDBC databases. Improved ACL synchronization performance for large-scale deployments with millions of documents.
2026
Integration with Amazon Q Business
Kendra indexes now serve as knowledge sources for Amazon Q Business, enabling enterprise AI assistants to search across Kendra-indexed content alongside other connected data sources in a unified conversational interface.

Consequently, the role of Amazon Kendra has evolved from a standalone enterprise search service into the semantic retrieval backbone of the entire AWS enterprise AI stack. Whether accessed through direct search queries, Bedrock RAG workflows, Amazon Q Business assistants, or Amazon Lex chatbots, Kendra’s ML-powered retrieval capabilities underpin the enterprise knowledge layer that organizations need to deploy AI assistants that are both accurate and grounded in factual content.


Real-World Amazon Kendra Use Cases

Given its natural language understanding and broad connector support, Amazon Kendra serves organizations across industries where information findability directly impacts productivity, compliance, or customer satisfaction. Enterprise deployments consistently report measurable ROI: 40-60% reductions in time spent searching for information, 30-50% fewer support tickets for well-indexed self-service portals, and 70-80% less document review time in legal research scenarios.

Most Common Amazon Kendra Implementations

Below are the use cases we implement most frequently for our enterprise clients:

Enterprise Knowledge Search
Replace keyword-based intranet search with natural language Q&A across all enterprise documents — HR policies, technical documentation, training materials, and company wikis. Employees ask questions in plain English and receive precise, cited answers.
RAG for Generative AI Applications
Use the Kendra Retriever API as the retrieval layer for Bedrock-powered generative AI applications. Ground LLM responses in your actual enterprise documents to reduce hallucination and provide source-cited answers.
Customer Self-Service Portals
Power customer-facing knowledge bases and help centers with natural language search. Customers find answers to their questions without contacting support — reducing ticket volume by 30-50% for well-indexed knowledge bases.
IT and DevOps Knowledge Management
Index runbooks, incident reports, architecture documentation, and troubleshooting guides. Engineers search in natural language (“How do I rotate the database credentials?”) and receive step-by-step answers extracted from operational documents.
Legal Document Research
Index contracts, regulatory filings, case law, and compliance documents. Legal teams search across entire document repositories for specific clauses, precedents, and regulatory references — reducing document review time by 70-80% in documented case studies compared to manual document-by-document review processes.
Contact Center Agent Assist
Integrate Kendra with Amazon Connect and Amazon Lex to provide real-time knowledge retrieval for contact center agents. When a customer asks a question, the agent sees the most relevant knowledge article automatically — reducing average handle time, improving first-call resolution rates, and eliminating the need for agents to manually search multiple knowledge systems during live customer conversations.

Amazon Kendra vs Azure AI Search

If you are evaluating enterprise search services across cloud providers, the choice between Amazon Kendra and Azure AI Search (formerly Azure Cognitive Search) represents two fundamentally different approaches to the same problem. Here is how they compare across the capabilities that matter most for enterprise deployments:

| Capability | Amazon Kendra | Azure AI Search |
| --- | --- | --- |
| Search approach | ✓ NLP semantic search out of the box | ✓ Full-text + vector + semantic ranking |
| Natural language Q&A | ✓ Direct factoid answer extraction | ◐ Requires Azure OpenAI for Q&A |
| Vector search | ◐ Via GenAI Index | ✓ Native vector + hybrid search |
| Data source connectors | ✓ 14+ native with ACL sync | ✓ Indexers for Azure data sources |
| Access control | ✓ Automatic ACL sync from sources | ✓ Security trimming with filters |
| RAG integration | ✓ Kendra Retriever API + Bedrock | ✓ Azure OpenAI + Cognitive Search |
| Incremental learning | ✓ Automatic from user behavior | ✕ Manual scoring profiles |
| Operational complexity | ✓ Fully managed, minimal config | ◐ More configuration required |
| Custom enrichment | ◐ Limited to metadata | ✓ AI enrichment pipeline (skillsets) |
| Compliance | ✓ SOC, ISO, HIPAA, PCI, FedRAMP | ✓ SOC, ISO, HIPAA, PCI, FedRAMP |

Choosing the Right Amazon Kendra Alternative

Clearly, both are mature enterprise search platforms, but they cater to different priorities. Specifically, Amazon Kendra excels at rapid deployment with minimal operational overhead — you connect data sources, and Kendra delivers natural language Q&A with automatic access control and incremental learning. In contrast, Azure AI Search provides deeper customization through AI enrichment pipelines, native vector search, and explicit control over scoring profiles — ideal for teams that want fine-grained tuning of search behavior.

Ultimately, your cloud ecosystem determines the best fit. If you build on AWS and need Bedrock RAG integration, Kendra’s Retriever API delivers the most streamlined architecture. Conversely, if your infrastructure runs on Azure and you use Azure OpenAI, Azure AI Search integrates natively with the Azure AI stack.

Furthermore, for organizations on AWS that need full-text and vector search with lower baseline costs, Amazon OpenSearch Serverless offers a more cost-effective alternative — though it lacks Kendra’s built-in NLU, automatic ACL sync, and incremental learning capabilities. The highest-value architecture for many enterprises combines Kendra for knowledge Q&A (natural language questions about policies, procedures, and documentation) with OpenSearch for high-volume application search (product catalog, log analytics, real-time data) — each tool optimized for its strength.

Amazon Kendra vs Bedrock Knowledge Bases

Moreover, organizations should also evaluate Amazon Bedrock Knowledge Bases as an alternative for RAG-specific use cases. Bedrock Knowledge Bases provide a simpler, lower-cost retrieval layer when your primary goal is feeding context to foundation models rather than serving direct search experiences to end users. Importantly, Kendra’s advantage over Bedrock Knowledge Bases lies in its superior natural language Q&A capabilities (factoid answers, document excerpts), incremental learning from user behavior, and the broader connector ecosystem with automatic ACL synchronization — capabilities that matter significantly when enterprise search is a primary user-facing feature and core productivity tool rather than just a backend RAG component.


Getting Started with Amazon Kendra

Fortunately, Amazon Kendra provides a straightforward setup experience. You create an index, connect data sources, and start searching — no ML expertise required. The Developer Edition’s 750-hour free tier gives you approximately 30 days of continuous evaluation to validate search quality with your actual enterprise documents before committing to production-grade Enterprise Edition pricing.

Setting Up Your First Amazon Kendra Index

Navigate to the Amazon Kendra console and create a new index, selecting Developer Edition for evaluation (it includes 750 free hours for the first 30 days). Then configure a data source connector — start with S3 if your documents are already in AWS, or SharePoint and Confluence for collaboration content. Kendra crawls and indexes the connected content automatically.

Below is a minimal Python example that queries an existing Kendra index:

import boto3

# Initialize the Kendra client
client = boto3.client('kendra', region_name='us-east-1')

# Query the index
response = client.query(
    IndexId='YOUR_INDEX_ID',
    QueryText='How long is maternity leave?'
)

# Print results by type
for result in response['ResultItems']:
    print(f"Type: {result['Type']}")
    print(f"Answer: {result.get('DocumentExcerpt', {}).get('Text', 'N/A')}")
    print(f"Source: {result.get('DocumentTitle', {}).get('Text', 'N/A')}")
    print(f"Confidence: {result.get('ScoreAttributes', {}).get('ScoreConfidence', 'N/A')}")
    print("---")

Subsequently, for RAG integration, use the Kendra Retriever API to feed retrieved passages to Amazon Bedrock. For conversational search interfaces, integrate Kendra with Amazon Lex bots using the built-in Kendra search intent. For more details and setup guidance, see the Amazon Kendra documentation.


Amazon Kendra Best Practices and Pitfalls

Based on our experience deploying Kendra across enterprise environments, the following patterns consistently determine whether a Kendra deployment delivers transformative search quality that employees genuinely prefer over their previous tools or disappoints users with irrelevant results that drive them back to manual document browsing. The difference lies not in the technology itself but in how you configure, tune, and feed it.

Advantages
ML-powered natural language search — understands intent, not just keywords
Three result types: factoid answers, excerpts, and document rankings
Kendra Retriever API optimized for RAG with Amazon Bedrock
14+ native connectors with automatic ACL synchronization
Incremental learning improves results automatically from user behavior
SOC, ISO, HIPAA, PCI, FedRAMP compliant for regulated industries
Limitations
Significant baseline cost — index runs continuously and bills hourly
Not suitable for e-commerce product search or real-time log analytics
Developer Edition limited to 10,000 documents and single AZ
Limited customization compared to Azure AI Search enrichment pipelines
AWS-only — no multi-cloud or on-premises deployment option
Short free tier window (30 days for Developer Edition only)

Recommendations for Amazon Kendra Deployment

  • Start with your most-searched document repository: Connect the knowledge base that generates the most internal search queries — typically HR policies, IT runbooks, or product documentation. Proving value on the highest-traffic repository builds organizational support for broader rollout.
  • Invest in document metadata: Enrich documents with metadata attributes (department, document type, author, last updated) during indexing. This enables relevance tuning, faceted filtering, and more precise search results — metadata quality directly determines search result quality for end users.
  • Upload curated FAQ lists: Kendra’s FAQ matching model is highly accurate for structured Q&A pairs. Upload your existing FAQ content in CSV format to provide instant, high-confidence answers for the most frequently asked employee questions — often the single highest-impact action for initial search quality.
  • Define custom synonyms for your organization: Map internal terminology to common language. If your company uses “PTO” but employees search for “vacation days,” synonyms bridge this vocabulary gap automatically across all queries without requiring document modifications.
  • Use the Kendra + Bedrock RAG pattern for generative AI: Instead of building custom retrieval systems, use the Kendra Retriever API to feed document passages to Bedrock foundation models. This architecture keeps AI responses grounded and source-cited with far lower hallucination risk — essential where accuracy and data governance are non-negotiable requirements.
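As a sketch of the FAQ recommendation above, assuming the basic headerless CSV layout of question, answer, and optional source URL; the bucket, key, and role values in the usage note are placeholders:

```python
# Sketch: preparing a Kendra FAQ file in the basic CSV layout
# (question, answer, optional URL). S3 and IAM values are placeholders.
import csv
import io

def faq_csv(rows: list[tuple[str, str, str]]) -> str:
    """Render (question, answer, url) rows as CSV for FAQ ingestion."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

faq_file = faq_csv([
    ("How long is maternity leave?", "14 weeks of paid leave.",
     "https://intranet.example.com/hr/leave"),
])

# Upload faq_file to S3, then register it:
#   kendra.create_faq(
#       IndexId="YOUR_INDEX_ID", Name="hr-faqs",
#       S3Path={"Bucket": "my-bucket", "Key": "faqs/hr.csv"},
#       RoleArn="arn:aws:iam::123456789012:role/KendraFaqRole",
#       FileFormat="CSV")
```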

When Not to Use Amazon Kendra

Kendra is purpose-built for enterprise document search, and it is the wrong tool for several common scenarios. For e-commerce product search with faceted filtering, price ranges, and inventory status, use Amazon OpenSearch or Amazon Personalize instead. For real-time log analytics and operational monitoring, use Amazon OpenSearch or CloudWatch. For structured data querying (SQL against databases), use Amazon Athena or Amazon RDS. In short, Kendra excels at “What does our policy say about X?” questions — not “Show me all products under $50 in the electronics category” queries.

Key Takeaway

Amazon Kendra transforms enterprise search from frustrating keyword matching into intelligent, natural language Q&A — extracting precise answers from your documents with confidence scores and source citations. The key to success is connecting high-value document repositories first, investing in document metadata and FAQ content, defining custom synonyms for your domain, and leveraging the Kendra + Bedrock RAG pattern for generative AI applications. An experienced AWS partner can help you design search architectures that maximize information findability while maintaining enterprise-grade security and compliance.

Ready to Transform Your Enterprise Search?
Let our AWS team deploy Amazon Kendra with RAG architecture for intelligent, AI-powered search


Frequently Asked Questions About Amazon Kendra

Common Questions Answered
What is Amazon Kendra used for?
Essentially, Amazon Kendra is used for intelligent enterprise search — finding precise answers within your organization’s documents using natural language questions. Common use cases include enterprise knowledge Q&A (HR policies, IT runbooks, product documentation), RAG retrieval for generative AI applications (Kendra + Bedrock), customer self-service knowledge bases, legal document research, IT knowledge management, and contact center agent assist. It connects to 14+ data sources including S3, SharePoint, Confluence, Salesforce, and ServiceNow.
How is Amazon Kendra different from traditional search?
Traditional search engines match keywords — if you search for “vacation policy” but your document says “paid time off,” you get zero results. In contrast, Amazon Kendra uses machine learning to understand the intent behind queries, returning relevant answers regardless of exact keyword matches. Furthermore, Kendra extracts precise factoid answers from documents (not just links), provides confidence scores, cites sources, and learns from user behavior to improve results over time.
Is Amazon Kendra expensive?
Compared to per-request AWS AI services, Kendra has a significant baseline cost because the index runs continuously and bills hourly. Developer Edition is suitable for evaluation and small deployments. Enterprise Edition with production-grade availability and higher limits costs more. However, organizations report 40-60% reductions in search time and 30-50% fewer support tickets — ROI that typically justifies the cost for medium-to-large enterprises with substantial document repositories.

Technical and Architecture Questions

What is the difference between Amazon Kendra and Amazon OpenSearch?
Fundamentally, they serve different search patterns. Amazon Kendra specializes in natural language enterprise document search — understanding questions in plain English and extracting precise answers from documents. Amazon OpenSearch is designed for log analytics, real-time application monitoring, full-text search with custom scoring, and operational data analysis. Generally, Kendra is the better choice for knowledge Q&A — questions like “What does our policy say about remote work?”; OpenSearch is better for high-volume, latency-sensitive application search — queries like “Show me all error logs from the last hour.” Consequently, many enterprises use both together in a hybrid architecture, with Kendra handling employee-facing knowledge search and OpenSearch powering developer-facing operational analytics and application search.
How does Amazon Kendra work with Amazon Bedrock?
Amazon Kendra serves as the retrieval layer in Retrieval-Augmented Generation (RAG) architectures with Bedrock. The Kendra Retriever API semantically searches your enterprise documents and returns the most relevant passages, which are then sent to a Bedrock foundation model (Claude, Llama, Nova) that generates a natural language answer grounded in the retrieved content. This pattern sharply reduces LLM hallucination by ensuring answers come from your actual documents rather than the model’s training data. Furthermore, Kendra’s access controls apply at the retrieval stage, so users only see AI-generated answers based on documents they are authorized to access.
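The flow described above can be sketched in a few lines of Python with boto3, assuming an existing Kendra index and Bedrock model access in the region. The index ID and model ID are placeholders, and `build_prompt` is an illustrative helper of ours, not part of either API; passing a user’s identity token in `UserContext` is what enforces source-system ACLs at retrieval time.

```python
def build_prompt(question, passages):
    """Assemble a grounded prompt from retrieved passages (illustrative helper)."""
    context = "\n\n".join(
        f"[{i + 1}] {p['title']}: {p['text']}" for i, p in enumerate(passages))
    return (f"Answer using only the sources below; cite them as [n].\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}")

def answer(question, index_id, model_id, user_token=None):
    import boto3  # AWS SDK; assumed available in the deployment environment
    kendra = boto3.client("kendra")
    kwargs = {"IndexId": index_id, "QueryText": question, "PageSize": 5}
    if user_token:  # enforce the user's document permissions at retrieval time
        kwargs["UserContext"] = {"Token": user_token}
    result = kendra.retrieve(**kwargs)
    passages = [{"title": r["DocumentTitle"], "text": r["Content"]}
                for r in result["ResultItems"]]
    bedrock = boto3.client("bedrock-runtime")
    resp = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user",
                   "content": [{"text": build_prompt(question, passages)}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]
```

Because the model is instructed to answer only from the retrieved passages, the generated response inherits both the grounding and the access scope of the Kendra retrieval step.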