[SEO Subhead]
This Guidance demonstrates how to configure a self-service data analytics environment that is simple to launch and access for data engineers and data scientists. The integrated development environment (IDE) is based on Jupyter Notebooks, providing an interactive interface for easy data exploration, and includes all the necessary tools to debug, build, and schedule near real-time data pipelines. The environment supports secure team collaboration with workload isolation, and allows administrators to self-provision, scale, and de-provision resources from a single interface without exposing the complexities of the underlying infrastructure or compromising security, governance, and costs. Administrators can independently manage cluster configurations and continuously optimize for cost, security, reliability, and performance.
Please note: [Disclaimer]
Architecture Diagram
[Architecture diagram description]
Step 1
Cloud operations teams develop Amazon EMR cluster templates in AWS CloudFormation according to their desired specifications (such as instance types and network configurations) and publish the templates as products in the AWS Service Catalog for self-service provisioning.
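As a sketch of what such a product template might contain, the following AWS CloudFormation fragment defines a minimal Amazon EMR cluster. The resource names, release label, subnet parameter, and instance sizing are illustrative assumptions, not values prescribed by this Guidance:

```yaml
# Illustrative only: names, release label, and sizing are hypothetical.
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal Amazon EMR cluster product for AWS Service Catalog (sketch)
Parameters:
  SubnetId:
    Type: AWS::EC2::Subnet::Id
Resources:
  EmrCluster:
    Type: AWS::EMR::Cluster
    Properties:
      Name: streaming-analytics-dev
      ReleaseLabel: emr-6.15.0
      Applications:
        - Name: Spark
      Instances:
        Ec2SubnetId: !Ref SubnetId
        MasterInstanceGroup:
          InstanceCount: 1
          InstanceType: m7g.xlarge
        CoreInstanceGroup:
          InstanceCount: 2
          InstanceType: m7g.xlarge
      JobFlowRole: EMR_EC2_DefaultRole
      ServiceRole: EMR_DefaultRole
```

Publishing this template as a Service Catalog product lets teams launch conforming clusters without editing the template itself.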
Step 2
Bid events or pixels on web ads capture user impressions and send the data to an Amazon Kinesis Data Streams endpoint.
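The records sent to Kinesis Data Streams might be built as in the following Python sketch. The event schema, stream name, and partition-key choice are assumptions, and the actual `put_record` call appears only as a comment because it requires AWS credentials:

```python
import json
import time
import uuid

# Hypothetical stream name; the real one depends on your deployment.
STREAM_NAME = "ad-impression-events"

def build_impression_record(campaign_id: str, user_agent: str) -> dict:
    """Build a Kinesis record for a single ad-impression pixel fire."""
    event = {
        "event_id": str(uuid.uuid4()),
        "event_type": "impression",
        "campaign_id": campaign_id,
        "user_agent": user_agent,
        "event_time": int(time.time() * 1000),  # epoch milliseconds
    }
    return {
        "Data": json.dumps(event).encode("utf-8"),
        # Partitioning by campaign distributes records across shards by campaign.
        "PartitionKey": campaign_id,
    }

# Sending the record would use boto3 (not run here):
#   import boto3
#   kinesis = boto3.client("kinesis")
#   kinesis.put_record(StreamName=STREAM_NAME,
#                      **build_impression_record("c-42", "Mozilla/5.0"))
```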
Step 3
Data engineering teams log in to their workspaces in Amazon EMR Studio, where they can self-provision Amazon EMR clusters or attach existing ones to develop Spark streaming applications, such as bid validation or impression measurement, using interactive notebooks.
Step 4
A Spark streaming application running on an Amazon EMR cluster continuously ingests raw bid or impression event data from Kinesis Data Streams, transforms it, and stores the transformed data in an Amazon Simple Storage Service (Amazon S3) data lake.
This process enables near real-time operational reporting. You can choose provisioned Amazon EMR clusters for the most flexibility in cost optimization, or Amazon EMR Serverless to simplify deployment and cluster management.
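The per-record transformation step can be illustrated with a plain Python function. In the actual pipeline this logic would run inside a Spark Structured Streaming job on the Amazon EMR cluster; the field names and validation rules below are hypothetical:

```python
import json
from typing import Optional

# Hypothetical required fields for a bid event; real rules depend on your exchange.
REQUIRED_FIELDS = {"event_id", "campaign_id", "bid_price", "event_time"}

def validate_bid(raw: bytes) -> Optional[dict]:
    """Parse one raw Kinesis record and return a cleaned bid, or None if invalid.

    Shown as a plain function so the transform is easy to unit test; in the
    pipeline it would be applied per micro-batch by the Spark streaming job.
    """
    try:
        bid = json.loads(raw.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError):
        return None
    if not REQUIRED_FIELDS.issubset(bid):
        return None
    if not isinstance(bid["bid_price"], (int, float)) or bid["bid_price"] <= 0:
        return None
    return bid
```

Keeping validation in a pure function like this also makes it straightforward to reuse the same logic for both bid validation and impression measurement jobs.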
Step 5
Amazon S3 stores the data in partitioned folders. The data can be compressed and stored in a columnar file format such as Apache Parquet, or managed with an open table format such as Apache Iceberg.
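A common convention for such partitioned folders is a Hive-style, time-based key layout, sketched here in Python. The partition column names are an assumption, not a requirement of this Guidance:

```python
from datetime import datetime, timezone

def partition_prefix(table: str, event_time_ms: int) -> str:
    """Build a Hive-style, hour-partitioned Amazon S3 key prefix.

    Hour-level partitioning by event time is a common choice for
    near real-time reporting; coarser partitions may suit lower volumes.
    """
    ts = datetime.fromtimestamp(event_time_ms / 1000, tz=timezone.utc)
    return (
        f"{table}/year={ts.year:04d}/month={ts.month:02d}/"
        f"day={ts.day:02d}/hour={ts.hour:02d}/"
    )
```

Query engines such as Athena can then prune partitions on the `year`/`month`/`day`/`hour` columns, scanning only the folders a query actually needs.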
Step 6
All database and table metadata is registered in the AWS Glue Data Catalog, so the data can be queried by multiple AWS services, such as Amazon Athena or Amazon SageMaker.
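Registering a table in the Data Catalog can be done, for example, by running a DDL statement in Athena. The database, table, columns, and bucket below are hypothetical:

```sql
-- Hypothetical table; adjust columns, location, and formats to your data.
CREATE EXTERNAL TABLE IF NOT EXISTS adtech.impressions (
  event_id    string,
  campaign_id string,
  bid_price   double,
  event_time  bigint
)
PARTITIONED BY (year int, month int, day int, hour int)
STORED AS PARQUET
LOCATION 's3://your-data-lake-bucket/impressions/';
```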
Step 7
(Optional) Data lake administrators can register the Data Catalog with AWS Lake Formation to provide more granular access controls and centralize user management.
Step 8
Users can run SQL queries against curated clickstream or impression data in Amazon S3 in near real time with Athena and visualize the results in Amazon QuickSight dashboards.
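An operational report might then look like the following Athena SQL, where the table, columns, and partition values are all hypothetical:

```sql
-- Hypothetical near real-time report: impressions per campaign for one hour.
SELECT campaign_id,
       COUNT(*) AS impressions
FROM adtech.impressions
WHERE year = 2025 AND month = 1 AND day = 15 AND hour = 9
GROUP BY campaign_id
ORDER BY impressions DESC
LIMIT 20;
```

Filtering on the partition columns keeps the query fast and inexpensive, because Athena scans only the matching Amazon S3 prefixes.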
Step 9
In addition to the Amazon S3 data lake, Amazon EMR workloads can write data to NoSQL databases like Amazon DynamoDB or in-memory databases like Aerospike, supporting read workloads that require fast performance at large scale, such as bid filtering or operational reporting.
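As a sketch, a validated bid event could be shaped into a DynamoDB item like this. The key schema and table name are assumptions, and the actual write via boto3 appears only as a comment:

```python
# Hypothetical key schema for a bid-filtering lookup table.
def to_filter_item(bid: dict) -> dict:
    """Shape a validated bid event into a DynamoDB item keyed for fast lookups.

    Writing the item would use boto3 (not run here):
        boto3.resource("dynamodb").Table("bid-filter").put_item(Item=item)
    """
    return {
        "pk": f"CAMPAIGN#{bid['campaign_id']}",   # partition key: one campaign per item collection
        "sk": f"EVENT#{bid['event_time']}",       # sort key: time-ordered within a campaign
        "bid_price": str(bid["bid_price"]),       # stored as a string to sidestep float precision
    }
```

A composite key like this lets a bid filter fetch the most recent events for one campaign with a single, bounded query rather than a table scan.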
Get Started
Deploy this Guidance
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Amazon EMR Studio provides a fully managed, web-based integrated development environment (IDE) with Jupyter Notebooks, allowing data engineering or data science teams to develop, visualize, and debug Spark streaming applications interactively without managing additional servers. Teams can self-provision Amazon EMR clusters that have been predefined using infrastructure as code (IaC) templates in the Service Catalog. This reduces the dependency on cloud operations teams, improves development agility, and helps organizations follow security and governance best practices with minimal overhead.
Security
Amazon EMR Studio supports authentication and authorization with AWS Identity and Access Management (IAM) or AWS IAM Identity Center, removing the need to connect with SSH (Secure Shell) directly into Spark clusters. Lake Formation allows for granular and centralized access control to the data in your data lakes, centralizing user access management and strengthening the security and governance posture of your data pipelines.
Reliability
Kinesis Data Streams and Amazon EMR provide autoscaling capabilities to meet the throughput demands of your real-time data streaming workflow. Amazon EMR uses the Apache Spark framework, which automatically redistributes and retries tasks in the event of application or network failures. Kinesis Data Streams also synchronously replicates data across three Availability Zones, providing high availability and data durability.
Performance Efficiency
Kinesis Data Streams automatically scales capacity in response to varying data traffic, allowing your real-time processing workflow to meet throughput demands. Amazon EMR provides multiple performance optimization features for Spark, allowing workloads to run up to 3.5 times faster without any changes to their applications. In addition, Athena automatically runs queries in parallel and provisions the necessary resources. Data in Amazon S3 can also be partitioned and stored in columnar formats to improve query performance.
Cost Optimization
This Guidance provides a sample Amazon EMR cluster template that uses instance fleets with Amazon Elastic Compute Cloud (Amazon EC2) Spot Instance capacity and specifies Amazon EC2 Graviton3 instance types, which can provide up to 20 percent in cost savings over comparable x86-based Amazon EC2 instances. Further, the use of idle timeouts and Amazon S3 storage tiers allows for better utilization of compute and storage resources at optimized cost.
Sustainability
Amazon EC2 Graviton3 instance types use up to 60 percent less energy for the same performance as comparable Amazon EC2 instances, helping to reduce the carbon footprint. The use of Amazon EC2 Spot Instances and Amazon EMR idle timeout settings helps ensure better utilization of resources and minimizes the environmental impact of the workload.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.