AWS Certified Solutions Architect: Zero to Master - Practice Exam: Answers & Explanations PART 2

QUESTION 45

A healthcare company stores medical approval information in an S3 bucket. The data is accessed frequently for the first 7 days, but is rarely accessed after that. For compliance reasons, the data must be kept for 7 years, but it is okay if it takes more than 12 hours to retrieve it. What should you do to ensure the data is stored in the most cost-effective way?

A: Store the data in S3 Glacier Instant Retrieval initially. After 7 days, transition it to S3 Glacier Flexible Retrieval.

B: Store the data in S3 Standard initially. After 30 days, transition it to S3 Glacier Deep Archive.

C: Store the data in S3 Standard initially. After 7 days, transition it to S3 Glacier Deep Archive.

D: Store the data in S3 Intelligent-Tiering for the full 7 years.

EXPLANATIONS

A: Store the data in S3 Glacier Instant Retrieval initially. After 7 days, transition it to S3 Glacier Flexible Retrieval. It’s not possible to store data in S3 Glacier Instant Retrieval immediately. Data must first start in S3 Standard and then be transitioned to another storage class. Also, once in S3 Standard storage, objects must remain for a minimum of 30 days (not 7 days) before being transitioned. Managing your storage lifecycle - Amazon Simple Storage Service

B: Store the data in S3 Standard initially. After 30 days, transition it to S3 Glacier Deep Archive. - CORRECT. The data should be stored in S3 Standard initially, where it can be accessed frequently. Once objects are in S3 Standard storage, they must remain there for a minimum of 30 days (not 7 days) before being transitioned. The least expensive storage option that meets the retrieval requirement is S3 Glacier Deep Archive, which has a retrieval time of 12-48 hours. This meets the requirements in the question; a minimal sketch of such a lifecycle rule follows these explanations. Managing your storage lifecycle - Amazon Simple Storage Service

C: Store the data in S3 Standard initially. After 7 days, transition it to S3 Glacier Deep Archive. While it is true that you should initially store the data in S3 Standard, objects must live there for a minimum of 30 days (not 7 days) before they can be transitioned to another class. But after 30 days, S3 Glacier Deep Archive would be the least expensive storage option based on the scenario. Managing your storage lifecycle - Amazon Simple Storage Service

D: Store the data in S3 Intelligent-Tiering for the full 7 years. S3 Intelligent-Tiering makes sense when data access patterns are unknown. AWS will determine the most cost-effective way to store the data. However, we DO know what the access patterns are in this scenario, and we should choose the most cost-effective option. That would be S3 Glacier Deep Archive for long-term storage. So the data should start in S3 Standard, live there for 30 days (the minimum requirement), and then be transitioned to Deep Archive. Managing your storage lifecycle - Amazon Simple Storage Service
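
To make answer B concrete, here is a minimal boto3 sketch of such a lifecycle rule (the bucket name and rule ID are hypothetical; 2,555 days approximates the 7-year retention):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="medical-approvals-example",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-after-30-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to all objects
                    # Move objects to Glacier Deep Archive after 30 days.
                    "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
                    # Expire objects after roughly 7 years of retention.
                    "Expiration": {"Days": 2555},
                }
            ]
        },
    )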

QUESTION 46

Your company uses several different Amazon Machine Images. An application you’re working on needs to access the IDs for the AMIs. The IDs don’t need to be encrypted. What’s the most cost-effective way to store this information?

A: AWS Certificate Manager

B: SSM Parameter Store

C: AWS Secrets Manager

D: AWS Key Management Service (KMS)

EXPLANATIONS

A: AWS Certificate Manager. AWS Certificate Manager is used to provision, manage and deploy SSL/TLS certificates. Certificate Manager – AWS Certificate Manager – Amazon Web Services

B: SSM Parameter Store - CORRECT. Systems Manager (SSM) Parameter Store is a valid way to store secrets and other information, such as IDs, in AWS. For data that is NOT encrypted (as in the question), this is the only option of the four (AWS Secrets Manager requires encryption). Also, Parameter Store is free for up to 10,000 standard parameters, making it the most cost-effective option. A short sketch of storing and reading a parameter follows these explanations. AWS Systems Manager Parameter Store - AWS Systems Manager

C: AWS Secrets Manager. AWS Secrets Manager is used to securely store and rotate secrets, such as database credentials. The secrets must be encrypted, and there is a cost of $0.40 per secret per month and $0.05 per 10,000 API calls. credential password management - AWS Secrets Manager - Amazon Web Services

D: AWS Key Management Service (KMS). KMS is used to generate and manage encryption keys, which are used to encrypt (scramble) your data. These keys are used by services such as Elastic Block Store, S3 and Lambda to encrypt data. Key Usage — AWS Key Management Service — Amazon Web Services
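
To illustrate answer B, a minimal boto3 sketch of storing and reading an AMI ID as a plain-text standard parameter (the parameter name and AMI ID are hypothetical):

    import boto3

    ssm = boto3.client("ssm")

    # Store the AMI ID unencrypted; "String" is plain text, "SecureString" is encrypted.
    ssm.put_parameter(
        Name="/myapp/amis/web-server",   # hypothetical parameter name
        Value="ami-0123456789abcdef0",   # hypothetical AMI ID
        Type="String",
        Overwrite=True,
    )

    # Read it back from the application.
    response = ssm.get_parameter(Name="/myapp/amis/web-server")
    print(response["Parameter"]["Value"])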

QUESTION 47

A CRM application runs with an RDS database on the backend. To improve performance, there is currently one read replica handling read-only traffic. When you come in on a Monday morning, you realize the primary database has failed. What is a valid way to recover from this failure?

A: Restore a previous snapshot of the primary database

B: Restore to a point in time using a snapshot

C: Promote the read replica of the database and, once it’s available, redirect database traffic to the promoted instance

D: Modify the properties of the failed database instance and choose a new instance type, then reboot the instance

EXPLANATIONS

A: Restore a previous snapshot of the primary database. While it’s possible to restore from a snapshot, we don’t have any information about when the snapshot was created. It may be very old. Because we have a read replica of the database that is current, it would make more sense to promote the read replica. Tutorial: Restore an Amazon RDS DB instance from a DB snapshot - Amazon Relational Database Service

B: Restore to a point in time using a snapshot. Restoring to a point in time is only an option using an automated backup, and can’t be done from a snapshot. Restoring a DB instance to a specified time - Amazon Relational Database Service

C: Promote the read replica of the database and, once it’s available, redirect database traffic to the promoted instance – CORRECT. Promoting a read replica to be the primary database is a valid way to recover from failure. Promotion takes some time, but once the instance is available, you can redirect traffic to it; a minimal sketch follows these explanations. Working with read replicas - Amazon Relational Database Service

D: Modify the properties of the failed database instance and choose a new instance type, then reboot the instance. This is a distractor. Trying to update the properties of a failed database instance is not likely to solve the problem. Instead, you should focus on recovering the database from a backup, snapshot or read replica.
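
A minimal boto3 sketch of answer C (the replica identifier is hypothetical):

    import boto3

    rds = boto3.client("rds")

    # Promote the read replica to a standalone primary instance.
    rds.promote_read_replica(DBInstanceIdentifier="crm-db-replica")

    # Wait until the promoted instance is available before redirecting traffic.
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="crm-db-replica")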

QUESTION 48

An auditor has asked for a “paper trail” of the changes that have occurred with resources in a production environment. What service can be used to show this?

A: AWS Config

B: AWS Systems Manager

C: AWS Artifact

D: Amazon Inspector

EXPLANATIONS

A: AWS Config - CORRECT. AWS Config is used to inventory, record and audit the configuration of your AWS resources, giving you exactly this kind of change history; a short sketch of querying it follows these explanations. Config Tool – AWS Config – Amazon Web Services

B: AWS Systems Manager. AWS Systems Manager is used to manage a fleet of servers from a single place. While you can use it to update and configure resources, it would not provide this “paper trail” of configuration history. Centralized Operations Hub – AWS Systems Manager – Amazon Web Services

C: AWS Artifact. AWS Artifact is a free, self-service portal used to access reports and agreements related to AWS’s compliance and security. Security Compliance Management - AWS Artifact - AWS

D: Amazon Inspector. Amazon Inspector monitors EC2 instances and ECR repositories for software vulnerabilities and network exposure. Automated software vulnerability management - Amazon Inspector - Amazon Web Services
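
To illustrate answer A, a short boto3 sketch that pulls the recorded configuration history (the “paper trail”) for a single resource (the resource type and ID are hypothetical):

    import boto3

    config = boto3.client("config")

    history = config.get_resource_config_history(
        resourceType="AWS::EC2::SecurityGroup",
        resourceId="sg-0123456789abcdef0",
    )
    for item in history["configurationItems"]:
        print(item["configurationItemCaptureTime"], item["configurationItemStatus"])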

QUESTION 49

Which AWS service allows you to analyze network, account and data access information for malicious activity?

A: AWS CloudTrail

B: Amazon GuardDuty

C: Amazon Inspector

D: Amazon Detective

EXPLANATIONS

A: AWS CloudTrail. CloudTrail captures user activity and API calls on an AWS account, such as user sign-ins. What Is AWS CloudTrail? - AWS CloudTrail

B: Amazon GuardDuty - CORRECT. GuardDuty is a threat detection service that continuously monitors AWS accounts and workloads for malicious activity. Intelligent threat detection - Amazon GuardDuty - Amazon Web Services

C: Amazon Inspector. Amazon Inspector monitors EC2 instances and ECR repositories for software vulnerabilities and network exposure. Automated software vulnerability management - Amazon Inspector - Amazon Web Services

D: Amazon Detective. Amazon Detective organizes information from other sources, including GuardDuty, and provides visualizations, context and detailed findings to help you identify the root cause of a security incident. Security Investigation Service – Amazon Detective – Amazon Web Services

QUESTION 50

Which of the following is used to control the flow of traffic at the VPC subnet level?

A: IAM

B: Security Group

C: VPC Endpoint

D: Network Access Control List (ACL)

EXPLANATIONS

A: IAM. Identity and Access Management (IAM) is the service used to set up and manage users, user groups, roles and permissions. AWS IAM | Identity and Access Management | Amazon Web Services

B: Security Group. A security group is a firewall that controls traffic in and out of an EC2 instance, not a subnet. Control traffic to resources using security groups - Amazon Virtual Private Cloud

C: VPC Endpoint. VPC endpoints allow you to access other AWS services through a private network (vs. going across the public internet). AWS PrivateLink concepts - Amazon Virtual Private Cloud

D: Network Access Control List (ACL) – CORRECT. A network ACL is a firewall that controls traffic in and out of a subnet, and unlike a security group it supports both allow and deny rules; a minimal sketch follows below. Control traffic to subnets using Network ACLs - Amazon Virtual Private Cloud
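
A minimal boto3 sketch of answer D, adding an inbound deny rule to a network ACL (the ACL ID and CIDR range are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",  # hypothetical network ACL
        RuleNumber=90,               # evaluated before higher-numbered rules
        Protocol="6",                # TCP
        RuleAction="deny",           # network ACLs support deny; security groups do not
        Egress=False,                # inbound rule
        CidrBlock="203.0.113.0/24",  # example range to block
        PortRange={"From": 22, "To": 22},
    )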

QUESTION 51

Which AWS service is used as a content delivery network (CDN), to get content closer to users around the world?

A: CloudFront

B: Edge Locations

C: Route 53

D: Global Accelerator

EXPLANATIONS

A: CloudFront – CORRECT. CloudFront is AWS’s content delivery network (CDN), and its primary goal is to speed up delivery of content to end users, especially media files like videos and images. Low-Latency Content Delivery Network (CDN) - Amazon CloudFront - Amazon Web Services

B: Edge Locations. Edge locations are the sites where CloudFront caches content, but the CloudFront service itself is the most appropriate answer to this question. Low-Latency Content Delivery Network (CDN) - Amazon CloudFront - Amazon Web Services

C: Route 53. Route 53 is AWS’s managed DNS service. It can be used to purchase/register domain names, as well as handle DNS routing (such as resolving mywebsite.com to IP address 12.34.56.78). Amazon Route 53 | DNS Service | AWS

D: Global Accelerator. Global Accelerator improves performance for a variety of applications by routing network traffic over the AWS global network instead of the public internet. The scenario in the question, however, describes the primary purpose of a content delivery network (CDN): reducing latency by caching content close to end users. CloudFront is the best way to achieve what is being asked. Low-Latency Content Delivery Network (CDN) - Amazon CloudFront - Amazon Web Services

QUESTION 52

You’ve set up an EC2 instance as a web server, and created a simple website for your startup company. You purchased a domain name through Route 53, and want to route traffic from the domain name to the IP address of the EC2 instance. How can you accomplish this?

A: In Route 53, create an A Record that points the domain name to the IP address of the EC2 instance

B: In Route 53, create a CNAME record that points the domain name to the IP address of the EC2 instance

C: In Route 53, create an A Record and enable an Alias that points to the EC2 instance

D: In Route 53, create an Alias record that points to the EC2 instance

EXPLANATIONS

A: In Route 53, create an A Record that points the domain name to the IP address of the EC2 instance - CORRECT. An A Record is used to point a host name (such as www.mycoolsite.com) to an IP address, and it works on the root domain (i.e., no subdomain is required). A minimal sketch follows these explanations. Supported DNS record types - Amazon Route 53

B: In Route 53, create a CNAME record that points the domain name to the IP address of the EC2 instance. A CNAME record points a name to another name, not to an IP address, and CNAME records only work on subdomains, not on root domains (which is what we are working with in this question). Supported DNS record types - Amazon Route 53

C: In Route 53, create an A Record and enable an Alias that points to the EC2 instance. It is correct that you should create an A Record. However, you shouldn’t enable an Alias because you need to point to a specific IP address (of the EC2 instance). An Alias should be used to point to AWS resources such as load balancers, API Gateway, CloudFront distributions and Elastic Beanstalk environments. Supported DNS record types - Amazon Route 53

D: In Route 53, create an Alias record that points to the EC2 instance. You shouldn’t enable an Alias because you need to point to a specific IP address (of the EC2 instance). An Alias should be used to point to AWS resources such as load balancers, API Gateway, CloudFront distributions and Elastic Beanstalk environments. Also, an Alias isn’t a record type itself, but a “toggle” that you can use with other records such as A Records and CNAME Records. Supported DNS record types - Amazon Route 53
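
A minimal boto3 sketch of answer A (the hosted zone ID, domain name and IP address are placeholders):

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789EXAMPLE",  # hypothetical hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "mycoolsite.com",  # the root domain
                    "Type": "A",
                    "TTL": 300,
                    # The EC2 instance's public IP address.
                    "ResourceRecords": [{"Value": "12.34.56.78"}],
                },
            }]
        },
    )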

QUESTION 53

You’re architecting a web application that lets users create and share eBooks. You expect it to be extremely popular, as you’re getting the backing of several big influencers. Your user base will be global, and will need to scale over time as the audience grows. The application also needs to be highly available and resilient, withstanding regional failures. How would you architect the application to meet these requirements?

A: Create multiple EC2 instances in different regions. Use CloudFront to route traffic across the regions.

B: Create an Auto Scaling Group that points to a target group with EC2 instances in multiple regions. Use the Auto Scaling Group with an Application Load Balancer to distribute the traffic.

C: Use Route 53 to route traffic across regions, and then use an Application Load Balancer with an Auto Scaling Group to route traffic and scale within a single region

D: Use Route 53 to route traffic across availability zones, and then use an Application Load Balancer to distribute traffic across regions.

EXPLANATIONS

A: Create multiple EC2 instances in different regions. Use CloudFront to route traffic across the regions. CloudFront is a content delivery network (CDN), used to get content to users faster (especially media files). It is not used to route traffic. That is the job of Route 53. Low-Latency Content Delivery Network (CDN) - Amazon CloudFront - Amazon Web Services

B: Create an Auto Scaling Group that points to a target group with EC2 instances in multiple regions. Use the Auto Scaling Group with an Application Load Balancer to distribute the traffic. It is not possible for an Auto Scaling Group’s target group to have instances in multiple regions; it’s only possible across multiple Availability Zones. Target groups for your Application Load Balancers - Elastic Load Balancing

C: Use Route 53 to route traffic across regions, and then use an Application Load Balancer with an Auto Scaling Group to route traffic and scale within a single region - CORRECT. Route 53 can distribute traffic globally across regions (for example, with latency-based routing), while an Application Load Balancer distributes it within each region. The Auto Scaling Group meets the scaling requirements mentioned in the question. A sketch of the Route 53 records follows these explanations. Cross-Region DNS-based load balancing and failover - Real-Time Communication on AWS

D: Use Route 53 to route traffic across availability zones, and then use an Application Load Balancer to distribute traffic across regions. It is not possible for an Application Load Balancer to distribute traffic across regions. However, you can do that with Route 53. Cross-Region DNS-based load balancing and failover - Real-Time Communication on AWS
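
To illustrate answer C, a sketch of latency-based Route 53 records, one per region, each aliased to that region's Application Load Balancer (every identifier, DNS name and hosted zone ID below is a placeholder):

    import boto3

    route53 = boto3.client("route53")

    regions = [
        ("us-east-1", "ebooks-use1.us-east-1.elb.amazonaws.com", "Z_ALB_USE1"),
        ("eu-west-1", "ebooks-euw1.eu-west-1.elb.amazonaws.com", "Z_ALB_EUW1"),
    ]
    for region, alb_dns, alb_zone_id in regions:
        route53.change_resource_record_sets(
            HostedZoneId="Z0123456789EXAMPLE",  # hypothetical hosted zone
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "app.example.com",
                        "Type": "A",
                        "SetIdentifier": region,  # required for routing policies
                        "Region": region,         # latency-based routing
                        "AliasTarget": {
                            "HostedZoneId": alb_zone_id,  # the ALB's zone ID
                            "DNSName": alb_dns,
                            "EvaluateTargetHealth": True,
                        },
                    },
                }]
            },
        )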

QUESTION 54

Your team is working on a proof of concept for a new type of social media application. You worked on a previous proof of concept, and want to leverage the same infrastructure and configuration that you used before. How can you do this?

A: Deploy the infrastructure using CodeDeploy

B: Retrieve the instance metadata from the EC2 instances used in the previous environment, and then use that data to create new instances

C: Use AWS Config to duplicate the environment

D: Deploy the same CloudFormation template that you used before

EXPLANATIONS

A: Deploy the infrastructure using CodeDeploy. CodeDeploy is used to deploy code to EC2 instances or on-premises servers. It is not used to deploy “infrastructure as code.” Software Deployment Service - Amazon CodeDeploy - AWS

B: Retrieve the instance metadata from the EC2 instances used in the previous environment, and then use that data to create new instances. EC2 instance metadata will only give you details about the EC2 instances, and not other things (like VPCs, databases, etc.). Also, this would be a very manual and time-consuming approach.

C: Use AWS Config to duplicate the environment. AWS Config is used to inventory, record and audit the configuration of your AWS resources. You cannot duplicate an environment using AWS Config. Config Tool – AWS Config – Amazon Web Services

D: Deploy the same CloudFormation template that you used before - CORRECT. CloudFormation is AWS’s “infrastructure as code” solution. In a JSON/YAML template, we define the resources we want to build, and then CloudFormation builds them. Because the resources are defined in code, the same infrastructure will be set up the same way every time. You can also add the templates to source control to track changes over time. A minimal sketch of redeploying a template follows below. https://aws.amazon.com/cloudformation
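
A minimal boto3 sketch of answer D (the stack name and template URL are hypothetical):

    import boto3

    cfn = boto3.client("cloudformation")

    # Redeploy the template used for the previous proof of concept.
    cfn.create_stack(
        StackName="social-poc-2",
        TemplateURL="https://s3.amazonaws.com/my-templates/poc.yaml",
        Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
    )

    # Block until the stack finishes creating.
    cfn.get_waiter("stack_create_complete").wait(StackName="social-poc-2")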

QUESTION 55

You’ve been hired as a consultant to help a company stabilize an order processing application. Currently their Order service calls the Shipping service, passing in details of the order and address information. However, the Shipping service occasionally goes down, which means that orders are never sent out to customers. What service could help resolve this issue?

A: Lambda

B: Auto Scaling Groups

C: Step Functions

D: Simple Queue Service (SQS)

EXPLANATIONS

A: Lambda. While Lambda functions can be part of the overall solution, the core service in this re-architecture should be the Simple Queue Service (SQS). By using a queue, orders can be pushed to the queue, and then the Shipping service can pick them up and do the processing. If the Shipping service goes down, the orders will be preserved in the queue, and processing can be resumed when the Shipping service comes back online. Serverless Computing - AWS Lambda - Amazon Web Services

B: Auto Scaling Groups. Auto Scaling Groups can help with scalability of a service, and could play a part in this solution. If producers or consumers are run on EC2 instances, the Auto Scaling Group could scale to handle additional load. However, the primary way to achieve decoupling in this scenario is with the Simple Queue Service (SQS). Fully Managed Message Queuing – Amazon Simple Queue Service – Amazon Web Services

C: Step Functions. Step Functions allow you to orchestrate Lambda functions, using a visual drag-and-drop designer. Lambda functions may be part of the solution to the scenario, but the primary way to achieve decoupling is with the Simple Queue Service (SQS). Serverless Workflow Orchestration – AWS Step Functions – Amazon Web Services

D: Simple Queue Service (SQS) – CORRECT. Loose coupling refers to breaking a system into smaller, independent parts so that an application can better handle failures. A queue, and specifically an SQS queue, is one popular way to achieve loose coupling in AWS. Orders can be pushed to the queue, and the Shipping service can pick them up and do the processing. If the Shipping service goes down, the orders are preserved in the queue, and processing resumes when the service comes back online; a minimal sketch follows below. Fully Managed Message Queuing – Amazon Simple Queue Service – Amazon Web Services
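
A minimal boto3 sketch of the decoupled flow in answer D (the queue name and message contents are hypothetical):

    import boto3
    import json

    sqs = boto3.client("sqs")
    queue_url = sqs.create_queue(QueueName="orders-to-ship")["QueueUrl"]

    # Order service: push the order to the queue instead of calling Shipping directly.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"orderId": "1234", "address": "123 Main St"}),
    )

    # Shipping service: poll for work when healthy; unprocessed orders simply
    # wait in the queue while the service is down.
    messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
    for msg in messages.get("Messages", []):
        # ... ship the order, then remove it from the queue ...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])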

QUESTION 56

Which AWS service can help protect against a distributed denial of service (DDoS) attack?

A: AWS CloudHSM

B: Amazon Detective

C: AWS Artifact

D: AWS Shield

EXPLANATIONS

A: AWS CloudHSM. CloudHSM provides dedicated, single-tenant hardware security modules (HSMs) for generating and storing encryption keys. It is a cryptography service, not a DDoS-protection service. AWS CloudHSM - AWS cryptography services

B: Amazon Detective. Amazon Detective organizes information from other sources, including GuardDuty, and provides visualizations, context and detailed findings to help you identify the root cause of a security incident. Security Investigation Service – Amazon Detective – Amazon Web Services

C: AWS Artifact. AWS Artifact is a free, self-service portal used to access reports and agreements related to AWS’s compliance and security. Security Compliance Management - AWS Artifact - AWS

D: AWS Shield – CORRECT. AWS Shield is a service used to protect against Distributed Denial of Service (DDoS) attacks. Managed DDoS Protection – AWS Shield – Amazon Web Services

QUESTION 57

A bank uses an application that was built using the Simple Queue Service (SQS). The queue is used to pull a customer’s credit score. If the score is high enough, then the customer’s loan gets a status of “approved.” However, occasionally, messages in the queue are processed multiple times, resulting in multiple approvals for a customer. What is a potential solution to this problem?

A: Implement Auto Scaling on the queue so all messages are processed more quickly

B: Extend the Visibility Timeout of the queue

C: Enable long polling on the queue

D: Send duplicate messages to a Dead Letter Queue

EXPLANATIONS

A: Implement Auto Scaling on the queue so all messages are processed more quickly. Auto Scaling applies to the consumers of a queue (such as EC2 instances), not to the queue itself, and scaling out can improve overall throughput. However, the problem in the question is that a single message is not being processed in time, so it drops back into the queue and is picked up by the next consumer. To fix this, you should extend the Visibility Timeout on the queue. Scaling based on Amazon SQS - Amazon EC2 Auto Scaling

B: Extend the Visibility Timeout of the queue - CORRECT. The problem stems from the fact that a single message is not being processed within the Visibility Timeout (the default is 30 seconds), so it drops back into the queue and is picked up by the next consumer, getting processed more than once. To fix this, extend the Visibility Timeout on the queue; a minimal sketch follows these explanations. Amazon SQS visibility timeout - Amazon Simple Queue Service

C: Enable long polling on the queue. Long polling reduces empty responses and cost by letting a receive request wait up to 20 seconds for messages to arrive. The problem in our scenario is a single message not being processed quickly enough, and long polling wouldn’t fix that. Amazon SQS short and long polling - Amazon Simple Queue Service

D: Send duplicate messages to a Dead Letter Queue. Dead Letter Queues are used to store messages that were undeliverable in a “regular” queue, so they can be processed separately. In this scenario, the messages ARE being delivered, and occasionally multiple times, so this wouldn’t resolve the issue. Amazon SQS dead-letter queues - Amazon Simple Queue Service
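
A minimal boto3 sketch of answer B (the queue URL is hypothetical; choose a timeout longer than your worst-case processing time):

    import boto3

    sqs = boto3.client("sqs")

    # Raise the visibility timeout from the 30-second default to 5 minutes so a
    # message being processed is not redelivered to another consumer mid-flight.
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/credit-checks",
        Attributes={"VisibilityTimeout": "300"},
    )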

QUESTION 58

When a CloudWatch alarm is triggered, you want to send an email notification. What service should you use to do this?

A: Simple Queue Service (SQS)

B: Simple Notification Service (SNS)

C: Step Functions

D: EventBridge

EXPLANATIONS

A: Simple Queue Service (SQS). The Simple Queue Service (SQS) allows producers to send messages to a queue, and consumers to poll the queue for messages to process. It is not used to send notifications. Fully Managed Message Queuing – Amazon Simple Queue Service – Amazon Web Services

B: Simple Notification Service (SNS) - CORRECT. The Simple Notification Service (SNS) can send notifications through email, text, HTTP or Lambda, and an SNS topic can be set as a CloudWatch alarm action; a minimal sketch follows these explanations. https://aws.amazon.com/sns

C: Step Functions. Step Functions allow you to orchestrate Lambda functions, using a visual drag-and-drop designer. Serverless Workflow Orchestration – AWS Step Functions – Amazon Web Services

D: EventBridge. EventBridge is used to build decoupled, event-driven architectures, routing events from AWS services and custom applications to targets such as Lambda functions. SNS is the service designed for sending the email notification in this scenario. https://aws.amazon.com/eventbridge/
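
A minimal boto3 sketch of answer B (the topic name, email address, alarm settings and instance ID are hypothetical):

    import boto3

    sns = boto3.client("sns")
    cloudwatch = boto3.client("cloudwatch")

    # Create a topic and subscribe an email address (the recipient must confirm).
    topic_arn = sns.create_topic(Name="ops-alerts")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="oncall@example.com")

    # When the alarm fires, CloudWatch publishes to the topic and SNS sends the email.
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],
    )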

QUESTION 59

A financial services company must adhere to strict regulations around where their compute resources and data can live. As such, production resources should only be created in us-west-1 and us-west-2. The company uses AWS Organizations, and has accounts for Dev, Test and Prod. How can you enforce this rule on the Prod account with the least amount of administrative overhead?

A: Create an IAM “deny” policy for all services in regions outside of us-west-1 and us-west-2, and attach it to each user in the Prod account

B: Revoke IAM roles from the Prod account

C: Require MFA to create resources outside of us-west-1 and us-west-2

D: Apply a Service Control Policy to the Prod account denying permissions to create resources outside of us-west-1 and us-west-2

EXPLANATIONS

A: Create an IAM “deny” policy for all services in regions outside of us-west-1 and us-west-2, and attach it to each user in the Prod account. While this approach would technically work, it would require a lot of administrative overhead to set up and maintain. A better approach is to use a Service Control Policy, which can be applied at the account level for Prod. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html

B: Revoke IAM roles from the Prod account. This is a distractor and is also incomplete: it does not describe what the IAM roles allow or deny, and revoking roles would not, by itself, restrict which regions resources can be created in.

C: Require MFA to create resources outside of us-west-1 and us-west-2. Requiring MFA for these actions will not fulfill the requirements. It would only enforce that someone must enter an MFA code before creating the resource, not prevent the resource from being created. https://docs.aws.amazon.com/singlesignon/latest/userguide/enable-mfa.html

D: Apply a Service Control Policy to the Prod account denying permissions to create resources outside of us-west-1 and us-west-2 - CORRECT. Service Control Policies (SCPs) let you manage the maximum available permissions across an AWS organization, and can be applied at the account level. This reduces the administrative overhead of managing privileges for an entire account; a minimal sketch follows below. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples.html
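
A minimal boto3 sketch of answer D (the policy name and account ID are hypothetical; a production SCP would typically also exempt global services such as IAM and STS from the region condition):

    import boto3
    import json

    org = boto3.client("organizations")

    # Deny any action whose requested region is outside the approved list.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-west-1", "us-west-2"]}
            },
        }],
    }

    policy = org.create_policy(
        Name="prod-region-restriction",
        Description="Prod resources only in us-west-1/us-west-2",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="111122223333",  # hypothetical Prod account ID
    )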

QUESTION 60

An online book publisher wants to take advantage of AWS machine learning services to produce audiobooks from PDF versions of their books. Which services should they use?

A: Textract to import the text from the PDFs and then Polly to output the text to audio

B: Comprehend to import the text from the PDFs and then Transcribe to output the text to audio

C: Rekognition Image to import the text from PDFs and then Polly to output the text to audio

D: Textract to import the text from the PDFs and then Transcribe to output the text to audio

EXPLANATIONS

A: Textract to import the text from the PDFs and then Polly to output the text to audio - CORRECT. Textract extracts text from documents (the PDFs) and can then pass the text to another service for processing. In this case, Polly is the correct service to pass it to, as Polly performs text-to-speech; a simplified sketch follows these explanations. https://aws.amazon.com/textract/ and https://aws.amazon.com/polly/

B: Comprehend to import the text from the PDFs and then Transcribe to output the text to audio. While Comprehend is used for natural language processing of text, its intended use is for unstructured data, such as social media posts, reviews, emails, etc. In the case of structured data like the PDF books, the appropriate service would be Textract to extract the text. Also, Transcribe is not the correct service in this scenario, as it does speech-to-text. Instead, you would use Polly, which is text-to-speech. https://aws.amazon.com/comprehend/ and https://aws.amazon.com/transcribe/

C: Rekognition Image to import the text from PDFs and then Polly to output the text to audio. The Rekognition Image service is used for things like facial recognition and image search; it does not extract text from documents. However, for the text-to-speech part of the scenario, Polly would be the appropriate service. https://aws.amazon.com/rekognition/ and https://aws.amazon.com/polly/

D: Textract to import the text from the PDFs and then Transcribe to output the text to audio. Textract extracts text from documents (the PDFs), and can then pass the text to another service for processing, so it would be the appropriate service for the first part of the scenario. However, Transcribe is not the correct service for the second part of the scenario, as it does speech-to-text. Instead, you would use Polly, which is text-to-speech. https://aws.amazon.com/textract/ and https://aws.amazon.com/transcribe/
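
A simplified boto3 sketch of the Textract-then-Polly pipeline in answer A (the bucket, object key and output file are hypothetical; PDFs require Textract's asynchronous API, and production code would use SNS job notifications, result pagination and text chunking rather than this naive loop and truncation):

    import time
    import boto3

    textract = boto3.client("textract")
    polly = boto3.client("polly")

    # Start an asynchronous text-detection job on the PDF in S3.
    job = textract.start_document_text_detection(
        DocumentLocation={"S3Object": {"Bucket": "my-books", "Name": "book.pdf"}}
    )

    # Poll until the job finishes (simplified).
    while True:
        result = textract.get_document_text_detection(JobId=job["JobId"])
        if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
            break
        time.sleep(5)

    text = " ".join(b["Text"] for b in result["Blocks"] if b["BlockType"] == "LINE")

    # Convert the extracted text to speech (Polly limits input length, so
    # real code would split the book into chunks).
    speech = polly.synthesize_speech(Text=text[:3000], OutputFormat="mp3", VoiceId="Joanna")
    with open("book.mp3", "wb") as f:
        f.write(speech["AudioStream"].read())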

QUESTION 61

You need to encrypt an existing RDS database that is currently unencrypted. What should you do?

A: Enable the Multi-AZ feature on the RDS database and enable encryption. Perform a failover from the primary database to the standby database.

B: Create a new read replica of the existing database, and enable encryption on the read replica. Promote the read replica to be the primary database. Delete the original RDS instance.

C: Create a new DynamoDB table with encryption enabled. Migrate the data from RDS to DynamoDB.

D: Create a snapshot of the existing database. Copy the snapshot and encrypt the copy. Restore an encrypted database instance from the encrypted snapshot copy.

EXPLANATIONS

A: Enable the Multi-AZ feature on the RDS database and enable encryption. Perform a failover from the primary database to the standby database. You can only encrypt an RDS database instance when you create it, not afterwards. Simply enabling Multi-AZ will not give you the ability to encrypt new and existing data. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Limitations

B: Create a new read replica of the existing database, and enable encryption on the read replica. Promote the read replica to be the primary database. Delete the original RDS instance. It’s not possible to have an encrypted read replica of an unencrypted DB instance. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Limitations

C: Create a new DynamoDB table with encryption enabled. Migrate the data from RDS to DynamoDB. This is a distractor. While it is possible to create a DynamoDB table that’s encrypted, DynamoDB and RDS are inherently different databases. DynamoDB is a key-value (or NoSQL) database, and RDS is a relational database. Migrating between the two would require significant effort to rearchitect and re-code the applications that use the database. This is not necessary to fulfill the encryption requirements in the question.

D: Create a snapshot of the existing database. Copy the snapshot and encrypt the copy. Restore an encrypted database instance from the encrypted snapshot copy - CORRECT. To add encryption to a previously unencrypted database instance, create a snapshot of the instance, then create a copy of the snapshot with encryption enabled, and finally restore a new instance from the encrypted copy; a minimal sketch follows below. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html
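
A minimal boto3 sketch of the snapshot, copy and restore sequence in answer D (instance and snapshot identifiers are hypothetical):

    import boto3

    rds = boto3.client("rds")

    # 1. Snapshot the unencrypted instance.
    rds.create_db_snapshot(
        DBInstanceIdentifier="mydb", DBSnapshotIdentifier="mydb-snap"
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-snap")

    # 2. Copy the snapshot; supplying a KMS key encrypts the copy.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="mydb-snap",
        TargetDBSnapshotIdentifier="mydb-snap-encrypted",
        KmsKeyId="alias/aws/rds",  # default RDS key; a customer managed key also works
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-snap-encrypted")

    # 3. Restore a new, encrypted instance from the encrypted copy.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="mydb-encrypted",
        DBSnapshotIdentifier="mydb-snap-encrypted",
    )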

QUESTION 62

A global news organization utilizes CloudFront to get their content closer to users. They’ve set up an Application Load Balancer as the origin for the CloudFront distribution. During political election cycles, they’ve noticed more frequent attacks against the site, including cross-site scripting and SQL injection attacks. What can they do to help protect against these kinds of attacks?

A: Set up AWS Shield on the EC2 instances

B: Set up AWS Web Application Firewall (WAF) on the CloudFront distribution

C: Update the Security Groups on the EC2 instances to block known bad IP addresses

D: Install Firewall Manager on the EC2 instances to block traffic from countries of origin that are known to be problematic

EXPLANATIONS

A: Set up AWS Shield on the EC2 instances. AWS Shield is used to protect against Distributed Denial of Service (DDoS) attacks, not common attacks like cross-site scripting or SQL injection. Managed DDoS Protection – AWS Shield – Amazon Web Services

B: Set up AWS Web Application Firewall (WAF) on the CloudFront distribution – CORRECT. AWS Web Application Firewall (WAF) inspects incoming web traffic to applications and websites, and allows or blocks it based on rules such as “block traffic from IP address X.” You can also enable managed rules that protect against common exploits such as cross-site scripting and SQL injection attacks. WAF can be deployed on API Gateway, Application Load Balancers (but NOT Network Load Balancers), CloudFront distributions, Cognito User Pools, and AppSync GraphQL APIs. A sketch follows these explanations. https://aws.amazon.com/waf/

C: Update the Security Groups on the EC2 instances to block known bad IP addresses. Security groups are firewalls used to control traffic at the EC2 instance level. You are not able to set up “deny” rules in a security group, and security groups don’t protect against common attacks like cross-site scripting or SQL injection. Control traffic to resources using security groups - Amazon Virtual Private Cloud

D: Install Firewall Manager on the EC2 instances to block traffic from countries of origin that are known to be problematic. Firewall Manager is a managed service that lets you manage WAF and Shield from a single interface. It is not installed on individual EC2 instances. https://aws.amazon.com/firewall-manager/
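
A sketch of answer B using boto3 and an AWS managed rule group (the names are hypothetical; AWSManagedRulesSQLiRuleSet targets SQL injection, the companion AWSManagedRulesCommonRuleSet covers cross-site scripting, and the finished web ACL still has to be associated with the CloudFront distribution):

    import boto3

    # Web ACLs with Scope="CLOUDFRONT" must be created in us-east-1.
    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    wafv2.create_web_acl(
        Name="news-site-protection",
        Scope="CLOUDFRONT",
        DefaultAction={"Allow": {}},
        Rules=[{
            "Name": "aws-sqli",
            "Priority": 1,
            # AWS managed rule group for SQL injection protection.
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "aws-sqli",
            },
        }],
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "news-site-protection",
        },
    )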

QUESTION 63

As part of a migration to AWS, your team is taking the opportunity to re-architect your application. You need a relational database that’s fault-tolerant, highly performant and scalable. You also need to be able to quickly failover to a secondary region in the event of a problem with the primary region. What should you use?

A: Aurora Multi-Master

B: Aurora Global Database

C: RDS MySQL with Multi-AZ enabled

D: DynamoDB with Global Tables

EXPLANATIONS

A: Aurora Multi-Master. In general, Aurora was built for the cloud, and is fault-tolerant, highly performant and scalable. With Multi-Master, however, you are limited to just a single region. This would not meet the failover requirements mentioned in the question. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-multi-master.html

B: Aurora Global Database – CORRECT. An Aurora Global Database is a single database that spans multiple regions (up to 5). It is fault-tolerant, highly performant and scalable, and it handles failover to a secondary region, as required in the question; a minimal sketch follows these explanations. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html

C: RDS MySQL with Multi-AZ enabled. With RDS, the Multi-AZ option spans Availability Zones within a single region; it cannot be used to fail over to another region. Also, in general, Aurora will be more fault-tolerant, highly performant and scalable, as it was built for the cloud. https://aws.amazon.com/rds/features/multi-az/

D: DynamoDB with Global Tables. DynamoDB is a NoSQL or key-value database. The question mentions the need for a relational database. https://aws.amazon.com/dynamodb/
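
A minimal boto3 sketch of answer B, creating a global cluster from an existing primary and adding a read-only, failover-ready secondary in another region (all identifiers are hypothetical):

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Wrap the existing primary regional cluster in a global cluster.
    rds.create_global_cluster(
        GlobalClusterIdentifier="ebooks-global",
        SourceDBClusterIdentifier=(
            "arn:aws:rds:us-east-1:111122223333:cluster:ebooks-primary"
        ),
    )

    # Add a secondary cluster in another region for fast cross-region failover.
    rds_west = boto3.client("rds", region_name="us-west-2")
    rds_west.create_db_cluster(
        DBClusterIdentifier="ebooks-secondary",
        GlobalClusterIdentifier="ebooks-global",
        Engine="aurora-mysql",
    )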

QUESTION 64

Due to a recent and widespread security incident, your fleet of 1,000 instances needs to have a patch applied. How can you accomplish this with the least amount of administrative effort?

A: Use AWS Systems Manager to install the patches

B: Log into each instance and install the update

C: Use AWS Config to install the patches

D: Ask the owner of each instance to apply the patch as part of their development process

EXPLANATIONS

A: Use AWS Systems Manager to install the patches – CORRECT. One of the primary functions of Systems Manager is to patch and update a fleet of servers from a single place (Patch Manager). This is the easiest way to meet the requirements mentioned in the question; a minimal sketch follows these explanations. Centralized Operations Hub – AWS Systems Manager – Amazon Web Services

B: Log into each instance and install the update. While this approach would technically work, it would require a lot of administrative effort. Systems Manager would be the preferred way to accomplish this requirement. Centralized Operations Hub – AWS Systems Manager – Amazon Web Services

C: Use AWS Config to install the patches. AWS Config is used to inventory and record resource configuration changes, and is commonly used for audit purposes. It is not how you would apply patches to a fleet of instances. https://aws.amazon.com/config

D: Ask the owner of each instance to apply the patch as part of their development process. While this approach would technically work, it would require a lot of administrative effort. Systems Manager would be the preferred way to accomplish this requirement. Centralized Operations Hub – AWS Systems Manager – Amazon Web Services
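
A minimal boto3 sketch of answer A, running the AWS-RunPatchBaseline document against a tagged fleet through Systems Manager Run Command (the tag key and value are hypothetical):

    import boto3

    ssm = boto3.client("ssm")

    # Target instances by tag rather than listing 1,000 instance IDs.
    ssm.send_command(
        Targets=[{"Key": "tag:PatchGroup", "Values": ["production"]}],
        DocumentName="AWS-RunPatchBaseline",
        Parameters={"Operation": ["Install"]},
        MaxConcurrency="10%",  # roll the patch out gradually
        MaxErrors="5%",        # stop if too many instances fail
    )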

QUESTION 65

Your team has been tasked with reducing your AWS spend on compute resources. You’ve identified several interruptible workloads that are good candidates for cost savings. What EC2 pricing model would make the most sense in this scenario?

A: Spot Instances

B: Reserved Instances

C: On-Demand Instances

D: Dedicated Hosts

EXPLANATIONS

A: Spot Instances – CORRECT. With Spot Instances, you use spare EC2 capacity at a steep discount, which can provide savings of up to 90% over On-Demand Instances (you can optionally cap the price you’re willing to pay). With this model, instances can be interrupted at any time. However, because the identified workloads are interruptible, this is still a valid solution; a minimal sketch follows these explanations. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html

B: Reserved Instances. Reserved Instances can provide savings of up to about 70% and are purchased for a 1- or 3-year term, which makes sense for long-running workloads such as databases. While this option does provide cost savings, Spot Instances save even more and suit the interruptible workloads mentioned in the question. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html

C: On-Demand Instances. With On-Demand Instances, you pay, by the second, for instances you launch. This option would not provide the cost savings mentioned in the question. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html

D: Dedicated Hosts. A Dedicated Host is an entire physical server that is used for only your resources. This option would not provide the cost savings mentioned in the question. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html
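
A minimal boto3 sketch of answer A, requesting Spot capacity through run_instances (the AMI ID, instance type and price cap are hypothetical; MaxPrice is optional and defaults to the On-Demand price):

    import boto3

    ec2 = boto3.client("ec2")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI
        InstanceType="c5.large",
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={
            "MarketType": "spot",
            "SpotOptions": {
                "MaxPrice": "0.05",  # optional cap; omit to pay the current Spot price
                "SpotInstanceType": "one-time",
                "InstanceInterruptionBehavior": "terminate",
            },
        },
    )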