AWS Certified Solutions Architect: Zero to Master - Practice Exam: Answers & Explanations PART 1

QUESTION 1

An application running on an EC2 instance needs to upload videos to an S3 bucket, and also write logs to CloudWatch. What is the best way to implement this using security best practices?

A: Create an IAM user that matches the ID of the EC2 instance, and grant it permissions on S3 and CloudWatch

B: Create a policy for S3 and CloudWatch, and attach it to the EC2 instance

C: Create a role with write permissions for S3 and CloudWatch, and assign the role to the EC2 instance

D: Create a user group for the EC2 instance

EXPLANATIONS

A: Create an IAM user that matches the ID of the EC2 instance, and grant it permissions on S3 and CloudWatch. This is a distractor. There is no correlation between IAM users and EC2 instance IDs.

B: Create a policy for S3 and CloudWatch, and attach it to the EC2 instance. A policy is used to grant permissions. However, it can only be attached to a principal, which can be a user, user group or role. A policy cannot be applied to an EC2 instance. IAM roles - AWS Identity and Access Management

C: Create a role with write permissions for S3 and CloudWatch, and assign the role to the EC2 instance - CORRECT. An IAM role allows an AWS service to temporarily assume permissions that the role has. Delegating permissions in this way is considered a best practice. Security best practices in IAM - AWS Identity and Access Management
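
For illustration, a minimal boto3 (Python) sketch of option C; the role, policy, instance profile and bucket names are placeholders, and the permissions are intentionally simplified:

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy: allow the EC2 service to assume this role
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
    iam.create_role(RoleName="app-uploader-role",
                    AssumeRolePolicyDocument=json.dumps(trust_policy))

    # Permissions policy: write access to one bucket plus CloudWatch Logs
    permissions = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": ["s3:PutObject"],
             "Resource": "arn:aws:s3:::my-video-bucket/*"},
            {"Effect": "Allow",
             "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
             "Resource": "*"},
        ],
    }
    iam.put_role_policy(RoleName="app-uploader-role",
                        PolicyName="s3-and-logs-write",
                        PolicyDocument=json.dumps(permissions))

    # An instance profile is what actually attaches the role to an EC2 instance
    iam.create_instance_profile(InstanceProfileName="app-uploader-profile")
    iam.add_role_to_instance_profile(InstanceProfileName="app-uploader-profile",
                                     RoleName="app-uploader-role")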

D: Create a user group for the EC2 instance. This is a distractor. An EC2 instance does not have an associated user group.

QUESTION 2

You run a popular blogging website that has readers all over the world. You use Route 53 to route traffic to the site. You need to ensure content is delivered to users with the lowest latency possible. Which routing policy should you use?

A: Latency

B: Geoproximity

C: Weighted

D: Geolocation

EXPLANATIONS

A: Latency - CORRECT. With a Latency policy, traffic will be routed to the region that provides the best latency for the user. Choosing a routing policy - Amazon Route 53

B: Geoproximity. The Geoproximity policy routes traffic based on the location of your resources and users, and lets you apply a bias. For example, if you have a larger server in us-west-1, you can bias the policy to send more traffic there, even from users who are geographically closer to other resources. Choosing a routing policy - Amazon Route 53

C: Weighted. The Weighted policy will route traffic to resources based on a ratio you set (50% to Server1, 25% to Server2, 25% to Server3). Choosing a routing policy - Amazon Route 53

D: Geolocation. The Geolocation policy will route based on the location of users. However, this doesn’t necessarily mean low latency. The use case for this policy is to serve content in localized languages, or to show marketing campaigns based on a user’s location. Choosing a routing policy - Amazon Route 53

QUESTION 3

Your team runs a lot of EC2 instances for various applications. Due to budget tightening for the upcoming year, all teams have been asked to reduce their AWS spend as much as possible. You may also need to change the type of instances you’re using. Which of the following can help you accomplish this?

A: Reserved Instances

B: EC2 Instance Savings Plan

C: Compute Savings Plan

D: Spot Instances

EXPLANATIONS

A: Reserved Instances. Reserved Instances can provide savings of up to 70% over On-Demand pricing. This option makes sense for long-running workloads, such as databases, and is purchased for a 1- or 3-year term. However, Reserved Instances require you to commit to a specific instance type, which is not very flexible. The question mentions you may need to change instance types, so this would not be the best option. Instance purchasing options - Amazon Elastic Compute Cloud

B: EC2 Instance Savings Plan - CORRECT. With a Savings Plan, you commit to a consistent amount of compute usage, measured in dollars per hour, for a 1- or 3-year term. An EC2 Instance Savings Plan specifically helps reduce spend for EC2 instances. What are Savings Plans? - Savings Plans

C: Compute Savings Plan. With a Savings Plan, you commit to a consistent amount of compute usage, measured in dollars per hour, for a 1- or 3-year term. The Compute Savings Plan applies to EC2 instances, Lambda and Fargate. Since we’re only using EC2 in this question, we should use the EC2 Instance Savings Plan instead. What are Savings Plans? - Savings Plans

D: Spot Instances. With a Spot Instance, you use spare EC2 capacity at a steep discount, optionally specifying the maximum price you’re willing to pay. The primary reason for using Spot is to save money. However, Spot Instances can be terminated at any time, so they should only be used for specific, interruptible workloads, not as a general solution for all applications. Instance purchasing options - Amazon Elastic Compute Cloud

QUESTION 4

Your company recently expanded from Europe into Asia. For regulatory reasons, you must now replicate data into Asia. Which service supports this scenario?

A: Direct Connect

B: Elastic Block Store (EBS)

C: S3

D: Storage Gateway

EXPLANATIONS

A: Direct Connect. Direct Connect offers a dedicated physical connection from an on-premises data center to AWS. You can create a Direct Connect connection in any region, but Direct Connect is not used to replicate data across regions. https://aws.amazon.com/directconnect

B: Elastic Block Store (EBS). EBS automatically replicates a volume’s data within its Availability Zone, but it does not support replication across regions. https://aws.amazon.com/ebs

C: S3 – CORRECT. S3 supports cross-region replication of data. Replicating objects - Amazon Simple Storage Service

D: Storage Gateway. Storage Gateway allows you to store on-premises resources in the cloud, such as for backup purposes. However, Storage Gateway does not directly support cross-region replication. https://aws.amazon.com/storagegateway

QUESTION 5

You’ve built an application for your company’s marketing team. The team works with very large image and video files, ranging in size from 10 GB to 25 GB. You’ve created an S3 bucket to store the files, and users can upload them directly from the application. However, users are reporting that uploads occasionally fail partway through, and they must start over. What is a possible solution?

A: Enable S3 Transfer Acceleration

B: Use S3 Multipart Upload

C: Encrypt the file on the client side before uploading

D: Ensure that you have S3 Object Lock permissions in Compliance mode

EXPLANATIONS

A: Enable S3 Transfer Acceleration. S3 Transfer Acceleration speeds up long-distance transfers by routing uploads through nearby edge locations. However, it would not solve the problem caused by the size of the files. For objects larger than 100 MB, the recommendation is to use Multipart Upload. S3 Transfer Acceleration

B: Use S3 Multipart Upload - CORRECT. The limit for a single S3 PUT request is 5 GB, and the recommendation is to use Multipart Upload for any object larger than 100 MB. The files in this scenario (10 GB to 25 GB) exceed the single-PUT limit entirely, and Multipart Upload also allows an individual failed part to be retried without restarting the whole upload, which addresses the reported problem. Uploading and copying objects using multipart upload - Amazon Simple Storage Service
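
For illustration, a minimal boto3 (Python) sketch; the bucket and file names are placeholders. boto3’s managed transfer switches to Multipart Upload automatically above a configurable threshold:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Multipart upload for anything over 100 MB, in 100 MB parts; a failed
    # part can be retried without restarting the whole upload.
    config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                            multipart_chunksize=100 * 1024 * 1024)

    s3.upload_file("campaign-video.mp4",
                   "marketing-assets-bucket",
                   "videos/campaign-video.mp4",
                   Config=config)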

C: Encrypt the file on the client side before uploading. While it is possible to create S3 bucket policies to check for encryption upon upload, the question makes no mention of encryption requirements for the application. It is more likely that the error is due to the size of the file. Using S3 Multipart Upload for files larger than 100 MB is recommended. Uploading and copying objects using multipart upload - Amazon Simple Storage Service

D: Ensure that you have S3 Object Lock permissions in Compliance mode. This is a distractor. Object Lock is used to prevent deleting or overwriting objects. In addition, if an object is protected with Object Lock in Compliance mode, nobody (not even root) will have access to delete or overwrite it, so it wouldn’t make sense to check permissions for that. Using S3 Object Lock - Amazon Simple Storage Service

QUESTION 6

A solutions architect needs to view metadata for a Linux EC2 instance. What command can they use?

A: curl http://169.254.169.254/latest/meta-data

B: curl http://localhost/latest/meta-data

C: curl http://254.169.254.169/meta-data/latest

D: curl http://127.0.0.1/latest/meta-data

EXPLANATIONS

A: curl http://169.254.169.254/latest/meta-data - CORRECT. The link-local address for the EC2 instance metadata service is 169.254.169.254. “latest” refers to the most recent version of the metadata API, and “meta-data” is the path used to retrieve metadata from the service. Retrieve instance metadata - Amazon Elastic Compute Cloud

B: curl http://localhost/latest/meta-data. Metadata is only available through the link-local address of 169.254.169.254, not localhost. Retrieve instance metadata - Amazon Elastic Compute Cloud

C: curl http://254.169.254.169/meta-data/latest. This is the incorrect link-local address. It should be 169.254.169.254. It can be helpful to remember that the smaller number (169) comes before the larger number (254). Retrieve instance metadata - Amazon Elastic Compute Cloud

D: curl http://127.0.0.1/latest/meta-data. Metadata is only available through the link-local address of 169.254.169.254, not the loopback address 127.0.0.1 (localhost). Retrieve instance metadata - Amazon Elastic Compute Cloud

QUESTION 7

A photo processing application stores a lot of information in memory before eventually writing it to a database. The EC2 instance that hosts the application occasionally needs to be rebooted. How do you ensure the memory isn’t lost when the instance reboots?

A: Enable termination protection on the instance

B: Enable stop protection on the instance

C: Create a Lambda function to save the contents of memory to disk when the instance stops

D: Enable hibernation on the instance

EXPLANATIONS

A: Enable termination protection on the instance. Termination protection helps avoid accidental termination of an instance. However, it has no effect on the contents of memory on the instance. Terminate your instance - Amazon Elastic Compute Cloud

B: Enable stop protection on the instance. Stop protection helps avoid accidental stopping of an instance. However, it has no effect on the contents of memory on the instance. Stop and start your instance - Amazon Elastic Compute Cloud

C: Create a Lambda function to save the contents of memory to disk when the instance stops. It’s not possible to directly trigger a Lambda function when an instance stops, and Lambda would not be able to access the instance’s memory in any case. Generally, this answer is also more complicated than it needs to be. To meet the requirements of the scenario, you should enable hibernation on the instance, which takes care of writing the contents of memory to disk. Overview of hibernation - Amazon Elastic Compute Cloud

D: Enable hibernation on the instance - CORRECT. With hibernation enabled, the contents of memory (RAM) are saved to the EBS root volume when the instance is hibernated, and restored into memory when the instance is started again. Hibernation can only be enabled when you first create the instance, not afterwards. Overview of hibernation - Amazon Elastic Compute Cloud
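
For illustration, a minimal boto3 (Python) sketch of launching an instance with hibernation enabled; the AMI ID and volume settings are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Hibernation must be configured at launch; the root EBS volume must be
    # encrypted and large enough to hold the contents of RAM.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        HibernationOptions={"Configured": True},
        BlockDeviceMappings=[{
            "DeviceName": "/dev/xvda",
            "Ebs": {"VolumeSize": 50, "Encrypted": True},
        }],
    )

    # Later, hibernate instead of doing a plain stop:
    # ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Hibernate=True)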

QUESTION 8

A travel application uses the Simple Queue Service (SQS) to check a customer’s credit card limit before allowing them to book a reservation through the site. However, it occasionally happens that a customer can complete their reservation before the credit card is validated. What is a potential solution to this issue?

A: Use a Standard queue for the messages

B: Use a FIFO queue for the messages

C: Set up a CloudWatch alarm to skip messages if they are out of order

D: Use a Dead Letter Queue for the messages

EXPLANATIONS

A: Use a Standard queue for the messages. The problem seems to stem from the fact that messages are occasionally being delivered out of order (reservations are made before credit cards are validated). This can occasionally happen with Standard queues. The best solution is to use a FIFO queue, where messages are guaranteed to be processed in the order received. Amazon SQS Features | Message Queuing Service | AWS

B: Use a FIFO queue for the messages - CORRECT. The problem seems to stem from the fact that messages are occasionally being delivered out of order (reservations are made before credit cards are validated). This can occasionally happen with Standard queues. The best solution is to use a FIFO queue, where messages are guaranteed to be processed in the order received. Amazon SQS Features | Message Queuing Service | AWS
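
For illustration, a minimal boto3 (Python) sketch of option B; the queue name and message contents are placeholders:

    import boto3

    sqs = boto3.client("sqs")

    # FIFO queue names must end in ".fifo"
    queue = sqs.create_queue(
        QueueName="reservations.fifo",
        Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
    )

    # Messages sharing a MessageGroupId are processed strictly in order, so
    # the credit card check is handled before the booking for this customer.
    sqs.send_message(QueueUrl=queue["QueueUrl"],
                     MessageGroupId="customer-42",
                     MessageBody='{"step": "validate-card", "customerId": "42"}')
    sqs.send_message(QueueUrl=queue["QueueUrl"],
                     MessageGroupId="customer-42",
                     MessageBody='{"step": "book-reservation", "customerId": "42"}')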

C: Set up a CloudWatch alarm to skip messages if they are out of order. This is a distractor. CloudWatch does not offer metrics related to the order of message processing, so an alarm could not be set up for this.

D: Use a Dead Letter Queue for the messages. Dead Letter Queues are used to store messages that were undeliverable in a “regular” queue, so they can be processed separately. In this scenario, the messages ARE being delivered, but in the wrong order. The best solution is to use a FIFO queue, where messages are guaranteed to be processed in the order received. Amazon SQS Features | Message Queuing Service | AWS

QUESTION 9

In order to quickly move an application to AWS, a “lift and shift” migration was done, moving the application to an EC2 instance. The application load is unpredictable, and sometimes it sees no activity for hours at a time. How can this application be rearchitected to optimize compute and minimize costs?

A: Update the application to use Lambda functions

B: Keep the application code as-is, but reduce the size of the EC2 instance

C: Update the application to use ECS containers, running with the EC2 launch type

D: Create an Application Load Balancer and Auto Scaling Group to distribute the load to two instances during peak times

EXPLANATIONS

A: Update the application to use Lambda functions - CORRECT. This is the best option to optimize compute and also minimize costs. As a serverless solution, the Lambda functions will only run when needed, and you will only be charged when they run. https://docs.aws.amazon.com/lambda

B: Keep the application code as-is, but reduce the size of the EC2 instance. While this could potentially save some costs, it doesn’t optimize compute resources. You’d still have one instance running even during periods where the application isn’t used. A serverless solution of Lambda would better optimize compute and costs. https://docs.aws.amazon.com/lambda

C: Update the application to use ECS containers, running with the EC2 launch type. While this seems like a good way to optimize, with the EC2 launch type you still have at least one EC2 instance running even when it isn’t needed. If you were using the Fargate launch type instead, that would satisfy the requirements: Fargate is serverless, so compute power is only used (and billed) when it’s needed. Amazon ECS launch types - Amazon Elastic Container Service

D: Create an Application Load Balancer and Auto Scaling Group to distribute the load to two instances during peak times. While this might help performance during peak times, it will also cost more because you’ll be paying for two instances sometimes. Also, it doesn’t solve the problem of always having one instance running even if it’s not needed. Auto Scaling groups - Amazon EC2 Auto Scaling

QUESTION 10

Your company was part of a merger with a larger company, and you’ve been tasked with consolidating applications in AWS. You have some data on-premises, some in AWS, and some in Microsoft Azure, and it all needs to be moved to a central AWS account. What service should you use to accomplish this?

A: DataSync

B: Storage Gateway

C: AWS Transfer Family

D: Database Migration Service (DMS)

EXPLANATIONS

A: DataSync – CORRECT. DataSync is used to move large amounts of data. It supports movement from on-premises to AWS, AWS to AWS, and other cloud providers to AWS. Data Transfer Service - AWS DataSync - AWS

B: Storage Gateway. Storage Gateway is used in a hybrid environment, either as a backup solution, or as a way to store all data in AWS and cache only frequently-used data on-premises (to save local storage costs). It is not meant as a service for migrations. AWS Storage Gateway | Amazon Web Services

C: AWS Transfer Family. AWS Transfer Family is a fully-managed service for transferring files into and out of S3 and EFS over SFTP, FTPS and FTP. It is intended for ongoing file-transfer workflows, not for migrating data from on-premises and other cloud providers, so it is not the right fit here. Secure File Transfer - File Transfer Service - AWS Transfer Family - AWS

D: Database Migration Service (DMS). DMS is used to migrate databases from on-premises to AWS, AWS to AWS, or AWS to on-premises. While you could use this tool for the database migrations specifically, you should use DataSync for moving all other data. Cloud Database Migration - AWS Database Migration Service (DMS) - AWS

QUESTION 11

An application runs across five EC2 instances, fronted by an Application Load Balancer. You need to preserve session data for users, making sure the requests are routed to the same instance. How can you accomplish this?

A: By enabling Sticky Sessions on the load balancer

B: By enabling Sticky Sessions on the target group

C: By enabling a Round Robin pattern on the load balancer

D: By using ElastiCache in same-user mode

EXPLANATIONS

A: By enabling Sticky Sessions on the load balancer. To achieve what’s described in the question, you should enable Sticky Sessions on the target group, not the load balancer itself. Sticky sessions for your Application Load Balancer - Elastic Load Balancing

B: By enabling Sticky Sessions on the target group - CORRECT. Enabling sticky sessions on the target group will set a cookie that enables future requests to be routed to the same instance. Sticky sessions for your Application Load Balancer - Elastic Load Balancing
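
For illustration, a minimal boto3 (Python) sketch of option B; the target group ARN is a placeholder:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Enable load-balancer-generated cookie stickiness on the target group
    elbv2.modify_target_group_attributes(
        TargetGroupArn=("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                        "targetgroup/web-servers/0123456789abcdef"),
        Attributes=[
            {"Key": "stickiness.enabled", "Value": "true"},
            {"Key": "stickiness.type", "Value": "lb_cookie"},
            {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
        ],
    )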

C: By enabling a Round Robin pattern on the load balancer. Round Robin is the default way an Application Load Balancer distributes traffic: requests go to instance 1, then instance 2, then instance 3, and so on. This would not preserve a user’s session data, because you want a given user’s requests to be routed to the same instance. To accomplish this, you should enable Sticky Sessions on the target group. Sticky sessions for your Application Load Balancer - Elastic Load Balancing

D: By using ElastiCache in same-user mode. This is a distractor, as there is no such thing as same-user mode in ElastiCache. And in general, ElastiCache would be used to do distributed session management, not for just a single user. Session Management

QUESTION 12

A startup company’s website has become more popular, and they need to scale to multiple EC2 instances that are fronted by an Application Load Balancer. They currently use an SSL/TLS certificate on their single server. What should they do to ensure they can use certificates with the new setup?

A: Install the certificate to all EC2 instances that live behind the load balancer

B: On the Application Load Balancer’s target group, load the certificate from the Key Management System (KMS)

C: Enable certificates on the load balancer and EC2 instances, and let AWS handle the rest

D: On the application load balancer, create an HTTPS listener that points to the certificate stored in AWS Certificate Manager (ACM)

EXPLANATIONS

A: Install the certificate to all EC2 instances that live behind the load balancer. When working with a load balancer, the certificate must be installed on the load balancer, not the individual instances. Create an HTTPS listener for your Application Load Balancer - Elastic Load Balancing

B: On the Application Load Balancer’s target group, load the certificate from the Key Management System (KMS). This is a distractor. With load balancers, “loading” the certificate is done from an HTTPS listener (not a target group). Also, KMS is used for encryption keys, not for certificates. Create an HTTPS listener for your Application Load Balancer - Elastic Load Balancing

C: Enable certificates on the load balancer and EC2 instances, and let AWS handle the rest. This is also a distractor, and is not possible. When working with a load balancer, you must create an HTTPS listener, then load a certificate. Create an HTTPS listener for your Application Load Balancer - Elastic Load Balancing

D: On the application load balancer, create an HTTPS listener that points to the certificate stored in AWS Certificate Manager (ACM) - CORRECT. When working with a load balancer, you must create an HTTPS listener and then load a certificate. The certificate can come from ACM, from IAM, or be uploaded manually. Traffic from the user to the load balancer is encrypted, and traffic from the load balancer to the EC2 instances is then unencrypted (but stays inside the VPC). Create an HTTPS listener for your Application Load Balancer - Elastic Load Balancing
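
For illustration, a minimal boto3 (Python) sketch of option D; the load balancer, certificate and target group ARNs are placeholders:

    import boto3

    elbv2 = boto3.client("elbv2")

    # HTTPS listener on the ALB that terminates TLS with a certificate from ACM
    elbv2.create_listener(
        LoadBalancerArn=("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                         "loadbalancer/app/web-alb/0123456789abcdef"),
        Protocol="HTTPS",
        Port=443,
        Certificates=[{"CertificateArn":
                       "arn:aws:acm:us-east-1:123456789012:certificate/abcd-1234"}],
        SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
        DefaultActions=[{
            "Type": "forward",
            "TargetGroupArn": ("arn:aws:elasticloadbalancing:us-east-1:"
                               "123456789012:targetgroup/web-servers/"
                               "0123456789abcdef"),
        }],
    )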

QUESTION 13

You are developing a Lambda function that processes text from log files as they’re uploaded to S3. While testing the function, you notice it takes a long time to run, even on relatively small log files. What is the most likely problem?

A: The Lambda function has not been allocated enough memory

B: The Lambda storage capacity has been exhausted

C: The Lambda function is timing out after 15 minutes

D: There are too many Lambda functions running concurrently

EXPLANATIONS

A: The Lambda function has not been allocated enough memory - CORRECT. Lambda memory size can range from 128 MB to 10,240 MB, and it is configurable. This value also affects the CPU resources. If you notice poor performance on the function, a very likely cause is too little memory. Lambda quotas - AWS Lambda

B: The Lambda storage capacity has been exhausted. While it’s true that there is a soft limit of 75 GB for uploaded function packages per region, the scenario doesn’t mention anything about the size of uploaded functions, and exhausting that storage would cause deployment failures rather than slow execution. Lambda quotas - AWS Lambda

C: The Lambda function is timing out after 15 minutes. If a Lambda function reaches its maximum timeout of 900 seconds (15 minutes), it will be terminated. However, based on the scenario, that is not happening; the function is simply running slowly. The most common reason for slow performance is too little memory allocated to the function. Lambda quotas - AWS Lambda

D: There are too many Lambda functions running concurrently. Lambda limits concurrent executions to 1,000 by default (though this can be increased). When this limit is hit, Lambda will throttle the functions. However, based on the scenario, the function is just being developed and tested, and there’s no mention of other Lambda functions running concurrently. The most common reason for slow performance is too little memory allocated to the function. Lambda quotas - AWS Lambda

QUESTION 14

Your company has recently acquired several other companies, and you’re tasked with integrating multiple VPCs and on-premises resources. You also need to be able to manage Direct Connect and Site-to-Site VPN connections. What service would be the most appropriate?

A: AWS Transit Gateway

B: Storage Gateway

C: Internet Gateway

D: Route 53

EXPLANATIONS

A: AWS Transit Gateway – CORRECT. AWS Transit Gateway helps you manage multiple AWS VPCs, on-premises networks, Direct Connect and Site-to-Site VPN from a single place. https://aws.amazon.com/transit-gateway/

B: Storage Gateway. Storage Gateway allows you to store on-premises resources in the cloud, such as for backup purposes. AWS Storage Gateway | Amazon Web Services

C: Internet Gateway. An internet gateway is what allows resources in a public subnet to access the internet. This is done by creating a route (defined in a route table) from the public subnet to the internet gateway. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html

D: Route 53. Route 53 is AWS’s managed DNS service. It can be used to purchase/register domain names, as well as handle DNS routing (such as resolving mywebsite.com to an IP address like 12.34.56.78). https://aws.amazon.com/route53/

QUESTION 15

You need to control traffic in and out of an individual EC2 instance. How can you accomplish this?

A: Network Access Control List (ACL)

B: Direct Connect

C: IAM Policies

D: Security Group

EXPLANATIONS

A: Network Access Control List (ACL). A network ACL is a firewall that controls traffic in and out of a subnet. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

B: Direct Connect. Direct Connect offers a dedicated physical connection from an on-premises data center to AWS. https://aws.amazon.com/directconnect/

C: IAM Policies. IAM policies define permissions that can be attached to users, user groups and roles. These do not apply directly to EC2 instances (though a policy can be attached to a role, which can then be assigned to an EC2 instance). https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html

D: Security Group – CORRECT. A security group is a firewall that controls traffic in and out of an EC2 instance. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

QUESTION 16

You’ve set up three CloudWatch alarms: one for CPUUtilization, one for DiskReads, and one for NetworkOut. You’re getting excessive alerts, but generally no action is needed when only a single alarm is triggered. Instead, you want to be notified when all three alarms hit their threshold. What should you do?

A: Use CloudWatch Synthetics

B: Create a Lambda function that monitors for all three alarms, and then triggers a fourth alarm

C: Create a CloudWatch composite alarm

D: Enable multi-metric monitoring on CloudWatch alarms

EXPLANATIONS

A: Use CloudWatch Synthetics. CloudWatch Synthetics allows you to monitor REST APIs, URLs and website content, essentially monitoring how things look from a user-experience perspective. The scenario in the question is about instance-level metrics and alarms, not user-facing endpoints. https://docs.aws.amazon.com/AmazonSynthetics/latest/APIReference/Welcome.html

B: Create a Lambda function that monitors for all three alarms, and then triggers a fourth alarm. While it’s possible to configure CloudWatch and Lambda to talk to each other, Lambda doesn’t “monitor” CloudWatch alarms (Lambda must be triggered by some event). Also, in general, this is an overly complicated solution.

C: Create a CloudWatch composite alarm - CORRECT. CloudWatch composite alarms effectively let you “chain” alarms together, where the composite alarm will alert when thresholds of other alarms are met. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html
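
For illustration, a minimal boto3 (Python) sketch of option C; the alarm names and SNS topic ARN are placeholders, and it assumes the three underlying alarms already exist:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Notify only when all three underlying alarms are in ALARM state at once
    cloudwatch.put_composite_alarm(
        AlarmName="all-three-breached",
        AlarmRule=('ALARM("cpu-utilization-high") AND '
                   'ALARM("disk-reads-high") AND '
                   'ALARM("network-out-high")'),
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )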

D: Enable multi-metric monitoring on CloudWatch alarms. This is a distractor. There is no such thing as multi-metric monitoring on CloudWatch alarms.

QUESTION 17

You are creating EC2 instances for an application that does data warehousing and log processing. You need to choose the most appropriate type of EBS volume for this use case. What should you choose?

A: Throughput Optimized HDD

B: Provisioned IOPS SSD

C: Magnetic

D: General Purpose SSD

EXPLANATIONS

A: Throughput Optimized HDD - CORRECT. A Throughput Optimized HDD makes sense where you need to read large “chunks” of files at once (think of the “large bites of chocolate” analogy from the course). Common use cases include Big Data/data warehousing and log processing. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hdd-vols.html

B: Provisioned IOPS SSD. Provisioned IOPS SSDs are ideally suited to I/O-intensive workloads, particularly database workloads, that are sensitive to storage performance and consistency. They give you high input/output (“lots of small bites” from the chocolate analogy in the course). https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/provisioned-iops.html

C: Magnetic. Magnetic volumes are backed by magnetic drives, and are best suited to workloads where data is infrequently accessed. Based on the scenario in question, this would not be an appropriate solution. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html

D: General Purpose SSD. This type of volume is suitable for a variety of workloads, and might even work for the application in question. However, based on the details in the question, the Throughput Optimized HDD would be the best choice. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html

QUESTION 18

An EC2 instance running in a private subnet needs to access the internet to do occasional patching. How can you accomplish this?

A: In the private subnet, create a NAT instance and assign it a public IP address. Disable source/destination check.

B: In the public subnet, create a NAT gateway. In the private subnet, create a route to the NAT gateway for any traffic with a destination of the internet.

C: In the private subnet, create a route to the Internet Gateway for the VPC.

D: Add a public IP address to the instance in the private subnet.

EXPLANATIONS

A: In the private subnet, create a NAT instance and assign it a public IP address. Disable source/destination check. NAT instances are an older technology and are no longer officially supported (NAT gateways are the recommended replacement). But even so, this answer is incomplete. Simply having a NAT instance in a private subnet would not allow instances in the subnet to access the internet. You would also need to create routes to it from the private subnet. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html

B: In the public subnet, create a NAT gateway. In the private subnet, create a route to the NAT gateway for any traffic with a destination of the internet – CORRECT. To enable internet access from a private subnet, you should create a NAT Gateway in a public subnet, add a route from the private subnet to it, and then add a route from the NAT Gateway to the Internet Gateway (which lives at the VPC level). https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
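
For illustration, a minimal boto3 (Python) sketch of option B; the subnet and route table IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # The NAT gateway lives in the PUBLIC subnet and needs an Elastic IP
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(SubnetId="subnet-0aaa1111public",
                                 AllocationId=eip["AllocationId"])
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # In the PRIVATE subnet's route table, send internet-bound traffic to the NAT
    ec2.create_route(RouteTableId="rtb-0bbb2222private",
                     DestinationCidrBlock="0.0.0.0/0",
                     NatGatewayId=nat_id)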

C: In the private subnet, create a route to the Internet Gateway for the VPC. If a private subnet included a route to the Internet Gateway, that would make it a public subnet. That is the main difference between a private and public subnet (a route to the Internet Gateway). In this scenario, you want to maintain a private subnet for your instance. https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-basics

D: Add a public IP address to the instance in the private subnet. It is not necessary to give the instance a public IP address. To allow access to the internet, you’d create a NAT Gateway in a public subnet, add a route from the private subnet to it, and then add a route from the NAT Gateway to the Internet Gateway (at the VPC level). https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html

QUESTION 19

You have two AWS accounts: Dev and Test. Resources in the Dev VPC need to be able to communicate with resources in the Test VPC, as if they were in the same VPC. How can you accomplish this?

A: VPC Endpoints

B: Bastion Host

C: Network Access Control Lists (NACLs)

D: VPC Peering

EXPLANATIONS

A: VPC Endpoints. VPC endpoints allow you to access other AWS services through a private AWS network. https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html

B: Bastion Host. Bastion Hosts allow SSH connections to an EC2 instance in a private subnet. https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/

C: Network Access Control Lists (NACLs). A network ACL is a firewall that controls traffic in and out of a subnet. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

D: VPC Peering – CORRECT. VPC peering allows you to connect one or more VPCs to make them behave like a single network. This can be done in the same account or across accounts. https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html

QUESTION 20

A high-performance computing application requires extremely low latency and high network throughput across the instances that it runs on. What is the best way to accomplish this?

A: Use a Spread placement group strategy

B: Use Dedicated Hosts

C: Use a Partition placement group strategy

D: Use a Cluster placement group strategy

EXPLANATIONS

A: Use a Spread placement group strategy. With a Spread placement group, instances are distributed across hardware, across Availability Zones. While this has the benefit of high availability, it will not be as fast as the Cluster strategy where instances are located on the same physical rack. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

B: Use Dedicated Hosts. Dedicated Hosts let you book an entire physical server and bring your own licenses. However, this does not mean the servers are close to each other, and you cannot guarantee low latency or high network throughput between them. To achieve this, you should use a Placement Group with a Cluster strategy to ensure things are physically close together. https://aws.amazon.com/ec2/dedicated-hosts/

C: Use a Partition placement group strategy. With a Partition placement group, instances are grouped into partitions on separate racks, and they can be distributed across Availability Zones. While this has the benefit of high availability, it will not be as fast as the Cluster strategy where instances are located on the same physical rack. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

D: Use a Cluster placement group strategy - CORRECT. With a Cluster placement group, instances are physically close together (the same rack) in a single Availability Zone. This will achieve the requirements stated in the question. However, it should be noted that this strategy is not highly available, as instances only reside in a single AZ. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

QUESTION 21

You’re creating a new VPC for your project. You need 254 IP addresses for your EC2 instances. Which subnet mask should you choose?

A: /22

B: /23

C: /24

D: /25

EXPLANATIONS

A: /22. A subnet mask of /22 will give you 1,024 IP addresses, minus the 5 that AWS reserves, so a total of 1,019. While this will fit the requirement, it will leave a lot of IP addresses unused, so it would be better to go with the next number up (/23), which will give you fewer addresses. https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-sizing

B: /23 – CORRECT. A subnet mask of /23 gives you 512 IP addresses. AWS reserves the first four and the last IP address in every subnet, which leaves 507 usable addresses, more than enough to cover the 254 required. A /24 would only give you 256 addresses, minus the 5 reserved, or 251, which isn’t quite enough, so you have to go to the next number down, /23 (the smaller the number, the more IP addresses). https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-sizing

C: /24. A subnet mask of /24 will give you 256 IP addresses. However, AWS reserves the first four and last IP address in every subnet. So 256 minus 5 is only 251, which isn’t enough to cover the requirements in the question. https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-sizing

D: /25. A subnet mask of /25 will only give you 128 IP addresses, minus the 5 reserved by AWS, which is not enough to fulfill the requirements of the question. https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-sizing

QUESTION 22

For Compliance reasons, a company must encrypt their data at rest in S3. They have keys on-premises, and the development team plans to do the encryption/uploads programmatically. Which encryption option should they use?

A: Server-side encryption with Amazon S3-managed keys (SSE-S3)

B: Server-side encryption with AWS KMS-managed keys (SSE-KMS)

C: Server-side encryption with customer-provided keys (SSE-C)

D: Client-side encryption with S3-provided keys (CS-S3)

EXPLANATIONS

A: Server-side encryption with Amazon S3-managed keys (SSE-S3). With SSE-S3, S3 creates, manages and uses the encryption keys for you: you simply upload the object, and S3 encrypts it with a key that never leaves AWS. In this scenario, the customer holds the keys on-premises, so SSE-S3 is not the option you want. https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html

B: Server-side encryption with AWS KMS-managed keys (SSE-KMS). With SSE-KMS, the Key Management Service creates and manages the keys for you: you simply upload the object, and S3 encrypts it with the KMS key inside AWS. In this scenario, the customer holds the keys on-premises, so SSE-KMS is not the option you want. https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html

C: Server-side encryption with customer-provided keys (SSE-C) - CORRECT. The question states that the customer has keys on-premises, which means they should use server-side encryption with customer-provided keys (SSE-C). With this option, the key is uploaded along with the object (via HTTPS only), and then encryption happens in AWS with the key that was uploaded. SSE-C can only be done programmatically, which the development team is prepared to do. https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html
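
For illustration, a minimal boto3 (Python) sketch of option C; the bucket, key and file names are placeholders, and the key here is generated on the fly rather than loaded from the on-premises key store:

    import os
    import boto3

    s3 = boto3.client("s3")

    # The customer-provided key stays with you; it is sent with each request
    # over HTTPS, and S3 uses it to encrypt the object server-side.
    customer_key = os.urandom(32)  # in practice, load your existing 256-bit key

    with open("claims.csv", "rb") as data:
        s3.put_object(Bucket="compliance-data-bucket",
                      Key="records/2024/claims.csv",
                      Body=data,
                      SSECustomerAlgorithm="AES256",
                      SSECustomerKey=customer_key)

    # The same key must be supplied again to read the object back
    obj = s3.get_object(Bucket="compliance-data-bucket",
                        Key="records/2024/claims.csv",
                        SSECustomerAlgorithm="AES256",
                        SSECustomerKey=customer_key)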

D: Client-side encryption with S3-provided keys. This is a distractor. Client-side encryption is possible to do, but you would not be able to use an S3 key to do it (and there is no such thing as “CS-S3”). To do client-side encryption, you would encrypt the file using a client-side key, then upload it to S3. https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html

QUESTION 23

Your Compliance team requires that objects in an S3 bucket be retained for 7 years, and nobody should be able to delete or overwrite them. How can you accomplish this?

A: Use object lock with a Legal Hold period of 7 years, in Compliance mode

B: Enforce MFA delete on the entire bucket

C: Use object lock in Governance mode, set for 7 years

D: Use object lock with a Retention Period of 7 years, in Compliance mode

EXPLANATIONS

A: Use object lock with a Legal Hold period of 7 years, in Compliance mode. Object lock is the correct feature. However, with the Legal Hold setting, objects are locked until they are explicitly unlocked. To prevent deletion/overwriting for 7 years, you should use the Retention Period setting, set to 7 years, and in Compliance mode so nobody (not even root) can delete/overwrite objects. Using S3 Object Lock - Amazon Simple Storage Service

B: Enforce MFA delete on the entire bucket. MFA delete will not prevent deletion/overwriting of objects. It simply requires someone to take an additional step of entering an MFA code before making changes. https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiFactorAuthenticationDelete.html

C: Use object lock in Governance mode, set for 7 years. Object lock is the correct feature. However, Governance mode allows users with special permissions to delete/overwrite objects, where the requirements are for NOBODY to be able to delete/overwrite objects (which is possible with Compliance mode). Also, Governance mode is an option under Retention Period, so a fully correct answer should include Retention Period as well. Using S3 Object Lock - Amazon Simple Storage Service

D: Use object lock with a Retention Period of 7 years, in Compliance mode – CORRECT. To prevent deletion/overwriting for 7 years, you should use the Retention Period setting, set to 7 years, and in Compliance mode so nobody (not even root) can delete/overwrite objects. Using S3 Object Lock - Amazon Simple Storage Service
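
For illustration, a minimal boto3 (Python) sketch of option D; the bucket name is a placeholder, and the bucket is created in the default region for simplicity:

    import boto3

    s3 = boto3.client("s3")

    # Object Lock can only be turned on when the bucket is created
    s3.create_bucket(Bucket="compliance-archive-bucket",
                     ObjectLockEnabledForBucket=True)

    # Default rule: every new object is locked in Compliance mode for 7 years
    s3.put_object_lock_configuration(
        Bucket="compliance-archive-bucket",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
        },
    )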

QUESTION 24

Your company recently had a security breach, where data was accessed from an S3 bucket that was accidentally left open to the public. You need to ensure all S3 buckets in the account block public access. What is the fastest and most efficient way to do this?

A: Write a script to apply a “deny all” bucket policy to all S3 buckets in the account

B: Use AWS Config to make the update on all S3 buckets

C: From the S3 portal, block public access for all buckets in the account

D: For each bucket in the account, attach a bucket policy that denies all public access

EXPLANATIONS

A: Write a script to apply a “deny all” bucket policy to all S3 buckets in the account. While you could write a script to do this work, it is not the most efficient way. From the S3 portal, it’s possible to block public access for the entire AWS account. https://docs.aws.amazon.com/AmazonS3/latest/userguide/configuring-block-public-access-account.html

B: Use AWS Config to make the update on all S3 buckets. AWS Config is used to inventory, record and audit the configuration of your AWS resources. It would not be used to update the configuration of resources. https://aws.amazon.com/config/

C: From the S3 portal, block public access for all buckets in the account - CORRECT. From the S3 portal, it’s possible to block public access for the entire AWS account. This would be the fastest and most efficient way to accomplish the requirements in the scenario. https://docs.aws.amazon.com/AmazonS3/latest/userguide/configuring-block-public-access-account.html
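
For illustration, a minimal boto3 (Python) sketch of the same account-wide setting applied through the API (the account ID is a placeholder); in the scenario, doing this from the S3 console is the fastest path:

    import boto3

    s3control = boto3.client("s3control")

    # Account-level setting: blocks public access for every bucket in the account
    s3control.put_public_access_block(
        AccountId="123456789012",
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )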

D: For each bucket in the account, attach a bucket policy that denies all public access. While you could update the bucket policy for each bucket, that is not the most efficient way. From the S3 portal, it’s possible to block public access for the entire AWS account. https://docs.aws.amazon.com/AmazonS3/latest/userguide/configuring-block-public-access-account.html

QUESTION 25

You are hosting a static website in an S3 bucket. The site needs to access resources in another S3 bucket. When navigating to pages of the site that use the resources, the resources do not display. What is the possible cause?

A: Cross-origin resource sharing (CORS) has not been enabled on the second bucket

B: Static website hosting has not been enabled on the first bucket

C: Cross-origin resource sharing (CORS) has not been enabled on the first bucket

D: A security group on the second bucket is blocking access from the first bucket

EXPLANATIONS

A: Cross-origin resource sharing (CORS) has not been enabled on the second bucket - CORRECT. Cross-origin resource sharing (CORS) defines how resources in one domain (the first bucket) interact with resources in another domain (the second bucket). CORS must be enabled on the cross-origin domain, which in this example is the second bucket. https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html
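
For illustration, a minimal boto3 (Python) sketch of option A; the bucket name and website origin are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Applied to the SECOND bucket (the one holding the shared resources),
    # allowing requests that originate from the website bucket's endpoint
    s3.put_bucket_cors(
        Bucket="shared-assets-bucket",
        CORSConfiguration={
            "CORSRules": [{
                "AllowedOrigins": ["http://my-site.s3-website-us-east-1.amazonaws.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }],
        },
    )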

B: Static website hosting has not been enabled on the first bucket. It is true that you have to enable static website hosting on a bucket. However, the question says “When navigating to pages of the site that use the resources,” which implies that you can navigate pages on the site. This means that static website hosting would already be enabled. The likely issue is that CORS has not been enabled. https://docs.aws.amazon.com/AmazonS3/latest/userguide/EnableWebsiteHosting.html

C: Cross-origin resource sharing (CORS) has not been enabled on the first bucket. Cross-origin resource sharing (CORS) defines how resources in one domain (the first bucket) interact with resources in another domain (the second bucket). CORS must be enabled on the cross-origin domain, which in this example is the second bucket, not the first bucket. https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html

D: A security group on the second bucket is blocking access from the first bucket. This is a distractor. Security groups are firewalls used to control traffic at the EC2 instance level. They are not used with S3 buckets. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

QUESTION 26

The software engineering team at a healthcare company has separate AWS accounts for development, testing and production. In addition, the Compliance team also has a separate AWS account. For yearly audits, the Compliance team needs access to the production account to check various account and configuration settings. What is the best way to accomplish this?

A: In the production account, create IAM users for every member of the Compliance team. Add them to a user group that has administrator access on the account.

B: Create a single IAM user in the production account, and share the credentials with everyone on the Compliance team. Allow them to sign in as needed.

C: In the production account, create a new role with the permissions needed by the Compliance team. Add the Compliance team to the trust policy so they can assume the role.

D: Use Amazon Cognito

EXPLANATIONS

A: In the production account, create IAM users for every member of the Compliance team. Add them to a user group that has administrator permissions on the account. It is not necessary to create new IAM users in the production account, as you can leverage the IAM users from the Compliance account (by way of a role). In addition, granting administrator access goes against the principle of least-privilege. Security best practices in IAM - AWS Identity and Access Management

B: Create a single IAM user in the production account, and share the credentials with everyone on the Compliance team. Allow them to sign in as needed. It is never a good idea to share IAM user credentials. The best way to delegate permissions is by using roles, which can be used cross-account. Security best practices in IAM - AWS Identity and Access Management

C: In the production account, create a new role with the permissions needed by the Compliance team. Add the Compliance team to the trust policy so they can assume the role - CORRECT. To grant temporary access to other AWS accounts, you should use a role. https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
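
For illustration, a minimal boto3 (Python) sketch of option C; the account IDs and role name are placeholders, and the SecurityAudit managed policy is just one example of read-only audit permissions:

    import json
    import boto3

    # In the PRODUCTION account: create a role that trusts the Compliance account
    iam = boto3.client("iam")
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222233334444:root"},  # Compliance acct
            "Action": "sts:AssumeRole",
        }],
    }
    iam.create_role(RoleName="compliance-audit-role",
                    AssumeRolePolicyDocument=json.dumps(trust_policy))
    iam.attach_role_policy(RoleName="compliance-audit-role",
                           PolicyArn="arn:aws:iam::aws:policy/SecurityAudit")

    # From the COMPLIANCE account: assume the role to get temporary credentials
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/compliance-audit-role",
        RoleSessionName="yearly-audit",
    )["Credentials"]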

D: Use Amazon Cognito. The recommended way to handle authentication and authorization for web and mobile apps is Amazon Cognito. Cognito allows you to integrate with external identity providers (such as Google or Facebook), and handles a lot of the behind-the-scenes work for you. However, this doesn’t make sense for our scenario, as all users have IAM user accounts, and we can accomplish the requirements by using roles across accounts. https://aws.amazon.com/cognito/

QUESTION 27

Your application runs across multiple EC2 instances, Lambda functions, containers and an on-premises server. Which AWS storage service would allow file access from all of these places?

A: Elastic Block Store (EBS)

B: S3 Glacier Deep Archive

C: Elastic File System (EFS)

D: DynamoDB

EXPLANATIONS

A: Elastic Block Store (EBS). Elastic Block Store (EBS) volumes can be thought of as hard drives for a single EC2 instance. While they can store files, they cannot be attached to (or accessed by) more than one instance so would not meet the requirements of this question. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html

B: S3 Glacier Deep Archive. S3 Glacier Deep Archive should be used to store data that rarely needs to be accessed. The default retrieval time is 12 hours. This is a solution for archiving, not for active file storage/access. https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html

C: Elastic File System (EFS) – CORRECT. Elastic File System (EFS) is a file system that can be accessed by multiple services at a time, including on-premises servers. https://aws.amazon.com/efs/

D: DynamoDB. DynamoDB stores data in key-value pairs in database tables. This is not the solution for storing/sharing files across multiple services. https://aws.amazon.com/dynamodb/

QUESTION 28

An insurance company is evaluating different options for file servers in AWS. Their applications are all Windows-based, and require shared file storage. Which of the following would be the most appropriate?

A: AWS Managed Microsoft Active Directory

B: Amazon FSx for Lustre

C: Amazon FSx for Windows File Server

D: Elastic File System (EFS)

EXPLANATIONS

A: AWS Managed Microsoft Active Directory. This is a distractor. AWS does provide managed Microsoft Active Directory, but AD is a directory service, not a file server. https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html

B: Amazon FSx for Lustre. Amazon FSx for Lustre is optimized for high-performance computing needs, such as machine learning or video processing. It doesn’t integrate with Microsoft technologies like Amazon FSx for Windows File Server does. Since the company in the scenario has Windows-based applications, Amazon FSx for Windows File Server would be a more appropriate answer. https://aws.amazon.com/fsx/

C: Amazon FSx for Windows File Server – CORRECT. Amazon FSx for Windows File Server provides features and compatibility required of applications that run on Microsoft technologies. Since the company in question has Windows-based applications, this is the most appropriate answer. https://aws.amazon.com/fsx/

D: Elastic File System (EFS). Elastic File System (EFS) is a fully-managed file system that can be accessed by multiple services at a time, including on-premises servers. However, for various reasons, companies sometimes need to run third-party file systems to get vendor-specific features and integrations. The company in the scenario is an example of that. Their applications are using Microsoft technologies, and so Amazon FSx for Windows File Server would be a more appropriate solution. https://aws.amazon.com/fsx/

QUESTION 29

You are configuring the network access control list (NACL) for a web application inside of a public subnet. Users will be visiting the website using HTTP. Which of the following is true?

A: You should allow inbound traffic on Port 80; outbound traffic will automatically be allowed back on Port 80

B: You should allow inbound traffic on Port 80 and outbound traffic on all ports

C: You should allow inbound traffic on Port 80 and outbound traffic on Ports 1024-65535

D: You should allow inbound traffic on Port 443 and outbound traffic on all ports

EXPLANATIONS

A: You should allow inbound traffic on Port 80; outbound traffic will automatically be allowed back on Port 80. It’s true you should allow inbound traffic for HTTP on Port 80. However, NACLs are stateless (unlike stateful security groups), meaning you also need to explicitly allow outbound traffic. And the outbound traffic in this case should be allowed on Ports 1024-65535 to account for ephemeral ports on various types of clients. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

B: You should allow inbound traffic on Port 80 and outbound traffic on all ports. It’s true you should allow inbound traffic for HTTP on Port 80. However, for outbound traffic, you should only allow Ports 1024-65535, to cover ephemeral ports for common clients. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

C: You should allow inbound traffic on Port 80 and outbound traffic on Ports 1024-65535 - CORRECT. You should allow inbound traffic for HTTP on Port 80 and outbound traffic on Ports 1024-65535, to cover ephemeral ports for common clients. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
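
For illustration, a minimal boto3 (Python) sketch of option C; the network ACL ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2")
    NACL_ID = "acl-0123456789abcdef0"

    # Inbound: allow HTTP from anywhere on port 80
    ec2.create_network_acl_entry(
        NetworkAclId=NACL_ID, RuleNumber=100, Protocol="6",  # 6 = TCP
        RuleAction="allow", Egress=False, CidrBlock="0.0.0.0/0",
        PortRange={"From": 80, "To": 80},
    )

    # Outbound: NACLs are stateless, so responses back to clients' ephemeral
    # ports must be allowed explicitly
    ec2.create_network_acl_entry(
        NetworkAclId=NACL_ID, RuleNumber=100, Protocol="6",
        RuleAction="allow", Egress=True, CidrBlock="0.0.0.0/0",
        PortRange={"From": 1024, "To": 65535},
    )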

D: You should allow inbound traffic on Port 443 and outbound traffic on all ports. Port 443 is for HTTPS traffic, not HTTP specified in this question. Also, for outbound traffic, you should only allow Ports 1024-65535, to cover ephemeral ports for common clients. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

QUESTION 30

An entertainment company is currently planning their migration to AWS. Their database is a NoSQL key-value database. Because of their global audience, they need ultra-high performance and scalability, with microsecond latency on database reads. Which solution would meet these needs?

A: The Relational Database Service (RDS)

B: DynamoDB with on-demand scaling enabled

C: Amazon Neptune

D: DynamoDB with DynamoDB Accelerator (DAX)

EXPLANATIONS

A: The Relational Database Service (RDS). The Relational Database Service is for relational databases, but the application in question will be using a key-value (NoSQL) database. In addition, RDS itself wouldn’t be a complete answer, as it is not an actual database, but the overall service to run various database engines (such as SQL Server, Aurora, Oracle, etc.). https://aws.amazon.com/rds/

B: DynamoDB with on-demand scaling enabled. While DynamoDB would fit the requirements for a key-value database, on-demand scaling is not the best answer. On-demand scaling will scale resources up and down based on load. While that is an important feature generally during periods of unpredictable traffic, the question is specifically asking for microsecond read latencies, which can be achieved with DynamoDB Accelerator (DAX). https://aws.amazon.com/dynamodb/dax/

C: Amazon Neptune. Amazon Neptune is a graph database, best suited for things like recommendation engines, social networking and fraud detection. The application in question requires a key-value database with ultra-high performance and microsecond latency reads, which can be achieved with DynamoDB with DynamoDB Accelerator (DAX). https://aws.amazon.com/neptune/

D: DynamoDB with DynamoDB Accelerator (DAX) – CORRECT. DynamoDB is a key-value database that’s massively scalable and highly performant. With the addition of DynamoDB Accelerator (DAX), you can achieve the microsecond latency reads referred to in the question. https://aws.amazon.com/dynamodb/dax/

QUESTION 31

As part of yearly compliance reviews at your company, you’ve been asked to create a disaster recovery plan for your team’s applications. Your applications are used to support legacy systems and processes at the company, and an RTO/RPO of hours is acceptable. What strategy should you use?

A: Multi-Site/Active-Active

B: Warm Standby

C: Pilot Light

D: Backup and Restore

EXPLANATIONS

A: Multi-Site/Active-Active. Multi-Site/Active-Active should be used for mission-critical services that cannot tolerate downtime or data loss. With this strategy, you have fully-functioning environments running in multiple regions at the same time, and likely Route 53 routing traffic across the regions. https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_planning_for_recovery_disaster_recovery.html

B: Warm Standby. With this strategy, your RPO/RTO are measured in minutes, meaning you could lose some data and experience some downtime of services. With this strategy, you have a scaled-down version of a production environment in another region. Upon failover, it will take some time to scale up resources. https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_planning_for_recovery_disaster_recovery.html

C: Pilot Light. With this strategy, your RPO/RTO are measured in tens of minutes, meaning you could lose some data and experience some downtime of services. With this strategy, you will need to start and scale resources in another region after the disaster, which will take time. https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_planning_for_recovery_disaster_recovery.html

D: Backup and Restore - CORRECT. This strategy is the slowest of all, and RPO/RTO could be hours. Here, you’ll need to deploy data and infrastructure to the recovery region after a disaster. https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_planning_for_recovery_disaster_recovery.html

QUESTION 32

An application is running on EC2 instances behind an Auto Scaling Group and Application Load Balancer. Application usage is very consistent Monday through Friday, but then drops significantly on the weekends. The Auto Scaling Group is currently configured with desired instances of 8, but on weekends, you know that many of them sit idle. What Auto Scaling Policy would better ensure that instances are properly utilized?

A: A Scheduled policy

B: A Target Tracking policy

C: A Simple Scaling policy

D: A Dynamic policy

EXPLANATIONS

A: A Scheduled policy - CORRECT. Scheduled policies are good to use when usage is generally known in advance, as is the case in this scenario. https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
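
For illustration, a minimal boto3 (Python) sketch of option A; the group name, schedule and capacities are placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale down for the weekend (Friday 22:00 UTC) ...
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",
        ScheduledActionName="weekend-scale-down",
        Recurrence="0 22 * * 5",   # cron expression, evaluated in UTC by default
        MinSize=2, MaxSize=2, DesiredCapacity=2,
    )

    # ... and back up for the work week (Monday 06:00 UTC)
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",
        ScheduledActionName="weekday-scale-up",
        Recurrence="0 6 * * 1",
        MinSize=8, MaxSize=8, DesiredCapacity=8,
    )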

B: A Target Tracking policy. Target Tracking policies are best when usage is unknown or sporadic. For example, you could set a target of 80% average CPU utilization, and the Auto Scaling group will add or remove instances as needed to keep utilization near that target. Target tracking scaling policies for Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling

C: A Simple Scaling policy. A Simple Scaling policy utilizes CloudWatch alarms, where you set low and high thresholds that should be hit before increasing or decreasing the number of instances. While this approach could work, Simple Scaling is generally not recommended because once a scaling action starts, you must wait for it to finish before you can respond to new alarms. https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html

D: A Dynamic policy. This is a distractor, and too general. There are three types of dynamic policies: Target Tracking, Step Scaling and Simple Scaling, and none of them would be correct based on our scenario. A Scheduled policy makes the most sense here. Target tracking scaling policies for Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling

QUESTION 33

An ocean exploration company spends months at sea collecting massive amounts of data, resulting in many terabytes of data. The data needs some minimal processing, and then needs to be uploaded to AWS for additional processing and eventually machine learning. Internet connections at their docking locations are not reliable. What is the best way to get this data into AWS?

A: Site-to-Site VPN

B: Snowball Edge

C: CloudFront

D: Edge Locations

EXPLANATIONS

A: Site-to-Site VPN. Site-to-Site VPN is used to connect an on-premises location to AWS. The connection runs over the public internet, so it still depends on reliable internet connectivity, which this scenario lacks. https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html

B: Snowball Edge – CORRECT. The Snowball Edge is a rugged device that provides on-board storage and compute power “at the edge” (typically in remote locations without reliable internet). Data can be loaded and processed on the device at sea, and the device is then shipped back to AWS for import. https://docs.aws.amazon.com/snowball/latest/developer-guide/whatisedge.html

C: CloudFront. CloudFront is AWS’s content delivery network (CDN), and its primary goal is to speed up delivery of content to end users, especially media files like videos and images. https://aws.amazon.com/cloudfront/

D: Edge Locations. Edge Locations are used with CloudFront to get content (especially videos and images) to users faster. This answer is also somewhat of a distractor, as the Snowball Edge is also used for “edge computing.” Examples of edge computing include working on a ship, a plane, or in a remote location without reliable connectivity. But this is different from an “Edge Location” used by CloudFront. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowCloudFrontWorks.html

QUESTION 34

As part of a security audit, you are required to go through CloudTrail logs and find instances of users logging in as the root account. You’ve stored the logs in S3. What tool should you use to query the logs?

A: Athena

B: S3 Query

C: Relational Database Service (RDS)

D: AWS Glue

EXPLANATIONS

A: Athena – CORRECT. Athena is used to query data in an S3 bucket using SQL statements. https://aws.amazon.com/athena
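
As a rough sketch of the workflow, the example below starts an Athena query that looks for root-account console sign-ins in CloudTrail logs. It assumes a table named cloudtrail_logs has already been defined over the log bucket; the database, table and results bucket names are placeholders.

```python
# Hypothetical sketch: query CloudTrail logs in S3 with Athena for root
# console sign-ins. Assumes a `cloudtrail_logs` table already exists.
import boto3

athena = boto3.client("athena")

query = """
SELECT eventtime, useridentity.arn, sourceipaddress
FROM cloudtrail_logs
WHERE useridentity.type = 'Root'
  AND eventname = 'ConsoleLogin'
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "security_audit"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/audit/"},
)
print(execution["QueryExecutionId"])
```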

B: S3 Query. This is a distractor. There is not a service called S3 Query.

C: Relational Database Service (RDS). The Relational Database Service (RDS) and its various engines support SQL statements. However, these are databases, and the question is asking about data stored in S3 (object storage, not a database). https://aws.amazon.com/rds

D: AWS Glue. AWS Glue is a fully-managed ETL (extract, transform, load) solution. Using Glue, you can load data from a source like S3, process it in some way, and then ultimately store it in places like RDS, Redshift, S3, etc. While it’s common to use Glue, S3 and Athena in combination, Athena is the service that allows you to write SQL statements against your data. https://aws.amazon.com/glue/

QUESTION 35

A cryptocurrency platform is building a new service that works with real-time streaming data. This data needs to be processed by a Lambda function and then stored in S3, where it can ultimately be used to build machine learning models. What service would be most appropriate in this scenario?

A: Kinesis Data Streams

B: Kinesis Data Firehose

C: DynamoDB Streams

D: Kinesis Data Analytics

EXPLANATIONS

A: Kinesis Data Streams – CORRECT. Kinesis Data Streams is used to ingest, process and store streaming data. Importantly, a consumer (such as a Lambda function) must process the data and deliver it to a destination like S3. This is exactly the situation described in the question, making this the most appropriate service. https://aws.amazon.com/kinesis/data-streams/
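
To illustrate the consumer side, here is a minimal sketch of a Lambda handler that reads records from a Kinesis Data Stream event, does trivial processing, and writes the result to S3. The bucket name, key pattern and payload shape are assumptions, not part of the question.

```python
# Hypothetical sketch: Lambda consumer for a Kinesis Data Stream that decodes
# each record and stores the processed payload in S3.
import base64
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        # Kinesis record data arrives base64-encoded
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # ... minimal processing would happen here ...

        s3.put_object(
            Bucket="crypto-stream-processed",
            Key=f"trades/{record['kinesis']['sequenceNumber']}.json",
            Body=json.dumps(payload).encode("utf-8"),
        )
```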

B: Kinesis Data Firehose. While Kinesis Data Firehose is similar to Kinesis Data Streams, Firehose is more commonly used to load data directly into S3, Redshift or Elasticsearch, and processing the data is optional (whereas with Data Streams you must write a consumer to process and deliver the data). Since the question requires processing with Lambda before storage, Kinesis Data Streams is the more appropriate answer. https://aws.amazon.com/kinesis/data-firehose/

C: DynamoDB Streams. DynamoDB Streams are used to create a stream of observed changes in data, sometimes known as Change Data Capture (CDC). Once Streams are enabled, when you perform an operation on the table, a corresponding event is saved in the Stream, recording what was changed. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html

D: Kinesis Data Analytics. Kinesis Data Analytics is used to analyze streaming data in real-time. An example would be to analyze sales data coming in from stores around the world in order to adjust promotions throughout the day. You can do analytics using SQL queries and/or custom Java applications. https://aws.amazon.com/kinesis/data-analytics/

QUESTION 36

A company wants to establish a dedicated private connection from their on-premises data center to AWS. The connection cannot go over the public internet. Which option should they choose?

A: Direct Connect

B: Site-to-Site VPN

C: PrivateLink

D: Storage Gateway

EXPLANATIONS

A: Direct Connect – CORRECT. Direct Connect offers a dedicated physical connection from an on-premises data center to AWS. It does not go over the public internet. https://aws.amazon.com/directconnect/

B: Site-to-Site VPN. Site-to-Site VPN connections go over the public internet, so would not fulfill the requirements in this scenario. https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html

C: PrivateLink. PrivateLink provides private connectivity to AWS services from within a VPC, through VPC endpoints. It does not provide the dedicated connection from an on-premises data center to AWS that this scenario requires. https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-privatelink.html

D: Storage Gateway. Storage Gateway allows on-premises applications to store data in the cloud, such as for backup purposes. However, Storage Gateway does not establish the underlying connection between AWS and an on-premises data center. To get a private, dedicated connection, you want Direct Connect. AWS Storage Gateway | Amazon Web Services

QUESTION 37

A messaging application running on an EC2 instance needs to access the Simple Queue Service (SQS). How can you do this while ensuring a private connection on the AWS network (i.e., not over the public internet)?

A: VPC Endpoint, type Interface

B: NAT Gateway

C: VPC Endpoint, type Gateway

D: Direct Connect

EXPLANATIONS

A: VPC Endpoint, type Interface - CORRECT. VPC endpoints, powered by PrivateLink, allow you to access other AWS services over a private network (vs. going across the public internet). The “Interface” type supports most AWS services, including SQS; the “Gateway” type exists only for S3 and DynamoDB. https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html
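
As a minimal sketch, the example below creates an Interface endpoint for SQS so instances in the VPC reach the queue without leaving the AWS network. The region, VPC, subnet and security group IDs are placeholders.

```python
# Hypothetical sketch: create an Interface VPC endpoint for SQS.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # resolve the standard SQS endpoint name privately
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```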

B: NAT Gateway. A NAT Gateway allows instances in a private subnet to make outbound connections to the internet (traffic flows through the NAT Gateway to the Internet Gateway), so it does not keep SQS traffic on the private AWS network. The way to achieve what the question is asking for is a VPC Endpoint. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html

C: VPC Endpoint, type Gateway. VPC endpoints allow you to access other AWS services through a private network (vs. going across the public internet). The “Gateway” type is for S3 and DynamoDB. https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html

D: Direct Connect. Direct Connect does not go over the public internet. However, Direct Connect is used to connect an on-premises data center to AWS. The question is asking about connecting one AWS service to another, which can be accomplished using VPC Endpoints. https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

QUESTION 38

A mission-critical application has been having performance issues, and you need to view performance data with a granularity of 1 second. What should you do?

A: Enable CloudTrail logs with detailed monitoring

B: Enable CloudWatch detailed monitoring

C: Enable CloudWatch high resolution metrics

D: Enable CloudTrail basic monitoring

EXPLANATIONS

A: Enable CloudTrail logs with detailed monitoring. CloudTrail captures user activity and API calls on an AWS account, such as user sign-ins. While CloudTrail can integrate with CloudWatch, CloudWatch is the service that captures the performance metrics. Also, there’s no “detailed monitoring” available with CloudTrail. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html

B: Enable CloudWatch detailed monitoring. CloudWatch detailed monitoring only increases how often EC2 publishes metrics, from 5-minute to 1-minute intervals. To drill in with a granularity of 1 second, you need to publish high-resolution custom metrics. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html#high-resolution-metrics

C: Enable CloudWatch high resolution metrics - CORRECT. With CloudWatch high resolution metrics, you can drill into metrics with a granularity of 1 second. With Standard resolution, you can only get granularity of 1 minute. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html#high-resolution-metrics
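
High resolution is requested per metric when the application publishes it. Below is a minimal sketch of publishing a custom high-resolution metric with boto3; the namespace, metric name and value are illustrative assumptions.

```python
# Hypothetical sketch: publish a custom metric at 1-second resolution.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="MissionCriticalApp",
    MetricData=[
        {
            "MetricName": "RequestLatencyMs",
            "Value": 42.0,
            "Unit": "Milliseconds",
            "StorageResolution": 1,  # 1 = high resolution, 60 = standard
        }
    ],
)
```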

D: Enable CloudTrail basic monitoring. This is a distractor. “Basic monitoring” is a feature of CloudWatch, not CloudTrail.

QUESTION 39

A systems administrator recently left the company, and you suspect they are continuing to log in and use AWS resources. Which service would record these sign-in events?

A: CloudTrail

B: CloudWatch

C: VPC Flow Logs

D: IAM Credential Report

EXPLANATIONS

A: CloudTrail – CORRECT. CloudTrail captures user activity and API calls on an AWS account, which would include events such as sign-ins. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
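
As a rough sketch, recent sign-in events can also be pulled programmatically from CloudTrail's event history (management events from roughly the last 90 days); the 30-day window below is an arbitrary example.

```python
# Hypothetical sketch: look up recent console sign-in events in CloudTrail.
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=30),
    EndTime=datetime.utcnow(),
)

for event in events["Events"]:
    print(event["EventTime"], event.get("Username"))
```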

B: CloudWatch. CloudWatch is used for performance monitoring of applications and resources, capturing metrics such as CPU, memory, disk and GPU utilization. While CloudWatch can integrate with CloudTrail (which logs the sign-in events mentioned in the question), CloudTrail is the service that would capture the events. https://aws.amazon.com/cloudwatch/

C: VPC Flow Logs. VPC Flow Logs capture information about the IP traffic going to and from network interfaces in a VPC. They record network activity, not console sign-in events. https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html

D: IAM Credential Report. The IAM Credential Report shows things such as status of credentials, access keys and MFA devices. This report could tell you when the root account last logged in, but it would not log every sign-in event that the question is asking for. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html

QUESTION 40

External users of your reporting application need to download files that are stored in S3. Per the policy of your Security team, you have blocked all public access to the S3 bucket. How can you grant access to the files?

A: Create IAM users for the external users, then add them to a group with permissions on the bucket

B: Create a bucket policy that allows everyone read access to the bucket

C: Your app should generate a presigned URL that provides temporary access to download the file

D: Store the files on a public Elastic File System (EFS) drive instead

EXPLANATIONS

A: Create IAM users for the external users, then add them to a group with permissions on the bucket. While this would technically work, users should not be required to have IAM user accounts to download files from S3. This would also create a lot of administrative overhead to maintain the accounts. Instead, a presigned URL should be used to grant temporary access to the files. https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html

B: Create a bucket policy that allows everyone read access to the bucket. This option would effectively grant public access to the bucket, which is not allowed by the Security team. Instead, a presigned URL should be used to grant temporary access to the files. https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html

C: Your app should generate a presigned URL that provides temporary access to download the file – CORRECT. A presigned URL grants temporary permissions to upload or download files from S3, without users needing to have IAM user accounts. https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html
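
For illustration, here is a minimal sketch of generating a presigned download URL with boto3. The bucket, key and 15-minute expiry are placeholder assumptions; the URL is signed with the credentials of whatever role the application runs under.

```python
# Hypothetical sketch: presigned URL granting temporary download access.
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "reporting-app-files", "Key": "reports/q3-summary.pdf"},
    ExpiresIn=900,  # seconds until the URL stops working
)
print(url)  # hand this URL to the external user
```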

D: Store the files on a public Elastic File System (EFS) drive instead. This is a distractor. While you CAN store files on EFS, EFS is typically used by EC2, containers, Lambda and even on-premises servers as a file system interface. S3, on the other hand, is available through an internet API and can be accessed from anywhere, making it an ideal way to share files with users. To get around the permissions issue, a presigned URL can be generated that grants temporary access to the files. https://aws.amazon.com/efs/faq/

QUESTION 41

A video sharing website uses an RDS MySQL database in one Availability Zone. Most website traffic is from users viewing videos. At times, those users complain about the speed of the application. Also, your solutions architect has asked you to make the application highly available across two regions. What should you do?

A: Enable the multi-AZ feature on the database

B: Create a snapshot of the database and direct read traffic to the snapshot

C: Create a read replica in a second region for the read traffic

D: Convert the database to a global database

EXPLANATIONS

A: Enable the multi-AZ feature on the database. While the multi-AZ feature of RDS would make the database more highly available, it can only span AZs in a single region. You would not be able to go cross-region, as required by the question. Also, because most traffic to the site is read-only, the best way to increase performance would be to use a read replica, where read traffic can be routed to the replica, thereby lightening the load on the primary database. https://aws.amazon.com/rds/features/multi-az/

B: Create a snapshot of the database and direct read traffic to the snapshot. Snapshots are used for backup purposes, not to increase throughput of traffic. Read traffic cannot be directed to a snapshot of a database. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html

C: Create a read replica in a second region for the read traffic - CORRECT. The scenario in the question is the ideal use case for a read replica. By creating a read replica, the users who are only viewing videos (read-only traffic) can be directed to the replica, thereby reducing the load on the primary database. Read replicas can also be cross-region, which would fulfill the requirements in the question. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
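
A minimal sketch of creating a cross-Region read replica is shown below. The call is made in the destination Region and references the source instance by ARN; all identifiers, Regions and the instance class are placeholders.

```python
# Hypothetical sketch: cross-Region read replica of an RDS MySQL instance.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")  # destination Region

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="video-site-replica-eu",
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:db:video-site-primary"
    ),
    SourceRegion="us-east-1",  # boto3 presigns the cross-Region request
    DBInstanceClass="db.r6g.large",
)
```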

D: Convert the database to a global database. This is a distractor. There is no such thing as a global RDS database. However, DynamoDB offers global tables, and Aurora offers global databases. https://aws.amazon.com/dynamodb/global-tables/ and https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html

QUESTION 42

Your team took over a relatively new application that uses S3 to store a large volume of objects that need to be accessed immediately. The previous team was not able to provide a lot of information about how often the data was accessed, but you need to ensure it’s being stored in the most cost-effective way. Which storage option should you use?

A: S3 Glacier Flexible Retrieval

B: S3 Intelligent-Tiering

C: S3 Glacier Deep Archive

D: S3 Standard

EXPLANATIONS

A: S3 Glacier Flexible Retrieval. S3 Glacier Flexible Retrieval should be used for data that rarely needs to be accessed. Retrieval takes anywhere from about a minute (expedited) to 12 hours (bulk), with the standard option taking 3-5 hours. Based on the scenario in this question, the data must be immediately available for access, so this would not be an appropriate option. https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html

B: S3 Intelligent-Tiering - CORRECT. S3 Intelligent-Tiering makes the most sense when data is changing or the access patterns are unknown. AWS will determine the most cost-effective way to store the data based on patterns it detects. https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html
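
One way to adopt Intelligent-Tiering for an existing bucket is a lifecycle rule that transitions objects into that class; the sketch below is a hypothetical example and the bucket name is a placeholder. New uploads could instead set StorageClass="INTELLIGENT_TIERING" directly on put_object.

```python
# Hypothetical sketch: transition all objects in a bucket to Intelligent-Tiering
# as soon as possible (Days=0) via a lifecycle rule.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="legacy-team-objects",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "all-objects-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```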

C: S3 Glacier Deep Archive. S3 Glacier Deep Archive is the most cost-effective storage class; however, retrieval of data takes 12-48 hours so would not fulfill the requirements in the question. https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html

D: S3 Standard. S3 Standard should be used for active data that needs to be retrieved immediately, but it offers no storage-cost savings for data that turns out to be infrequently accessed. Because the access patterns are unknown in this case, S3 Intelligent-Tiering would make the most sense. https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html

QUESTION 43

To improve security posture at your company, your IT Security team is requiring that all EC2 instances and ECR repositories be monitored for potential software vulnerabilities. What AWS service can help you accomplish this goal?

A: AWS Config

B: AWS Shield

C: Amazon Inspector

D: Amazon Detective

EXPLANATIONS

A: AWS Config. AWS Config is used to inventory, record and audit the configuration of your AWS resources. https://aws.amazon.com/config/

B: AWS Shield. AWS Shield is a service used to protect against Distributed Denial of Service (DDoS) attacks. https://aws.amazon.com/shield/

C: Amazon Inspector - CORRECT. Amazon Inspector monitors EC2 instances and ECR repositories for software vulnerabilities and network exposure. https://aws.amazon.com/inspector/
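
As a rough sketch (assuming Inspector is being enabled programmatically for a single account), the call below turns on EC2 and ECR scanning; the account ID is a placeholder.

```python
# Hypothetical sketch: enable Amazon Inspector scanning for EC2 and ECR.
import boto3

inspector = boto3.client("inspector2")

inspector.enable(
    accountIds=["123456789012"],
    resourceTypes=["EC2", "ECR"],
)
```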

D: Amazon Detective. Amazon Detective organizes information from other sources, including GuardDuty, and provides visualizations, context and detailed findings to help you identify the root cause of a security incident. https://aws.amazon.com/detective/

QUESTION 44

A data processing company runs their applications on-premises, but would like the redundancy of the cloud. They need a way to store files in AWS that supports Network File System (NFS) and Server Message Block (SMB) protocols. Which service should they use?

A: Storage Gateway, and specifically Volume Gateway in Cached mode

B: S3

C: Storage Gateway, and specifically File Gateway

D: Snowball

EXPLANATIONS

A: Storage Gateway, and specifically Volume Gateway in Cached mode. In this scenario, it is true that you should use Storage Gateway. However, Volume Gateway in Cached mode stores all data in S3 and caches frequently-used data on-premises. You would use this to save costs on primary storage and minimize the need to scale storage on-premises (not primarily as a file-sharing strategy). It exposes iSCSI block volumes and does not support NFS or SMB. https://docs.aws.amazon.com/storagegateway/latest/vgw/WhatIsStorageGateway.html

B: S3. S3 provides object storage in AWS, accessed through an HTTP API; it does not natively support NFS or SMB. To expose S3-backed storage to on-premises applications over those protocols, you would put a File Gateway in front of it. https://aws.amazon.com/s3/

C: Storage Gateway, and specifically File Gateway - CORRECT. File Gateway provides file-based access to data stored in AWS (backed by S3), supports both NFS and SMB, and meets all of the requirements of this question. https://docs.aws.amazon.com/filegateway/latest/files3/what-is-file-s3.html
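
As a rough sketch of the final step (after the File Gateway appliance has been deployed and activated), the example below creates an NFS file share backed by an S3 bucket. The gateway ARN, IAM role, bucket and CIDR range are placeholder assumptions; an SMB share would use create_smb_file_share instead.

```python
# Hypothetical sketch: create an NFS file share on an activated File Gateway.
import uuid
import boto3

storagegateway = boto3.client("storagegateway")

share = storagegateway.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    Role="arn:aws:iam::123456789012:role/FileGatewayS3Access",
    LocationARN="arn:aws:s3:::dataproc-shared-files",
    ClientList=["10.0.0.0/16"],  # on-premises CIDR allowed to mount the share
)
print(share["FileShareARN"])
```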

D: Snowball. The Snow family of products can be used to transfer large amounts of data securely from on-premises to AWS. However, this tends to be more of a one-time transfer, such as for migration of data. File Gateway is a better fit for the scenario described in the question. https://aws.amazon.com/snowball/