AWS Certified Solutions Architect: Zero to Mastery - part 1

Practice Questions: Answers & Explanations

MODULE 2 – AWS Cloud Practitioner Refresher

QUESTION 1

  1. Your application is mission-critical and must be highly available, even in the event of a large earthquake. What actions can you take to ensure high availability?

A: Deploy your application to multiple regions

B: Use economies of scale

C: Implement elasticity across compute and networking resources

D: Deploy your application to two Availability Zones (AZs)

Explanations:

A: Deploy your application to multiple regions – CORRECT. High availability is the ability of a system to continue functioning, even if some components fail. In the case of an environmental disaster (such as a flood, earthquake, fire, etc.), the best way to ensure your application can continue functioning is to have it deployed to a second geographic region (i.e., in a different part of the world). Upon failure of the first region, the application can fail over to the second region and continue operating. Global Infrastructure Regions & AZs

B: Use economies of scale. Economies of scale refer to the ability of AWS to purchase things more cheaply than an individual organization can, thus passing on the savings to customers.

C: Implement elasticity across compute and networking resources. Elasticity is the ability to adapt to workload changes (both up and down), usually in a dynamic, short-term way. In the event of an environmental disaster, it is true that you may need to spin up/down resources, but the important point here is that those resources would need to be in a different region than where the disaster occurred.

D: Deploy your application to two Availability Zones (AZs). While deploying to multiple Availability Zones (AZs) is generally a best practice for achieving high availability, these data centers are in the same geographic region. In the event of a large environmental disaster, all data centers in that region could be taken down. Thus, deploying to multiple regions is a better way to ensure the application can continue functioning in the event of a disaster.

QUESTION 2

  1. Which of the following describes the concept of elasticity in AWS?

A: The ability to direct traffic across multiple Availability Zones

B: Only paying for what you use

C: The ability to rapidly develop, test and launch applications

D: The ability to automatically scale resources up and down as needed

Explanations:

A: The ability to direct traffic across multiple Availability Zones. Directing traffic across multiple Availability Zones would be a way to implement the concept of high availability.

B: Only paying for what you use. This is an example of “pay-as-you-go,” which means you only pay for what you need in AWS, and only when you use it.

C: The ability to rapidly develop, test and launch applications. Agility is the ability to rapidly develop, test and launch applications to deliver business value.

D: The ability to automatically scale resources up and down as needed – CORRECT. Elasticity is the ability to adapt to workload changes (both up and down), usually in a dynamic, short-term way. Elasticity - AWS Well-Architected Framework

Practice Questions: Answers & Explanations

MODULE 3 – Authentication and Authorization – Identity and Access Management (IAM)

QUESTION 1

  1. Regarding Identity and Access Management (IAM), which of the following is considered a best practice?

A: Create a new IAM user account for every application

B: Attach policies to individual IAM users

C: Avoid using the root user account for everyday work

D: Grant administrator privileges when you need to test something

Explanations:

A: Create a new IAM user account for every application. To grant permissions to applications, a best practice is to create an IAM role, rather than an IAM user. An IAM role allows an AWS service to temporarily assume permissions that the role has, and is the recommended way to grant permissions between services. IAM roles - AWS Identity and Access Management

B: Attach policies to individual IAM users. It is a best practice to place users into user groups, and then apply permissions at the group level. This ensures that everyone in the group has the same permissions, and it also reduces administrative overhead. IAM user groups - AWS Identity and Access Management

C: Avoid using the root user account for everyday work – CORRECT. Because the root account is all-powerful, you should avoid using it for everyday work. It is easy to take destructive actions even if you don’t mean to. It’s also hard to revoke or reduce permissions if needed. Instead, create an IAM user account with the least amount of privileges needed to accomplish your job, and then use that account instead of the root account. AWS account root user - AWS Identity and Access Management

D: Grant administrator privileges when you need to test something. The principle of least privilege says that users should only be granted the minimum permissions they need to accomplish a task. IAM allows you to be very granular with permissions. Rather than making everyone an administrator, determine what exactly they need to do, and then only grant those permissions.

QUESTION 2

  1. Your image processing application runs on an EC2 instance, and it needs to upload photos to an S3 bucket. From an IAM perspective, how should you set up permissions to accomplish this?

A: Create a role that has permissions to write to the S3 bucket, then assign the role to the EC2 instance

B: Create an IAM user for the application and store the credentials in the application’s code

C: Attach a policy to the EC2 instance that grants permissions to S3

D: Create a user group that includes the EC2 instance

Explanations:

A: Create a role that has permissions to write to the S3 bucket, then assign the role to the EC2 instance – CORRECT. An IAM role allows an AWS service to temporarily assume permissions that the role has, and is the recommended way to grant permissions between services. IAM roles - AWS Identity and Access Management

B: Create an IAM user for the application and store the credentials in the application’s code. IAM users should generally only be created for humans that need to work with AWS. And storing credentials in application code is not a good idea. An IAM role is the preferred way to handle this scenario. IAM roles - AWS Identity and Access Management

C: Attach a policy to the EC2 instance that grants permissions to S3. Policies DO grant permissions, but in this case, the policy should be attached to the role (which is then assumed by the EC2 instance) and not the EC2 instance. Policies and permissions in IAM - AWS Identity and Access Management

D: Create a user group that includes the EC2 instance. This is a distractor. User groups are a collection of users, so this answer would not make sense in this context. An IAM role is the preferred way to handle this scenario. IAM roles - AWS Identity and Access Management
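
For reference, here is a minimal sketch of answer A using boto3 (the role, policy, profile and bucket names, and the instance ID, are illustrative placeholders, not values from the question):

import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy: allow the EC2 service to assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: allow uploads to the (hypothetical) photo bucket
upload_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-photo-bucket/*",
    }],
}

iam.create_role(RoleName="PhotoUploadRole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(RoleName="PhotoUploadRole",
                    PolicyName="AllowPhotoUploads",
                    PolicyDocument=json.dumps(upload_policy))

# EC2 picks up the role through an instance profile
iam.create_instance_profile(InstanceProfileName="PhotoUploadProfile")
iam.add_role_to_instance_profile(InstanceProfileName="PhotoUploadProfile",
                                 RoleName="PhotoUploadRole")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "PhotoUploadProfile"},
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)

The application code on the instance then uses the SDK with no stored credentials; temporary credentials are picked up automatically from the instance profile.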

QUESTION 3

  1. Your team is developing a web application with an accompanying mobile app. Users at your company will have IAM user accounts, but the target users for the applications are external to your company. You need a way to centrally authenticate and authorize users. Which of the following should you propose?

A: Set up Managed Microsoft Active Directory

B: Use Amazon Cognito

C: Leverage the Security Token Service (STS)

D: During registration for the application, set up IAM user accounts for each user

Explanations:

A: Set up Managed Microsoft Active Directory. Managed Microsoft Active Directory would make sense in an environment where users are already in Active Directory, usually in a corporate setting. However, in this scenario, users are external, and will not have AD accounts. The recommended way to handle authentication and authorization for web and mobile apps is Amazon Cognito. AWS Managed Microsoft AD - AWS Directory Service

B: Use Amazon Cognito – CORRECT. The recommended way to handle authentication and authorization for web and mobile apps is Amazon Cognito. Cognito allows you to integrate with external identity providers (such as Google or Facebook), and handles a lot of the behind-the-scenes work for you. Customer Identity and Access Management – Amazon Cognito – Amazon Web Services

C: Leverage the Security Token Service (STS). STS enables you to request temporary, limited-privilege credentials for IAM or federated (external) users. While you CAN make this work for external users, STS is not an authentication provider for long-term application use. Rather, it’s meant to grant temporary access to accomplish something in AWS (such as gaining elevated privileges to complete a specific task). Temporary security credentials in IAM - AWS Identity and Access Management

D: During registration for the application, set up IAM user accounts for each user. While you COULD create a new IAM user account for each application user, this is not efficient or necessary. Amazon Cognito allows you to leverage existing identity providers (such as Google or Facebook) so users can seamlessly log in without needing to create another account. And from an administrative perspective, you wouldn’t want to manage thousands or even millions of IAM user accounts for your application’s user base. Customer Identity and Access Management – Amazon Cognito – Amazon Web Services

QUESTION 4

  1. Your company is in a budget crunch, and management is keeping a close eye on AWS spend. Only users in the Engineering account should be allowed to create new EC2 instances. Users in the HR and Legal accounts should not be allowed to create new EC2 instances. How can you enforce this rule with the least amount of administrative overhead?

A: Create an IAM “deny” policy for EC2, and attach it to each user in the HR and Legal accounts

B: Revoke IAM roles from the HR and Legal accounts

C: Require MFA in order to create a new EC2 instance

D: Apply a Service Control Policy to the HR and Legal accounts denying permissions to create EC2 instances

Explanations:

A: Create an IAM “deny” policy for EC2, and attach it to each user in the HR and Legal accounts. While this approach would technically work, it would require a lot of administrative overhead to set up and maintain. A better approach is to use a Service Control Policy, which can be applied at the account level for HR and Legal. Service control policies (SCPs) - AWS Organizations

B: Revoke IAM roles from the HR and Legal accounts. This is a distractor and is also incomplete. It is not clear what the IAM roles allow/deny. Also, roles cannot be attached to an AWS account.

C: Require MFA in order to create a new EC2 instance. Requiring MFA to create an EC2 instance will not fulfill the requirements. It would only enforce that someone must enter an MFA code before creating an instance.

D: Apply a Service Control Policy to the HR and Legal accounts denying permissions to create EC2 instances - CORRECT. Service Control Policies allow you to manage permissions in an AWS organization. This reduces the administrative overhead of managing privileges for an entire account. Service control policies (SCPs) - AWS Organizations
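
As an illustration only (the account IDs and names are placeholders), an SCP like the following could be created and attached to the HR and Legal accounts with boto3:

import json
import boto3

org = boto3.client("organizations")

# SCP: deny launching new EC2 instances in any account it is attached to
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="DenyEC2RunInstances",
    Description="Prevent launching EC2 instances",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the policy to the HR and Legal accounts (placeholder account IDs)
for account_id in ["111111111111", "222222222222"]:
    org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                      TargetId=account_id)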

Practice Questions: Answers & Explanations

MODULE 4 – Compute

QUESTION 1

  1. There are 20 developers on a team that focuses on web development. They recently hired several new team members, and have spent a lot of time building new development environments for the new joiners. How could they speed up this process?

A: Create a new instance type through the EC2 Console

B: Create an Amazon Machine Image (AMI) based on one of the existing dev instances, and then use that as the template for future machines

C: Create a second EBS volume and attach it to the new development machines

D: Create a script that will run and install the necessary software and configurations for the new machines

Explanations:

A: Create a new instance type through the EC2 Console. Instance types are created by AWS, and include things like General Purpose, Compute Optimized, etc.: https://aws.amazon.com/ec2/instance-types. To fulfill the requirements in the scenario, you could create a new Amazon Machine Image (AMI) with the necessary setup for development, and then create new EC2 instances based on that AMI.

B: Create an Amazon Machine Image (AMI) based on one of the existing dev instances, and then use that as the template for future machines - CORRECT. This is the primary use case for creating your own AMI: it allows you to quickly and easily create new EC2 instances in a repeatable way. Amazon Machine Images (AMI) - Amazon Elastic Compute Cloud

C: Create a second EBS volume and attach it to the new development machines. This is a distractor, and is also incomplete. While you could attach a second EBS volume, just attaching it does not mean it has the necessary software or configuration to do the development work mentioned in the scenario. Also, a root volume (with operating system, etc.) is likely what you need, not a second volume that is usually used for data. Finally, an EBS volume can only be attached to one instance at a time, so this wouldn’t help an entire team.

D: Create a script that will run and install the necessary software and configurations for the new machines. While creating a script is a good idea for repeatability, this answer is not complete. If you only have a script, that means you would need to manually run it for every new instance created. Whereas if you create an AMI that has all the required setup already, you can simply base all new machines on that AMI to get what you need. Amazon Machine Images (AMI) - Amazon Elastic Compute Cloud
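
A rough boto3 sketch of answer B, creating an AMI from an existing dev instance and launching machines from it (the instance ID, names, and instance type are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Create an AMI from the existing, fully configured dev instance
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder: existing dev machine
    Name="web-dev-environment-v1",
    Description="Standard web development environment",
)

# Launch new dev machines from that AMI for the new joiners
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.medium",            # assumption: any suitable type
    MinCount=3,
    MaxCount=3,
)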

QUESTION 2

  1. Due to strict regulatory requirements, a company needs their resources to be physically isolated from other AWS customers. Which EC2 purchasing option should they use?

A: Capacity Reservations

B: Dedicated Hosts

C: Spot Instances

D: On-Demand Instances

Explanations:

A: Capacity Reservations. Capacity reservations allow you to reserve capacity for instances in a specific availability zone. However, this option does not physically isolate your resources from other AWS customers. Instance purchasing options - Amazon Elastic Compute Cloud

B: Dedicated Hosts – CORRECT. A Dedicated Host is an entire physical server that is used for only your resources. This meets the requirements given in the question. Instance purchasing options - Amazon Elastic Compute Cloud

C: Spot Instances. With a Spot Instance, you can bid (specify the price you want to pay) on unused EC2 capacity. The primary reason for using Spot is to save money. However, this option does not physically isolate your resources from other AWS customers. Instance purchasing options - Amazon Elastic Compute Cloud

D: On-Demand Instances. With On-Demand Instances, you pay by the second for the instances you launch. This pricing model does not offer the physical isolation specified in the question. Instance purchasing options - Amazon Elastic Compute Cloud

QUESTION 3

  1. A read-heavy database application is suffering from performance issues. Users complain that the application just hangs, and that queries sometimes take minutes to complete. You’re evaluating whether the application is on the correct type of EBS volume. What type of volume would be most appropriate in this scenario?

A: Throughput Optimized HDD

B: Provisioned IOPS SSD

C: Magnetic

D: General Purpose SSD

Explanations:

A: Throughput Optimized HDD. A Throughput Optimized HDD makes sense where you need to read large “chunks” of files at once (think of the “large bites of chocolate” analogy from the course). Common use cases include Big Data/Data Warehousing. By contrast, database applications usually do better with IOPS-optimized volumes, where they can take “small bites,” but need to take them more often (the “read-heavy database” mentioned in the question). Amazon EBS volume types - Amazon Elastic Compute Cloud

B: Provisioned IOPS SSD - CORRECT. A read-heavy database is a good fit for a Provisioned IOPS SSD. This will give you high input/output (“lots of small bites” from the chocolate analogy in the course). Amazon EBS volume types - Amazon Elastic Compute Cloud

C: Magnetic. Magnetic volumes are backed by magnetic drives, and are best suited to workloads where data is infrequently accessed. Based on the scenario in question, this would not be an appropriate solution. Amazon EBS volume types - Amazon Elastic Compute Cloud

D: General Purpose SSD. This type of volume is suitable for a variety of workloads, and might even work for a read-heavy database application. However, based on the issues described in the question, it seems that the application requires higher IOPS, so evaluating the Provisioned IOPS SSD would make the most sense. Amazon EBS volume types - Amazon Elastic Compute Cloud

QUESTION 4

  1. You have an application that stores a lot of information in memory. You need to ensure this memory isn’t lost when an EC2 instance reboots. What do you do?

A: Enable termination protection on the instance

B: Enable stop protection on the instance

C: Create a Lambda function to save the contents of memory to disk when the instance stops

D: Enable hibernation on the instance

Explanations:

A: Enable termination protection on the instance. Termination protection helps avoid accidental termination of an instance. However, it has no effect on the contents of memory on the instance.

B: Enable stop protection on the instance. Stop protection helps avoid accidental stopping of an instance. However, it has no effect on the contents of memory on the instance.

C: Create a Lambda function to save the contents of memory to disk when the instance stops. Even if you triggered a Lambda function from an instance state-change event, Lambda has no access to the instance’s memory, so it couldn’t save it to disk. Also, generally, this answer is more complicated than it needs to be. To achieve the requirements of the scenario, you should enable hibernation on the instance, which will take care of writing the contents of memory to disk. Overview of hibernation - Amazon Elastic Compute Cloud

D: Enable hibernation on the instance - CORRECT. By enabling hibernation on the instance, the contents of memory will be written to disk when the instance is stopped/rebooted. Upon rebooting, the data from disk will be reconstituted into memory. Hibernation can only be enabled when you first create the instance, and not afterwards. Overview of hibernation - Amazon Elastic Compute Cloud
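
A minimal boto3 sketch of answer D, assuming a supported instance type and an encrypted root volume large enough to hold RAM (both are hibernation prerequisites); the AMI ID and sizes are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Hibernation must be enabled at launch time; it cannot be added later
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"Encrypted": True, "VolumeSize": 50},  # root volume must be encrypted
    }],
)
instance_id = response["Instances"][0]["InstanceId"]

# Later: stop the instance with hibernation so RAM is written to the root volume
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)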

QUESTION 5

  1. Your team has an application with Lambda functions that call a service to do data processing. It can sometimes take 30 minutes for data processing to complete before a “completed” message is returned. Users are complaining about errors in the application. What is the most likely cause?

A: The Lambda functions have not been allocated enough memory

B: The Lambda storage capacity has been exhausted

C: The Lambda functions are timing out after 15 minutes

D: There are too many Lambda functions running concurrently

Explanations:

A: The Lambda functions have not been allocated enough memory. While it’s true that the memory for Lambda functions is configurable, this is not likely the issue based on the scenario. The maximum timeout for a Lambda function is 15 minutes, and the question states that requests can take 30 minutes. Hence, the more likely issue is related to timeouts. Lambda quotas - AWS Lambda

B: The Lambda storage capacity has been exhausted. While it’s true that there is a soft limit of 75 GB for uploaded functions, this is not likely the issue based on the scenario. The maximum timeout for a Lambda function is 15 minutes, and the question states that requests can take 30 minutes. Hence, the more likely issue is related to timeouts. Lambda quotas - AWS Lambda

C: The Lambda functions are timing out after 15 minutes - CORRECT. Lambda functions time out after a maximum of 900 seconds, or 15 minutes. Based on the scenario, the data processing can take up to 30 minutes to complete, so this is the most likely reason for the errors. Lambda quotas - AWS Lambda

D: There are too many Lambda functions running concurrently. Lambda limits concurrent executions to 1,000 by default (though this can be increased). However, based on the scenario, this is not the most likely issue. The maximum timeout for a Lambda function is 15 minutes, and the question states that requests can take 30 minutes. Hence, it’s more likely that the errors are related to a timeout issue. Lambda quotas - AWS Lambda
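
For illustration (the function name is a placeholder), this boto3 snippet checks a function’s timeout and raises it to the 15-minute ceiling; anything longer than that has to be redesigned, for example by splitting the work or handing it off to another service:

import boto3

lam = boto3.client("lambda")

config = lam.get_function_configuration(FunctionName="data-processing-fn")
print("Current timeout (seconds):", config["Timeout"])

# 900 seconds (15 minutes) is the hard maximum for a Lambda function
lam.update_function_configuration(FunctionName="data-processing-fn", Timeout=900)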

QUESTION 6

  1. Your team has been tasked with reducing your AWS spend on compute resources. You’ve identified several interruptible workloads that are good candidates for cost savings. What EC2 pricing model would make the most sense in this scenario?

A: Spot Instances

B: Reserved Instances

C: On-Demand Instances

D: Dedicated Hosts

Explanations:

A: Spot Instances – CORRECT. With a Spot Instance, you can bid (specify the price you want to pay) on unused EC2 capacity. This can provide savings of up to 90% over On-Demand Instances. With this model, instances can be shut down at any time. However, because the identified workloads are interruptible, this would still be a valid solution. Instance purchasing options - Amazon Elastic Compute Cloud

B: Reserved Instances. Reserved Instances can provide savings of up to 70%. This solution makes sense for long-running workloads, such as databases, and requires a 1- or 3-year commitment. While this option does provide cost savings, Spot Instances will save even more, and can be used with the interruptible workloads mentioned in the question. Instance purchasing options - Amazon Elastic Compute Cloud

C: On-Demand Instances. With On-Demand Instances, you pay by the second for the instances you launch. This option would not provide the cost savings mentioned in the question. Instance purchasing options - Amazon Elastic Compute Cloud

D: Dedicated Hosts. A Dedicated Host is an entire physical server that is used for only your resources. This option would not provide the cost savings mentioned in the question. Instance purchasing options - Amazon Elastic Compute Cloud

QUESTION 7

  1. You’re working on an application that uses Elastic Container Service (ECS) tasks to access a Simple Queue Service (SQS) queue. What should you do to ensure the task can access the queue?

A: Create an ECS task execution role and associate it with the ECS task

B: Create a config file inside of the container with credentials needed to access SQS

C: Create an IAM role for SQS that grants permissions from ECS, then associate the role with SQS

D: Create an ECS task role with the appropriate permissions for SQS, and then associate it with the ECS task

Explanations:

A: Create an ECS task execution role and associate it with the ECS task. A task execution role grants the ECS container and Fargate agents permission to make AWS API calls on your behalf. The execution role is commonly used to pull images from the Elastic Container Registry (ECR), fetch parameters from SSM Parameter Store, and write logs to CloudWatch. To access other services, such as SQS, you instead want an ECS task role. Task IAM role - Amazon Elastic Container Service

B: Create a config file inside of the container with credentials needed to access SQS. This is a distractor and is also incomplete. In general, putting credentials in a file is not a best practice. But even so, simply having the file on the container would not be enough; you would need to implement code to access and use the credentials. To access other services, such as SQS, you instead want an ECS task role. Task IAM role - Amazon Elastic Container Service

C: Create an IAM role for SQS that grants permissions from ECS, then associate the role with SQS. This is a distractor. It is not possible to associate a role to SQS. For a container task to access SQS, you would want to associate an ECS task role that has appropriate SQS permissions. Task IAM role - Amazon Elastic Container Service

D: Create an ECS task role with the appropriate permissions for SQS, and then associate it with the ECS task - CORRECT. An ECS task role allows your containers to assume an IAM role without having to use credentials inside a container (similar to how an EC2 instance can assume a role). It is used when your container needs access to other services like S3, SNS, SQS, etc. Task IAM role - Amazon Elastic Container Service
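
A trimmed-down boto3 sketch of answer D, registering a task definition that keeps the two roles separate (the role ARNs, names, and image are placeholders; both roles are assumed to already exist):

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="queue-worker",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    # Task role: assumed by the application code inside the container to call SQS
    taskRoleArn="arn:aws:iam::111111111111:role/QueueAccessTaskRole",
    # Execution role: used by the agent to pull images and write logs
    executionRoleArn="arn:aws:iam::111111111111:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "worker",
        "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/worker:latest",
        "essential": True,
    }],
)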

Practice Questions: Answers & Explanations

MODULE 5 – Elastic Load Balancing and Auto Scaling

QUESTION 1

  1. A new website needs to run on three EC2 instances behind an Application Load Balancer. As the architect on the team, what should you recommend to make the website highly available?

A: Create an Auto Scaling Group that spans Availability Zones, set the desired capacity to 3 instances, then create an Application Load Balancer that points to the ASG

B: Replace the Application Load Balancer with a Network Load Balancer

C: When creating the Application Load Balancer, add it to three Availability Zones

D: Create an Auto Scaling Group that spins up new instances in a second Availability Zone when the first Availability Zone fails

Explanations:

A: Create an Auto Scaling Group that spans Availability Zones, set the desired capacity to 3 instances, then create an Application Load Balancer that points to the ASG – CORRECT. This solution will automatically create 3 EC2 instances across Availability Zones, and traffic will be routed between the instances. This means if an AZ goes down, the website can continue to function because traffic will be routed to the healthy instances in the healthy AZ. Amazon EC2 Auto Scaling benefits - Amazon EC2 Auto Scaling

B: Replace the Application Load Balancer with a Network Load Balancer. Because the question is asking about a website, an Application Load Balancer is an appropriate answer because it’s used to handle HTTP traffic. Generally, a Network Load Balancer is used when you need ultra-high performance or ultra-low latency, and those are not requirements based on this question. What is a Network Load Balancer? - Elastic Load Balancing

C: When creating the Application Load Balancer, add it to three Availability Zones. When you create an Application Load Balancer, you do enable multiple Availability Zones, but that only determines where the load balancer nodes are placed and which AZs it can route to; it does not put your EC2 instances in multiple AZs. The key to this question is that the instances themselves need to span multiple AZs, and that function is handled by the Auto Scaling Group as it creates new instances. Auto Scaling groups - Amazon EC2 Auto Scaling

D: Create an Auto Scaling Group that spins up new instances in a second Availability Zone when the first Availability Zone fails. This answer is incomplete. While an Auto Scaling Group can create instances in a second Availability Zone, it needs to be coupled with the Application Load Balancer to distribute traffic across the instances. Use Elastic Load Balancing to distribute traffic across the instances in your Auto Scaling group - Amazon EC2 Auto Scaling
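
A condensed boto3 sketch of answer A (the subnet IDs, names, and target group ARN are placeholders; the launch template and the ALB’s target group are assumed to already exist):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="website-asg",
    LaunchTemplate={"LaunchTemplateName": "website-template", "Version": "$Latest"},
    MinSize=3,
    MaxSize=6,
    DesiredCapacity=3,
    # Subnets in three different Availability Zones
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",
    # Register instances with the ALB's target group so traffic reaches them
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/website-tg/0123456789abcdef"],
    HealthCheckType="ELB",
)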

QUESTION 2

  1. A data processing application runs on EC2 instances behind an Application Load Balancer. For large processing jobs, the requests can take 10 minutes to complete. Users are receiving HTTP 5xx errors when these large jobs run. What is the likely cause of these errors?

A: The cooldown period for the Auto Scaling Group is too short

B: The connection draining is set to the default of 300 seconds

C: The deregistration delay is set to the default value of 300 seconds

D: The warmup period for the Auto Scaling Group is too short

Explanations:

A: The cooldown period for the Auto Scaling Group is too short. The cooldown period prevents an Auto Scaling Group from launching or terminating instances until effects from previous actions are known. While this could result in errors if the period is too short, the question does not indicate that an Auto Scaling Group is being used in this case (only an Application Load Balancer). Scaling cooldowns for Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling

B: The connection draining is set to the default of 300 seconds. Connection draining is used by the Classic Load Balancer. This particular application is using an Application Load Balancer, where the equivalent setting is “deregistration delay.” Configure connection draining for your Classic Load Balancer - Elastic Load Balancing

C: The deregistration delay is set to the default value of 300 seconds - CORRECT. Deregistration delay is the amount of time in-flight requests have to complete before a connection is closed, and it is set on the target group of an Application Load Balancer. The default value is 300 seconds (5 minutes), so it’s possible that long-running requests of 10 minutes could error-out, resulting in the HTTP 5xx errors. https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#deregistration-delay

D: The warmup period for the Auto Scaling Group is too short. The warmup period prevents newly launched instances from contributing usage data to the group’s metrics until they’re fully warmed up. If metrics are reported too soon, this could cause an Auto Scaling Group to take action based on skewed metrics. This likely wouldn’t result in the HTTP 5xx errors mentioned in the question. However, the bigger problem with this answer is that the question does not indicate an Auto Scaling Group is being used at all (only an Application Load Balancer). Set the default instance warmup for an Auto Scaling group - Amazon EC2 Auto Scaling
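
For illustration, the deregistration delay from answer C can be raised above the 10-minute job length with a single boto3 call (the target group ARN and the new value are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/processing-tg/0123456789abcdef",
    Attributes=[{
        # Give in-flight requests up to 11 minutes to finish before the connection is closed
        "Key": "deregistration_delay.timeout_seconds",
        "Value": "660",
    }],
)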

QUESTION 3

  1. An application is running on EC2 instances behind an Auto Scaling Group and Application Load Balancer. Application usage is very sporadic. The desired number of instances is set to 10; however, your team discovers that a lot of the instances are not being fully utilized. What Auto Scaling Policy would better ensure that instances are properly utilized?

A: A Scheduled policy

B: A Target Tracking policy

C: A Simple Scaling policy

D: A Dynamic policy

Explanations:

A: A Scheduled policy. Because application usage is very sporadic, a Scheduled policy would not be the best approach here. Scheduled policies are good to use when usage is generally known in advance, such as Monday-Friday load, or the launch of a new product. Scheduled scaling for Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling

B: A Target Tracking policy - CORRECT. Because application usage is very sporadic, a Target Tracking policy will help ensure instances are properly utilized. For example, you could set a target of 80% CPU utilization, and the group would then add or remove instances as needed to keep average CPU utilization near that target, so capacity follows actual usage. Target tracking scaling policies for Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling

C: A Simple Scaling policy. A Simple Scaling policy utilizes CloudWatch alarms, where you set low and high thresholds that should be hit before increasing or decreasing the number of instances. While this approach could work, Simple Scaling is generally not recommended because once a scaling action starts, you must wait for it to finish before you can respond to new alarms. Step and simple scaling policies for Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling

D: A Dynamic policy. This is a distractor, and too general. There are three types of dynamic policies: Target Tracking, Step Scaling and Simple Scaling. While it’s a correct answer based on the broad category, the Target Tracking policy is the best answer based on the scenario. Target tracking scaling policies for Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling
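
A small boto3 sketch of attaching the Target Tracking policy from answer B to the group (the group and policy names are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",          # placeholder group name
    PolicyName="keep-cpu-near-80",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        # Add or remove instances to hold average CPU around 80%
        "TargetValue": 80.0,
    },
)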

QUESTION 4

  1. A popular cryptocurrency website uses a DynamoDB database. The site has variable traffic, with unpredictable spikes, and users sometimes report the site is very slow. How can you improve performance on the database with the least amount of effort?

A: Implement DynamoDB on-demand scaling

B: Implement DynamoDB auto scaling

C: Place the underlying instance in an Auto Scaling Group and set the Target Tracking metric to 80%

D: Create a network load balancer to distribute load across DynamoDB tables

Explanations:

A: Implement DynamoDB on-demand scaling – CORRECT. With DynamoDB on-demand scaling, AWS handles the work of scaling up and down based on demand (similar to a “serverless” model like Lambda). This option would require the least amount of effort. Also, on-demand scaling is ideal for load that is variable, which is true for this scenario. https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/

B: Implement DynamoDB auto scaling. With DynamoDB auto scaling (sometimes called “provisioned” auto scaling), you must specify the target, minimum and maximum capacities for the database. This is extra overhead for you. In addition, based on the scenario, it would be hard to know these values, given the variability in traffic. A better answer in this case is to use on-demand scaling, where AWS does the scaling for you based on demand. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html

C: Place the underlying instance in an Auto Scaling Group and set the Target Tracking metric to 80%. DynamoDB is a fully-managed database, meaning you don’t get access to the underlying instance(s) that run it. Therefore, it would not be possible to manually place it in an Auto Scaling Group.

D: Create a network load balancer to distribute load across DynamoDB tables. This answer is a distractor. Network load balancers are used to distribute load to EC2 instances, not DynamoDB tables.
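
For illustration (the table name is a placeholder), switching an existing table to on-demand capacity with boto3 is a single call:

import boto3

dynamodb = boto3.client("dynamodb")

# PAY_PER_REQUEST is the on-demand capacity mode: no capacity planning needed
dynamodb.update_table(
    TableName="crypto-prices",
    BillingMode="PAY_PER_REQUEST",
)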

Practice Questions: Answers & Explanations

MODULE 6 – Networking

QUESTION 1

  1. Your IT Security team actively tracks IP addresses of known hackers, and has asked you to block a specific IP address. How could you go about this?

A: IAM

B: Security Group

C: VPC Endpoint, powered by PrivateLink

D: Network Access Control List (ACL)

Explanations:

A: IAM. Identity and Access Management (IAM) is the service used to set up and manage users, user groups, roles and permissions. https://aws.amazon.com/iam/

B: Security Group. A security group is a firewall that controls traffic in and out of an EC2 instance, but you can only use “allow” rules, not “deny” rules. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

C: VPC Endpoint, powered by PrivateLink. VPC endpoints allow you to access other AWS services through a private network (vs. going across the public internet). https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html

D: Network Access Control List (ACL) – CORRECT. A network ACL is a firewall that controls traffic in and out of a subnet. With this option, you’re able to use “deny” rules. Blocking a specific IP address is a common use case for NACLs. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
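
As a sketch of answer D (the network ACL ID and the blocked address are placeholders), a deny entry can be added with boto3; NACL rules are evaluated in rule-number order, lowest first:

import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder NACL ID
    RuleNumber=90,                 # evaluated before later allow rules
    Protocol="-1",                 # all protocols
    RuleAction="deny",
    Egress=False,                  # inbound rule
    CidrBlock="203.0.113.45/32",   # the specific IP to block (example address)
)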

QUESTION 2

  1. For regulatory reasons, your company needs to establish a dedicated private connection from your on-premises data center to AWS. The connection cannot go over the public internet. Which option should you choose?

A: Direct Connect

B: Site-to-Site VPN

C: PrivateLink

D: Storage Gateway

Explanations:

A: Direct Connect – CORRECT. Direct Connect offers a dedicated physical connection from an on-premises data center to AWS. It does not go over the public internet. https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

B: Site-to-Site VPN. Site-to-Site VPN connections go over the public internet, so would not fulfill the requirements in this scenario. https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html

C: PrivateLink. PrivateLink is used to connect AWS resources in one Virtual Private Network (VPC) to another VPC. It can’t be used to connect to on-premises resources. https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html

D: Storage Gateway. Storage Gateway allows you to store on-premises resources in the cloud, such as for backup purposes. However, Storage Gateway does not establish the actual connection between AWS and on-premises data centers. To get a private connection, you want to use Direct Connect. https://aws.amazon.com/storagegateway/

QUESTION 3

  1. Which AWS service allows customers to create their own private network within AWS?

A: CloudFront

B: Direct Connect

C: Route 53

D: Virtual Private Cloud (VPC)

Explanations:

A: CloudFront. CloudFront is AWS’s content delivery network (CDN), and its primary goal is to speed up delivery of content to end users by caching, especially media files like videos and images. https://aws.amazon.com/cloudfront/

B: Direct Connect. Direct Connect offers a dedicated physical connection from an on-premises data center to AWS. This does not represent a private network within AWS. https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

C: Route 53. Route 53 is AWS’s managed DNS service. It can be used to purchase/register domain names, as well as handle DNS routing (such as routing IP address 12.34.56.78 to mywebsite.com). https://aws.amazon.com/route53/

D: Virtual Private Cloud (VPC) – CORRECT. A virtual private cloud (VPC) is a private network within AWS, used to isolate a customer’s resources. https://aws.amazon.com/vpc

QUESTION 4

  1. You need to manage multiple AWS VPCs, as well as on-premises networks. Which service allows you to do this?

A: AWS Transit Gateway

B: Storage Gateway

C: Internet Gateway

D: Route 53

Explanations:

A: AWS Transit Gateway – CORRECT. AWS Transit Gateway helps you build applications that span multiple AWS VPCs and on-premises networks from a single place. https://aws.amazon.com/transit-gateway/

B: Storage Gateway. Storage Gateway allows you to store on-premises resources in the cloud, such as for backup purposes. https://aws.amazon.com/storagegateway/

C: Internet Gateway. An internet gateway is what allows resources in a public subnet to access the internet. This is done by creating a route (defined in a route table) from the public subnet to the internet gateway. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html

D: Route 53. Route 53 is AWS’s managed DNS service. It can be used to purchase/register domain names, as well as handle DNS routing (such as routing IP address 12.34.56.78 to mywebsite.com). https://aws.amazon.com/route53/

QUESTION 5

  1. Which of the following is used to control the flow of traffic at the EC2 instance level?

A: IAM

B: Security Group

C: VPC Endpoint, powered by PrivateLink

D: Network Access Control List (ACL)

Explanations:

A: IAM. Identity and Access Management (IAM) is the service used to set up and manage users, user groups, roles and permissions. https://aws.amazon.com/iam/

B: Security Group - CORRECT. A security group is a firewall that controls traffic in and out of an EC2 instance. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

C: VPC Endpoint, powered by PrivateLink. VPC endpoints allow you to access other AWS services through a private network (vs. going across the public internet). https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html

D: Network Access Control List (ACL). A network ACL is a firewall that controls traffic in and out of a subnet. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

QUESTION 6

  1. Your application running on an EC2 instance needs to access files stored in an S3 bucket. How can you do this while ensuring a private connection on the AWS network (i.e., not over the public internet)?

A: VPC Endpoint, type Interface

B: NAT Gateway

C: VPC Endpoint, type Gateway

D: Direct Connect

Explanations:

A: VPC Endpoint, type Interface. VPC endpoints allow you to access other AWS services through a private network (vs. going across the public internet). However, the “Interface” type is for AWS services other than S3 and DynamoDB. For the scenario in this question, the “Gateway” type is what you want. https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html

B: NAT Gateway. A NAT Gateway allows you to connect a private subnet to the internet (by connecting the NAT Gateway to the Internet Gateway). The way to achieve what the question is asking for is to use a VPC Endpoint. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html

C: VPC Endpoint, type Gateway – CORRECT. VPC endpoints allow you to access other AWS services through a private network (vs. going across the public internet). The “Gateway” type is for S3 and DynamoDB. https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html

D: Direct Connect. Direct Connect does not go over the public internet. However, Direct Connect is used to connect an on-premises data center to AWS. The question is asking about connecting one AWS service to another, which can be accomplished using VPC Endpoints. https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html
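
A minimal boto3 sketch of answer C (the VPC ID, route table ID, and region are placeholders), creating a Gateway endpoint for S3:

import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",     # assumes the us-east-1 region
    # The route table(s) of the subnets that need private access to S3
    RouteTableIds=["rtb-0123456789abcdef0"],
)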

QUESTION 7

  1. Resources in a private subnet need to access the internet for periodic software updates. How can you accomplish this?

A: Internet Gateway

B: NAT Gateway

C: Site-to-Site VPN

D: VPC Endpoints

Explanations:

A: Internet Gateway. It is true that an Internet Gateway enables internet access, but this is only part of the solution. When working with a private subnet (i.e., one that doesn’t have a direct route to the Internet Gateway), you will need to use a NAT Gateway as an in-between point to then direct traffic to the Internet Gateway. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html

B: NAT Gateway – CORRECT. To enable internet access from a private subnet, you should create a NAT Gateway in a public subnet, add a route from the private subnet’s route table to the NAT Gateway, and make sure the public subnet routes to the Internet Gateway (which is attached at the VPC level). https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html

C: Site-to-Site VPN. Site-to-Site VPN is used to connect an on-premises location to AWS, so would not apply in this scenario. https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html

D: VPC Endpoints. VPC endpoints allow you to access other AWS services through a private AWS network. These are not used to access the public internet. https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html
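
A rough boto3 sketch of the routing pieces for answer B (all IDs are placeholders); the public subnet is assumed to already route 0.0.0.0/0 to the Internet Gateway:

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT Gateway in a public subnet
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",   # placeholder: a public subnet
    AllocationId=eip["AllocationId"],
)

# Route internet-bound traffic from the private subnet through the NAT Gateway
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # placeholder: private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)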

QUESTION 8

  1. A network administrator needs to review traffic activity across the network, including VPCs, subnets and security groups. Which service can be used to get this information?

A: VPC Endpoints

B: CloudTrail

C: Network Logs

D: VPC Flow Logs

Explanations:

A: VPC Endpoints. VPC endpoints allow you to access other AWS services through a private AWS network. https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html

B: CloudTrail. CloudTrail captures user activity and API calls on an AWS account, which would include events such as sign-ins or creation of new resources. https://aws.amazon.com/cloudtrail/

C: Network Logs. This is a distractor. There isn’t a service called Network Logs. VPC Flow Logs would give you the information mentioned in the question.

D: VPC Flow Logs – CORRECT. VPC Flow Logs capture the activity going to and from network interfaces in a VPC. https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html

Practice Questions: Answers & Explanations

MODULE 7 – Storage

QUESTION 1

  1. An on-premises application stores photos and documents on a local file server. As a backup strategy, you would like to also store these files in AWS. Which AWS service enables you to leverage cloud storage from an on-premises location in this way?

A: Storage Gateway, Volumes in Cached mode

B: S3

C: Storage Gateway, Volumes in Stored mode

D: Snowball

Explanations:

A: Storage Gateway, Volumes in Cached mode. In this scenario, it is true that you should use Storage Gateway. However, with Volumes in Cached mode, all of the data is stored and accessed from S3. Only frequently-accessed data is cached locally. When using Storage Gateway as a backup strategy (as indicated in this question), Volumes should be used in Stored mode (where the data is stored/accessed locally, and only sent to AWS for backup purposes). https://docs.aws.amazon.com/storagegateway/latest/vgw/WhatIsStorageGateway.html

B: S3. S3 provides object storage in AWS. You CAN store things from on-premises into S3, but you will need to use Storage Gateway to make that connection first. https://aws.amazon.com/s3/

C: Storage Gateway, Volumes in Stored mode – CORRECT. Storage Gateway allows you to store on-premises resources in the cloud, such as for backup purposes. Because this question specifies wanting to use AWS for backup purposes, you would use Volumes in Stored mode. https://docs.aws.amazon.com/storagegateway/latest/vgw/WhatIsStorageGateway.html

D: Snowball. The Snow family of products can be used to transfer large amounts of data securely from on-premises to AWS. However, this tends to be more of a one-time transfer, such as for migration of data. Storage Gateway is a better fit for the storage/backup use case described in the question. https://aws.amazon.com/snowball/

QUESTION 2

  1. For compliance reasons, you need to ensure objects in an S3 bucket can’t be deleted or overwritten. Which S3 feature should you use?

A: Cross-region replication

B: Block all public access

C: Static website hosting

D: Object lock

Explanations:

A: Cross-region replication. Cross-region replication will make a copy of an S3 object and place it in another region. While this can be used to ensure backup copies are available, it will not prevent someone from deleting or overwriting objects. Object lock would accomplish the goal described in the question. https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication.html

B: Block all public access. Blocking public access will prevent the general public from accessing objects. However, it does not prevent deletion or overwriting of objects by internal users or roles. Object lock would accomplish the goal described in the question. https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html

C: Static website hosting. It is possible to host a static website using an S3 bucket. However, this is not related to deleting or overwriting objects in the bucket. Object lock would accomplish the goal described in the question. https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html

D: Object lock – CORRECT. Under the Advanced Settings for an S3 bucket, you can enable Object Lock (which also requires versioning to be enabled), preventing objects from being deleted or overwritten. https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
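
A boto3 sketch of answer D, assuming a new bucket (Object Lock must be turned on at bucket creation); the bucket name and retention settings are placeholders:

import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled when the bucket is created
s3.create_bucket(
    Bucket="compliance-records-example",
    ObjectLockEnabledForBucket=True,
)

# Default retention: objects cannot be deleted or overwritten during the retention period
s3.put_object_lock_configuration(
    Bucket="compliance-records-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 1}},  # example period
    },
)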

QUESTION 3

  1. As part of a restructuring effort at your company, there has been an increased focused on reducing AWS costs. The Legal team is required to store data for up to 7 years, but it is rarely accessed after the first year. When they do need the data, retrieval time of 4 hours is sufficient. Which data storage solution would be most cost-effective for this scenario?

A: S3 Glacier Flexible Retrieval

B: S3 Intelligent-Tiering

C: S3 Glacier Deep Archive

D: S3 Standard

Explanations:

A: S3 Glacier Flexible Retrieval – CORRECT. S3 Glacier Flexible Retrieval should be used to store data that rarely needs to be accessed. Retrieval times range from minutes to 12 hours depending on the retrieval option, which meets the 4-hour requirement stated in the question. https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html

B: S3 Intelligent-Tiering. S3 Intelligent-Tiering makes the most sense when data is changing or the access patterns are unknown. In this scenario, we know the data is rarely accessed, and that it is okay to have a retrieval time of up to 4 hours. S3 Glacier Flexible Retrieval fits these requirements and is also more cost effective than S3 Intelligent-Tiering. https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html

C: S3 Glacier Deep Archive. S3 Glacier Deep Archive is the most cost-effective storage class; however, retrieval of data takes 12-48 hours so would not fulfill the requirements in the question. https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html

D: S3 Standard. S3 Standard storage should be used for active data storage/access, when files need to be retrieved immediately. It does not offer discounts for longer retrieval times. S3 Glacier Flexible Retrieval would make the most sense for the scenario described. https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html
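
As a rough sketch of answer A (the bucket name, prefix, and exact day counts are placeholders), a lifecycle rule could move the Legal team’s objects to Glacier Flexible Retrieval after the first year and delete them once the 7-year retention ends:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="legal-records-example",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-legal-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "legal/"},
            # Rarely accessed after the first year: transition to Glacier Flexible Retrieval
            "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
            # Keep for roughly 7 years, then expire
            "Expiration": {"Days": 7 * 365},
        }],
    },
)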

QUESTION 4

  1. Which AWS service can be used to store files that will be accessed by Lambda functions, multiple EC2 instances and on-premises servers?

A: Elastic Block Store (EBS)

B: S3 Glacier Deep Archive

C: Elastic File System (EFS)

D: DynamoDB

Explanations:

A: Elastic Block Store (EBS). Elastic Block Store (EBS) volumes can be thought of as hard drives for a single EC2 instance. While they can store files, they cannot be attached to (or accessed by) more than one instance so would not meet the requirements of this question. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html

B: S3 Glacier Deep Archive. S3 Glacier Deep Archive should be used to store data that rarely needs to be accessed. The default retrieval time is 12 hours. This is a solution for archiving, not for active file storage/access. https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html

C: Elastic File System (EFS) – CORRECT. Elastic File System (EFS) is a file system that can be accessed by multiple services at a time, including on-premises servers. https://aws.amazon.com/efs/

D: DynamoDB. DynamoDB stores data in key-value pairs in database tables. This is not the solution for storing/sharing files across multiple services. https://aws.amazon.com/dynamodb/

QUESTION 5

  1. Your application is attempting to upload a 2 GB file to S3 using a PUT request, but you keep receiving an error. What is a possible solution?

A: Copy the file to Elastic File System (EFS) instead

B: Use S3 Multipart Upload

C: Encrypt the file on the client side before uploading

D: Ensure that you have S3 Object Lock permissions in Compliance mode

Explanations:

A: Copy the file to Elastic File System (EFS) instead. This is a distractor. While you could copy the file to EFS, this presumably does not meet the requirements of the application, which should use S3. https://aws.amazon.com/efs/

B: Use S3 Multipart Upload – CORRECT. While the official limit for a single S3 PUT request is 5 GB, for any objects larger than 100 MB, the recommendation is to use Multipart Upload. Given the size of the file in question, this is likely the best solution to the problem. https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html

C: Encrypt the file on the client side before uploading. While it is possible to create S3 bucket policies to check for encryption upon upload, the question makes no mention of encryption requirements for the application. It is more likely that the error is due to the size of the file. Using S3 Multipart Upload for files larger than 100 MB is recommended. https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html

D: Ensure that you have S3 Object Lock permissions in Compliance mode. This is a distractor. Object Lock is used to prevent deleting or overwriting objects. In addition, if an object is protected with Object Lock in Compliance mode, nobody (not even root) will have access to delete or overwrite it, so it wouldn’t make sense to check permissions for that. https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
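
For illustration (the bucket and file names are placeholders), boto3’s managed transfer uses multipart upload automatically once a file crosses the configured threshold:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above 100 MB are split into parts and uploaded in parallel
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=100 * 1024 * 1024)

s3.upload_file(
    Filename="large-video.mp4",        # placeholder: the 2 GB local file
    Bucket="example-upload-bucket",
    Key="uploads/large-video.mp4",
    Config=config,
)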

Practice Questions: Answers & Explanations

MODULE 8 – Databases

QUESTION 1

  1. A gaming company is developing a new application with a key-value database on the backend. They need to ensure ultra-high performance and scalability, with microsecond latency on database reads. Which solution would meet these needs?

A: The Relational Database Service (RDS)

B: DynamoDB with on-demand scaling enabled

C: Amazon Neptune

D: DynamoDB with DynamoDB Accelerator (DAX)

Explanations:

A: The Relational Database Service (RDS). The Relational Database Service is for relational databases, but the application in question will be using a key-value (NoSQL) database. In addition, RDS itself wouldn’t be a complete answer, as it is not an actual database, but the overall service to run various database engines (such as SQL Server, Aurora, Oracle, etc.). https://aws.amazon.com/rds/

B: DynamoDB with on-demand scaling enabled. While DynamoDB would fit the requirements for a key-value database, on-demand scaling is not the best answer. On-demand scaling will scale resources up and down based on load. While that is an important feature generally during periods of unpredictable traffic, the question is specifically asking for microsecond read latencies, which can be achieved with DynamoDB Accelerator (DAX). https://aws.amazon.com/dynamodb/dax/

C: Amazon Neptune. Amazon Neptune is a graph database, best suited for things like recommendation engines, social networking and fraud detection. The application in question requires a key-value database with ultra-high performance and microsecond latency reads, which can be achieved with DynamoDB with DynamoDB Accelerator (DAX). https://aws.amazon.com/neptune/

D: DynamoDB with DynamoDB Accelerator (DAX) – CORRECT. DynamoDB is a key-value database that’s massively scalable and highly performant. With the addition of DynamoDB Accelerator (DAX), you can achieve the microsecond latency reads referred to in the question. https://aws.amazon.com/dynamodb/dax/

QUESTION 2

  1. A social media company is moving their workloads from on-premises to AWS. They need a graph database to support their applications. What AWS service should they use?

A: Relational Database Service (RDS)

B: DynamoDB

C: ElastiCache

D: Neptune

Explanations:

A: Relational Database Service (RDS). The Relational Database Service (RDS) is for relational databases, not graph databases as mentioned in the question. Neptune is a fully-managed graph database available from AWS. Social networking apps are a common use case for this type of database. https://aws.amazon.com/rds/

B: DynamoDB. DynamoDB is a non-relational or NoSQL database, where data is stored in key-value pairs. Neptune is a fully-managed graph database available from AWS. Social networking apps are a common use case for this type of database. https://aws.amazon.com/dynamodb

C: ElastiCache. ElastiCache is an in-memory database offered by AWS, used for caching and session management. Neptune is a fully-managed graph database available from AWS. Social networking apps are a common use case for this type of database. https://aws.amazon.com/elasticache/

D: Neptune – CORRECT. Neptune is a fully-managed graph database available from AWS. Social networking apps are a common use case for this type of database. https://aws.amazon.com/neptune/

QUESTION 3

  1. Due to regulatory requirements, your application needs to replicate data into a second AWS region. Which service supports this scenario?

A: Direct Connect

B: Elastic Block Store (EBS)

C: Relational Database Service (RDS)

D: Storage Gateway

Explanations:

A: Direct Connect. Direct Connect offers a dedicated physical connection from an on-premises data center to AWS. You can create a Direct Connect connection in any region, but Direct Connect is not used to replicate data across regions. https://aws.amazon.com/directconnect

B: Elastic Block Store (EBS). EBS replicates data across Availability Zones in a single region, but does not support replication across regions. https://aws.amazon.com/ebs

C: Relational Database Service (RDS) – CORRECT. RDS supports read replicas, where a copy of your database is used for read requests. This functionality is supported across regions so this would fulfill the requirements in the question. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.XRgn.html

D: Storage Gateway. Storage Gateway allows you to store on-premises resources in the cloud, such as for backup purposes. However, Storage Gateway does not directly support cross-region replication. https://aws.amazon.com/storagegateway
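
A minimal boto3 sketch of answer C (the identifiers, ARN, and regions are placeholders), creating a cross-region read replica:

import boto3

# Run the call in the destination (second) region
rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-eu",
    # The source is referenced by ARN when replicating across regions
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111111111111:db:app-db-primary",
    SourceRegion="us-east-1",
)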

QUESTION 4

  1. Your company runs a popular gaming web application that you’re migrating from on-premises to AWS. The app requires fast data access, and is read-intensive. You need a service that can handle distributed session management for the game. What AWS service should you use?

A: An Application Load Balancer with Sticky Sessions enabled on the target group

B: ElastiCache

C: DynamoDB

D: Redshift

Explanations:

A: An Application Load Balancer with Sticky Sessions enabled on the target group. Sticky Sessions on a load balancer’s target group direct traffic from a given user to the same instance for a period of time. While Sticky Sessions are commonly used for session management, they only pin an individual user to a single instance. For distributed session management, you should use ElastiCache (with either its Redis or Memcached engine). https://aws.amazon.com/caching/session-management/

B: ElastiCache – CORRECT. ElastiCache is an in-memory database offered by AWS, used for caching and session management. It is ideally suited for read-intensive web applications and those requiring fast data access. Distributed session management is also an important feature of ElastiCache (see the sketch after these explanations). https://aws.amazon.com/elasticache/

C: DynamoDB. While DynamoDB is a highly-performant key-value database solution, an in-memory database or caching solution will deliver faster performance, and will also allow distributed session management. Options for distributed session management include ElastiCache, Redis and Memcached. https://aws.amazon.com/caching/session-management/

D: Redshift. Redshift is a data warehousing service, used to store and report on massive amounts of data. Use cases include analytics, log analysis and being able to combine multiple data sources. Options for distributed session management include ElastiCache, Redis and Memcached. https://aws.amazon.com/caching/session-management/
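
For illustration, here is a minimal sketch of distributed session management against an ElastiCache for Redis endpoint, using the redis-py client. The endpoint, session ID and payload are placeholder assumptions.

```python
# Minimal sketch: storing a web session in ElastiCache for Redis so any
# application server behind the load balancer can read it.
import json
import redis

# Hypothetical ElastiCache Redis endpoint.
cache = redis.Redis(host="my-sessions.abc123.0001.use1.cache.amazonaws.com", port=6379)

# Write the session with a 30-minute TTL, keyed by session ID.
session_id = "session:8f14e45f"
cache.setex(session_id, 1800, json.dumps({"user_id": 42, "cart": ["sword", "shield"]}))

# Later, on a different server, the same session can be fetched by ID.
session = json.loads(cache.get(session_id))
```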

Practice Questions: Answers & Explanations

MODULE 9 – Data Migration and Transfer

QUESTION 1

  1. A large company needs to migrate massive amounts of data from an on-premises data center to AWS. Given the amount of data, it is impractical to do the transfer over the public internet. What is an alternative way to transfer this data securely?

A: Site-to-Site VPN

B: Snowball

C: CloudFront

D: Edge Locations

Explanations:

A: Site-to-Site VPN. Site-to-Site VPN is used to connect an on-premises location to AWS. This connection goes over the public internet so would not be the correct answer for this scenario. https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html

B: Snowball – CORRECT. The Snow family of products are physical devices that can be used to transfer large amounts of data securely from on-premises to AWS. Depending on the amount of data, a Snowcone or Snowmobile may be more appropriate, but given that Snowball is the only “Snow” answer provided, this would be the most appropriate answer. https://aws.amazon.com/snowball/

C: CloudFront. CloudFront is AWS’s content delivery network (CDN), and its primary goal is to speed up delivery of content to end users, especially media files like videos and images. https://aws.amazon.com/cloudfront/

D: Edge Locations. Edge Locations are used with CloudFront to get content (especially videos and images) to users faster. This answer is also somewhat of a distractor, as the Snow family of products can be used for “edge computing.” Examples of edge computing include working on a ship, on a plane or in a remote location without reliable connectivity. But this is different from an “Edge Location” used by CloudFront. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowCloudFrontWorks.html

QUESTION 2

  1. Your company is using the Database Migration Service (DMS) to migrate from a PostgreSQL database on-premises to an Aurora database in AWS. How will you ensure the database schemas are compatible?

A: Use a DataSync agent on-premises to do a schema mapping

B: Use RDS to create a new empty schema in AWS

C: Create a new migration job and choose Aurora as the target

D: Use the Schema Conversion Tool with the Database Migration Service (DMS)

Explanations:

A: Use a DataSync agent on-premises to do a schema mapping. Although DataSync is used to move data, it is not specific to databases. Instead, you should use the Database Migration Service (DMS), and because the databases have different engines, the Schema Conversion Tool is also required. https://aws.amazon.com/datasync/

B: Use RDS to create a new empty schema in AWS. It is true that you can pre-create a schema in AWS for the target database. However, you would want to create a schema for Aurora, not an “empty schema.” The scenario in question is exactly why the Database Migration Service (DMS) Schema Conversion Tool was created, and that would be the correct answer. https://aws.amazon.com/dms/schema-conversion-tool/

C: Create a new migration job and choose Aurora as the target. You can use the Database Migration Service (DMS) to create a new migration job. However, the key to ensuring the migration works between different database types (heterogeneous databases) is to use the Schema Conversion Tool. https://aws.amazon.com/dms/schema-conversion-tool/

D: Use the Schema Conversion Tool with the Database Migration Service (DMS) – CORRECT. When migrating databases of different (heterogeneous) engine types (such as PostgreSQL to Aurora), the Schema Conversion Tool is required to convert the source database schema to the target database schema. https://aws.amazon.com/dms/schema-conversion-tool/
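
As a hedged sketch (not from the course): once the Schema Conversion Tool has converted the PostgreSQL schema for Aurora, a DMS replication task moves the data. All ARNs, names and mappings below are placeholder assumptions.

```python
# Minimal sketch: creating a DMS replication task after the schema has been
# converted with the Schema Conversion Tool. ARNs are placeholders.
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

dms.create_replication_task(
    ReplicationTaskIdentifier="postgres-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # initial copy plus ongoing replication
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```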

QUESTION 3

  1. Your company recently acquired a small startup business. Their users need to upload files to S3, and they must use FTP. What AWS service can fulfill this requirement?

A: Site-to-Site VPN

B: Storage Gateway

C: AWS Backup

D: AWS Transfer Family

Explanations:

A: Site-to-Site VPN. Site-to-Site VPN is used to connect an on-premises location to AWS. To enable users to upload to S3 via FTP, you should use the AWS Transfer Family. https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html

B: Storage Gateway. Storage Gateway allows you to store on-premises resources in the cloud, such as for backup purposes. While it’s possible to use S3 with the Storage Gateway, FTP is not a supported protocol. https://aws.amazon.com/storagegateway/

C: AWS Backup. AWS Backup allows you to set up and schedule regular backup jobs that include other services such as EBS, DynamoDB and RDS. This is not specifically for file uploads to S3, and does not support FTP. https://aws.amazon.com/backup

D: AWS Transfer Family – CORRECT. AWS Transfer Family is a fully-managed service that supports the FTP protocol to upload to S3 and EFS. https://aws.amazon.com/aws-transfer-family/
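
For illustration only, the sketch below creates a Transfer Family server that exposes S3 over FTP. Note that FTP requires a VPC-hosted endpoint and a custom identity provider; the VPC, subnet and Lambda ARN are placeholder assumptions.

```python
# Minimal sketch: an AWS Transfer Family server serving S3 over FTP.
import boto3

transfer = boto3.client("transfer", region_name="us-east-1")

transfer.create_server(
    Protocols=["FTP"],
    Domain="S3",                       # files land in S3 buckets
    EndpointType="VPC",                # FTP is only supported on VPC endpoints
    EndpointDetails={"VpcId": "vpc-0abc1234", "SubnetIds": ["subnet-0abc1234"]},
    IdentityProviderType="AWS_LAMBDA", # FTP requires a custom identity provider
    IdentityProviderDetails={"Function": "arn:aws:lambda:us-east-1:123456789012:function:ftp-auth"},
)
```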

Practice Questions: Answers & Explanations

MODULE 10 – Analytics

QUESTION 1

  1. Which AWS service can be used to create dashboards and visualize data?

A: CloudTrail

B: S3

C: Athena

D: QuickSight

Explanations:

A: CloudTrail. CloudTrail captures user activity and API calls on an AWS account. It is not used to create dashboards or visualize data. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html

B: S3. S3 is used for object (file) storage. While other services can pull data from S3 and build visualizations from it, S3 itself does not offer visualization and dashboard capabilities. https://aws.amazon.com/s3/

C: Athena. Athena is used to query data in an S3 bucket using SQL statements; it does not provide dashboards or visualizations. https://aws.amazon.com/athena

D: QuickSight – CORRECT. QuickSight allows you to get “sight” (visualization) of your data using dashboards, reports and other visualizations. https://aws.amazon.com/quicksight/

QUESTION 2

  1. You store a large amount of sales data in an S3 bucket. Which service can be used to query this data using SQL statements?

A: Athena

B: S3 Query

C: Relational Database Service (RDS)

D: AWS Glue

Explanations:

A: Athena – CORRECT. Athena is used to query data in an S3 bucket using SQL statements (see the sketch after these explanations). https://aws.amazon.com/athena

B: S3 Query. This is a distractor. There is no AWS service called S3 Query.

C: Relational Database Service (RDS). The Relational Database Service (RDS) and its various engines support SQL statements. However, these are databases, and the question is asking about data stored in S3 (object storage, not a database). https://aws.amazon.com/rds

D: AWS Glue. AWS Glue is a fully-managed ETL (extract, transform, load) solution. Using Glue, you can load data from a source like S3, process it in some way, and then ultimately store it in places like RDS, Redshift, S3, etc. While it’s common to use Glue, S3 and Athena in combination, Athena is the service that allows you to write SQL statements against your data. https://aws.amazon.com/glue/
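
As a rough sketch (not from the course), the boto3 call below runs a SQL query against sales data in S3 through Athena. The database, table and result bucket names are placeholder assumptions.

```python
# Minimal sketch: querying S3 data with Athena using SQL.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) AS total FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# The query runs asynchronously; poll get_query_execution with this ID, then
# read the results from the output location in S3.
query_id = response["QueryExecutionId"]
```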

Practice Questions: Answers & Explanations

MODULE 11 – Monitoring

QUESTION 1

  1. A developer from your team recently left the company, and you’ve noticed suspicious activity with various AWS resources since they left. Which service would record account logins by this user?

A: CloudTrail

B: CloudWatch

C: VPC Flow Logs

D: IAM Credential Report

Explanations:

A: CloudTrail – CORRECT. CloudTrail captures user activity and API calls on an AWS account, which would include events such as sign-ins (see the sketch after these explanations). https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html

B: CloudWatch. CloudWatch is used for performance monitoring of applications and resources, such as CPU, memory, disk and GPU utilization and so on. While CloudWatch can integrate with CloudTrail (which logs the sign-in events mentioned in the question), CloudTrail is the service that would capture the events. https://aws.amazon.com/cloudwatch/

C: VPC Flow Logs. VPC Flow Logs capture IP traffic going to and from network interfaces in a VPC (they can be enabled at the VPC, subnet or network interface level). They would not capture account logins. https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html

D: IAM Credential Report. The IAM Credential Report shows things such as the status of credentials, access keys and MFA devices for each user. It can tell you when a user’s password or access key was last used, but it is not a log of every sign-in event, which is what the question asks for. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html
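
For illustration only, the sketch below searches recent CloudTrail events for console sign-ins by a specific user. The user name is a placeholder assumption.

```python
# Minimal sketch: looking up console sign-in events for one user in CloudTrail.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "former-developer"}],
    MaxResults=50,
)

# Print the time of each ConsoleLogin event recorded for that user.
for event in events["Events"]:
    if event["EventName"] == "ConsoleLogin":
        print(event["EventTime"], event["Username"])
```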

QUESTION 2

  1. You’re planning to deploy a new web application, and it’s critical that you can view performance metrics within 1 minute to ensure things are working correctly. What should you do?

A: View CloudTrail logs right after deploying the application

B: Enable CloudWatch detailed monitoring

C: Enable CloudWatch high resolution metrics

D: Turn on CloudTrail basic monitoring

Explanations:

A: View CloudTrail logs right after deploying the application. CloudTrail captures user activity and API calls on an AWS account, such as user sign-ins. While CloudTrail can integrate with CloudWatch, CloudWatch is the service that captures the performance metrics. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html

B: Enable CloudWatch detailed monitoring - CORRECT. With CloudWatch detailed monitoring enabled, CloudWatch publishes metrics at 1-minute intervals. Detailed monitoring has to be enabled manually, and it incurs an additional cost (vs. basic monitoring, which is free but only publishes metrics at 5-minute intervals). See the sketch after these explanations. https://aws.amazon.com/cloudwatch/

C: Enable CloudWatch high resolution metrics. With CloudWatch high resolution metrics, you can drill into metrics with a granularity of 1 second. However, in order to have the metrics published within 1 minute, you will need to enable detailed monitoring (the default is basic monitoring, which means the first metrics won’t be published for 5 minutes). https://aws.amazon.com/cloudwatch/

D: Turn on CloudTrail basic monitoring. This is a distractor. “Basic monitoring” is a feature of CloudWatch, not CloudTrail.
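
As a minimal, hedged sketch (the instance ID is a placeholder), detailed monitoring can be enabled on an EC2 instance with a single boto3 call:

```python
# Minimal sketch: enabling detailed (1-minute) CloudWatch monitoring for an
# EC2 instance. Basic monitoring (5-minute metrics) is the default.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.monitor_instances(InstanceIds=["i-0abc1234def567890"])
```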

QUESTION 3

  1. Your team has been having problems with performance of EC2 instances. To help diagnose the issues, you frequently look at CPU utilization metrics and then take action. However, you would prefer to be notified when there’s a problem, rather than having to look at the metrics. What should you do?

A: Create a CloudTrail alert to send a text message when logs indicate problems with the CPU

B: Create a CloudWatch alarm that sends an email when CPUUtilization is greater than 80%

C: Create an SNS topic to monitor CPU usage and send emails when it reaches 80%

D: Write a script to set the CPUUtilization CloudWatch metric into an alarm state at 80%

Explanations:

A: Create a CloudTrail alert to send a text message when logs indicate problems with the CPU. There are no CloudTrail alerts so this is a distractor. While you can integrate CloudTrail and CloudWatch, the scenario in the question is the ideal use case for a CloudWatch alarm. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html

B: Create a CloudWatch alarm that sends an email when CPUUtilization is greater than 80% – CORRECT. The scenario described in the question is the ideal use case for a CloudWatch alarm. You can create an alarm that triggers when the CPUUtilization metric hits 80%, and then use the Simple Notification Service (SNS) to send an email (see the sketch after these explanations). https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html

C: Create an SNS topic to monitor CPU usage and send emails when it reaches 80%. It is true that the Simple Notification Service (SNS) is the service that will send the email. However, CloudWatch is the service that does the monitoring of CPU usage (not SNS), and then it will trigger SNS to send the email. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html

D: Write a script to set the CPUUtilization CloudWatch metric into an alarm state at 80%. There are a couple of problems with this answer. First, you cannot set a CloudWatch metric to an alarm state; you set a CloudWatch ALARM to an alarm state. Second, the primary reason for manually setting an alarm state is to test an alarm as you’re creating it. Once the alarm is active, it triggers automatically when thresholds are met (such as high CPUUtilization); it does not need to be triggered manually.
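
For illustration, the sketch below creates such an alarm, assuming an SNS topic with an email subscription already exists; the instance ID and topic ARN are placeholders.

```python
# Minimal sketch: alarm on CPUUtilization > 80% that notifies an SNS topic.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0abc1234def567890"}],
    Statistic="Average",
    Period=300,                      # evaluate 5-minute averages
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # topic with an email subscription
)
```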

Practice Questions: Answers & Explanations

MODULE 12 – Security and Compliance

QUESTION 1

  1. Which AWS service can help protect against a distributed denial of service (DDoS) attack?

A: AWS CloudHSM

B: AWS Detective

C: Amazon Artifact

D: AWS Shield

Explanations:

A: AWS CloudHSM. CloudHSM provides dedicated, single-tenant hardware security modules (HSMs) for generating and managing encryption keys, which are used to encrypt (scramble) your data. It does not protect against DDoS attacks. https://docs.aws.amazon.com/crypto/latest/userguide/awscryp-service-hsm.html

B: AWS Detective. Amazon Detective organizes information from other sources, including GuardDuty, and provides visualizations, context and detailed findings to help you identify the root cause of a security incident. https://aws.amazon.com/detective/

C: Amazon Artifact. AWS Artifact is a free, self-service portal used to access reports and agreements related to AWS’s compliance and security. https://aws.amazon.com/artifact/

D: AWS Shield – CORRECT. AWS Shield is a service used to protect against Distributed Denial of Service (DDoS) attacks. https://aws.amazon.com/shield/

QUESTION 2

  1. During a routine audit, an auditor raised concerns about personally identifiable information (PII) being stored in S3 buckets. Your team assures you that there is no PII present, but you need to verify this. Which AWS service should you use?

A: AWS Config

B: AWS Security Hub

C: Amazon Macie

D: Amazon Inspector

Explanations:

A: AWS Config. AWS Config is used to inventory, record and audit the configuration of your AWS resources. https://aws.amazon.com/config/

B: AWS Security Hub. AWS Security Hub provides a consolidated view of security findings in your account (checked against best practices). https://aws.amazon.com/security-hub/

C: Amazon Macie – CORRECT. Amazon Macie scans for, and analyzes, personally identifiable information (PII) in S3 buckets. https://aws.amazon.com/macie/

D: Amazon Inspector. Amazon Inspector monitors EC2 instances and ECR repositories for software vulnerabilities and network exposure. https://aws.amazon.com/inspector/

QUESTION 3

  1. Your web application uses a database that requires a user name and password. You are trying to decide the best way to store and access this information. Per your company policy, secrets such as this must be rotated every 90 days. What is the best way to store and use the database credentials?

A: AWS Certificate Manager

B: SSM Parameter Store

C: AWS Secrets Manager

D: AWS Key Management Service (KMS)

Explanations:

A: AWS Certificate Manager. AWS Certificate Manager is used to provision, manage and deploy SSL/TLS certificates. https://aws.amazon.com/certificate-manager/

B: SSM Parameter Store. Systems Manager (SSM) Parameter Store is a valid way to store secrets in AWS. However, you are not able to rotate secrets with this option. The recommended way to securely store AND rotate secrets in AWS is to use AWS Secrets Manager. https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html

C: AWS Secrets Manager – CORRECT. AWS Secrets Manager is used to securely store and rotate secrets, such as a database user name/password (see the sketch after these explanations). https://aws.amazon.com/secrets-manager/

D: AWS Key Management Service (KMS). KMS is used to generate encryption keys, which are used to encrypt (scramble) your data. These are used by services such as Elastic Block Store, S3 and Lambda to encrypt data. https://aws.amazon.com/kms/
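
As a hedged sketch (not from the course), the snippet below stores the database credentials and enables 90-day rotation, assuming a rotation Lambda function already exists; all names and ARNs are placeholders.

```python
# Minimal sketch: storing database credentials in Secrets Manager and turning
# on automatic 90-day rotation.
import json
import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

secrets.create_secret(
    Name="prod/webapp/db-credentials",
    SecretString=json.dumps({"username": "app_user", "password": "initial-password"}),
)

secrets.rotate_secret(
    SecretId="prod/webapp/db-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db-secret",
    RotationRules={"AutomaticallyAfterDays": 90},  # meets the 90-day policy
)
```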

QUESTION 4

  1. Your company recently launched several new products, and your web application has been receiving a lot more traffic than normal. There have also been a lot more cross-site scripting attacks against the site. What AWS service can you use to help protect against these kinds of attacks?

A: AWS Artifact

B: AWS Web Application Firewall (WAF)

C: Security Groups

D: AWS Secrets Manager

Explanations:

A: AWS Artifact. AWS Artifact is a free, self-service portal used to access reports and agreements related to AWS’s compliance and security. https://aws.amazon.com/artifact/

B: AWS Web Application Firewall (WAF) – CORRECT. AWS Web Application Firewall (WAF) inspects and controls incoming web traffic to applications and websites, based on rules such as “block traffic from IP address X.” You can also enable rules to protect against common exploits such as cross-site scripting and injection attacks. https://aws.amazon.com/waf/

C: Security Groups. Security groups are firewalls used to control traffic at the EC2 instance level. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

D: AWS Secrets Manager. AWS Secrets Manager is used to securely store and rotate secrets, such as a database name/password. https://aws.amazon.com/secrets-manager/

Practice Questions: Answers & Explanations

MODULE 13 – Automation and Governance

QUESTION 1

  1. Your team needs to create hundreds of infrastructure resources across multiple regions, ensuring that they are created and configured the same way every time. What is the best way to accomplish this?

A: CodeDeploy

B: Document the configuration for the resources and have all team members use the same steps

C: CloudTrail

D: CloudFormation

Explanations:

A: CodeDeploy. CodeDeploy is used to deploy code to EC2 instances or on-premises servers. It is not used to deploy “infrastructure as code.” https://aws.amazon.com/codedeploy/

B: Document the configuration for the resources and have all team members use the same steps. While this approach could work, it would not be the most efficient, and it would leave room for human error/deviations.

C: CloudTrail. CloudTrail captures user activity and API calls on an AWS account. It is not used to deploy infrastructure. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html

D: CloudFormation - CORRECT. CloudFormation is AWS’s “infrastructure as code” solution. In a JSON/YAML template, we define the resources we want to build, and then CloudFormation builds them. Because the resources are defined in code, this means the same infrastructure will be set up the same way every time. You can also add the templates to source control to track changes over time. https://aws.amazon.com/cloudformation
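
For illustration only, the sketch below deploys a stack from a small inline template (a single placeholder S3 bucket); repeating the same call in another region produces identically configured resources.

```python
# Minimal sketch: repeatable infrastructure-as-code with CloudFormation.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-app-artifacts-us-east-1"},
        }
    },
}

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(
    StackName="app-infrastructure",
    TemplateBody=json.dumps(template),
)
```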

QUESTION 2

  1. A company is in the process of migrating to AWS. They have a fleet of approximately 4,000 servers, some in AWS and some on-premises. They need to apply a patch to all machines. How can they accomplish this with the least amount of administrative effort?

A: Use AWS Systems Manager to install the patches

B: Log into each instance and install the update

C: Use AWS Config to install the patches

D: Ask the owner of each instance to apply the patch as part of their development process

Explanations:

A: Use AWS Systems Manager to install the patches – CORRECT. One of the primary functions of Systems Manager is to patch/update a fleet of servers from a single place, whether they run in AWS or on-premises (as registered managed instances). This would be the easiest way to meet the requirements mentioned in the question (see the sketch after these explanations). https://aws.amazon.com/systems-manager/

B: Log into each instance and install the update. While this approach would technically work, it would require a lot of administrative effort. Systems Manager would be the preferred way to accomplish this requirement. https://aws.amazon.com/systems-manager/

C: Use AWS Config to install the patches. AWS Config is used to inventory and record activity on resource configuration, and is commonly used for audit purposes. This is not how you would apply patches to a fleet of instances. https://aws.amazon.com/config

D: Ask the owner of each instance to apply the patch as part of their development process. While this approach would technically work, it would require a lot of administrative effort. Systems Manager would be the preferred way to accomplish this requirement. https://aws.amazon.com/systems-manager/
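
As a hedged sketch (not from the course), Patch Manager can be driven by running the AWS-RunPatchBaseline document against tagged managed instances; the tag values are placeholder assumptions.

```python
# Minimal sketch: patching a fleet from one place with Systems Manager.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

ssm.send_command(
    DocumentName="AWS-RunPatchBaseline",
    Targets=[{"Key": "tag:Patch Group", "Values": ["production"]}],
    Parameters={"Operation": ["Install"]},  # "Scan" reports only, "Install" applies patches
)
```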

Practice Questions: Answers & Explanations

MODULE 14 – DNS and Network Routing

QUESTION 1

  1. A global entertainment company has been receiving complaints about the speed of their video streaming service. They need to increase the speed to improve the user experience for users around the world. What service should they use?

A: CloudFront

B: Edge Locations

C: Route 53

D: S3 Transfer Accelerator

Explanations:

A: CloudFront – CORRECT. CloudFront is AWS’s content delivery network (CDN), and its primary goal is to speed up delivery of content to end users, especially media files like videos and images. https://aws.amazon.com/cloudfront/

B: Edge Locations. Edge Locations are used by CloudFront to do caching, but the CloudFront service itself is the most appropriate answer for this question. https://aws.amazon.com/cloudfront/

C: Route 53. Route 53 is AWS’s managed DNS service. It can be used to purchase/register domain names, as well as handle DNS routing (such as resolving a domain name like mywebsite.com to an IP address like 12.34.56.78). https://aws.amazon.com/route53/

D: S3 Transfer Accelerator. S3 Transfer Accelerator speeds up transfers of objects to and from an S3 bucket. While it does route traffic through CloudFront edge locations, CloudFront is the most appropriate answer here. As a content delivery network (CDN), the primary function of CloudFront is to increase the speed to customers, especially for media files. https://aws.amazon.com/cloudfront

QUESTION 2

  1. Your company has recently expanded its services into new countries around the world. The team has been hard at work localizing content for different locations, and you need to ensure that the correct version of the content is served up, depending on the location of the user. What should you do?

A: Use a CloudFront Latency policy

B: Use a Route 53 Geoproximity policy

C: Create a Web Application Firewall (WAF) rule to route traffic based on country of origin

D: Use a Route 53 Geolocation policy

Explanations:

A: Use a CloudFront Latency policy. This is a distractor. CloudFront does not have a Latency policy (the Latency policy is a feature of Route 53).

B: Use a Route 53 Geoproximity policy. While it is correct that you should use a routing policy in Route 53 for this scenario, the Geoproximity policy will route traffic based on the location of resources. For example, if you have a larger server in us-west-1, you can set up the policy to route there, regardless of where users are. To route based on the location of users, you would use the Route 53 Geolocation policy. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

C: Create a Web Application Firewall (WAF) rule to route traffic based on country of origin. WAF rules can be written to allow or deny certain traffic, including traffic from a certain origin. However, WAF is used to protect sites from common exploits, not to route traffic between locations. To route based on the location of users, you would use the Route 53 Geolocation policy. https://docs.aws.amazon.com/waf/latest/developerguide/waf-rules.html

D: Use a Route 53 Geolocation policy - CORRECT. To route based on the location of users, you would use the Route 53 Geolocation policy. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
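
For illustration only, the sketch below creates a Geolocation record that serves a Europe-specific endpoint to users located in Europe. The hosted zone ID, domain and IP address are placeholder assumptions, and in practice you would also create a default record for locations that don’t match.

```python
# Minimal sketch: a Route 53 Geolocation routing record for European users.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "europe-users",
                "GeoLocation": {"ContinentCode": "EU"},
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```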

QUESTION 3

  1. You recently created a static website that’s hosted in S3. You purchased a domain name, www.mycoolsite.com, through Route 53, and want to point that domain to your S3 site. How can you accomplish this?

A: In Route 53, create an A Record and enable an Alias

B: In S3, create a distribution origin and specify the domain name

C: In Route 53, create a CNAME Record and enable an Alias

D: In Route 53, use a routing policy to point traffic to S3

Explanations:

A: In Route 53, create an A Record and enable an Alias - CORRECT. An A Record is used to point a host name (such as www.mycoolsite.com) to an IP address, and it works at both the root (apex) domain and subdomains. By enabling an Alias on the A Record, you can point it at an AWS resource instead of an IP address, such as the static website hosted in the S3 bucket (see the sketch after these explanations). https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html

B: In S3, create a distribution origin and specify the domain name. This is a distractor. Distributions and origins are CloudFront concepts; S3 does not have a “distribution origin.”

C: In Route 53, create a CNAME Record and enable an Alias. A CNAME Record points a host name to another domain name. However, CNAME records cannot be used at the root (apex) of a domain, and Route 53 Aliases that point to AWS resources such as S3 static websites are created on A Records, so an A Record with an Alias is the recommended approach. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html

D: In Route 53, use a routing policy to point traffic to S3. Routing policies determine how Route 53 responds to queries. Some examples include failover routing, geolocation routing, geoproximity routing and more. These policies are not used to map hosts to specific AWS services like the scenario requires. https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
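
As a hedged sketch (not from the course), the call below creates the alias A record pointing www.mycoolsite.com at the S3 website endpoint. The hosted zone ID for your domain is a placeholder, the S3 website endpoint’s hosted zone ID is a region-specific value published in the AWS docs (the one shown is assumed for us-east-1), and the bucket must be named after the domain.

```python
# Minimal sketch: alias A record pointing a domain at an S3 static website.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # your mycoolsite.com hosted zone (placeholder)
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.mycoolsite.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z3AQBSTGFYJSTF",  # S3 website endpoint zone for us-east-1 (assumed)
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```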

Practice Questions: Answers & Explanations

MODULE 15 – Application Integration

QUESTION 1

  1. As part of your company’s migration to the cloud, your team has decided to rearchitect your applications to decouple critical services. Specifically, a service that does video encoding needs to be decoupled from the video upload service, making it more resilient to failures. Which AWS service should you consider?

A: Lambda

B: Auto Scaling Groups

C: Step Functions

D: Simple Queue Service (SQS)

Explanations:

A: Lambda. While Lambda functions can be part of the overall solution, the core service in this re-architecture should be the Simple Queue Service (SQS). By using a queue, videos can be pushed to the queue when they are uploaded (by the producer), and then the video encoding service (the consumer) can pick them up and do the processing. If the video encoding service goes down, the video messages will be preserved in the queue, and processing can be resumed when the encoding service comes back online. https://aws.amazon.com/lambda

B: Auto Scaling Groups. Auto Scaling Groups can help with scalability of a service, and could play a part in this solution. If producers or consumers are run on EC2 instances, the Auto Scaling Group could scale to handle additional load. However, the primary way to achieve decoupling in this scenario is with the Simple Queue Service (SQS). https://aws.amazon.com/sqs/

C: Step Functions. Step Functions allow you to orchestrate Lambda functions, using a visual drag-and-drop designer. Lambda functions may be part of the solution to the scenario, but the primary way to achieve decoupling is with the Simple Queue Service (SQS). https://aws.amazon.com/step-functions/

D: Simple Queue Service (SQS) – CORRECT. Loose coupling refers to breaking parts of a system into smaller, independent parts so that an application can better handle failures. A queue, and specifically an SQS queue, is one popular way to achieve loose coupling in AWS. By using a queue, videos can be pushed to the queue when they are uploaded (by the producer), and then the video encoding service (the consumer) can pick them up and do the processing. If the video encoding service goes down, the video messages will be preserved in the queue, and processing can be resumed when the encoding service comes back online. https://aws.amazon.com/sqs/
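
For illustration only, here is a minimal producer/consumer sketch of that decoupling with SQS; the queue URL, bucket and key are placeholder assumptions.

```python
# Minimal sketch: decoupling upload and encoding with an SQS queue. If the
# encoding service is down, messages simply wait in the queue.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/video-encoding-queue"

# Producer (upload service): enqueue a job when a video is uploaded.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"bucket": "uploads", "key": "videos/cat.mp4"}),
)

# Consumer (encoding service): poll, process, then delete the message.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for message in messages.get("Messages", []):
    job = json.loads(message["Body"])
    # ... run the encoding job here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```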

QUESTION 2

  1. An order processing application uses the Simple Queue Service (SQS) to validate the credit worthiness of a customer and then place their order. Occasionally, orders are placed before credit worthiness is validated, putting the company at risk if customers can’t pay. What is a potential solution to this issue?

A: Use a Standard queue for the messages

B: Use a FIFO queue for the messages

C: Set up a CloudWatch alarm to skip messages if they are out of order

D: Use a Dead Letter Queue for the messages

Explanations:

A: Use a Standard queue for the messages. The problem seems to stem from the fact that messages are occasionally being delivered out of order (orders are placed before credit worthiness is established). This can occasionally happen with Standard queues. The best solution is to use a FIFO queue, where messages are guaranteed to be processed in the order received. https://aws.amazon.com/sqs/features

B: Use a FIFO queue for the messages - CORRECT. The problem seems to stem from the fact that messages are occasionally being delivered out of order (orders are placed before credit worthiness is established). This can occasionally happen with Standard queues, which provide only best-effort ordering. The best solution is to use a FIFO queue, where messages are guaranteed to be processed in the order received (see the sketch after these explanations). https://aws.amazon.com/sqs/features

C: Set up a CloudWatch alarm to skip messages if they are out of order. This is a distractor. CloudWatch does not offer metrics related to the order of message processing, so an alarm cannot be set up for this.

D: Use a Dead Letter Queue for the messages. Dead Letter Queues are used to store messages that were undeliverable in a “regular” queue, so they can be processed separately. In this scenario, the messages are being delivered, but in the wrong order. The best solution is to use a FIFO queue, where messages are guaranteed to be processed in the order received. https://aws.amazon.com/sqs/features
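
As a hedged sketch (not from the course), the snippet below creates a FIFO queue and sends the credit-check and place-order messages with the same message group, so they are processed in order; queue and customer names are placeholders.

```python
# Minimal sketch: preserving per-customer ordering with an SQS FIFO queue.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

queue_url = sqs.create_queue(
    QueueName="orders.fifo",  # FIFO queue names must end in .fifo
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Messages sharing a MessageGroupId are delivered strictly in order.
for step in ("validate-credit", "place-order"):
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"step": step, "order_id": "1001"}),
        MessageGroupId="customer-42",
    )
```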

QUESTION 3

  1. A startup eCommerce company wants to stay in touch with their customers through text messages, where they’ll promote new products and sales events. Which AWS service should they use?

A: Simple Queue Service (SQS)

B: Simple Notification Service (SNS)

C: Step Functions

D: EventBridge

Explanations:

A: Simple Queue Service (SQS). The Simple Queue Service (SQS) allows producers to send messages to a queue, and consumers to poll the queue for messages to process. While SQS could be part of this overall solution, the Simple Notification Service (SNS) is the service that can send notifications through email, text, HTTP or Lambda. https://aws.amazon.com/sqs/

B: Simple Notification Service (SNS) - CORRECT. The Simple Notification Service (SNS) is the service that can send notifications through email, text (SMS), HTTP or Lambda (see the sketch after these explanations). https://aws.amazon.com/sns

C: Step Functions. Step Functions allow you to orchestrate Lambda functions, using a visual drag-and-drop designer. Lambda functions may be part of the solution to the scenario, but the Simple Notification Service (SNS) is the service that can send notifications through email, text, HTTP or Lambda. https://aws.amazon.com/step-functions/

D: EventBridge. EventBridge is used to build decoupled, event-driven architectures, such as a Lambda function triggering DynamoDB database entries, which triggers something else, which triggers something else. While you might use EventBridge to build an overall solution, the Simple Notification Service (SNS) is the service that can send notifications through email, text, HTTP or Lambda. https://aws.amazon.com/eventbridge/
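
For illustration only, the sketch below creates an SNS topic, subscribes a customer’s phone number for SMS, and publishes a promotional message; the topic name and phone number are placeholder assumptions.

```python
# Minimal sketch: texting customers about a sale via an SNS topic.
import boto3

sns = boto3.client("sns", region_name="us-east-1")

topic_arn = sns.create_topic(Name="product-announcements")["TopicArn"]

# Each customer who opts in is subscribed by phone number.
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550123")

# One publish fans out to every SMS (or email, HTTP, Lambda) subscriber.
sns.publish(TopicArn=topic_arn, Message="Flash sale: 20% off everything today!")
```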