AWS Solutions Architect Associate Exam Questions and Answers

Free AWS Solutions Architect Certification Exam Questions [2024]

Preparing for the AWS Certified Solutions Architect Associate exam? Here we've compiled a list of free AWS Solutions Architect exam questions and answers to help you prepare well for the AWS Solutions Architect exam. These practice test questions are very similar to the questions in the real exam format.

AWS certification training plays a key role in the journey of AWS certification preparation as it validates your skills overall. Also, the AWS practice questions play a major role in getting you ready for the real examination.

If you are planning to prepare for the AWS architect certification, you can start by going through these free sample questions created by the Whizlabs team.

Table of Contents

AWS Solutions Architect Associate Exam Questions

The AWS Certified Solutions Architect Associate exam is for those who perform the role of solutions architect and have at least one year of experience in designing scalable, available, robust, and cost-effective distributed applications and systems on the AWS platform.

The AWS Solutions Architect Associate (SAA-C03) exam validates your knowledge and skills in:

  • Architecting and deploying robust and secure applications on the AWS platform using AWS technologies
  • Defining a solution with the use of architectural design principles based on customer requirements
  • Providing guidance for implementation on the basis of best practices to the organization over the project lifecycle.

Free AWS Certification Exam Questions

While preparing for the AWS certification exam, you may find a number of resources for preparation such as AWS documentation, AWS whitepapers, AWS books, AWS videos, and AWS FAQs. But practice matters a lot if you are determined to clear the exam on your first attempt.

So, our expert team has curated a catalog of AWS Solutions Architect practice exam questions with correct answers and detailed explanations for the AWS certification exam. We have followed the same pattern as Whizlabs' most popular AWS Certified Solutions Architect Associate Practice Tests so that you can identify and recognize which option is correct and why.

To pass the exam, you'll need to have a good awareness of AWS services and how to use them to solve common customer problems.

The best way to prepare for the exam is to get AWS Hands-on Labs experience with AWS services and also to get actual hands-on experience in an AWS Sandbox. This can be done by using the AWS console, getting hands-on experience with the AWS CLI, or using the AWS SDKs. Additionally, there are practice exams and online resources that can help you prepare for the exam.
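For instance, a first hands-on step with the AWS SDK for Python (boto3) can be as simple as listing the S3 buckets in your account. This is a minimal sketch, assuming boto3 is installed and AWS credentials are already configured locally:

    import boto3  # AWS SDK for Python; assumes credentials are configured (e.g., via `aws configure`)

    # Create an S3 client and list the buckets in the account
    s3 = boto3.client("s3")
    response = s3.list_buckets()
    for bucket in response["Buckets"]:
        print(bucket["Name"])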

Try these AWS Solutions Architect Associate exam questions for SAA-C03 now and check your preparation level. Let's see how many of these AWS Solutions Architect questions you can solve at the Associate level! Let's get started!

You can download the AWS Solutions Architect Associate exam questions PDF for easy reference.


1) You are an AWS Solutions Architect. Your company has a successful web application deployed in an AWS Auto Scaling group. The application attracts more and more global customers. However, the application's performance is impacted. Your manager asks you how to improve the performance and availability of the application. Which of the following AWS services would you recommend?

A. AWS DataSync
B. Amazon DynamoDB Accelerator
C. AWS Lake Formation
D. AWS Global Accelerator

Answer: D

AWS Global Accelerator provides static IP addresses that are anycast in the AWS edge network. Traffic is distributed across endpoints in AWS Regions, and the performance and availability of the application are improved.

Option A is incorrect: Because DataSync is a tool to automate data transfer and does not help to improve performance.

Option B is incorrect: DynamoDB is not mentioned in the question.

Option C is incorrect: Because AWS Lake Formation is used to manage large amounts of data in AWS and would not help in this situation.

Option D is CORRECT: The AWS Global Accelerator service can improve both application performance and availability.
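As a rough illustration of option D, the sketch below creates a Global Accelerator with boto3. The accelerator name is a hypothetical example, and Global Accelerator is a global service whose API is served from the us-west-2 Region:

    import boto3

    # Global Accelerator API calls must be sent to the us-west-2 endpoint
    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    # Create an accelerator; it returns two static anycast IP addresses
    # served from the AWS edge network (the name is a hypothetical example)
    response = ga.create_accelerator(
        Name="web-app-accelerator",
        IpAddressType="IPV4",
        Enabled=True,
    )
    print(response["Accelerator"]["IpSets"])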


2) Your team is developing a high-performance computing (HPC) application. The application solves complex, compute-intensive problems and needs a high-performance, low-latency Lustre file system. You need to set up this file system in AWS at a low cost. Which method is the most suitable?

A. Create a Lustre file system through Amazon FSx.
B. Launch a high-performance Lustre file system in Amazon EBS.
C. Create a high-speed volume cluster in an EC2 placement group.
D. Launch the Lustre file system from AWS Marketplace.

Answer: A

The Lustre file system is an open-source, parallel file system that can be used for HPC applications. Refer to http://lustre.org/ for an introduction. With Amazon FSx, users can quickly launch a Lustre file system at a low cost.

Option A is CORRECT: Amazon FSx supports Lustre file systems, and users pay for only the resources they use.

Option B is incorrect: Although users may be able to configure a Lustre file system through EBS, it requires lots of extra configuration. Option A is more straightforward.

Option C is incorrect: Because an EC2 placement group does not provide a Lustre file system.

Option D is incorrect: Because products in AWS Marketplace are not as cost-effective. For Amazon FSx, there are no minimum fees or set-up charges. Check its prices in Amazon FSx for Lustre Pricing.
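To make option A concrete, here is a hedged boto3 sketch that creates a scratch FSx for Lustre file system; the subnet ID is a placeholder and the 1,200 GiB capacity is just an example value:

    import boto3

    fsx = boto3.client("fsx")

    # Create a low-cost scratch Lustre file system for short-term HPC processing
    response = fsx.create_file_system(
        FileSystemType="LUSTRE",
        StorageCapacity=1200,                      # capacity in GiB (example value)
        SubnetIds=["subnet-0123456789abcdef0"],    # placeholder subnet ID
        LustreConfiguration={"DeploymentType": "SCRATCH_2"},
    )
    print(response["FileSystem"]["FileSystemId"])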



3) You host a static website in an S3 bucket, and there are global clients from multiple regions. You want to use an AWS service to store a cache for frequently accessed content so that latency is reduced and the data transfer rate is increased. Which of the following options would you choose?

A. Use AWS SDKs to horizontally scale parallel requests to the Amazon S3 service endpoints.
B. Create multiple Amazon S3 buckets and put Amazon EC2 and S3 in the same AWS Region.
C. Enable Cross-Region Replication to several AWS Regions to serve customers from different locations.
D. Configure CloudFront to deliver the content in the S3 bucket.

Answer: D

CloudFront can store the frequently accessed content as a cache, and performance is optimized. The other options may help with performance, but they do not store a cache for the S3 objects.

Option A is incorrect: This option may increase the throughput, but it does not store a cache.

Option B is incorrect: Because this option does not use a cache.

Option C is incorrect: This option creates multiple S3 buckets in different regions. It does not improve performance with a cache.

Option D is CORRECT: Because CloudFront caches copies of the S3 files in its edge locations, and users are routed to the edge location that has the lowest latency.


4) Your company has an online game application deployed in an Auto Scaling group. The traffic of the application is predictable. Every Friday, the traffic starts to increase, remains high over the weekend, and then drops on Monday. You need to plan the scaling actions for the Auto Scaling group. Which method is the most suitable for the scaling policy?

A. Configure a scheduled CloudWatch event rule to launch/terminate instances at the specified time every week.
B. Create a pre-defined target tracking scaling policy based on the average CPU metric, and the ASG will scale automatically.
C. Select the ASG and, from the Automatic Scaling tab, add a step scaling policy to automatically scale out/in at a fixed time every week.
D. Set up a scheduled action in the Auto Scaling group by specifying the recurrence, start/end time, capacities, etc.

Answer: D

The correct scaling policy should be scheduled scaling, as it lets you define your own scaling schedule. Refer to https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html for details.

Option A is incorrect: This option may work. However, you have to configure a target such as a Lambda function to perform the scaling actions.

Option B is incorrect: The target tracking scaling policy defines a target for the ASG. The scaling actions do not happen based on a schedule.

Option C is incorrect: The step scaling policy does not configure the ASG to scale at a specified time.

Option D is CORRECT: With scheduled scaling, users define a schedule for the ASG to scale. This option can meet the requirements.
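A minimal boto3 sketch of option D follows: two weekly scheduled actions on the Auto Scaling group. The group name, capacities, and cron expressions are hypothetical examples:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale out every Friday at 18:00 UTC (Recurrence uses cron syntax)
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="online-game-asg",
        ScheduledActionName="weekend-scale-out",
        Recurrence="0 18 * * 5",
        MinSize=4,
        MaxSize=20,
        DesiredCapacity=10,
    )

    # Scale back in every Monday at 06:00 UTC
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="online-game-asg",
        ScheduledActionName="weekday-scale-in",
        Recurrence="0 6 * * 1",
        MinSize=2,
        MaxSize=4,
        DesiredCapacity=2,
    )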


5) You are creating several EC2 instances for a new application. For better performance of the application, both low network latency and high network throughput are required for the EC2 instances. All instances should be launched in a single Availability Zone. How would you configure this?

A. Launch all EC2 instances in a placement group using a Cluster placement strategy.
B. Auto-assign a public IP when launching the EC2 instances.
C. Launch EC2 instances in an EC2 placement group and select the Spread placement strategy.
D. When launching the EC2 instances, select an instance type that supports enhanced networking.

Answer: A

The Cluster placement strategy helps to achieve a low-latency and high-throughput network. The reference is https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-limitations-partition.

Option A is CORRECT: A Cluster placement strategy can improve network performance among EC2 instances. The strategy can be selected when creating a placement group:

(Image: EC2 placement groups)

Option B is incorrect: Because a public IP cannot improve network performance.

Option C is incorrect: The Spread placement strategy is recommended when a number of critical instances should be kept separate from each other. This strategy should not be used in this scenario.

Option D is incorrect: The description in the option is inaccurate. The correct method is creating a placement group with a suitable placement strategy.
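A hedged boto3 sketch of option A: create a cluster placement group and launch instances into it. The group name, AMI ID, and instance type are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Create a placement group with the cluster strategy for low latency / high throughput
    ec2.create_placement_group(GroupName="hpc-cluster-group", Strategy="cluster")

    # Launch instances into the placement group (AMI and instance type are placeholders)
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="c5n.18xlarge",
        MinCount=2,
        MaxCount=2,
        Placement={"GroupName": "hpc-cluster-group"},
    )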



6) You need to deploy a machine learning application on AWS EC2. The speed of inter-instance communication is very critical for the application, and you want to attach a network device to the instance so that the performance can be greatly improved. Which option is the most appropriate to improve the performance?

A. Enable enhanced networking features in the EC2 instance.
B. Configure an Elastic Fabric Adapter (EFA) in the instance.
C. Attach a high-speed Elastic Network Interface (ENI) to the instance.
D. Create an Elastic File System (EFS) and mount the file system in the instance.

Answer: B

With an Elastic Fabric Adapter (EFA), users can get better performance compared with enhanced networking (Elastic Network Adapter) or an Elastic Network Interface. Check the differences between EFAs and ENAs in https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html.

Option A is incorrect: Because with an Elastic Fabric Adapter (EFA), users can achieve better network performance than with enhanced networking.

Option B is CORRECT: Because EFA is the most suitable method for accelerating High-Performance Computing (HPC) and machine learning applications.

Option C is incorrect: An Elastic Network Interface (ENI) cannot improve the performance as required.

Option D is incorrect: An Elastic File System (EFS) cannot accelerate inter-instance communication.


7) You have an S3 bucket that receives photos uploaded by customers. When an object is uploaded, an event notification is sent to an SQS queue with the object details. You also have an ECS cluster that gets messages from the queue to do the batch processing. The queue size may change greatly depending on the number of incoming messages and the backend processing speed. Which metric would you use to scale up/down the ECS cluster capacity?

A. The number of messages in the SQS queue.
B. Memory usage of the ECS cluster.
C. Number of objects in the S3 bucket.
D. Number of containers in the ECS cluster.

Answer: A

In this scenario, the SQS queue is used to store the object details, which is a highly scalable and reliable service. ECS is ideal to perform batch processing, and it should scale up or down based on the number of messages in the queue. For details, please check https://github.com/aws-samples/ecs-refarch-batch-processing

Option A is CORRECT: Users can configure a CloudWatch alarm based on the number of messages in the SQS queue and use the alarm to notify the ECS cluster to scale up or down.

Option B is incorrect: Because memory usage may not be able to reflect the workload.

Option C is incorrect: The number of objects in S3 cannot determine if the ECS cluster should change its capacity.

Option D is incorrect: Because the number of containers cannot be used as a metric to trigger an auto-scaling event.
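A hedged boto3 sketch of option A: a CloudWatch alarm on the queue depth. The queue name, threshold, and the scaling-policy ARN in AlarmActions are hypothetical placeholders; in practice the action would point at a scale-out policy for the ECS service:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when the backlog of visible messages in the queue grows too large;
    # the alarm action triggers a (hypothetical) scale-out policy for the ECS service
    cloudwatch.put_metric_alarm(
        AlarmName="photo-queue-backlog-high",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": "photo-processing-queue"}],
        Statistic="Average",
        Period=60,
        EvaluationPeriods=2,
        Threshold=100,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"],
    )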


 

10) When creating an AWS CloudFront distribution, which of the following is not an origin?

A. Elastic Load Balancer
B. AWS S3 bucket
C. AWS MediaPackage channel endpoint
D. AWS Lambda

Answer: D

Explanation: AWS Lambda is not supported directly as a CloudFront origin. However, Lambda can be invoked through API Gateway, which can be set as the origin for AWS CloudFront. Read more here: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html


14) Your organization is building a collaboration platform for which they chose AWS EC2 for the web and application servers and a MySQL RDS instance as the database. Due to the nature of the traffic to the application, they would like to increase the number of connections for the RDS instance. How can this be achieved?

A. Log in to the RDS instance and modify the database config file under /etc/mysql/my.cnf
B. Create a new parameter group, attach it to the DB instance and change the setting.
C. Create a new option group, attach it to the DB instance and change the setting.
D. Modify the setting in the default parameter group attached to the DB instance.

Answer: B

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups
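A minimal boto3 sketch of option B: create a custom parameter group, raise max_connections, and attach the group to the DB instance. The names and values are examples; for MySQL, max_connections is a dynamic parameter, so it can be applied immediately:

    import boto3

    rds = boto3.client("rds")

    # 1. Create a custom parameter group (the family must match the engine version in use)
    rds.create_db_parameter_group(
        DBParameterGroupName="custom-mysql-params",
        DBParameterGroupFamily="mysql8.0",
        Description="Custom parameter group with higher max_connections",
    )

    # 2. Override max_connections in the new parameter group
    rds.modify_db_parameter_group(
        DBParameterGroupName="custom-mysql-params",
        Parameters=[
            {
                "ParameterName": "max_connections",
                "ParameterValue": "500",
                "ApplyMethod": "immediate",
            }
        ],
    )

    # 3. Attach the parameter group to the existing DB instance
    rds.modify_db_instance(
        DBInstanceIdentifier="collaboration-db",
        DBParameterGroupName="custom-mysql-params",
        ApplyImmediately=True,
    )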


15) You will be launching and terminating EC2 instances on an as-needed basis for your workloads. You need to run some shell scripts and perform certain checks connecting to an AWS S3 bucket when the instance is being launched. Which of the following options will allow running these tasks during launch? (choose multiple)

A. Use Instance user data for shell scripts.
B. Use Instance metadata for shell scripts.
C. Use Auto Scaling group lifecycle hooks and trigger an AWS Lambda function through CloudWatch Events.
D. Use Placement Groups and set the "InstanceLaunch" state to trigger AWS Lambda functions.

Answer: A, C

Option A is correct.

Option C is correct.

https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html#preparing-for-notification
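For option A, a user data shell script can be passed when launching the instance. The sketch below does this with boto3; the AMI ID, bucket name, and instance type are placeholders:

    import boto3

    # Shell script that runs once at first boot; it checks connectivity to an S3 bucket
    user_data_script = """#!/bin/bash
    aws s3 ls s3://example-bucket-name || echo "S3 check failed" >> /var/log/launch-checks.log
    """

    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=user_data_script,         # boto3 base64-encodes this automatically
    )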


16) Your organization has an AWS setup and is planning to build Single Sign-On for users to authenticate with on-premise Microsoft Active Directory Federation Services (ADFS) and let users log in to the AWS console using AWS STS Enterprise Identity Federation. Which of the following services do you need to call from the AWS STS service after you authenticate with your on-premise ADFS?

A. AssumeRoleWithSAML
B. GetFederationToken
C. AssumeRoleWithWebIdentity
D. GetCallerIdentity

Answer: A

https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithSAML.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html
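A hedged boto3 sketch of the AssumeRoleWithSAML call; the role ARN, identity provider ARN, and the base64-encoded SAML assertion returned by ADFS are placeholders:

    import boto3

    # AssumeRoleWithSAML can be called without AWS credentials;
    # the SAML assertion itself proves the caller's identity
    sts = boto3.client("sts")

    response = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::123456789012:role/ADFS-Production",      # placeholder
        PrincipalArn="arn:aws:iam::123456789012:saml-provider/ADFS",   # placeholder
        SAMLAssertion="base64-encoded-assertion-from-adfs",            # placeholder
    )
    print(response["Credentials"]["AccessKeyId"])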


18) Your organization was planning to develop a web application on AWS EC2. An application admin was tasked to perform the AWS setup required to spin up an EC2 instance inside an existing private VPC. He/she has created a subnet and needs to ensure no other subnets in the VPC can communicate with this subnet, except for a specific IP address. So he/she created a new route table and associated it with the new subnet. When he/she was trying to delete the route with the target as local, there was no option to delete the route. What could have caused this behavior?

A. The policy attached to the IAM user does not have access to remove routes.
B. A route with the target as local cannot be deleted.
C. You cannot add/delete routes while the route table is associated with the subnet. Remove the association, add/delete routes and associate it again with the subnet.
D. There must be at least one route in the route table. Add a new route to enable the delete option on existing routes.

Answer: B

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html#RouteTa


20) Organization ABC has a requirement to send emails to multiple users from their application deployed on an EC2 instance in a private VPC. Email receivers will not be IAM users. You have decided to use AWS Simple Email Service and configured a from email address. You are using the AWS SES API to send emails from your EC2 instance to multiple users. However, email sending is failing. Which of the following options could be the reason?

A. You have not created a VPC endpoint for the SES service and configured it in the route table.
B. AWS SES is in sandbox mode by default, which can send emails only to verified email addresses.
C. The IAM user of the configured from email address does not have access to AWS SES to send emails.
D. AWS SES cannot send emails to addresses which are not configured as IAM users. You have to use the SMTP service provided by AWS.

Answer: B

Amazon SES is an email platform that provides an easy, cost-effective way for you to send and receive email using your own email addresses and domains.

For example, you can send marketing emails such as special offers, transactional emails such as order confirmations, and other types of correspondence such as newsletters. When you use Amazon SES to receive mail, you can develop software solutions such as email autoresponders, email unsubscribe systems, and applications that generate customer support tickets from incoming emails.

https://docs.aws.amazon.com/ses/latest/DeveloperGuide/limits.html

https://docs.aws.amazon.com/ses/latest/DeveloperGuide/request-production-access.html
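Once the account is moved out of the sandbox (or the recipients are verified), sending mail through the SES API looks roughly like the boto3 sketch below; the addresses are placeholders:

    import boto3

    ses = boto3.client("ses")

    # The sender, and (while in sandbox mode) the recipients, must be verified identities
    ses.send_email(
        Source="noreply@example.com",
        Destination={"ToAddresses": ["user@example.com"]},
        Message={
            "Subject": {"Data": "Welcome to Organization ABC"},
            "Body": {"Text": {"Data": "Thanks for signing up!"}},
        },
    )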


21) You have configured an AWS S3 event notification to send a message to AWS Simple Queue Service whenever an object is deleted. You are performing a ReceiveMessage API operation on the AWS SQS queue to receive the S3 delete object message on an AWS EC2 instance. For all successful message operations, you are deleting them from the queue. For failed operations, you are not deleting the messages. You have developed a retry mechanism which reruns the operation every 5 minutes for failed ReceiveMessage operations. However, you are not receiving the messages again during the rerun. What could have caused this?

A. AWS SQS deletes the message after it has been read by the ReceiveMessage API.
B. You were using Long Polling, which does not ensure message delivery.
C. Failed ReceiveMessage queue messages are automatically sent to Dead Letter Queues. You need to ReceiveMessage from the Dead Letter Queue for failed retries.
D. Visibility Timeout on the SQS queue is set to 10 minutes.

Answer: D

When a consumer receives and processes a message from a queue, the message remains in the queue. Amazon SQS doesn't automatically delete the message. Because Amazon SQS is a distributed system, there's no guarantee that the consumer actually receives the message (for example, due to a connectivity issue, or due to an issue in the consumer application). Thus, the consumer should delete the message from the queue after receiving and processing it.

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
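The behavior described above can be seen in a short boto3 sketch: a received message stays invisible for the visibility timeout, so a retry 5 minutes later returns nothing if the timeout is 10 minutes. The queue URL and the processing logic are placeholders:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/delete-events"  # placeholder

    # Receive a message; it becomes invisible to other consumers for the visibility timeout
    messages = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        VisibilityTimeout=600,   # 10 minutes, so a retry after 5 minutes sees nothing
    )

    for message in messages.get("Messages", []):
        process_ok = True        # placeholder for the real processing logic
        if process_ok:
            # Delete only after successful processing; otherwise the message
            # reappears in the queue once the visibility timeout expires
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])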


22) You have set up an internal HTTP(S) Elastic Load Balancer to route requests to two EC2 instances inside a private VPC. However, one of the target EC2 instances is showing an Unhealthy status. Which of the following options could not be a reason for this?

A. Port 80/443 is not allowed on the EC2 instance's Security Group from the load balancer.
B. The EC2 instance is in a different availability zone than the load balancer.
C. The ping path does not exist on the EC2 instance.
D. The target did not return a successful response code.

Answer: B

If a target is taking longer than expected to enter the InService state, it might be failing health checks. Your target is not in service until it passes one health check.

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-troubleshooting.html#target-not-inservice

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-health-checks.html


23) Your organization has an existing VPC setup and has a requirement to route any traffic going from the VPC to an AWS S3 bucket through the AWS internal network. So they have created a VPC endpoint for S3 and configured it to allow traffic to S3 buckets. The application you are developing involves sending traffic to an AWS S3 bucket from the VPC, for which you proposed to use a similar approach. You have created a new route table, added a route to the VPC endpoint and associated the route table with your new subnet. However, when you are trying to send a request from EC2 to the S3 bucket using the AWS CLI, the request is failing with a 403 access denied error. What could be causing the failure?

A. The AWS S3 bucket is in a different region than your VPC.
B. The EC2 security group outbound rules are not allowing traffic to the S3 prefix list.
C. The VPC endpoint might have a restrictive policy and does not include the new S3 bucket.
D. The S3 bucket CORS configuration does not have the EC2 instances as the origin.

Answer: C

Option A is not correct. The question states "403 access denied". If the S3 bucket is in a different region than the VPC, the request looks for a route through a NAT Gateway or Internet Gateway. If it exists, the request goes through the internet to S3. If it does not exist, the request fails with connection refused or connection timed out, not with a "403 access denied" error.

Option B is not correct. Same as above: if the security group does not allow traffic, the failure reason would not be 403 access denied.

Option C is correct.

Option D is not correct.

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.

In this case, the request is not coming from a web client.


24) You have launched an RDS instance with a MySQL database with the default configuration for your file sharing application to store all the transactional information. Due to security compliance, your organization requires encrypting all the databases and storage on the cloud. They approached you to perform this activity on your MySQL RDS database. How can you achieve this?

A. Copy a snapshot from the latest snapshot of your RDS instance, select encryption during copy and restore a new DB instance from the newly encrypted snapshot.
B. Stop the RDS instance, modify it and set the encryption option. Start the RDS instance; it may take a while to start the RDS instance as the existing data is getting encrypted.
C. Create a case with AWS support to enable encryption for your RDS instance.
D. AWS RDS is a managed service and the data at rest in all RDS instances is encrypted by default.

Answer: A

https://aws.amazon.com/blogs/aws/amazon-rds-update-share-encrypted-snapshots-encrypt-existing-instances/
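A hedged boto3 sketch of option A: copy the latest snapshot with encryption enabled and restore a new instance from it. The identifiers and KMS key are placeholders:

    import boto3

    rds = boto3.client("rds")

    # 1. Copy the existing (unencrypted) snapshot and enable encryption on the copy
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="filesharing-db-snapshot",        # placeholder
        TargetDBSnapshotIdentifier="filesharing-db-snapshot-encrypted",
        KmsKeyId="alias/aws/rds",                                    # KMS key used for encryption
    )

    # 2. Once the copy is available (waiter omitted for brevity), restore a new,
    #    encrypted DB instance from the encrypted snapshot copy
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="filesharing-db-encrypted",
        DBSnapshotIdentifier="filesharing-db-snapshot-encrypted",
    )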


26) You have successfully set up a VPC peering connection in your account between two VPCs – VPC A and VPC B, each in a different region. When you are trying to make a request from VPC A to VPC B, the request fails. Which of the following could be a reason?

A. Cross-region peering is not supported on AWS.
B. CIDR blocks of both VPCs could be overlapping.
C. Routes are not configured in the route tables for the peering connection.
D. VPC A security group default outbound rules are not allowing traffic to the VPC B IP range.

Answer: C

Option A is not correct. Cross-region VPC peering is supported in AWS.

Option B is not correct.

When two VPCs' CIDR blocks are overlapping, you cannot create a peering connection. The question states the peering connection was successful.

Option C is correct.

To send private IPv4 traffic from your instance to an instance in a peer VPC, you must add a route to the route table that's associated with the subnet in which your instance resides. The route points to the CIDR block (or a portion of the CIDR block) of the peer VPC in the VPC peering connection.

https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-peering-routing.html

Option D is not correct.

A security group's default outbound rule allows all traffic to go out from the resources attached to the security group.

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html#Defaul
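Option C can be fixed with a single route entry per VPC. A minimal boto3 sketch follows; the route table ID, CIDR block, and peering connection ID are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Add a route in VPC A's route table that sends traffic destined for
    # VPC B's CIDR block through the peering connection (and vice versa in VPC B)
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",            # placeholder: VPC A route table
        DestinationCidrBlock="10.1.0.0/16",              # placeholder: VPC B CIDR block
        VpcPeeringConnectionId="pcx-0123456789abcdef0",  # placeholder peering connection
    )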


27) Which of the following statements are true in terms of allowing/denying traffic from/to a VPC, assuming the default rules are not in effect? (choose multiple)

A. In a Network ACL, for a successful HTTPS connection, add an inbound rule with HTTPS type, IP range in source and ALLOW traffic.
B. In a Network ACL, for a successful HTTPS connection, you must add an inbound rule and an outbound rule with HTTPS type, IP range in source and destination respectively and ALLOW traffic.
C. In a Security Group, for a successful HTTPS connection, add an inbound rule with HTTPS type and IP range in the source.
D. In a Security Group, for a successful HTTPS connection, you must add an inbound rule and an outbound rule with HTTPS type, IP range in source and destination respectively.

Answer: B, C

Security groups are stateful: if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.

Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).

  • Option A is not correct. A NACL must have an outbound rule defined for a successful connection due to its stateless nature.
  • Option B is correct.
  • Option C is correct.
  • Configuring an inbound rule in a security group is enough for a successful connection due to its stateful nature.
  • Option D is not correct.

Configuring an outbound rule for an incoming connection is not required in security groups.
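Because NACLs are stateless, option B needs both an inbound and an outbound entry. A hedged boto3 sketch follows; the NACL ID and CIDR are placeholders, and the outbound rule here covers the ephemeral ports typically used by the client for return traffic:

    import boto3

    ec2 = boto3.client("ec2")
    nacl_id = "acl-0123456789abcdef0"   # placeholder network ACL ID

    # NACLs are stateless, so inbound AND outbound entries are both required (option B)
    ec2.create_network_acl_entry(
        NetworkAclId=nacl_id,
        RuleNumber=100,
        Protocol="6",                   # TCP
        RuleAction="allow",
        Egress=False,                   # inbound HTTPS from the allowed range
        CidrBlock="203.0.113.0/24",     # placeholder source range
        PortRange={"From": 443, "To": 443},
    )
    ec2.create_network_acl_entry(
        NetworkAclId=nacl_id,
        RuleNumber=100,
        Protocol="6",
        RuleAction="allow",
        Egress=True,                    # outbound rule for the return traffic
        CidrBlock="203.0.113.0/24",
        PortRange={"From": 1024, "To": 65535},  # ephemeral ports used by the client
    )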


Domain: Design Secure Architectures

28) A gaming company stores a large volume (terabytes to petabytes) of clickstream events data in their central S3 bucket. The company wants to analyze this clickstream data to generate business insights. Amazon Redshift, hosted securely in a private subnet of a VPC, is used for all data warehouse-related and analytical solutions. Using Amazon Redshift, the company wants to explore some solutions to securely run complex analytical queries on the clickstream data stored in S3 without transforming/copying or loading the data into Redshift. 
As a Solutions Architect, which of the following AWS services would you recommend for this requirement, knowing that security and cost are two major priorities for the company?

A. Create a VPC endpoint to establish a secure connection between Amazon Redshift and the central S3 bucket and use Amazon Athena to run the query
B. Use a NAT Gateway to connect Amazon Redshift to the internet and access the S3 static website. Use Amazon Redshift Spectrum to run the query
C. Create a VPC endpoint to establish a secure connection between Amazon Redshift and the central S3 bucket and use Amazon Redshift Spectrum to run the query
D. Use Site-to-Site VPN to set up a secure connection between Amazon Redshift and the central S3 bucket and use Amazon Redshift Spectrum to run the query

Answer: C

Explanation

Option A is incorrect because Amazon Athena can directly query data in S3. Hence this would bypass the use of Redshift, which is not what the customer requires. They insisted on using Amazon Redshift for the query purpose.
Option B is incorrect. Even though it is possible, a NAT Gateway would connect Redshift to the internet and make the solution less secure. Also, it is not a cost-effective solution. Remember that security and cost both are important for the company.
Option C is CORRECT because a VPC endpoint is a secure and cost-effective way to connect a VPC with Amazon S3 privately, and the traffic does not pass through the internet. Using Amazon Redshift Spectrum, one can run queries against the data stored in the S3 bucket without needing the data to be copied to Amazon Redshift. This meets both requirements for building a secure yet cost-effective solution.
Option D is incorrect because Site-to-Site VPN is used to connect an on-premises data center to AWS securely over the internet and is suitable for use cases like Migration, Hybrid Cloud, etc.

References: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html, https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html, https://docs.aws.amazon.com/redshift/latest/dg/c-getting-started-using-spectrum.html


29) The drug research team in a pharmaceutical company produces highly sensitive data and stores it in Amazon S3. The team wants to ensure top-notch security for their data while it is stored in Amazon S3. To have better control of the security, the team wants to use their own encryption key but doesn't wish to maintain any code to perform data encryption and decryption. Also, the team wants to be responsible for storing the secret key.
As a Solutions Architect, which of the following encryption types fulfills the above requirement?

A. Server-side encryption with customer-provided encryption keys (SSE-C).
B. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
C. Server-Side Encryption with KMS keys Stored in AWS Key Management Service (SSE-KMS)
D. Protect the data using Client-Side Encryption

Answer: A

Explanation

Data protection refers to the protection of data while in transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers).

While data in transit can be protected using Secure Sockets Layer/Transport Layer Security (SSL/TLS) or client-side encryption, you have the following options for protecting data at rest in Amazon S3:

Server-Side Encryption – Request Amazon S3 to encrypt your object before saving it on disks in its data centers and then decrypt it when you download the object.

There are three types of server-side encryption:

Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

Server-Side Encryption with KMS keys Stored in AWS Key Management Service (SSE-KMS)

Server-side encryption with customer-provided encryption keys (SSE-C).

Client-Side Encryption – Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

In this scenario, the customer is referring to data at rest.

Option A is CORRECT because data security is the top priority for the team, and they want to use their own encryption key. In this option, the customer provides the encryption key while S3 manages encryption and decryption. So there won't be any encryption code to maintain, and the customer will have better control in managing the key.
Option B is incorrect because each object is encrypted with a unique key when you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3). It also encrypts the key itself with a root key that rotates regularly.

This encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data, but it does not let customers create or manage the key. Hence this is not the choice here.

Option C is incorrect because Server-Side Encryption with AWS KMS keys (SSE-KMS) is similar to SSE-S3 but with some additional benefits and charges for using this service.

There are separate permissions for the use of a KMS key that provide protection against unauthorized access to your objects in Amazon S3.

This option is mainly rejected because AWS still manages the storage of the secret key or master key (in KMS), whereas encryption-decryption is managed by the customer. The expectation of the team in the above scenario is exactly the opposite.


Option D is incorrect because, in this case, one has to manage the encryption process, the encryption keys, and related tools. And it is mentioned clearly above that the team does not want that.

Reference:  https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html
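With SSE-C, the customer supplies the key on every request and S3 performs the encryption. A hedged boto3 sketch follows; the bucket, object key, and the 256-bit key are placeholders, and the same key must be supplied again to read the object:

    import os
    import boto3

    s3 = boto3.client("s3")
    customer_key = os.urandom(32)   # 256-bit key supplied and stored by the customer

    # Upload with SSE-C: S3 encrypts the object with the provided key but does not store the key
    s3.put_object(
        Bucket="research-data-bucket",          # placeholder bucket
        Key="trial-results.csv",                # placeholder object key
        Body=b"highly sensitive data",
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,            # boto3 handles encoding and the key MD5 header
    )

    # The same key must be provided to download the object
    obj = s3.get_object(
        Bucket="research-data-bucket",
        Key="trial-results.csv",
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,
    )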


Domain: Design Cost-Optimized Architectures

30) An online retail company stores a large amount of customer data (terabytes to petabytes) in Amazon S3. The company wants to derive some business insights out of this data. They plan to run SQL-based complex analytical queries on the S3 data directly and process it to generate business insights and build a data visualization dashboard for business and management review and decision-making. 
You have been hired as a Solutions Architect to provide a cost-effective and fast solution for this. Which of the following AWS services would you recommend?

A. Use Amazon Redshift Spectrum to run SQL-based queries on the data stored in Amazon S3 and then process it in Amazon Kinesis Data Analytics for creating a dashboard
B. Use Amazon Redshift to run SQL-based queries on the data stored in Amazon S3 and then process it on a custom web-based dashboard for data visualization
C. Use Amazon EMR to run SQL-based queries on the data stored in Amazon S3 and then process it in Amazon QuickSight for data visualization
D. Use Amazon Athena to run SQL-based queries on the data stored in Amazon S3 and then process it in Amazon QuickSight for a dashboard view

Answer: D

Explanation

Option A is incorrect because Amazon Kinesis Data Analytics cannot be used to generate business insights as mentioned in the requirement. Nor can it be used for data visualization.

One would be dependent on a BI tool after processing data from Amazon Kinesis Data Analytics. It is not a cost-optimized solution.

Option B is incorrect primarily due to the cost factor. Using Amazon Redshift for querying S3 data requires transferring and loading the data into Redshift instances. It also takes time and additional cost to create a custom web-based dashboard or data visualization tool.
Option C is incorrect because Amazon EMR is a big data platform for running large-scale distributed data processing jobs, interactive SQL queries, and machine learning (ML) applications using open-source analytics frameworks such as Apache Spark, Apache Hive, and Presto. It is mainly used to execute big data analytics, process real-time data streams, and accelerate data science and ML adoption. The requirement here is not to establish any of those solutions on a Big Data platform, so this option is not suitable. It is neither as quick nor as cost-effective as option D.
Option D is CORRECT because Amazon Athena is the most cost-effective solution to run SQL-based analytical queries on S3 data and then publish it to Amazon QuickSight for a dashboard view.

References: https://aws.amazon.com/kinesis/data-analytics/?nc=sn&loc=1, https://docs.aws.amazon.com/athena/latest/ug/when-should-i-use-ate.html, https://docs.aws.amazon.com/quicksight/latest/user/welcome.html
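A hedged boto3 sketch of option D: running an Athena query over the S3 data. The database, table, and output location are placeholders; QuickSight would then use Athena as its data source for the dashboard:

    import boto3

    athena = boto3.client("athena")

    # Start a SQL query against data catalogued over the S3 bucket
    response = athena.start_query_execution(
        QueryString="SELECT customer_id, SUM(order_total) FROM orders GROUP BY customer_id",
        QueryExecutionContext={"Database": "retail_analytics"},                 # placeholder database
        ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},  # placeholder bucket
    )
    print(response["QueryExecutionId"])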


Domain: Design Cost-Optimized Architectures

31) An organization has archived all its data to Amazon S3 Glacier for a long time. However, the organization needs to retrieve some portion of the archived data regularly. This retrieval process is quite random and incurs a sizable cost for the organization. As cost is the top priority, the organization wants to set a data retrieval policy to avoid any data retrieval charges.
Which one of the following retrieval policies does this in the best way?

A. No Retrieval Limit
B. Free Tier Only
C. Max Retrieval Rate
D. Standard Retrieval

Answer: B

Explanation

Option A is incorrect because No Retrieval Limit, the default data retrieval policy, is used when you do not want to set any retrieval quota. All valid data retrieval requests are accepted. This retrieval policy incurs an elevated cost to your AWS account in each region.
Option B is CORRECT because using a Free Tier Only policy, you can keep your retrievals within your daily AWS Free Tier allowance and not incur any data retrieval costs. With this policy, S3 Glacier synchronously rejects retrieval requests that exceed your AWS Free Tier allowance.
Option C is incorrect because you use the Max Retrieval Rate policy when you want to retrieve more data than what is in your AWS Free Tier allowance. The Max Retrieval Rate policy sets a bytes-per-hour retrieval-rate quota and ensures that the peak retrieval rate from all retrieval jobs across your account in an AWS Region does not exceed the bytes-per-hour quota you set. The Max Retrieval Rate policy is not in the free tier.
Option D is incorrect because Standard Retrieval is a process for data retrieval from S3 Glacier that takes around 12 hours to retrieve data. This retrieval type is charged and incurs expenses on the AWS account per region.

References: https://aws.amazon.com/premiumsupport/knowledge-center/glacier-retrieval-fees/, https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects-retrieval-options.html, https://docs.aws.amazon.com/amazonglacier/latest/dev/data-retrieval-policy.html
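A hedged boto3 sketch of option B: setting the Free Tier Only data retrieval policy for the account in the current region. The "-" account ID means the account that owns the credentials:

    import boto3

    glacier = boto3.client("glacier")

    # Restrict retrievals to the AWS Free Tier allowance so no retrieval charges are incurred
    glacier.set_data_retrieval_policy(
        accountId="-",   # "-" means the account that owns the credentials
        Policy={"Rules": [{"Strategy": "FreeTier"}]},
    )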


Domain: Design High-Performing Architectures

32) A gaming company plans to launch their new gaming application that will be available on both web and mobile platforms. The company is considering a GraphQL API to securely query and update data through a single endpoint from multiple databases, microservices, and a few other API endpoints. It also wants some portions of the data to be updated and accessed in real-time.
The customer prefers to build this new application largely on serverless components of AWS.
As a Solutions Architect, which of the following AWS services would you recommend to the customer to develop their GraphQL API?

A. Kinesis Data Firehose
B. Amazon Neptune
C. Amazon API Gateway
D. AWS AppSync

Answer: D

Explanation

Option A is incorrect because Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon OpenSearch, etc. It cannot create a GraphQL API.
Option B is incorrect. Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications. It is a database and cannot be used to create a GraphQL API.
Option C is incorrect because Amazon API Gateway supports RESTful APIs (HTTP and REST API) and WebSocket APIs. It is not meant for the development of GraphQL APIs.
Option D is CORRECT because with AWS AppSync one can create serverless GraphQL APIs that simplify application development by providing a single endpoint to securely query or update data from multiple data sources, and leverage GraphQL to implement engaging real-time application experiences.

References: https://aws.amazon.com/neptune/features/, https://aws.amazon.com/api-gateway/features/, https://aws.amazon.com/appsync/product-details/


Domain: Design High-Performing Architectures

33) A weather forecasting company comes up with the requirement of building a high-performance, highly parallel POSIX-compliant file system that stores data across multiple network file systems to serve thousands of simultaneous clients, driving millions of IOPS (Input/Output Operations per Second) with sub-millisecond latency. The company needs cost-optimized file system storage for short-term, processing-heavy workloads that can provide burst throughput to meet this requirement.
Which type of file system storage will suit the company in the best way?

A. FSx for Lustre with Deployment Type as Scratch File System
B. FSx for Lustre with Deployment Type as Persistent File System
C. Amazon Elastic File System (Amazon EFS)
D. Amazon FSx for Windows File Server

Answer: A

Explanation

File system deployment options for FSx for Lustre:

Amazon FSx for Lustre provides two file system deployment options: scratch and persistent.

Both deployment options support solid-state drive (SSD) storage. However, hard disk drive (HDD) storage is supported only in one of the persistent deployment types.

You choose the file system deployment type when you create a new file system using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the Amazon FSx for Lustre API.

Option A is CORRECT because FSx for Lustre with Deployment Type as Scratch File System is designed for temporary storage and shorter-term data processing. Data isn't replicated and doesn't persist if a file server fails. Scratch file systems provide high burst throughput of up to six times the baseline throughput of 200 MBps per TiB of storage capacity.

Option B is incorrect because FSx for Lustre with Deployment Type as Persistent File System is designed for longer-term storage and workloads. The file servers are highly available, and data is automatically replicated within the same Availability Zone in which the file system is located. The data volumes attached to the file servers are replicated independently from the file servers to which they are attached.

Option C is incorrect because Amazon EFS is not as effective as Amazon FSx for Lustre when it comes to HPC designs that deliver millions of IOPS (Input/Output Operations per Second) with sub-millisecond latency.
Option D is incorrect. The storage requirement here is for a POSIX-compliant file system to support Linux-based workloads. Hence Amazon FSx for Windows File Server is not suitable here.

Reference: https://docs.aws.amazon.com/fsx/latest/LustreGuide/using-fsx-lustre.html


Domain: Design Resilient Architectures

34) You are a solutions architect working for an online retailer. Your online website uses REST API calls via API Gateway and Lambda from your Angular SPA front-end to interact with your DynamoDB data store. Your DynamoDB tables are used for customer preferences, account, and product data. When your web traffic spikes, some requests get a 429 error response. What might be the reason your requests are returning a 429 error?

A. Your Lambda function has exceeded the runtime limit
B. The DynamoDB concurrency limit has been exceeded
C. Your Angular service failed to connect to your API Gateway REST endpoint
D. Your Angular service cannot handle the volume spike
E. Your API Gateway has exceeded the steady-state request rate and burst limits

Answer: A & E

Explanation

Option A is correct. When your traffic spikes, your Lambda function can exceed the limit set on the number of concurrent instances that can be run (the burst concurrency limit in the US: 3,000).
Option B is incorrect. When your table exceeds its provisioned throughput, DynamoDB will return a 400 error to the requesting service, in this case API Gateway. This will not result in the propagation of a 429 error response (too many requests) back to the Angular SPA service.
Option C is incorrect. If your Angular service fails to connect to your API Gateway REST endpoint, your code will not generate a 429 error response (too many requests).
Option D is incorrect. Since your Angular SPA code runs in the individual user's web browser, this option makes no sense.
Option E is correct. When your API Gateway request volume reaches the steady-state request rate and burst limit, API Gateway throttles your requests to protect your back-end services. When these requests are throttled, API Gateway returns a 429 error response (too many requests).

Reference: Please see the Amazon API Gateway developer guide titled Throttle API requests for better throughput (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html), the Towards Data Science article titled Full Stack Development Tutorial: Integrate AWS Lambda Serverless Service into Angular SPA (https://towardsdatascience.com/full-stack-development-tutorial-integrate-aws-lambda-serverless-service-into-angular-spa-abb70bcf417f), the Amazon API Gateway developer guide titled Invoking a REST API in Amazon API Gateway (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-call-api.html), the AWS Lambda developer guide titled Lambda function scaling (https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html), and the Amazon DynamoDB developer guide titled Error Handling with DynamoDB (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html)


Domain: Design High-Performing Architectures

35) You are a solutions architect working for a financial services firm. Your firm requires a very low latency response time for requests via API Gateway and Lambda integration to your securities master database. The securities master database, housed in Aurora, contains data about all of the securities your firm trades. The data consists of the security ticker, the trading exchange, the trade counterparty company for the security, etc. As this securities data is relatively static, you can improve the performance of your API Gateway REST endpoint by using API Gateway caching. You want your REST API calls for equity security request types and fixed income security request types to be cached separately. Which of the following options is the most efficient way to separate your cache responses by request type using API Gateway caching?

A. Payload compression
B. Custom domain name
C. API Stage
D. Query string

Answer: D

Explanation

Option A is incorrect. Payload compression is used to compress and decompress the payload to and from your API Gateway. It is not used to separate cache responses.
Option B is incorrect. Custom domain names are used to provide more readable URLs for the users of your APIs. They are not used to separate cache responses.
Option C is incorrect. An API stage is used to create a name for your API deployments. Stages are used to deploy your API in an optimal way.
Option D is correct. You can use your query string parameters as part of your cache key. This allows you to separate cache responses for equity requests from fixed income request responses.

References: Please see the Amazon API Gateway developer guide titled Enabling API caching to enhance responsiveness (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html), the Amazon API Gateway REST API Reference page titled Making HTTP Requests to Amazon API Gateway (https://docs.aws.amazon.com/apigateway/api-reference/making-http-requests/), the Amazon API Gateway developer guide titled Enabling payload compression for an API (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-gzip-compression-decompression.html), the Amazon API Gateway developer guide titled Setting up custom domain names for REST APIs (https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html), and the Amazon API Gateway developer guide titled Setting up a stage for a REST API (https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-stages.html)


Domain: Design Secure Applications and Architectures

36) You are a solutions architect working for a healthcare provider. Your company uses REST APIs to expose critical patient data to internal front-end systems used by doctors and nurses. The data for your patient records is stored in Aurora.
How can you ensure that your patient data REST endpoint is only accessed by your authorized internal users?

A. Run your Aurora DB cluster on an EC2 instance in a private subnet
B. Use a Gateway VPC Endpoint to make your REST endpoint private and only accessible from within your VPC
C. Use IAM resource policies to restrict access to your REST APIs by adding the aws:SourceVpce condition to the API Gateway resource policy
D. Use an Interface VPC Endpoint to make your REST endpoint private and only accessible from within your VPC and through your VPC endpoint
E. Use IAM resource policies to restrict access to your REST APIs by adding an aws:SourceArn condition to the API Gateway resource policy

Answer: C & D

Explanation

Option A is incorrect. Controlling access to your back-end database running on Aurora will not restrict access to your API Gateway REST endpoint. Access to your API Gateway REST endpoint must be controlled at the API Gateway and VPC level.
Option B is incorrect. The Gateway VPC Endpoint is only used for the S3 and DynamoDB services.
Option C is correct. You can make your REST APIs private by using the aws:SourceVpce condition in your API Gateway resource policy to restrict access to only your VPC endpoint.
Option D is correct. Use a VPC Interface Endpoint to restrict access to your REST APIs to traffic that arrives via the VPC endpoint.
Option E is incorrect. The aws:SourceArn condition key is not used to restrict access to traffic that arrives via the VPC endpoint.

References: Please see the Amazon API Gateway developer guide titled Creating a private API in Amazon API Gateway (https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-apis.html), the Amazon API Gateway developer guide titled Example: Allow private API traffic based on source VPC or VPC endpoint (https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-examples.html#apigateway-resource-policies-source-vpc-example), the Amazon Aurora user guide titled Amazon Aurora security (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.Security.html), the Amazon Aurora user guide titled Amazon Aurora DB clusters (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.html), the Amazon Aurora user guide titled Aurora DB instance classes (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html), the Amazon API Gateway developer guide titled AWS condition keys that can be used in API Gateway resource policies (https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-aws-condition-keys.html), and the Amazon Virtual Private Cloud AWS PrivateLink page titled VPC endpoints (https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html)


Domain: Design Resilient Architectures

37) You are a solutions architect working for a data analytics company that delivers analytics data to politicians who need the data to manage their campaigns. Political campaigns use your company's analytics data to decide where to spend their campaign funds to get the best results for their efforts. Your political campaign users access your analytics data through an Angular SPA via API Gateway REST endpoints. You need to manage the access and use of your analytics platform to ensure that the individual campaign data is kept separate. Specifically, you need to produce logs of all user requests and responses to those requests, including request payloads, response payloads, and error traces. Which type of AWS logging service should you use to achieve your goals?

A. Use CloudWatch access logging
B. Use CloudWatch execution logging
C. Use CloudTrail logging
D. Use CloudTrail execution logging

Answer: B

Explanation

Option A is incorrect. CloudWatch access logging captures which resources accessed an API and the method used to access the API. It is not used for execution traces, such as capturing request and response payloads.
Option B is correct. CloudWatch execution logging allows you to capture user request and response payloads as well as error traces.
Option C is incorrect. CloudTrail captures actions by users, roles, and AWS services. CloudTrail records all AWS account activity. CloudTrail does not capture error traces.
Option D is incorrect. CloudTrail does not have a feature called execution logging.

References: Please see the Amazon API Gateway developer guide titled Setting up CloudWatch logging for a REST API in API Gateway (https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-logging.html), and the AWS CloudTrail user guide titled How CloudTrail works (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html)


Domain: Design Secure Applications and Architectures

38) You are a solutions architect working for a social media company that provides a place for civil discussion of political and news-related events. Due to the ever-changing regulatory requirements and restrictions placed on social media apps that provide these services, you need to build your app in an environment where you can change your implementation instantly without updating code. You have chosen to build the REST API endpoints used by your social media app user interface code using Lambda. How can you securely configure your Lambda functions without updating code?

A. Pass environment variables to your Lambda functions via the request header sent to your API Gateway methods
B. Configure your Lambda functions to use key configuration
C. Use encryption helpers
D. Use Lambda layers
E. Use Lambda aliases

Answer: B & C

Explanation

Option A is incorrect. Sending environment variables to your Lambda function as request parameters would expose the environment variables as plain text. This is not a secure approach.
Option B is correct. Lambda key configuration allows you to have your Lambda functions use an encryption key. You create the key in AWS KMS. The key is used to encrypt the environment variables that you can use to change your function without deploying any code.
Option C is correct. Encryption helpers make your Lambda function more secure by allowing you to encrypt your environment variables before they are sent to Lambda.
Option D is incorrect. Lambda layers are used to package common code such as libraries, configuration files, or custom runtime images. Layers will not offer you the same flexibility as environment variables for managing change without deploying code.
Option E is incorrect. Lambda aliases are used to refer to a specific version of your Lambda function. You could switch between many versions of your Lambda function, but you would have to deploy new code to create a different version of your Lambda function.

References: Please watch the AWS Lo developer guide titled Data protection in AWS Lambda (https://docs.aws.amazon.com/lambda/latest/dg/security-dataprotection.html), the AWS Rated developer guide titled Lambda concepts (https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-concepts.html#gettingstarted-concepts-layer), the AWS Lambda developer guide titled Lambda function aliases (https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html), and the AWS Powered developer guide titled Using AWS Ambient environment variables (https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html)
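
To show the idea, here is a minimal sketch (the function name, variable name, and KMS key ARN are hypothetical) of updating a function’s configuration so its environment variables are encrypted with a customer managed KMS key, changing behavior with no code deployment:

```python
import boto3

lambda_client = boto3.client("lambda")

# Change application behavior through environment variables; the customer managed
# KMS key encrypts the variables at rest, and no code deployment is required.
lambda_client.update_function_configuration(
    FunctionName="social-feed-api",                            # hypothetical function
    Environment={"Variables": {"MODERATION_MODE": "strict"}},  # hypothetical flag
    KMSKeyArn="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",  # hypothetical key
)
```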


Domain: Design Secure Applications and Architectures

39) You are a solutions architect working for a media company that produces stock images and videos for sale via a mobile app and website. Your app and website allow users to gain access only to the stock content they have purchased. Your content is stored in S3 buckets. You need to restrict access to the multiple files that your users have purchased. Also, due to the nature of the stock content (purchasable by multiple users), you don’t want to alter the URLs for each stock item.
Which access control option best suits your scenario? 

A. Use CloudFront Signed URLs
B. Use S3 Presigned URLs
C. Use CloudFront Signed Cookies
D. Use S3 Signed Cookies

Answer: C

Explanation

Option A is incorrect. CloudFront signed URLs allow you to restrict access to individual files. This requires you to change your content URLs for each customer access.
Option B is incorrect. S3 presigned URLs require you to change your content URLs. A presigned URL also expires after its defined expiration date.
Option C is correct. CloudFront signed cookies allow you to control access to multiple content files, and you don’t have to change your URLs for each customer access.
Option D is incorrect. There is no S3 Signed Cookies feature.

References: Please see the Amazon CloudFront developer guide titled Using signed cookies (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html), the Amazon Simple Storage Service user guide titled Sharing an object with a presigned URL (https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html), the Amazon Simple Storage Service user guide titled Using presigned URLs (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html#PresignedUrlUploadObject-LimitCapabilities), and the Amazon CloudFront developer guide titled Choosing between signed URLs and signed cookies (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-choosing-signed-urls-cookies.html)


Domain : Design High-Performing Architectures

40) A company is developing a web application to be hosted in AWS. This application needs a data store for session data. 
As an AWS Solutions Architect, what would you recommend as an ideal option to store session data?

A. CloudWatch
B. DynamoDB
C. Elastic Load Balancing
D. ElastiCache
E. Storage Gateway

Answer: B & D

Explanation

DynamoDB and ElastiCache are perfect options for storing session data.

AWS Documentation mentions the following on Amazon DynamoDB:

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.

For more information on AWS DynamoDB, please visit the following URL: https://aws.amazon.com/dynamodb/

AWS Documentation mentions the following on AWS ElastiCache:

AWS ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution while removing the complexity associated with the deployment and management of a distributed cache environment.

For more information on AWS ElastiCache, please visit the following URL: https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/WhatIs.html

Option A is incorrect. AWS CloudWatch offers cloud monitoring services for the customers of AWS resources.
Option C is incorrect. AWS Elastic Load Balancing automatically distributes incoming application traffic across multiple targets.
Option E is incorrect. AWS Storage Gateway is a hybrid storage service that enables your on-premises applications to use AWS cloud storage seamlessly.
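
As an illustration of the DynamoDB approach, here is a minimal sketch (the table and attribute names are hypothetical, and the table is assumed to exist with a TTL attribute enabled) of writing and reading a session item:

```python
import time
import uuid

import boto3

sessions = boto3.resource("dynamodb").Table("user-sessions")   # hypothetical table

session_id = str(uuid.uuid4())
sessions.put_item(Item={
    "session_id": session_id,                  # partition key
    "user_id": "user-123",                     # hypothetical user
    "expires_at": int(time.time()) + 3600,     # TTL attribute: expire the session after one hour
})

item = sessions.get_item(Key={"session_id": session_id}).get("Item")
print(item)
```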


Domain : Design High-Performing Architectures

41) You are creating a new architecture for a financial firm. The architecture consists of several EC2 instances of the same type and size (M5.large). In this architecture, all the EC2 instances communicate with each other. Business stakeholders have asked you to create this architecture keeping low latency in mind as a priority. Which placement group option would you suggest for the instances?

A. Partition Placement Group
B. Clustered Placement Group
C. Spread Placement Group
D. Enhanced Networking Placement Group

Answer: B

Explanation

Option A is incorrect. Partition Placement Groups distribute the instances across different partitions. The partitions are placed in the same AZ but do not share the same rack. This type of placement group does not provide low-latency throughput to the instances.
Option B is CORRECT. A Clustered Placement Group places all the instances on the same rack. This placement group option provides 10 Gbps networking between instances (internet connectivity in the instances has a maximum of 5 Gbps). This type of placement group is perfect for applications that need low latency.
Option C is incorrect. Spread Placement Groups place the instances on separate racks in the same AZ. These types of placement groups do not provide low-latency throughput to the instances.
Option D is incorrect. An Enhanced Networking Placement Group does not exist.

Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
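
For illustration, a minimal sketch (the group name, AMI ID, and instance counts are hypothetical) of creating a cluster placement group and launching the M5 instances into it:

```python
import boto3

ec2 = boto3.client("ec2")

# A cluster placement group keeps the instances physically close for low latency
ec2.create_placement_group(GroupName="trading-cluster", Strategy="cluster")   # hypothetical name

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # hypothetical AMI
    InstanceType="m5.large",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "trading-cluster"},
)
```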


Domain : Design High-Performing Architectures

42) Your team is designing a high-performance computing (HPC) application. The application solves complex, compute-intensive problems and needs a high-performance, low-latency Lustre file system. You need to configure this file system in AWS at a low cost. Which method is the most suitable?

A. Create a Lustre file system through Amazon FSx
B. Launch a high-performance Lustre file system in Amazon EBS
C. Create a high-speed volume cluster in an EC2 placement group
D. Launch the Lustre file system from AWS Marketplace

Answer: A

Explanation

The Lustre file system is an open-source, parallel file system that can be used for HPC applications. Refer to http://lustre.org/ for an introduction. With Amazon FSx, users can quickly launch a Lustre file system at a low cost.

Option A is CORRECT: Amazon FSx supports Lustre file systems, and users pay for only the resources they use.
Option B is incorrect: Though users may be able to configure a Lustre file system through EBS, it requires lots of extra configuration. Option A is more straightforward.
Option C is incorrect: Because an EC2 placement group does not provide a Lustre file system.
Option D is incorrect: Because products in AWS Marketplace are not as cost-effective. For Amazon FSx, there are no minimum fees or set-up charges. Check its pricing in the reference below.

Reference: https://aws.amazon.com/fsx/lustre/pricing/.
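
For illustration, a minimal sketch (the subnet ID is hypothetical) of creating a scratch FSx for Lustre file system with boto3:

```python
import boto3

fsx = boto3.client("fsx")

response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                          # GiB; smallest scratch capacity
    SubnetIds=["subnet-0123456789abcdef0"],        # hypothetical subnet
    LustreConfiguration={"DeploymentType": "SCRATCH_2"},
)
print(response["FileSystem"]["DNSName"])
```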


Domain : Design High-Performing Architectures

43) A company has an application hosted in AWS. This application consists of EC2 instances that sit behind an ELB. The following are the requirements from an administrative perspective:
a) Ensure that notifications are sent when the read requests go beyond 1000 requests per minute.
b) Ensure that notifications are sent when the latency goes beyond 10 seconds.
c) Monitor all AWS API request activities on the AWS resources.
Which of the following can be used to satisfy these requirements?

A. Use CloudTrail to monitor the API Activity
B. Use CloudWatch Logs to monitor the API Activity
C. Use CloudWatch Metrics for the metrics that need to be monitored as per the requirement and set up an alarm activity to send out notifications when the metric reaches the set threshold limit
D. Use custom log software to monitor the latency and read requests to the ELB

Answer: A & C

Explanation

Option A is correct. CloudTrail is a web service that records AWS API calls for all the resources in your AWS account. It also delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the AWS API call, the source IP address, the request parameters, and the response elements returned by the service.
Option B is incorrect because CloudWatch Logs is used to monitor log files from other services. CloudWatch Logs and CloudWatch are different.

Amazon CloudWatch Logs can be used to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources. CloudWatch Logs reports the data to a CloudWatch metric.

Instead, you can monitor Amazon EC2 API requests using Amazon CloudWatch.

Option C is correct. Use CloudWatch Metrics for the metrics that need to be monitored as per the requirement. Set up an alarm activity to send out notifications when the metric reaches the set threshold limit.
Option D is incorrect because there is no need to use custom log software, as you can set up CloudWatch alarms based on CloudWatch Metrics.

References: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html, https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/Welcome.html, https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-cloudwatch-metrics.html
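
For illustration, a minimal sketch (the alarm name, load balancer name, and SNS topic ARN are hypothetical) of a CloudWatch alarm on the Classic ELB Latency metric; an equivalent alarm on the RequestCount metric with a 1000-per-minute threshold would cover the first requirement:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="elb-high-latency",                                          # hypothetical
    Namespace="AWS/ELB",
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-classic-elb"}],  # hypothetical ELB
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,
    Threshold=10,                                                          # seconds
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],        # hypothetical SNS topic
)
```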


Domain : Design Resilient Building

44) You are creating several EC2 instances for a new application. The instances need to communicate with each other. For better performance of the application, both low network latency and high network throughput are required for the EC2 instances. All instances should be launched in a single Availability Zone. How would you configure this?

A. Launch all EC2 instances in a placement group using a Cluster placement strategy
B. Auto-assign a public IP when launching the EC2 instances
C. Launch EC2 instances in an EC2 placement group and select the Spread placement strategy
D. When launching the EC2 instances, select an instance type that supports enhanced networking

Answer: A

Explanation

The Cluster placement strategy helps to achieve a low-latency and high-throughput network.

Option A is CORRECT: The Cluster placement strategy can optimize the network performance among EC2 instances. The strategy can be selected when creating a placement group.

Option B is incorrect: Because a public IP cannot improve the network performance.
Option C is incorrect: The Spread placement strategy is recommended when several critical instances should be kept separate from each other. This strategy should not be used in this scenario.
Option D is incorrect: The description in this option is inaccurate. The correct method is creating a placement group with a suitable placement strategy.

Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-limitations-partition


Domain : Design High-Performing Architectures

45) You are a solutions architect working for a regional bank that is moving its data center to the AWS cloud. You need to migrate your data center storage to a new S3 and EFS data store in AWS. Since your data includes Personally Identifiable Information (PII), you have been asked to transfer data from your data center to AWS without traveling over the public internet. Which option gives you the most effective solution that meets your requirements?

A. Migrate your on-prem data to AWS using the DataSync agent and a NAT Gateway
B. Create a public VPC endpoint, and configure the DataSync agent to communicate with the DataSync public service endpoints via this VPC endpoint using Direct Connect
C. Migrate your on-prem data to AWS using the DataSync agent and an Internet Gateway
D. Create a private VPC endpoint, and configure the DataSync agent to communicate with the DataSync private service endpoints via the VPC endpoint using VPN

Answer: D

Explanation

AWS documentation mentions the following:

While configuring this setup, you’ll place a private VPC endpoint in your VPC that connects to the DataSync service. This endpoint will be used for communication between your agent and the DataSync service.

In addition, with each transfer task, four elastic network interfaces (ENIs) will automatically be placed in your VPC. The DataSync agent will send traffic through these ENIs in order to transfer data from your on-premises storage into AWS.

When you use DataSync with a private VPC endpoint, the DataSync agent can communicate directly with AWS without the need to cross the public internet.

Option A is incorrect. To ensure your data isn’t sent over the public internet, you need to use a VPC endpoint to connect the DataSync agent to the DataSync service endpoints.
Option B is incorrect. You need to create a private VPC endpoint, not a public VPC endpoint, to keep your data from traveling over the public internet.
Option C is incorrect. Using the Internet Gateway by definition sends your traffic over the public internet, which is not the solution as per the requirement.
Option D is correct. Using a private VPC endpoint and the DataSync private service endpoints to communicate over your VPN will give you the non-internet transfer you require.

References: Please see the AWS DataSync user guide titled Using AWS DataSync in a virtual private cloud (https://docs.aws.amazon.com/datasync/latest/userguide/datasync-in-vpc.html), and the AWS Storage Blog titled Transferring files from on-premises to AWS and back without leaving your VPC using AWS DataSync (https://aws.amazon.com/blogs/storage/transferring-files-from-on-premises-to-aws-and-back-without-leaving-your-vpc-using-aws-datasync/)
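
For illustration, a minimal sketch (the region, VPC, subnet, and security group IDs are hypothetical) of creating the interface VPC endpoint for DataSync that the agent would then be configured to use over the VPN:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                     # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.datasync",
    SubnetIds=["subnet-0123456789abcdef0"],            # hypothetical subnet reachable over the VPN
    SecurityGroupIds=["sg-0123456789abcdef0"],         # hypothetical security group
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```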


Domain : Design Resilient Architectures

46) You currently have EC2 instances running in multiple Availability Zones in an AWS region. You need to create NAT gateways for your private instances to access the internet. How would you set up the NAT gateways so that they are highly available?

A. Create two NAT Gateways and place them behind an ELB
B. Create a NAT Gateway in each Availability Zone
C. Create a NAT Gateway in another region
D. Use Auto Scaling groups to scale the NAT Gateways

Answer: B

Explanation

Option A is incorrect because you cannot create such a configuration.
Option B is CORRECT because this is recommended by AWS. With this option, if a NAT gateway’s Availability Zone is down, resources in the other Availability Zones can still access the internet.
Option C is incorrect because the EC2 instances are in one AWS region, so there is no need to create a NAT Gateway in another region.
Option D is incorrect because you cannot create an Auto Scaling group for NAT Gateways.

For more information on the NAT Gateway, please refer to the below URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html
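
For illustration, a minimal sketch (the subnet IDs are hypothetical) that allocates an Elastic IP and creates one NAT gateway per Availability Zone; each private subnet’s route table would then send 0.0.0.0/0 to the NAT gateway in its own AZ:

```python
import boto3

ec2 = boto3.client("ec2")

# One public subnet per Availability Zone (hypothetical IDs)
public_subnets = ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"]

for subnet_id in public_subnets:
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(SubnetId=subnet_id, AllocationId=eip["AllocationId"])
    print(nat["NatGateway"]["NatGatewayId"])
```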


Domain : Design Secure Architectures

47) Your company has designed an app and requires it to store data in DynamoDB. The company has registered the app with identity providers so that users can sign in using third parties like Google and Facebook. What must be in place so that the app can obtain temporary credentials to access DynamoDB?

A. Multi-factor authentication must be used to access DynamoDB
B. AWS CloudTrail needs to be enabled to audit usage
C. An IAM role allowing the app to have access to DynamoDB
D. The user must additionally log into the AWS console to gain database access

Answer: C

Explanation

Option C is correct. The user will have to assume a role that has the permissions to interact with DynamoDB.
Option A is incorrect. Multi-factor authentication is available but not required.
Option B is incorrect. CloudTrail is recommended for auditing but is not required.
Option D is incorrect. A second log-in to the management console is not required.

References: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-identity-federation.html, https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html, https://aws.amazon.com/articles/web-identity-federation-with-mobile-applications/
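
For illustration, a minimal sketch (the role ARN and token value are hypothetical placeholders) of exchanging an identity provider token for temporary credentials and using them against DynamoDB:

```python
import boto3

sts = boto3.client("sts")

id_token_from_provider = "<OIDC token returned by Google or Facebook>"   # placeholder

resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111122223333:role/app-dynamodb-access",   # hypothetical role trusted by the provider
    RoleSessionName="mobile-user-session",
    WebIdentityToken=id_token_from_provider,
)
creds = resp["Credentials"]

dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```

In practice, a mobile app would typically let an Amazon Cognito identity pool broker this token-for-credentials exchange rather than calling STS directly.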


Domain : Design High-Performing Architectures

48) A company has a lot of data hosted on their on-premises infrastructure. Running out of storage space, the company wants a quick-win solution using AWS. There should be low latency for the frequently accessed data. Which of the following would allow the easy extension of their data infrastructure to AWS?

A. The company could start using Gateway Cached Volumes
B. The company could start using Gateway Stored Volumes
C. The company could start using the Amazon S3 Glacier Deep Archive storage class
D. The company could start using Amazon S3 Glacier

Answer: A

Explanation

Volume Gateways with Cached Volumes can be used to start storing data in S3.

AWS Documentation mentions the following:

You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer substantial cost savings on primary storage and minimize the need to scale your storage on-premises. They also retain low-latency access to your frequently accessed data.

This is the difference between Cached and Stored volumes:

  • Cached volumes – You store your data in S3 and retain a copy of frequently accessed data subsets locally. Cached volumes offer substantial cost savings on primary storage and “minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.”
  • Stored volumes – If you need low-latency access to your entire data set, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. “This configuration provides durable and inexpensive off-site backups that you can recover to your local data center or Amazon EC2.” For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2.

As described in the question: the company wants a quick-win solution to store data in AWS, avoiding scaling of the on-premises setup, rather than backing up the data.

In the question, it is mentioned that “A company has a lot of data hosted on their on-premises infrastructure.” To extend from on-premises to cloud infrastructure, they can use AWS Storage Gateway.

Options C and D are incorrect as they refer to S3 storage classes, whereas the requirement is how to transfer or move data from on-premises to cloud storage.

Reference: https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html


Domain : Design Secure Architectures

49) A start-up firm has a corporate office in New York & regional offices in Washington & Chicago. These offices are interconnected over internet links. Recently they have migrated several application servers to EC2 instances launched in the AWS us-east-1 region. The IT team located at the corporate office requires secure access to these servers for initial testing & performance checks before go-live of the new application. Since the go-live date is approaching soon, the IT team is looking for a quick connection to be established. As an AWS consultant, which connectivity option will you suggest as a cost-effective & quick way to establish secure connectivity from on-premises to the servers launched in AWS?

A. Use AWS Direct Connect to establish IPSEC connectivity from on-premises to the VGW
B. Install a third-party software VPN appliance from AWS Marketplace on an EC2 instance to create a VPN connection to the on-premises network
C. Use Hardware VPN over AWS Direct Connect to establish IPSEC connectivity from on-premises to the VGW
D. Use AWS Site-to-Site VPN to establish an IPSEC VPN connection between the VPC and the on-premises network

Answer: D

Explanation

Using AWS Site-to-Site VPN is the fastest & most cost-effective way of establishing IPSEC connectivity from on-premises to AWS. The IT team can quickly set up a VPN connection with a VGW in the us-east-1 region so that internal users can seamlessly connect to resources hosted on AWS.

Option A is incorrect as AWS Direct Connect does not offer IPSEC connectivity on its own. It is also not a quick way to establish connectivity.
Option B is incorrect as you would need to look for a third-party solution from AWS Marketplace. Furthermore, it may not be as cost-efficient as option D.
Option C is incorrect as, although this would provide high-performance secure IPSEC connectivity from on-premises to AWS, it is not a quick way to establish connectivity. It could take weeks or months to provision the AWS Direct Connect connection. AWS Direct Connect is also not cost-effective.

For more information on using AWS Direct Connect & VPN, refer to the following URL: https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/network-to-amazon-vpc-connectivity-options.html
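
For illustration, a minimal sketch (the VPC ID, office public IP, and ASN are hypothetical) of the pieces behind option D: a virtual private gateway, a customer gateway for the office router, and the Site-to-Site VPN connection between them:

```python
import boto3

ec2 = boto3.client("ec2")

# Virtual private gateway attached to the VPC
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0123456789abcdef0")  # hypothetical VPC

# Customer gateway representing the corporate office's internet-facing router
cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000)["CustomerGateway"]

# The Site-to-Site VPN connection (static routing for simplicity)
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```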


Domain : Design Cost-Optimized Architectures

50) A media firm is storing all its old videos in S3 Glacier Deep Archive. Due to a shortage of fresh video content, the channel has decided to reprocess all these old videos. Since these are old videos, the channel is not certain of their popularity & the response from users. The Channel Head wants to make sure that these huge files do not shoot up the budget. For this, as an AWS consultant, you advise them to use the S3 Intelligent-Tiering storage class. The Operations Team is concerned about moving these files to the S3 Intelligent-Tiering storage class. Which of the following actions can be taken to move objects from Amazon S3 Glacier Deep Archive to the S3 Intelligent-Tiering storage class?

A. Use the Amazon S3 console to copy these objects from S3 Glacier Deep Archive to the required S3 Intelligent-Tiering storage class
B. Use the Amazon S3 Glacier console to restore objects from S3 Glacier Deep Archive & then copy these objects to the required S3 Intelligent-Tiering storage class
C. Use the Amazon S3 console to restore objects from S3 Glacier Deep Archive & then copy these objects to the required S3 Intelligent-Tiering storage class
D. Use the Amazon S3 Glacier console to copy these objects to the required S3 Intelligent-Tiering storage class

Answer: C

Explanation

To move objects from Glacier Deep Archive to a different storage class, you first need to restore them to their original locations using the Amazon S3 console & then use a lifecycle policy or copy operation to move the objects to the required S3 Intelligent-Tiering storage class.

Options A & D are incorrect as objects in Glacier Deep Archive cannot be directly moved to another storage class. They need to be restored first & then copied to the desired storage class.
Option B is incorrect as the Amazon S3 Glacier console is used to access vaults & the archives in them; it cannot be used to restore S3 objects stored in the S3 Glacier Deep Archive storage class.

For more information on moving objects between S3 storage classes, refer to the following URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-considerations.html
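
For illustration, a minimal sketch (the bucket and key are hypothetical) of the restore-then-copy flow with boto3; the copy can only succeed after the restore has completed:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "media-video-archive", "old-videos/episode-001.mp4"   # hypothetical

# 1. Restore a temporary copy of the archived object (available for 7 days)
s3.restore_object(
    Bucket=bucket,
    Key=key,
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)

# 2. Once restored, copy the object over itself into the Intelligent-Tiering storage class
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    StorageClass="INTELLIGENT_TIERING",
)
```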


Domain : Design Cost-Optimized Architectures

51) You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. Users fetch the text file frequently. You do not know the storage capacity requirements. Which storage option would be both cost-efficient and highly available in this situation?

A. Multiple Amazon EBS Volumes with snapshots
B. A single Amazon Glacier Vault
C. A single Amazon S3 bucket
D. Multiple instance stores

Answer: C

Explanation

Amazon S3 is the perfect storage solution for the audio and text files. It is a highly available and durable storage option.

Option A is incorrect because storing files in EBS is not cost-efficient.
Option B is incorrect because the files need to be retrieved frequently, so Glacier is not suitable.
Option D is incorrect because instance stores are not highly available compared with S3.

For more information on Amazon S3, please visit the following URL: https://aws.amazon.com/s3/


Domain : Design Cost-Optimized Architectures

52) A large amount of structured data is stored in Amazon S3 using the JSON format. You need to use a service to analyze the S3 data directly with standard SQL. In the meantime, the data should be easily visualized through data dashboards. Which of the following service combinations is the most appropriate?

A. Amazon Athena and Amazon QuickSight
B. AWS Glue and Amazon Athena
C. AWS Glue and Amazon QuickSight
D. Amazon Kinesis Data Streams and Amazon QuickSight

Answer: A

Explanation

Option A is CORRECT because Amazon Athena is the most suitable service to run ad-hoc queries to analyze data in S3. Amazon Athena is serverless, and you are charged for the amount of scanned data. Besides, Athena can integrate with Amazon QuickSight, which visualizes the data via dashboards.
Option B is incorrect because AWS Glue is an ETL (extract, transform, and load) service that organizes, cleanses, validates, and formats data in a data warehouse. This service is not required in this scenario.
Option C is incorrect because it is the same as Option B. AWS Glue is not required.
Option D is incorrect because, with Amazon Kinesis Data Streams, users cannot perform queries on the S3 data through standard SQL.

References: https://aws.amazon.com/athena/pricing/, https://docs.aws.amazon.com/quicksight/latest/user/create-a-data-set-athena.html
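
For illustration, a minimal sketch (the database, table, and result bucket names are hypothetical, and the table is assumed to be defined over the JSON objects in S3) of running a standard SQL query with Athena:

```python
import boto3

athena = boto3.client("athena")

query = athena.start_query_execution(
    QueryString="SELECT customer_id, SUM(amount) AS total FROM sales_json GROUP BY customer_id",
    QueryExecutionContext={"Database": "analytics_db"},                           # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},  # hypothetical bucket
)
print(query["QueryExecutionId"])
```

The same table can then be selected as an Athena data source when building a QuickSight dashboard.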


Domain : Design Cost-Optimized Architectures

53) To manage a large number of AWS accounts in a better manner, you create a new AWS Organization and invite multiple accounts. You only enable “Consolidated billing” out of the two feature sets (All features and Consolidated billing) available in AWS Organizations. Which of the following is the primary benefit of using the Consolidated billing feature?

A. Apply SCPs to restrict the services that IAM users can access
B. Configure tag policies to maintain consistent tags for resources in the organization’s accounts
C. Configure a policy to prevent IAM users in the organization from disabling AWS CloudTrail
D. Combine the usage across all accounts to share the volume pricing discounts

Answer: D

Explanation

Available feature sets in AWS Organizations:

  • All features – The default feature set that is available to AWS Organizations. It includes all the functionality of consolidated billing, plus advanced features that give you more control over the accounts in your organization.
  • Consolidated billing – This feature set provides shared billing functionality but does not include the more advanced features of AWS Organizations.

Option A is incorrect: Because SCPs are part of the advanced features which belong to “All features”.
Option B is incorrect: Because tag policies can only be applied under the “All features” feature set.
Option C is incorrect: That is implemented using SCPs and is not supported in “Consolidated billing”.
Option D is CORRECT: The “Consolidated billing” feature set provides shared billing functionality.

For the differences between “Consolidated billing” and “All features”, refer to the link below: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html#feature-set-cb-only

Domain : Design Cost-Optimized Architectures

54) A large manufacturing corporation is looking at tracking IoT sensor data collected from thousands of pieces of equipment across multiple factory units. This is extremely high-volume traffic that needs to be collected in real time and should be efficiently visualized. The company is looking for a suitable database in the AWS cloud for storing this sensor data.

Which of the following cost-effective databases can be selected for this purpose?

A. Send sensor data to Amazon RDS (Relational Database Service) using Amazon Kinesis and visualize the data using Amazon QuickSight.

B. Send sensor data to Amazon Neptune using Amazon Kinesis and visualize the data using Amazon QuickSight.

C. Send sensor data to Amazon DynamoDB using Amazon Kinesis and visualize the data using Amazon QuickSight.

D. Send sensor data to Amazon Timestream using Amazon Kinesis and visualize the data using Amazon QuickSight.

Answer: D

Explanation

Amazon Timestream is the most suitable serverless time series database for IoT and operational applications. It can store trillions of events from these sources. Storing this time series data in Amazon Timestream guarantees faster processing and is more cost-effective than storing such data in a regular relational database. 

Amazon Timestream is integrated with data collection services in AWS such as Amazon Kinesis and Amazon MSK, and with open-source tools such as Telegraf. Data stored in Amazon Timestream can be further visualized using Amazon QuickSight. It can also be integrated with Amazon SageMaker for machine learning.  

Option A is incorrect as Amazon RDS (Relational Database Service) is best suited for traditional applications such as CRM (customer relationship management) and ERP (enterprise resource planning). Using Amazon RDS for storing IoT sensor data will be costly and slow as compared to Amazon Timestream.

Option B is incorrect as Amazon Neptune is suitable for graph databases querying large amounts of highly connected data. Amazon Neptune is not a suitable option for storing IoT sensor data.

Option C is incorrect as Amazon DynamoDB is suitable for web applications needing a key-value NoSQL database. Using Amazon DynamoDB for storing IoT sensor data will be costly.

For more information on Amazon Timestream, refer to the following URLs,

https://aws.amazon.com/products/databases/

https://aws.amazon.com/timestream/features/?nc=sn&loc=2
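
For illustration, a minimal sketch (the database, table, and dimension names are hypothetical, and the Timestream database and table are assumed to already exist) of writing one sensor reading with the Timestream write API:

```python
import time

import boto3

tsw = boto3.client("timestream-write")

tsw.write_records(
    DatabaseName="factory_iot",                 # hypothetical database
    TableName="sensor_readings",                # hypothetical table
    Records=[{
        "Dimensions": [
            {"Name": "factory", "Value": "plant-7"},
            {"Name": "sensor_id", "Value": "temp-042"},
        ],
        "MeasureName": "temperature_c",
        "MeasureValue": "73.4",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),   # milliseconds since the epoch
    }],
)
```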

Latest Updated Questions 2023

Domain: Design High-Performing Architectures

55) A start-up firm is using a JSON-based database for content management. They are planning to rehost this database to the AWS cloud from on-premises. For this, they are searching for a suitable option to deploy this database, which can handle millions of requests per second with low latency. The database should have a flexible schema that can store any type of user data from multiple sources and should effectively handle similar data stored in different formats. 

Which of the following options can be selected to meet the requirements?  

A. Use Amazon DocumentDB (with MongoDB compatibility) in the AWS cloud to rehost the database from the on-premises location.

B. Use Amazon Neptune in the AWS cloud to rehost the database from the on-premises location.

C. Use Amazon Timestream in the AWS cloud to rehost the database from the on-premises location.

D. Use Amazon Keyspaces in the AWS cloud to rehost the database from the on-premises location.

Answer: A

Explanation

Amazon DocumentDB is a fully managed database that supports JSON workloads for content management in the AWS cloud. Amazon DocumentDB supports millions of requests per second with low latency. Amazon DocumentDB features a flexible schema that can store data with different attributes and data values. Due to the flexible schema, it is best suited for content management that allows users to store different data types such as images, videos, and comments.

With relational databases, storing different kinds of documents requires either separate tables for each document type or a single table with unused fields stored as null values. Amazon DocumentDB is a semi-structured database that supports documents of different formats in the same collection without null values.   

Option B is incorrect as Amazon Neptune is suitable for graph databases querying large amounts of highly connected data. It is not a suitable option for content management with different data formats.

Option C is incorrect as Amazon Timestream is suitable for time series data such as IoT sensor data, DevOps, or clickstream data. It is not a suitable option for content management with different data formats.

Option D is incorrect as Amazon Keyspaces is a highly available and scalable database compatible with Apache Cassandra. It is not a suitable option for content management with different data formats.

For more information on the features of Amazon DocumentDB, refer to the following URLs,

https://aws.amazon.com/documentdb/features/

https://aws.amazon.com/products/databases/

https://docs.aws.amazon.com/documentdb/latest/developerguide/document-database-use-cases.html

Domain: Design High-Performing Architectures

56). A start-up firm has created Account A with an Amazon RDS DB instance as a database for a web-based application. The operations team regularly creates manual snapshots of this DB instance in unencrypted format. The Projects Team plans to create a DB instance in other accounts using these snapshots. They are looking for your suggestion for sharing this snapshot and restoring it to DB instances in other accounts. While sharing this snapshot, it must permit only the specific accounts specified by the project team to restore DB instances from the snapshot.

What actions can be initiated for this purpose?

A. From Account A, share the manual snapshot by setting the ‘DB snapshot’ visibility option as private. In other accounts, directly restore DB instances from the snapshot.

B. From Account A, share the manual snapshot by setting the ‘DB snapshot’ visibility option as public. In other accounts, directly restore DB instances from the snapshot.

C. From Account A, share the manual snapshot by setting the ‘DB snapshot’ visibility option as private. In other accounts, create a copy of the snapshot and then restore it to DB instances from that copy.

D. From Account A, share the manual snapshot by setting the ‘DB snapshot’ visibility option as public. In other accounts, create a copy of the snapshot and then restore it to DB instances from that copy.

Correct Answer : A

Explanation

A DB snapshot can be shared with other authorized AWS accounts, up to 20 accounts. These snapshots can be in either encrypted or unencrypted format. 

For manual snapshots in an unencrypted format, accounts can directly restore a DB instance from the snapshot. 

For manual snapshots in an encrypted format, accounts first need to copy the snapshot and then restore it to a DB instance. 

While sharing a manual unencrypted snapshot, all accounts can use the snapshot to restore a DB instance when DB snapshot visibility is set to public. 

While sharing a manual unencrypted snapshot, only the specified accounts can restore a DB instance when DB snapshot visibility is set to private. 

In the case of manual encrypted snapshots, the only available option for DB snapshot visibility is private, as encrypted snapshots cannot be made public.

Option B is incorrect as marking DB snapshot visibility as public is not an ideal option since the snapshot needs to be shared only with specific accounts. Marking DB snapshot visibility as public will give all Amazon accounts access to the manual snapshot, and they will be able to restore DB instances using this snapshot. 

Option C is incorrect because DB instances can be directly restored from the snapshot for a manual unencrypted snapshot. There is no need to create a copy of the snapshot to restore a DB instance.

Option D is incorrect as, already discussed, marking DB snapshot visibility as public is not an ideal option. For a manual unencrypted snapshot, DB instances can be directly restored from the snapshot. 

For more information on sharing Amazon RDS snapshots, refer to the following URLs,

https://aws.amazon.com/premiumsupport/knowledge-center/rds-snapshots-share-account/

https://docs.amazonaws.cn/en_us/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html
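
For illustration, a minimal sketch (the snapshot identifiers and account IDs are hypothetical) of sharing the snapshot with one specific account and restoring it there:

```python
import boto3

rds = boto3.client("rds")

# In Account A: share the manual snapshot only with a specific account ("all" would make it public)
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="webapp-manual-2024-01-15",      # hypothetical snapshot
    AttributeName="restore",
    ValuesToAdd=["222233334444"],                          # hypothetical target account
)

# In the target account: restore a DB instance directly from the shared unencrypted snapshot
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="webapp-db-copy",
    DBSnapshotIdentifier="arn:aws:rds:us-east-1:111122223333:snapshot:webapp-manual-2024-01-15",
)
```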

Domain: Design Resilient Architectures

57). An electronics manufacturing company plans to deploy a web application using the Amazon Aurora database. Management is concerned about disk failures with DB instances and needs your advice on increasing reliability using Amazon Aurora’s automatic features. In the event of disk failures, data loss should be avoided, reducing the additional work needed to perform a point-in-time restoration.

What design suggestions can be provided to increase reliability? 

A. Add Aurora Replicas to the primary DB instance by placing them in different regions. Aurora’s crash recovery feature will avoid data loss post disk failure.

B. Add Aurora Replicas to the primary DB instance by placing them in different availability zones. Aurora’s storage auto-repair feature will avoid data loss post disk failure.

C. Add Aurora Replicas to the primary DB instance by placing them in different zones. Aurora’s survivable page cache feature will avoid data loss post disk failure.

D. Add Aurora Replicas to the primary DB instance by placing them in different availability zones. Aurora’s crash recovery feature will avoid data loss post disk failure.

Correct Answer : B

Explanation

Amazon Aurora database reliability can be increased by adding Aurora Replicas to the primary DB instance and placing them in different Availability Zones. Each DB cluster can have a primary DB instance and up to 15 Aurora Replicas. In case of a primary DB instance failure, Aurora automatically fails over to a replica. Amazon Aurora also uses the following automatic features to enhance reliability, 

  • Storage auto-repair: Aurora maintains multiple copies of the data in three different Availability Zones. This helps in avoiding data loss post disk failure. If any segment of the disk fails, Aurora automatically repairs the data on the segment by using data stored in the other cluster volumes. This reduces the additional work to perform a point-in-time restoration post disk failure.
  • Survivable page cache: Aurora manages the page cache in a separate process from the database. In the event of a database failure, the page cache is preserved in memory. After restarting the database, applications continue to read data from the page cache, providing a performance gain.
  • Crash recovery: Crash recovery can be used for faster recovery after any crash in the database. With the crash recovery feature, Amazon Aurora performs recovery asynchronously on parallel threads, enabling applications to read data from the database without binary logs.

Option A is incorrect. Aurora Replicas should be created in different Availability Zones and not in different regions for better availability. Crash recovery does not minimize data loss post disk failures.

Option C is incorrect. The survivable page cache feature provides performance gains but does not minimize data loss post disk failures. Aurora Replicas should be created in different Availability Zones, not in different regions.

Option D is incorrect as the crash recovery feature does not minimize data loss post disk failures.

For more information on Amazon Aurora reliability features, refer to the following URL,

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.StorageReliability.html
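
For illustration, a minimal sketch (the cluster and instance identifiers are hypothetical, and the Aurora cluster is assumed to already exist with its primary in us-east-1a) of adding an Aurora Replica in a different Availability Zone:

```python
import boto3

rds = boto3.client("rds")

# Add a reader instance to an existing Aurora MySQL cluster in a second AZ
rds.create_db_instance(
    DBInstanceIdentifier="webapp-aurora-replica-1",   # hypothetical replica
    DBClusterIdentifier="webapp-aurora-cluster",      # hypothetical existing cluster
    Engine="aurora-mysql",
    DBInstanceClass="db.r5.large",
    AvailabilityZone="us-east-1b",                    # different AZ from the primary
)
```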

Domain: Design Cost-Optimized Architectures

58). A financial institute has deployed a critical web application in the AWS cloud. The management team is looking for a resilient solution with an RTO/RPO of tens of minutes during a disaster. They have budget concerns, and the cost of provisioning the backup infrastructure should not be very high. As a solutions architect, you have been assigned to work on setting up a resilient solution meeting the RTO/RPO requirements within the cost constraints.

Which strategy is suited perfectly?

A. Multi-Site Active/Active

B. Warm Standby

C. Backup & Restore

D. Pilot Light

Correct Answer : D

Explanation

RTO (Recovery Time Objective) is the interval for which downtime is observed post-disaster. It’s the time between the disaster and the application recovering to serve full workloads. RPO (Recovery Point Objective) defines the amount of data loss during a disaster. It measures the time window between when the last backup was performed and the time when the disaster happened. Various disaster recovery solutions can be deployed based on the RTO/RPO and budget requirements for critical applications. 

The following options are available for disaster recovery, 

  1. Backup and Restore: Least costly among all the options, but RTO/RPO will be very high, in hours. All backup resources will be initiated only after a disaster at the primary location.
  2. Pilot Light: Less expensive than warm standby and multi-site active/active. RTO/RPO happens in tens of minutes. 

In this strategy, a minimum number of active resources are deployed at the backup location. Resources required for data synchronization between primary and backup locations are the only ones provisioned and active. Other components, such as application servers, are switched off and are provisioned after a disaster at the primary location. In the above scenario, Pilot Light is the most suitable option to meet the RTO/RPO requirements on a low budget. 

  3. Warm Standby: More expensive than Pilot Light. RPO/RTO happens in minutes. The application is running at the backup location on scaled-down resource capacity. Once a disaster occurs at the primary location, all the resources are scaled up to meet the desired workload.
  4. Multi-site active/active: Most expensive. No downtime or data loss is incurred as the application is active in multiple regions.

The following diagram shows the difference between each strategy with respect to RTO/RPO and cost.

Option A is incorrect as, with a multi-site active/active approach, RPO/RTO will be the least, but it will incur considerable cost.

Option B is incorrect as, with a Warm Standby approach, RPO/RTO will be in minutes, but it will incur additional costs.

Option C is incorrect as, with the Backup & Restore approach, RPO/RTO will be in hours, not in minutes.

For more information on Disaster Recovery, refer to the following URL,

https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html

Domain: Design Resilient Architectures

59). A critical application deployed in the AWS Cloud requires maximum uptime to avoid any outages. The project team has already deployed all resources in multiple regions with redundancy at all levels. They are concerned about the configuration of Amazon Route 53 for this application, which should complement higher availability and reliability. Route 53 should be configured to use failover resources during a disaster.

What solution can be implemented with Amazon Route 53 for maximum availability and increased reliability? 

A. Associate multiple IP endpoints in different regions with the Route 53 hostname. Use a weighted routing policy to change the weights of the primary and failover resources so that all traffic is diverted to failover resources during a disaster.

B. Create two sets of public hosted zones for resources in multiple regions. During a disaster, update Route 53 public hosted zone records to point to a healthy endpoint.

C. Create two sets of private hosted zones for resources in multiple regions. During a disaster, update Route 53 private hosted zone records to point to a healthy endpoint.

D. Associate multiple IP endpoints in different regions with the Route 53 hostname. Using health checks, configure Route 53 to automatically fail over to healthy endpoints during a disaster.

Correct Answer : D

Explanation

Amazon Route 53 uses the control plane to run management-related activities such as creating, updating, and deleting resources. 

The data plane is used to perform the core services of Amazon Route 53, such as the authoritative DNS service, health checks, and responding to DNS queries in an Amazon VPC. 

The data plane is globally distributed, offering a 100% availability SLA. Control plane traffic is optimized for data consistency and may be impacted during disruptive events in the infrastructure. 

While configuring failover between multiple sites, data plane functions such as health checks should be preferred instead of control plane functions. In the above case, multiple endpoints in different regions can be associated with Route 53. Route 53 can be configured to fail over to a healthy endpoint based upon the health checks, which are a data plane function and always available.

Option A is incorrect as updating weights in a weighted routing approach is a control plane function. For additional resiliency during a disaster, use data plane functions instead of control plane functions.

Options B and C are incorrect as creating, updating, and deleting private or public hosted zone records are control plane actions. In case of a disaster, the control plane might be affected. Data plane functions such as health checks should be used, as they are always available.

For more information on Amazon Route 53 control and data planes, refer to the following URLs,

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/route-53-concepts.html#route-53-concepts-control-and-data-plane

https://aws.amazon.com/blogs/networking-and-content-delivery/creating-disaster-recovery-mechanisms-using-amazon-route-53/
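
For illustration, a minimal sketch (the hosted zone ID, record name, and IP addresses are hypothetical) of a health check plus PRIMARY/SECONDARY failover records, so Route 53 shifts traffic automatically without any control plane change during the disaster:

```python
import boto3

route53 = boto3.client("route53")

# Health check against the primary endpoint
hc = route53.create_health_check(
    CallerReference="primary-endpoint-check-001",     # must be unique per request
    HealthCheckConfig={"Type": "HTTPS", "IPAddress": "203.0.113.10", "Port": 443, "ResourcePath": "/health"},
)

# Failover record set: the primary answers while healthy, the secondary takes over otherwise
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",              # hypothetical hosted zone
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A", "TTL": 60,
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": hc["HealthCheck"]["Id"],
            "ResourceRecords": [{"Value": "203.0.113.10"}]}},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A", "TTL": 60,
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": "198.51.100.20"}]}},
    ]},
)
```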

Domain: Design Cost-Optimized Architectures

60). An IT company is using EBS volumes for storing project-related work. Some of these projects are already closed. The data for these projects should be stored long-term as per regulatory guidelines and will be rarely accessed. The operations team is looking for options to store the snapshots created from the EBS volumes. The solution should be cost-effective and need the least management effort.

Which solution can be designed for storing data from the EBS volumes?

A. Create EBS Snapshots from the volumes and store them in the EBS Snapshots Archive.

B. Use Lambda functions to store incremental EBS snapshots in AWS S3 Glacier.

C. Create EBS Snapshots from the volumes and store them in a third-party low-cost, long-term storage.

D. Create EBS Snapshots from the volumes and store them in the EBS standard tier.

Correct Answer: A

Explanation

Amazon EBS has a storage tier named Amazon EBS Snapshots Archive for storing snapshots that are accessed rarely and stored for long periods. 

By default, snapshots created from Amazon EBS volumes are stored in the Amazon EBS Snapshot standard tier. These are incremental snapshots. When EBS snapshots are archived, the incremental snapshots are converted to full snapshots. 

These snapshots are stored in the EBS Snapshots Archive tier instead of the standard tier. Storing snapshots in the EBS Snapshots Archive costs much less than storing snapshots in the standard tier. The EBS Snapshots Archive helps store snapshots that will be seldom accessed for long durations for governance or compliance requirements.

Option B is incorrect as it will require additional work to create an AWS Lambda function. The EBS Snapshots Archive is a more efficient way of storing snapshots for the long term.

Option C is incorrect as using third-party storage will incur additional costs.

Option D is incorrect as all EBS snapshots are stored in the standard tier by default. Keeping snapshots that will be rarely accessed in the standard tier will be costlier than storing them in the EBS Snapshots Archive. 

For more information on the Amazon EBS Snapshots Archive, refer to the following URLs,

https://aws.amazon.com/blogs/aws/new-amazon-ebs-snapshots-archive

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-archive.html
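
For illustration, a minimal sketch (the volume ID and description are hypothetical) of snapshotting a closed project’s volume and then moving the snapshot to the archive tier:

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot the volume of a closed project, wait for it to finish, then archive it
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",            # hypothetical volume
    Description="closed project long-term archive",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

ec2.modify_snapshot_tier(SnapshotId=snapshot["SnapshotId"], StorageTier="archive")
```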

Domain: Design High-Performing Architectures

Q61). A start-up firm has created account A using the Amazon RDS DB instance as a database available a web application. The operations team regularly creates manual snapshots for this DB instanced stylish unencrypted type. The Projects Team layout to create a DB instance in other accounts using these snapshots. They are looking for your suggestion for sharing this snapshot and restoring it to DB instances in other accounts. While division diese snapshot, it must enable only custom accounts specified by the project squads till restore DB illustrations from the snapshot.

What actions ability will initiating for this purpose?

AMPERE. Upon Account A, share the manual photo by setting the ‘DB snapshot’ visibility options than private. Into other Accounts, directly restaurierend to DB cases from the snapshot.

B. From Account A, share the manual captured from setting the ‘DB snapshot’ visibility option since public. In other Accounts, directly restore to DB instances von the snapshot.

C. From Account A, share to manual snapshot for setting the ‘DB snapshot’ view option as private. In extra Accounts, create one copy from the snapshot and then restore it to the DB instances from which print.

D. Since Chronicle A, share the manual instant by setting the ‘DB snapshot’ visibility option as public. In other Accounts, create a copy upon an snapshot and then restore it to the DB entity from that create.

Correct React – ONE

Explanation:  

DB snapshot can be shared with other approved AWS accounts which can be up to 20 customer. These snapshots can be either in encrypted either unencrypted format. 

To manual snapshots in an unencrypted format, accounts bucket directly restore a DB instance from the snapshot. 

For manual snapshots in an encrypted format, accounts first need in make the snapshot and then restore this to a DB instance. 

For sharing a manual cipher snapshot, all accounts can use this snapshot to restore to the DB instance available DB snapshot visibility is place to public. 

While sharing a manual plaintext snapshot, only specified archives can restore a DB instance when DB snapshot visibility is setting to private. 

In who case of manual encrypting snapshots, the no available option for DB snapshot visibility is private, as ciphering snapshot cannot be built audience.

Option BARN is incorrect as marking DB snapshot visibility as aforementioned public is not to ideal option since snapshots need to share only with specific accounts. Marking DB flash visibility as public will provide entire Amazon accounts access to the manual snapshot and will be able the gastronomie DB instances using on snapshot. 

Option HUNDRED is incorrect as DB instances can be directly restored from the snapshot for a manual unencrypted snapshot. Where is no need to create a copy of the snapshot to restore a DB instance.

Option DEGREE shall incorrect as already discussed, flag DB snapshot visibility as the public is not an exemplar option. For a manual unencrypted photograph, DB instances bucket be instant restored from the snapshot. 

For more request on how Amazon RDS snapshots, refer to of following URLs,

https://aws.amazon.com/premiumsupport/knowledge-center/rds-snapshots-share-account/

https://docs.amazonaws.cn/en_us/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html

Division: Design Resilient Architectures

62). An electronic manufacturing corporation plans to deploy a net application using the Amazon Aurora database. Aforementioned Management is concerned about the disk defects with DB instances and needs your advice for increasing reliability usage Termagant Aurora automatic features. For that event of front errors, data loss should be avoided, diminish optional work to perform for the point-in-time food.

What structure suggestions can be given to increase reliability? 

ONE. Add Aurora Replicas to primary DB instances by placer them in different regions. Aurora’s crash recover feature will avoid data damage post disk fiasco.

B. Add Dawning Replica go primary DB instances for put yours in different check zones. Aurora storage auto-repair feature willing avoid data loss post disk failure.

C. Add Aurora Replicas at the primary DB instance by placement the in different zones. Polaris Feasible show cache feature will avoid data loss post disk defect.

DICK. How Aurora Replicating to the prime DB entity through placing she includes different availability zones.  Aurora’s crash recovery property will avoid data loss post disk failure.

Correct Answer – B

Explanation: Amazon Aurora Database reliableness can be increased by adding Aurora Replicas to of primary DB instance and placing them by different Availability zones. Each of who DB clusters pot possess a primary DB instance and up to 15 Aurora Replicas. In case of chief DB instanz failure, Aurora automates fails over to replicas. Amazon Aurora also uses the following automatic features to improve reliability, 

  1. Storehouse auto-repair: Aurora maintains many reproductions of the data in three different Availability related. This supports in avoidable data loss post disk failure. Are any segment of the disk fails, Aurora automatically recovers data on the segment by using data stored in other clusters volumes. This reduces add work the perform point-in-time rehabilitation mailing disc failure.
  2. Survivable page cache: Control cover cache in a separate process than the database. In the event regarding database defect, the page cache is stored in of memory. Post restarting which database, applications further to read data from the page cache providing performance profit.
  3. Crash recovery: Crash recovery can being used for faster recovery post any crash in the web. With the crash recovery feature, Amazons Aurora performs rehabilitation asynchronously on parallel running enabling software to read data from the database without binary protocols.

Option A is incorrect. Aurora Replicators should be created in different Availability zones also not in different regionen to better stock. The crash recovery will don reduzieren evidence lose post disk failures.

Alternative C is incorrect. The Viability page cache feature supplies performance gains but does not minimize data loss post disk failures. Aurora Replicas should be created in different Available zones and not in different regions.

Option D is irrig more the crash recovery character does not reduzieren data net post disk failures.

For more information on Amazon Cockcrow reliability property, refer to the following URL,

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.StorageReliability.html

Sphere: Design Cost-Optimized Architectures

63) A financial institution has developed a critical web application in the AWS cloud. The management team is looking for a resilient solution with an RTO/RPO within tens of minutes during a disaster. They have budget concerns, and the cost of the backup infrastructure should not be very high. As a solutions architect, you have been asked to design a resilient solution that meets the RTO/RPO requirements within the cost constraints.

Which strategy is best suited?

A. Multi-Site Active/Active

B. Warm Standby

C. Backup & Restore

D. Pilot Light

Correct Answer – D

Explanation:  

RTO (Recovery Time Objective) is the period of downtime observed after a disaster. It is the time between the disaster and the application recovering to serve full workloads. RPO (Recovery Point Objective) defines the amount of data loss during a disaster. It measures the time window between when the last data was committed and when the disaster happened. Various disaster recovery solutions can be deployed based on the RTO/RPO and budget requirements for critical applications. 

The following disaster recovery options are available, 

Backup and Restore: Least expensive of all the options, but RTO/RPO will be very high, measured in hours. All backup resources are provisioned only after a disaster at the primary location.

Pilot Light: Less expensive than Warm Standby and Multi-Site Active/Active. RTO/RPO is in tens of minutes. 

In this strategy, a minimum number of active resources are kept at the backup location. Only the resources required for data synchronization between the primary and backup locations are provisioned and active. Other systems such as application servers are switched off and are provisioned after a disaster at the primary location. In the above scenario, Pilot Light is the best option to meet the RTO/RPO requirements on a low budget. 

Warm Standby: More expensive than Pilot Light. RPO/RTO is in minutes. The application runs at the backup location on scaled-down resource capacity. Once a disaster occurs at the primary location, the resources are scaled up to meet the desired capacity.

Multi-Site Active/Active: Most expensive. There is near-zero downtime and data loss, as the application is active in multiple regions.
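As a rough illustration of the Pilot Light idea, the sketch below copies an EBS snapshot into a DR region so the data is already in place before a disaster; the snapshot ID and region names are placeholder assumptions.

```python
import boto3

# Pilot Light: keep data replicated to the DR region, provision compute only after a disaster.
ec2_dr = boto3.client("ec2", region_name="ap-southeast-2")  # hypothetical DR region

ec2_dr.copy_snapshot(
    SourceRegion="us-east-1",                    # primary region (assumed)
    SourceSnapshotId="snap-0123456789abcdef0",   # placeholder snapshot ID
    Description="Nightly copy for pilot-light DR",
)
```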

Diagram: difference between each strategy with respect to RTO/RPO and cost.

Option A is incorrect as, with the Multi-Site Active/Active approach, RPO/RTO will be the lowest, but it incurs considerable cost.

Option B is incorrect as, with the Warm Standby approach, RPO/RTO will be in minutes, but it incurs additional cost compared to Pilot Light.

Option C is incorrect as, with the Backup & Restore approach, RPO/RTO will be in hours, not in minutes.

For more information on disaster recovery, refer to the following URL,

https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html

Domain: Design Resilient Architectures

64) A critical application deployed in the AWS Cloud requires maximum availability to avoid any outages. The project team has already deployed all resources in multiple regions with redundancy at all levels. They are working on the configuration of Amazon Route 53 for this application, which should complement high availability and reliability. Route 53 should be configured to use failover resources during a disaster.

Which solution can be implemented with Amazon Route 53 for maximum availability and increased reliability? 

A. Associate multiple IP endpoints in different regions with the Route 53 hostname. Use a weighted routing policy to change the weights of the primary and failover resources so that traffic is diverted to the failover resources during a disaster.

B. Create two sets of public hosted zone records for resources in multiple regions. During a disaster, update the Route 53 public hosted zone records to point to a healthy endpoint.

C. Create two sets of private hosted zone records for resources in multiple regions. During a disaster, update the Route 53 private hosted zone records to point to a healthy endpoint.

D. Associate multiple IP endpoints in different regions with the Route 53 hostname. Using health checks, configure Route 53 to automatically fail over to healthy endpoints during a disaster.

 Correct Answer – D

Explanation:  

Amazon Route 53 uses the control plane to perform management-related actions such as creating, updating, and deleting resources. 

The data plane delivers the core services of Amazon Route 53, such as the authoritative DNS service, health checks, and responding to DNS queries in an Amazon VPC. 

The data plane is globally distributed and offers a 100% availability SLA. The control plane, in contrast, is optimized for data consistency and may be impacted during disruptive events in the infrastructure. 

When configuring failover between multiple sites, data plane functions such as health checks should be preferred over control plane functions. In the above scenario, multiple endpoints in different locations can be associated with Route 53. Route 53 can be configured to fail over to a healthy endpoint based on health checks, which are a data plane function and always available.
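A minimal boto3 sketch of a failover record backed by a health check (a data plane function); the hosted zone ID, domain name, and IP address are placeholder assumptions.

```python
import uuid
import boto3

r53 = boto3.client("route53")

# Health check against the primary endpoint (evaluated by the data plane).
hc = r53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary.example.com",  # placeholder
        "Port": 443,
        "ResourcePath": "/health",
    },
)

# PRIMARY failover record; a matching SECONDARY record would point at the DR endpoint.
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "198.51.100.10"}],  # placeholder IP
                "HealthCheckId": hc["HealthCheck"]["Id"],
            },
        }]
    },
)
```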

Option A is incorrect as updating weights in a weighted routing policy is a control plane function. For additional resiliency during a disaster, use data plane functions instead of control plane functions.

Options B and C are incorrect as creating, updating, and deleting private or public hosted zone records are control plane actions. In case of a disaster, the control plane might be impacted. Data plane functions such as health checks should be used, as they are always available.

For more information on the Amazon Route 53 control and data planes, refer to the following URLs,

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/route-53-concepts.html#route-53-concepts-control-and-data-plane

https://aws.amazon.com/blogs/networking-and-content-delivery/creating-disaster-recovery-mechanisms-using-amazon-route-53/

Domain: Design Cost-Optimized Architectures

65) An IT business is using EBS volumes to store project-related work. Some of these projects are now closed. The data from these projects should be stored long-term as per governance guidelines and will be rarely accessed. The operations team is looking for options to store the snapshots created from the EBS volumes. The solution should be cost-effective and incur the least admin work.

What solution can be designed for storing data from these EBS volumes?

A. Create EBS snapshots from the volumes and store them in the EBS Snapshots Archive.

B. Use a Lambda function to save incremental EBS snapshots to Amazon S3 Glacier.

C. Create EBS snapshots from the volumes and store them in third-party low-cost, long-term storage.

D. Create EBS snapshots from the volumes and store them in the EBS standard tier.

Correct Answer – A

Explanation:  

Amazon EBS has a storage tier named Amazon EBS Snapshots Archive for storing snapshots that are accessed infrequently and kept for long periods. 

By default, snapshots created from Amazon EBS volumes are stored in the Amazon EBS snapshot standard tier. These are incremental snapshots. When EBS snapshots are archived, the incremental snapshots are converted to full snapshots. 

The archived snapshots are then stored in the EBS Snapshots Archive tier instead of the standard tier. Storing snapshots in the EBS Snapshots Archive costs much less than storing them in the standard tier. The archive tier helps retain rarely accessed snapshots for long durations to meet governance or compliance requirements.
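A minimal boto3 sketch of creating a snapshot and archiving it; the volume ID is a placeholder assumption.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a final snapshot of the closed project's volume (placeholder volume ID).
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Final snapshot of closed project volume",
)

# Wait until the snapshot is complete before changing its tier.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Move the snapshot from the standard tier to the archive tier.
ec2.modify_snapshot_tier(SnapshotId=snap["SnapshotId"], StorageTier="archive")
```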

Option B is incorrect as it requires additional work to create an AWS Lambda function. The EBS Snapshots Archive is a more effective way to save snapshots for the long term.

Option C is incorrect as using third-party storage will incur additional costs.

Option D is incorrect as all EBS snapshots are stored in the standard tier by default. Storing rarely accessed snapshots in the standard tier will be costlier than storing them in the EBS Snapshots Archive. 

For more information on the Amazon EBS Snapshots Archive, refer to the following URLs,

https://aws.amazon.com/blogs/aws/new-amazon-ebs-snapshots-archive

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-archive.html

Domain: Design High-Performing Architectures

Q66) A manufacturing firm has a large number of smart devices installed in various locations worldwide. Hourly logs from these devices are stored in an Amazon S3 bucket. Management is looking for comprehensive dashboards that show usage of these devices and forecast usage trends for them.

Which tool is best suited to build the required dashboards?

A. Use S3 as a source for Amazon QuickSight and create dashboards for usage and forecast trends.

B. Use S3 as a source for Amazon Redshift and create dashboards for usage and forecast trends.

C. Copy data from Amazon S3 to Amazon DynamoDB. Use Amazon DynamoDB as a source for Amazon QuickSight and create dashboards for usage and forecast trends.

D. Copy data from Amazon S3 to Amazon RDS. Use Amazon RDS as a source for Amazon QuickSight and create dashboards for usage and forecast trends.

Correct Answer – A

Explanation:

Amazon QuickSight is a business analytics tool that can be used to build visualizations and perform ad-hoc analysis with ML-powered insights. It can connect to various data sources in the AWS cloud, in on-premises networks, or in third-party applications.
For AWS, it supports services such as Amazon RDS, Amazon Aurora, Amazon Redshift, Amazon Athena, and Amazon S3 as sources. Based on this data, Amazon QuickSight creates custom dashboards that include anomaly detection, forecasting, and auto-narratives.
In the above case, logs from the devices are stored in Amazon S3. Amazon QuickSight can fetch this data, perform analysis, and create comprehensive custom dashboards for device usage as well as forecast device usage.
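A minimal boto3 sketch of registering the S3 bucket as a QuickSight data source; the account ID, bucket, and manifest key are placeholder assumptions, and a manifest file describing the log objects is assumed to already exist in the bucket.

```python
import boto3

qs = boto3.client("quicksight", region_name="us-east-1")

qs.create_data_source(
    AwsAccountId="111122223333",             # placeholder account ID
    DataSourceId="device-logs-s3",           # hypothetical data source ID
    Name="Device hourly logs",
    Type="S3",
    DataSourceParameters={
        "S3Parameters": {
            "ManifestFileLocation": {
                "Bucket": "device-logs-bucket",       # placeholder bucket
                "Key": "manifests/device-logs.json",  # placeholder manifest object
            }
        }
    },
)
```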

Option B is incorrect as Amazon Redshift is a data warehousing service for storing structured or semi-structured data. It is not a visualization tool for creating dashboards.

Option C is incorrect as Amazon S3 can be used directly as a source for Amazon QuickSight. There is no need to copy data from Amazon S3 to Amazon DynamoDB.

Option D is incorrect as Amazon S3 can be used directly as a source for Amazon QuickSight. There is no need to copy data from Amazon S3 to Amazon RDS.

For more information on Amazon QuickSight, refer to the following URL,

https://aws.amazon.com/quicksight/resources/faqs/

Domain: Design High-Performing Architectures

Q67) A company has launched Amazon EC2 instances in an Auto Scaling group to deploy a web application. The operations team wants to capture custom metrics for this application from all the instances. These metrics should be viewed as aggregated metrics across all instances in the Auto Scaling group.

What configuration can be implemented to get the metrics as required? 

A. Use Amazon CloudWatch metrics with detailed monitoring enabled and view them in the CloudWatch console, where all the metrics for an Auto Scaling group are aggregated by default.

B. Install the unified CloudWatch agent on all Amazon EC2 instances in the Auto Scaling group and use “aggregation_dimensions” in the agent configuration file to aggregate metrics for all instances.

C. Install the unified CloudWatch agent on all Amazon EC2 instances in the Auto Scaling group and use “append-config” in the agent configuration file to aggregate metrics for all instances.

D. Use Amazon CloudWatch metrics with detailed monitoring enabled and create a single dashboard to display metrics from all the instances.

Correct Answer – B

Explanation:

The unified CloudWatch agent can be installed on Amazon EC2 instances for the following use cases, 

  1. Collect internal system-level metrics from Amazon EC2 instances as well as from on-premises servers.
  2. Collect custom metrics from the applications on the Amazon EC2 instance using the StatsD and collectd protocols.
  3. Collect logs from EC2 instances or from on-premises servers, for both Windows and Linux OS.

For instances that are part of an Auto Scaling group, metrics from all the instances can be aggregated using “aggregation_dimensions” in the agent configuration file.
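A minimal sketch of the agent configuration written from Python, assuming the commonly used Linux config path; the namespace and metric are illustrative. The `append_dimensions` entry tags each metric with the Auto Scaling group name, and `aggregation_dimensions` tells the agent to publish aggregates over that dimension.

```python
import json

# Illustrative CloudWatch agent configuration: collect memory metrics, tag them
# with the Auto Scaling group name, and aggregate on that dimension.
agent_config = {
    "metrics": {
        "namespace": "WebAppCustom",  # hypothetical custom namespace
        "append_dimensions": {"AutoScalingGroupName": "${aws:AutoScalingGroupName}"},
        "aggregation_dimensions": [["AutoScalingGroupName"]],
        "metrics_collected": {
            "mem": {"measurement": ["mem_used_percent"]},
        },
    }
}

# Commonly used config location on Linux (assumed); adjust for your installation.
with open("/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json", "w") as f:
    json.dump(agent_config, f, indent=2)
```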

Option A is incorrect as capturing custom application-level metrics from an Amazon EC2 instance requires the unified CloudWatch agent. Amazon CloudWatch with detailed monitoring captures metrics every 1 minute but does not capture custom application metrics.

Option C is incorrect as the append-config option is used to apply multiple CloudWatch agent configuration files. It is not suitable for aggregating metrics from all the instances in an Auto Scaling group.

Option D is incorrect as capturing custom application-level metrics from an Amazon EC2 instance requires the unified CloudWatch agent. Dashboards can be used to create a customized view of the metrics, but they do not aggregate metrics from the instances in an Auto Scaling group.

For more information on the CloudWatch agent, refer to the following URL,

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html

Domain: Design High-Performing Architectures

Q68) A critical web application is deployed on multiple Amazon EC2 instances that are part of an Auto Scaling group. One of the Amazon EC2 instances in the group needs a software upgrade. The operations team is looking for your suggestions to perform this upgrade without impacting other instances in the group. After the upgrade, the same instance should remain part of the Auto Scaling group.  

What steps can be taken to complete this upgrade? 

A. Hibernate the instance and perform the upgrade in offline mode. After the upgrade, start the instance, which will be part of the same Auto Scaling group.

B. Use cooldown timers to perform the upgrade on the instance. After the cooldown timer expires, the instance will be part of the same Auto Scaling group.

C. Put the instance in Standby mode. After the upgrade, move the instance back to InService mode. It will be part of the same Auto Scaling group.

D. Use lifecycle hooks to perform the upgrade on the instance. Once these timers expire, the instance will be part of the same Auto Scaling group.

Correct Answer – C

Explanation:

Amazon EC2 instances in an Auto Scaling group can be moved from InService mode to Standby mode. In Standby mode, software upgrades or troubleshooting can be performed on the instance. After the upgrade, the instance can be put back in InService mode in the same Auto Scaling group. While an instance is in Standby mode, Auto Scaling does not terminate it as part of health checks or scale-in events.  
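A minimal boto3 sketch of the Standby workflow; the instance ID and group name are placeholder assumptions, and the upgrade itself is represented by a comment.

```python
import boto3

asg = boto3.client("autoscaling")

# Move the instance to Standby so health checks and scale-in events leave it alone.
asg.enter_standby(
    InstanceIds=["i-0123456789abcdef0"],   # placeholder instance ID
    AutoScalingGroupName="web-asg",        # placeholder group name
    ShouldDecrementDesiredCapacity=True,   # don't launch a replacement meanwhile
)

# ... perform the software upgrade on the instance here ...

# Return the upgraded instance to service in the same group.
asg.exit_standby(
    InstanceIds=["i-0123456789abcdef0"],
    AutoScalingGroupName="web-asg",
)
```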

Option A is incorrect as hibernation is not supported for an Amazon EC2 instance that is part of an Auto Scaling group. When an instance in an Auto Scaling group is hibernated, the Auto Scaling group marks the hibernated instance as unhealthy, terminates it, and launches a new instance. Hibernating an instance is therefore not a useful way to upgrade software on it.

Option B is incorrect as cooldown timers prevent the Auto Scaling group from launching or terminating instances until previous launch or termination activities have completed. This timer gives an instance time to reach a steady state before the Auto Scaling group adds a new one. It is not useful for upgrading or troubleshooting an instance. 

Option D is incorrect as a lifecycle hook helps perform custom actions such as data backups before an instance is terminated, or software installation when an instance is launched. It is not useful for upgrading a running instance in an Auto Scaling group and adding it back to the original group. 

For more information on updating an Amazon EC2 instance in an Auto Scaling group, refer to the following URLs,

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-hibernate-limitations.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-enter-exit-standby.html

Domain: Design High-Performing Architectures

Hybrid connectivity is built between an on-premises network and a VPC using a Site-to-Site VPN. In the on-premises network, a legacy firewall is deployed that allows only a single /28 IP prefix from the VPC to access the on-premises network. Changes to this firewall are blocked, and the operations team needs to allow communication from an additional IP pool in the VPC. The operations head is looking for a temporary workaround to enable communication from the new IP pool to the on-premises network. 

What connectivity can be deployed to mitigate this issue?

A. Deploy a public NAT gateway in a private subnet with the IP pool allowed in the on-premises firewall. Launch the instance that needs to communicate with the on-premises network in a separate private subnet.

B. Deploy a public NAT gateway in a public subnet with the IP pool allowed in the on-premises firewall. Launch the instance that needs to communicate with the on-premises network in a separate public subnet.

C. Deploy a private NAT gateway in a public subnet with the IP pool allowed in the on-premises firewall. Launch the instance that needs to communicate with the on-premises network in a separate private subnet. 

D. Deploy a private NAT gateway in a private subnet with the IP pool allowed in the on-premises firewall. Launch the instance that needs to communicate with the on-premises network in a separate private subnet.

Correct Answer – D

Explanation: A private NAT gateway can be used to establish connectivity from an instance in a private subnet of the VPC to other VPCs or to an on-premises network. With a private NAT gateway, the source IP address of the instance is replaced with the IP address of the private NAT gateway. In the above scenario, the legacy firewall allows communication from the VPC to the on-premises network only from the /28 IP pool.
To establish communication from an instance with a new IP pool, a NAT gateway can be deployed in the /28 subnet that is already allowed in the firewall. The instance is deployed in a separate private subnet. When communicating with the on-premises network, the instance IP is replaced with the NAT gateway IP, which is already allowed in the firewall, and connectivity is established without any changes to the firewall.
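A minimal boto3 sketch, assuming the subnet inside the allowed /28 range and the on-premises CIDR shown below; all IDs and CIDRs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Private NAT gateway in the /28 subnet whose range the legacy firewall already permits.
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0allowed28example",   # placeholder: subnet inside the allowed /28
    ConnectivityType="private",
)["NatGateway"]

ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw["NatGatewayId"]])

# Route on-premises-bound traffic from the new instance's subnet through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0newpoolexample",     # placeholder: route table of the new subnet
    DestinationCidrBlock="172.16.0.0/16",   # assumed on-premises CIDR
    NatGatewayId=natgw["NatGatewayId"],
)
```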

Diagram: connectivity from a private subnet to on-premises using a private NAT gateway.

Option A is incorrect as the resources in the VPC need to communicate with the on-premises network and not with the internet, so a public NAT gateway is not an ideal option. A public NAT gateway is placed in a public subnet to provide internet access for resources in private subnets.

Option B is incorrect as the resources in the VPC need to communicate with the on-premises network and not with the internet, so a public NAT gateway is not an ideal option. A public NAT gateway is used to provide internet access for resources in private subnets.

Option C is incorrect as, to provide communication from a private subnet to an on-premises network, the private NAT gateway should be placed in a private subnet and not in a public subnet. 

For more information on NAT gateways, refer to the following URL,

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html

Domain: Design Cost-Optimized Architectures

A third-party vendor based in an on-premises location needs temporary connectivity to database servers launched in a single Amazon VPC. The proposed connectivity for these few users should be secure, and access should be provided only to authenticated users. 

Which connectivity option can be deployed for this requirement in the most cost-effective way? 

A. Provision AWS Client VPN from the third-party vendor’s client machines to access the databases in the Amazon VPC.

B. Deploy AWS Direct Connect connectivity from the on-premises network to AWS.

C. Deploy AWS Managed VPN connectivity to a Virtual Private Gateway from the on-premises network.

D. Provision AWS Managed VPN connectivity to an AWS Transit Gateway from the on-premises network.

Correct Answer – A

Explanation:  

AWS Client VPN is a managed client-based VPN used to securely access resources in a VPC as well as resources in on-premises networks. Clients looking for access to these resources use an OpenVPN-based VPN client. Access to resources inside the VPC is secured over TLS, and clients are authenticated before access is granted. 

In the above scenario, since the third-party vendor needs secure temporary connectivity from on-premises to resources in a VPC, AWS Client VPN can be used to provision this connectivity.
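A minimal boto3 sketch of creating a Client VPN endpoint with mutual (certificate) authentication; the certificate ARNs and client CIDR are placeholder assumptions, and associating a target subnet plus adding authorization rules would still be needed afterwards.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/22",  # placeholder CIDR handed out to VPN clients
    ServerCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/server-cert",  # placeholder
    AuthenticationOptions=[{
        "Type": "certificate-authentication",
        "MutualAuthentication": {
            "ClientRootCertificateChainArn":
                "arn:aws:acm:us-east-1:111122223333:certificate/client-root",  # placeholder
        },
    }],
    ConnectionLogOptions={"Enabled": False},
    Description="Temporary vendor access to the database VPC",
)
```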

Option B is incorrect as only a few users access resources in a single VPC for temporary use; using AWS Direct Connect would be costly and would require a longer time to deploy.

Option C is incorrect as using AWS Managed VPN for a few users will be costlier than using AWS Client VPN for those few users accessing databases in the VPC.

Option D is incorrect as connectivity to an AWS Transit Gateway is useful for accessing resources from multiple VPCs. Since only a few users access resources in a single VPC for temporary purposes, AWS Client VPN is the cost-effective option. 

For more information on the differences between the various hybrid connectivity options, refer to the following URL,

https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/scenario-vpc.html

Domain: Design Secure Architectures

A company is storing data in an Amazon S3 bucket that is accessed by global users. The Amazon S3 bucket is encrypted with AWS KMS. The company is planning to use Amazon CloudFront as a CDN for improved performance. The operations team is looking for your suggestions on creating an S3 bucket policy that restricts access to the S3 bucket to a specific CloudFront distribution only. 

How can the S3 bucket policy be implemented to control access to the S3 bucket?

A. Use a Principal element in the policy that matches the CloudFront distribution that contains the S3 origin.

B. Use a Condition element in the policy to allow CloudFront to access the bucket only when the request is on behalf of the CloudFront distribution that contains the S3 origin.

C. Use a Principal element in the policy to allow the CloudFront Origin Access Identity (OAI).

D. Use a Condition element in the policy to match the service cloudfront.amazonaws.com.

Correct Answer – B

Explanation:  

When using Amazon CloudFront with Amazon S3 as an origin, there are two ways to manage access to the S3 bucket through Amazon CloudFront, 

  1. OAI (Origin Access Identity): This is a legacy method that does not support AWS KMS encryption, dynamic requests to Amazon S3, or opt-in regions.
  2. OAC (Origin Access Control): This is the newer method and supports AWS KMS encryption, dynamic requests to Amazon S3, and opt-in regions, which are not supported by OAI.
    When creating the bucket policy for OAC, the Principal element should specify the service “cloudfront.amazonaws.com” and the Condition element should match the CloudFront distribution that contains the S3 origin, as shown in the sketch after this list.
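A minimal sketch of such an OAC bucket policy applied with boto3; the bucket name, account ID, and distribution ID are placeholder assumptions.

```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "example-content-bucket"  # placeholder bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipal",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},  # CloudFront service principal
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {
            "StringEquals": {
                # Only requests made on behalf of this specific distribution are allowed.
                "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
            }
        },
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```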

Option A is incorrect as the Principal element in the policy should specify the CloudFront service, not the CloudFront distribution.

Option C is incorrect as CloudFront Origin Access Identity is a legacy method and does not support an Amazon S3 bucket with AWS KMS server-side encryption.

Option D is incorrect as the Condition element should match the CloudFront distribution that contains the S3 origin, not the service name cloudfront.amazonaws.com.

For more information on creating Origin Access Control with Amazon S3, refer to the following URL,

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html

Domain: Design Secure Architectures

A company runs a microservices-based online shopping application using Amazon ECS. For several services, multiple tasks are created on a container instance using the EC2 launch type. The security team is looking for specific security controls for the tasks in the cluster, together with granular network monitoring using various tools for each task.

What networking mode configuration can be used with Amazon ECS to meet this requirement?

A. Use host networking mode for Amazon ECS tasks.

B. By default, an elastic network interface (ENI) with a primary private IP address is assigned to each task.

C. Use awsvpc networking mode for Amazon ECS tasks.

D. Use bridge networking mode for Amazon ECS tasks.

Correct Answer – C

Explanation:  

Amazon ECS with the EC2 launch type supports the following networking modes, 

  1. Host mode: This is a basic mode in which the networking of the container is tied directly to the underlying host.
  2. Bridge mode: In this mode, a network bridge is created between the host and container networking. Bridge mode allows remapping of ports between host and container ports. 
  3. None mode: In this mode, networking is not attached to the container. With this mode, containers have no external connectivity.
  4. awsvpc mode: In this mode, each task is assigned a separate ENI (Elastic Network Interface). Each task receives a separate IP address, and a separate security group can be assigned to each ENI. This allows separate security policies for each task and enables granular monitoring of the traffic flowing through each task.

In the above scenario, using awsvpc mode, the security team can assign different security policies to each task as well as monitor traffic from each task separately. 
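A minimal boto3 sketch of registering a task definition with awsvpc networking for the EC2 launch type; the family name and container image are placeholder assumptions.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="shop-cart-service",   # placeholder task definition family
    networkMode="awsvpc",         # each task gets its own ENI, IP address, and security groups
    requiresCompatibilities=["EC2"],
    containerDefinitions=[{
        "name": "cart",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/cart:latest",  # placeholder image
        "memory": 512,
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
    }],
)
```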

Option A is incorrect as, in host networking mode, containers use the network interface of the Amazon EC2 instance on which they are running. This is a basic networking mode, and each task does not get its own network interface.

Option B is incorrect as an elastic network interface (ENI) with a primary private IP address is assigned by default for Fargate task networking, not for ECS tasks using the EC2 launch type.

Option D is incorrect as bridge mode uses Docker’s built-in virtual network. Containers connected to the same bridge can communicate with each other, while containers using different bridges cannot, which provides isolation. It does not provide each task with a separate network interface that can be used for security controls and network monitoring. 

For more information on Amazon ECS task networking and choosing a network mode, refer to the following URLs,

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html

https://docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/networking-networkmode.html

Domain: Design High-Performing Architectures

A start-up firm is planning to deploy container-based applications using Amazon ECS. The firm is looking for the least latency from on-premises networks to the applications in the containers. The proposed solution should be scalable and should support consistently high CPU and memory requirements.

What deployment can be considered for this purpose? 

A. Create a Fargate launch type with Amazon ECS and deploy it on AWS Outposts.

B. Create a Fargate launch type with Amazon ECS and deploy it in an AWS Local Zone.

C. Create an EC2 launch type with Amazon ECS and deploy it in an AWS Local Zone.

D. Create an EC2 launch type with Amazon ECS and deploy it on AWS Outposts.

Correct Answer – D

Explanation: Amazon ECS can be deployed on AWS Outposts to provide the least latency from the on-premises location. On AWS Outposts, only the EC2 launch type is supported with Amazon ECS. The EC2 launch type is best suited when container-based applications require consistently high CPU and memory.
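A minimal boto3 sketch of running a task with the EC2 launch type on a cluster whose container instances live on the Outpost; the cluster and task definition names are placeholder assumptions.

```python
import boto3

ecs = boto3.client("ecs")

# The cluster's container instances are assumed to be EC2 instances running on the Outpost.
ecs.run_task(
    cluster="outpost-cluster",       # placeholder cluster name
    taskDefinition="factory-app:1",  # placeholder task definition
    launchType="EC2",                # Fargate is not available on Outposts
    count=1,
)
```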

Option A is incorrect as the AWS Fargate launch type is not supported with Amazon ECS deployments on AWS Outposts.

Option B is incorrect as the AWS Fargate launch type is not supported with Amazon ECS deployed in an AWS Local Zone.

Option C is incorrect as, with AWS Local Zones, other services such as Amazon EC2 instances, Amazon FSx file servers, and Application Load Balancers need to be provisioned before deploying Amazon ECS in the Local Zone. 

With AWS Outposts, native AWS services and infrastructure run on-premises, which makes it an ideal choice for low latency from on-premises networks. 

For more information on Amazon ECS on AWS Outposts, refer to the following URLs,

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-on-outposts.html

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-regions-zones.html

Domain: Design Secure Architectures

A new application is deployed on an Amazon EC2 instance launched in a private subnet of an Amazon VPC. This application will fetch data from Amazon S3 as well as from Amazon DynamoDB. The communication between the Amazon EC2 instance and Amazon S3, as well as Amazon DynamoDB, should be secure and should not traverse internet links. The connectivity should also support accessing data in Amazon S3 from an on-premises network in the future.

What design can be implemented to have secure connectivity?

A. Access Amazon DynamoDB from the instance in the private subnet using a gateway endpoint. Access Amazon S3 from the instance in the private subnet using an interface endpoint.

B. Access Amazon S3 and Amazon DynamoDB from the instance in the private subnet using a private NAT gateway.

C. Access Amazon S3 and Amazon DynamoDB from the instance in the private subnet using a public NAT gateway.

D. Access Amazon S3 and Amazon DynamoDB from the instance in the private subnet using a gateway endpoint.

Correct Answer – A

Explanation:  

Using VPC endpoints, secure and reliable connectivity can be established from a private subnet in a VPC to Amazon S3 or Amazon DynamoDB. Such traffic does not traverse internet links; it flows over the AWS private network. 

Amazon S3 supports two types of VPC endpoints: gateway endpoints and interface endpoints. Neither option traverses internet links, which makes them secure and reliable connectivity options. With an interface endpoint, S3 can also be accessed from an on-premises network along with private subnets in the VPC. 

In the above scenario, a gateway endpoint can be used to access Amazon DynamoDB, while an interface endpoint can be used to access Amazon S3 from the private subnet as well as from the on-premises network in the future.
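A minimal boto3 sketch creating both endpoints; the VPC, subnet, route table, and security group IDs, as well as the region, are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for DynamoDB: attached to the private subnet's route table.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                 # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],       # placeholder route table ID
)

# Interface endpoint for S3: reachable from the VPC and, via VPN/Direct Connect, from on-premises.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Interface",
    SubnetIds=["subnet-0123456789abcdef0"],        # placeholder private subnet ID
    SecurityGroupIds=["sg-0123456789abcdef0"],     # placeholder security group ID
)
```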

Diagram: accessing Amazon S3 over an interface endpoint.

Diagram: accessing Amazon DynamoDB over a gateway endpoint.

Option B is incorrect as a private NAT gateway is used for communication between VPCs or with on-premises networks. It is not an option for communication from a private subnet in a VPC to Amazon S3 or Amazon DynamoDB.

Option C is incorrect as, with a public NAT gateway, traffic will traverse the internet.

Option D is incorrect as, with a gateway endpoint, the on-premises network would not be able to access data in Amazon S3 securely over the private link.

For more information on gateway endpoints and interface endpoints, refer to the following URLs,

https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-endpoints-for-s3

Domain: Design Secure Architectures

A static website named ‘whizexample’ is hosted in an Amazon S3 bucket. JavaScript on the web pages stored in the Amazon S3 bucket needs to make authenticated GET requests to the bucket using the Amazon S3 API endpoint for the bucket, example.s3.us-west-1.amazonaws.com.   

What additional configuration will be required to allow this access? 

A. Create a CORS configuration with Access-Control-Request-Header as GET using JSON and add the CORS configuration to the bucket from the S3 console.

B. Create a CORS configuration with Access-Control-Request-Method as GET using JSON and add the CORS configuration to the bucket from the S3 console.

C. Create a CORS configuration with Access-Control-Request-Method as GET using XML and add the CORS configuration to the bucket from the S3 console.

D. Create a CORS configuration with Access-Control-Request-Header as GET using XML and add the CORS configuration to the bucket from the S3 console.

Correct Answer – B

Explanation:  

CORS (Cross-Origin Resource Sharing) is a configuration that allows web applications deployed in one domain to interact with resources in a different domain. Enabling CORS on an S3 bucket selectively allows content in the S3 bucket to be accessed.  

In the above scenario, if CORS is not enabled, the JavaScript will not be able to access content in the S3 bucket using the S3 API endpoint. To allow this access, a CORS configuration in JSON needs to be created and added to the S3 bucket from the S3 console. 

CORS can be enabled with the following settings, 

  1. Access-Control-Allow-Origin
  2. Access-Control-Allow-Methods
  3. Access-Control-Allow-Headers

For successful access, the origin, methods, and headers from the requester should match the values defined in the configuration file. In the above scenario, the GET method should be added to the CORS configuration file.
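A minimal boto3 sketch of the CORS configuration; the bucket name and website origin are placeholder assumptions (the console accepts the same rules as JSON).

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="whizexample",  # bucket hosting the static website (assumed name)
    CORSConfiguration={
        "CORSRules": [{
            # Origin the pages are served from (placeholder website endpoint).
            "AllowedOrigins": ["http://whizexample.s3-website-us-west-1.amazonaws.com"],
            "AllowedMethods": ["GET"],   # the GET method required by the JavaScript
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)
```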

Option A is incorrect as, for GET requests, Access-Control-Allow-Methods should be defined in the configuration file and not Access-Control-Allow-Headers. 

Option C is incorrect as a CORS configuration in XML is not supported when configuring CORS through the S3 console.

Option D is incorrect as, for GET requests, Access-Control-Allow-Methods should be defined in the configuration file and not Access-Control-Allow-Headers. A CORS configuration in XML is not supported when configuring CORS through the S3 console.

For more information on configuring CORS in Amazon S3, refer to the following URL,

https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html

Old Questions

8) You are planning to build a fleet of EBS-optimized EC2 instances for your new application. Due to security compliance, your organization wants you to encrypt the root volume used to boot the instances. How can this be achieved?

A. Select the encryption option for the root EBS volume while launching the EC2 instance.
B. Once the EC2 instance is launched, encrypt the root volume using an AWS KMS master key.
C. Root volumes cannot be encrypted. Add another EBS volume with the encryption option selected during launch. Once the EC2 instance is launched, make the encrypted EBS volume the root volume from the console.
D. Launch an unencrypted EC2 instance and create a snapshot of the root volume. Make a copy of the snapshot with the encryption option selected and call CreateImage using the encrypted snapshot. Use this image to launch the EC2 instances.

Answer: D

When launching an EC2 instance, the root EBS volume cannot be encrypted directly at launch.


You can launch the instance with an unencrypted root volume and create a snapshot of the root volume. Once the snapshot is created, you can copy it and make the copy encrypted, then create an image from the encrypted snapshot.
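A minimal boto3 sketch of that flow; the snapshot ID, device name, and AMI details are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Copy the unencrypted root-volume snapshot, enabling encryption on the copy.
enc = ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",  # placeholder unencrypted snapshot
    Encrypted=True,                             # uses the default EBS KMS key unless KmsKeyId is set
    Description="Encrypted copy of root volume snapshot",
)

ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[enc["SnapshotId"]])

# Register an AMI whose root device points at the encrypted snapshot.
ec2.register_image(
    Name="encrypted-root-ami",  # placeholder AMI name
    Architecture="x86_64",
    VirtualizationType="hvm",
    RootDeviceName="/dev/xvda",
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"SnapshotId": enc["SnapshotId"], "VolumeType": "gp3"},
    }],
)
```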


https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIEncryption.html#AMIEncryption


9) Organization XYZ is planning to build an online chat application for company-wide collaboration for its employees across the world. They are looking for a single-digit-millisecond-latency, fully managed database to store and retrieve conversations. Which AWS database service would you recommend?

A. AWS DynamoDB
B. AWS RDS
C. AWS Redshift
D. AWS Aurora

Answer: A

 

Read more here: https://aws.amazon.com/dynamodb/#whentousedynamodb

Read more here: https://aws.amazon.com/about-aws/whats-new/2015/07/amazon-dynamodb-available-now-cross-region-replication-triggers-and-streams/

11) Which of the following statements are true with respect to VPC? (choose multiple)

A. A subnet can have multiple route tables associated with it.
B. A network ACL can be associated with multiple subnets.
C. A route with the target “local” in the route table can be edited to restrict traffic within the VPC.
D. A subnet’s IP CIDR block can be the same as the VPC CIDR block.

Answer: B, D

Option A is not correct. A subnet can have only a single route table associated with it.

Option B is correct.

Option C is not correct.

Option D is correct.

Aspiring to learn AWS? Here we bring the AWS cheat sheet that will take you through cloud computing and AWS basics along with AWS products and services!


12) Business ABC has a customer base in the US and Australia that will be downloading tens of GBs of files from its application. To give these customers a better download experience, the business decided to use AWS S3 buckets with cross-region replication, with the US as the source and Australia as the destination. They are using existing unused S3 buckets and have set up cross-region replication successfully. However, when files are uploaded to the US bucket, they are not being replicated to the Australia bucket. What could be the reason?

A. Versioning is not enabled on the source and destination buckets.
B. Encryption is not enabled on the source and destination buckets.
C. The source bucket has a policy with a DENY, and the role used for replication is not excluded from the DENY.
D. The destination bucket’s default CORS policy does not have the source bucket listed as an allowed origin.

Answer: C

When you have a bucket policy with an explicit DENY, you must exclude all IAM principals that need to access the bucket.

Read more here: https://aws.amazon.com/blogs/security/how-to-create-a-policy-that-whitelists-access-to-sensitive-amazon-s3-buckets/

For option A, cross-region replication cannot be enabled without enabling versioning. The question states that cross-region replication has been set up successfully, so this option is not correct.


13) Which of the following is not a category of AWS Trusted Advisor checks?

A. Cost Optimization
B. Fault Tolerance
C. Service Limits
D. Network Optimization

Answer: D


https://aws.amazon.com/premiumsupport/trustedadvisor/

17) How many VPCs can an Internet Gateway be attached to at any given time?

A. 2
B. 5
C. 1
D. By default 1, although it can be attached to any VPC peered with its owning VPC.

Answer: C

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/amazon-vpc-limits.html#vpc-limits-gateways

At any given time, an Internet Gateway can be attached to only one VPC. It can be detached from that VPC and used with another VPC.

19) Which of the following are not backup and restore solutions provided by AWS? (choose multiple)

A. AWS Elastic Block Store
B. AWS Storage Gateway
C. AWS Elastic Beanstalk
D. AWS Database Migration Service
E. AWS CloudFormation

Answer: C, E

Option A is a snapshot-based data backup solution.

Option B, AWS Storage Gateway, provides several products for backup & recovery.

 

Option D can be used as a database backup solution.

25) Which of the following is an AWS component that consumes resources from your VPC?

A. Internet Gateway
B. Gateway VPC Endpoints
C. Elastic IP Addresses
D. NAT Gateway

Answer: D

Option A is not correct.

An Internet Gateway is an AWS component that sits outside of your VPC and does not consume any resources from your VPC.

Option B is not correct.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

Option C is not correct.

An Elastic IP address is a static, public IPv4 address designed for dynamic cloud computing. You can associate an Elastic IP address with any instance or network interface in any VPC in your account. With an Elastic IP address, you can mask the failure of an instance by rapidly remapping the address to another instance in your VPC.

Elastic IP addresses do not belong to a single VPC.

Option D is correct.

To create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside. For more information about public and private subnets, see Subnet Routing. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. After you’ve created a NAT gateway, you must update the route tables associated with one or more of your private subnets to point internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the internet.

Frequently Asked Questions (FAQs)

How many questions are on the AWS Solutions Architect Associate exam?

The number of questions in the AWS Architect exam is around 60-70. This number can vary.

What is the passing score for AWS?

The passing score of the exam is around 70-75%. AWS doesn’t officially announce the passing score, but these figures are based on exam takers’ experience.

Is the AWS Solutions Architect Associate exam hard?

Not extremely tough. Compared to the Cloud Practitioner exam, it’s tougher. However, compared to the SysOps exam, it’s easier.

How many questions are on the AWS Solutions Architect Associate exam?

The number of questions in the AWS Architect exam is around 60-70. This number can vary.

Can I pass the AWS Solutions Architect Associate exam?
Yes. Anyone can pass the AWS Solutions Architect Associate exam with proper preparation and practice using sample questions from Whizlabs. Whizlabs offers 765 practice questions with detailed explanations that will help you pass the certification exam on the first attempt. You can also try the free tests.
Which AWS exam is the hardest?
How do I prepare for the AWS Solutions Architect exam?
Here is a very detailed guide on how to prepare for the AWS Solutions Architect certification exam. This will definitely help you.
Below is a snapshot of what’s covered in the Whizlabs courses. This will definitely help you.
Whizlabs Solutions Architect Course Contents
Are there hands-on labs in the Solutions Architect certification exam?

Summary

So, here we’ve presented 50+ free AWS Solutions Architect exam questions for the AWS associate certification exam. These AWS CSAA practice questions should have helped you check your preparation level and boost your confidence for the exam. We, at Whizlabs, aim to prepare you for the AWS Solutions Architect Associate exam (SAA-C03).

Note that these are not AWS certification exam dumps. Our AWS Solutions Architect Associate practice questions are real exam simulators that will help you pass the exam on the first attempt. Buying AWS exam dumps or brain dumps is not a good idea for passing this exam.

Below is the set of practice questions offered by Whizlabs. These are created by certified experts.


If you have any questions about our AWS CSAA exam questions, please contact our support at [email protected].

 

About Pavan Gumaste

Pavan Rao is a programmer / developer by profession and a cloud computing professional by choice, with in-depth knowledge of AWS, Azure, and Google Cloud Platform. He helps organisations figure out what to build, ensures successful delivery, and incorporates user learning to improve the strategy and product further.
