AWS Certified Database – Specialty – Question181

A financial services company has an application deployed on AWS that uses an Amazon Aurora PostgreSQL DB cluster. A recent audit showed that no log files contained database administrator activity. A database specialist needs to recommend a solution to provide database access and activity logs. The solution should use the least amount of effort and have a minimal impact on performance.
Which solution should the database specialist recommend?

A. Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.
B. Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.
C. Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.
D. Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.
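
For context, Database Activity Streams are started at the cluster level through the RDS API, and asynchronous mode decouples audit capture from transaction processing, which is what keeps the performance impact minimal. A minimal boto3 sketch, assuming placeholder cluster ARN and KMS key values:

    import boto3

    rds = boto3.client("rds")

    # Asynchronous mode buffers audit records off the database's critical
    # path; records are published to an Amazon Kinesis data stream.
    rds.start_activity_stream(
        ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora-pg",  # placeholder
        Mode="async",
        KmsKeyId="alias/my-das-key",  # placeholder; a customer managed key is required
        ApplyImmediately=True,
    )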

AWS Certified Database – Specialty – Question180

A media company wants to use zero-downtime patching (ZDP) for its Amazon Aurora MySQL database. Multiple processing applications use SSL certificates to connect to the database endpoints and read replicas.
Which factor will have the LEAST impact on the success of ZDP?

A. Binary logging is enabled, or binary log replication is in progress.
B. Current SSL connections are open to the database.
C. Temporary tables or table locks are in use.
D. The value of the lower_case_table_names server parameter was set to 0 when the tables were created.

Correct Answer: A

Explanation:
In Aurora MySQL 2.10 and higher, Aurora can perform a zero-downtime patch even when binary log replication is in progress, so this factor has the LEAST impact on the success of ZDP.
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora…
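
A quick way to verify the preconditions is to check the engine version and the binlog_format cluster parameter. A minimal boto3 sketch, assuming a placeholder cluster identifier:

    import boto3

    rds = boto3.client("rds")

    # Placeholder identifier; substitute your Aurora MySQL cluster.
    cluster = rds.describe_db_clusters(
        DBClusterIdentifier="media-aurora-mysql"
    )["DBClusters"][0]
    print("Engine version:", cluster["EngineVersion"])  # ZDP with binlog needs 2.10+

    # binlog_format lives in the cluster parameter group; OFF disables binary logging.
    paginator = rds.get_paginator("describe_db_cluster_parameters")
    for page in paginator.paginate(
        DBClusterParameterGroupName=cluster["DBClusterParameterGroup"]
    ):
        for param in page["Parameters"]:
            if param["ParameterName"] == "binlog_format":
                print("binlog_format:", param.get("ParameterValue", "OFF"))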

AWS Certified Database – Specialty – Question179

A database specialist is responsible for designing a highly available solution for online transaction processing (OLTP) using Amazon RDS for MySQL production databases. Disaster recovery requirements include a cross-Region deployment along with an RPO of 5 minutes and RTO of 30 minutes.
What should the database specialist do to align to the high availability and disaster recovery requirements?

A. Use a Multi-AZ deployment in each Region.
B. Use read replica deployments in all Availability Zones of the secondary Region.
C. Use Multi-AZ and read replica deployments within a Region.
D. Use Multi-AZ and deploy a read replica in a secondary Region.

Correct Answer: D

Explanation:
Amazon RDS Multi-AZ provides high availability and data protection within the primary Region, while a read replica in a secondary Region provides cross-Region disaster recovery: asynchronous replication typically keeps the replica within seconds of the source (well inside the 5-minute RPO), and promoting the replica to a standalone instance fits within the 30-minute RTO.
Reference: https://dataintegration.info/managed-disaster-recovery-with-amazon-…
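
Operationally, a cross-Region read replica is created by calling the RDS API in the destination Region and referencing the source instance by ARN; during a DR event the replica is promoted to a standalone instance. A minimal boto3 sketch, with placeholder Regions and identifiers:

    import boto3

    # Call the API in the DR Region (placeholder Region and names).
    rds_dr = boto3.client("rds", region_name="us-west-2")

    rds_dr.create_db_instance_read_replica(
        DBInstanceIdentifier="prod-mysql-dr",
        SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:prod-mysql",
        SourceRegion="us-east-1",  # lets boto3 handle the presigned URL
    )

    # During failover, promote the replica to a writable standalone instance:
    # rds_dr.promote_read_replica(DBInstanceIdentifier="prod-mysql-dr")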

AWS Certified Database – Specialty – Question178

A company is using Amazon Neptune as the graph database for one of its products. The company's data science team accidentally created large amounts of temporary information during an ETL process. The Neptune DB cluster automatically increased the storage space to accommodate the new data, but the data science team deleted the unused information.
What should a database specialist do to avoid unnecessary charges for the unused cluster volume space?

A. Take a snapshot of the cluster volume. Restore the snapshot in another cluster with a smaller volume size.
B. Use the AWS CLI to turn on automatic resizing of the cluster volume.
C. Export the cluster data into a new Neptune DB cluster.
D. Add a Neptune read replica to the cluster. Promote this replica as a new primary DB instance. Reset the storage space of the cluster.

Correct Answer: C

Explanation:
When data is deleted from a Neptune DB cluster, the allocated cluster volume does not shrink; the freed space is reused by future writes, but the cluster is still billed for the high-water mark of allocated storage. Exporting the data into a new Neptune DB cluster produces a volume sized to the data that actually remains.
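
The allocated volume size can be watched through the VolumeBytesUsed CloudWatch metric, which makes unexpected growth like this visible early. A minimal boto3 sketch, assuming a placeholder cluster identifier:

    import boto3
    from datetime import datetime, timedelta, timezone

    cw = boto3.client("cloudwatch")

    now = datetime.now(timezone.utc)
    resp = cw.get_metric_statistics(
        Namespace="AWS/Neptune",
        MetricName="VolumeBytesUsed",  # billed, allocated cluster volume
        Dimensions=[{"Name": "DBClusterIdentifier", "Value": "graph-cluster"}],  # placeholder
        StartTime=now - timedelta(days=1),
        EndTime=now,
        Period=3600,
        Statistics=["Maximum"],
    )
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Maximum"] / 1024**3, "GiB")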

AWS Certified Database – Specialty – Question177

A company is building a web application on AWS. The application requires the database to support read and write operations in multiple AWS Regions simultaneously. The database also needs to propagate data changes between Regions as the changes occur. The application must be highly available and must provide latency of single-digit milliseconds.
Which solution meets these requirements?

A. Amazon DynamoDB global tables
B. Amazon DynamoDB streams with AWS Lambda to replicate the data
C. An Amazon ElastiCache for Redis cluster with cluster mode enabled and multiple shards
D. An Amazon Aurora global database

Correct Answer: A

Explanation:
Global tables enable you to read and write your data locally, providing single-digit-millisecond latency for your globally distributed application at any scale.
Reference: https://aws.amazon.com/dynamodb/global-tables/
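
With the current global tables version (2019.11.21), a replica Region is added with a single UpdateTable call, and DynamoDB handles the ongoing cross-Region propagation. A minimal boto3 sketch, assuming placeholder table and Region names:

    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")

    # Adds a replica in a second Region; writes in either Region
    # replicate to the other, typically within a second.
    ddb.update_table(
        TableName="ad-impressions",  # placeholder
        ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
    )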

AWS Certified Database – Specialty – Question176

A company is running its critical production workload on a 500 GB Amazon Aurora MySQL DB cluster. A database engineer must move the workload to a new Amazon Aurora Serverless MySQL DB cluster without data loss.
Which solution will accomplish the move with the LEAST downtime and the LEAST application impact?

A. Modify the existing DB cluster and update the Aurora configuration to "Serverless."
B. Create a snapshot of the existing DB cluster and restore it to a new Aurora Serverless DB cluster.
C. Create an Aurora Serverless replica from the existing DB cluster and promote it to primary when the replica lag is minimal.
D. Replicate the data between the existing DB cluster and a new Aurora Serverless DB cluster by using AWS Database Migration Service (AWS DMS) with change data capture (CDC) enabled.
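
For reference, option B's restore path maps to the RestoreDBClusterFromSnapshot API with the engine mode set to serverless. A minimal boto3 sketch, assuming placeholder identifiers and an Aurora MySQL-compatible engine:

    import boto3

    rds = boto3.client("rds")

    rds.restore_db_cluster_from_snapshot(
        DBClusterIdentifier="prod-serverless",       # placeholder new cluster
        SnapshotIdentifier="prod-cluster-snapshot",  # placeholder snapshot
        Engine="aurora-mysql",
        EngineMode="serverless",
    )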

AWS Certified Database – Specialty – Question175

An online advertising company is implementing an application that displays advertisements to its users. The application uses an Amazon DynamoDB table as a data store. The application also uses a DynamoDB Accelerator (DAX) cluster to cache its reads. Most of the reads are from the GetItem query and the BatchGetItem query. Consistency of reads is not a requirement for this application.
Upon deployment, the application cache is not performing as expected. Specifically, strongly consistent queries that run against the DAX cluster take many milliseconds to respond instead of microseconds.
How can the company improve the cache behavior to increase application performance?

A. Increase the size of the DAX cluster.
B. Configure DAX to be an item cache with no query cache.
C. Use eventually consistent reads instead of strongly consistent reads.
D. Create a new DAX cluster with a higher TTL for the item cache.
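
Relevant background: DAX caches only eventually consistent GetItem and BatchGetItem results; strongly consistent reads are passed through to DynamoDB and never served from the item cache. The DAX SDK clients mirror the DynamoDB data-plane API, so the read path is controlled by the ConsistentRead flag, as in this boto3-style sketch with a placeholder table:

    import boto3

    ddb = boto3.client("dynamodb")

    # ConsistentRead=False (the default) is the cacheable path through DAX;
    # ConsistentRead=True always bypasses the cache and hits DynamoDB.
    ddb.get_item(
        TableName="ads",               # placeholder
        Key={"ad_id": {"S": "1234"}},  # placeholder key schema
        ConsistentRead=False,
    )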

AWS Certified Database – Specialty – Question174

An ecommerce company is migrating its core application database to Amazon Aurora MySQL. The company is currently performing online transaction processing (OLTP) stress testing with concurrent database sessions. During the first round of tests, a database specialist noticed slow performance for some specific write operations. Reviewing Amazon CloudWatch metrics for the Aurora DB cluster showed 90% CPU utilization.
Which steps should the database specialist take to MOST effectively identify the root cause of high CPU utilization and slow performance? (Choose two.)

A. Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests.
B. Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.
C. Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.
D. Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.
E. Enable Advanced Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.
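
For context, both Enhanced Monitoring and Performance Insights can be enabled with a single ModifyDBInstance call per instance. A minimal boto3 sketch, assuming placeholder identifiers and an existing monitoring IAM role:

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="aurora-writer-1",  # placeholder
        MonitoringInterval=15,  # seconds; OS metrics at sub-30-second granularity
        MonitoringRoleArn="arn:aws:iam::123456789012:role/rds-monitoring-role",  # placeholder
        EnablePerformanceInsights=True,  # exposes top SQL statements and wait events
        ApplyImmediately=True,
    )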

AWS Certified Database – Specialty – Question173

A company runs a MySQL database for its ecommerce application on a single Amazon RDS DB instance. Application purchases are automatically saved to the database, which causes intensive writes. Company employees frequently generate purchase reports. The company needs to improve database performance and reduce downtime due to patching for upgrades.
Which approach will meet these requirements with the LEAST amount of operational overhead?

A. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and enable Memcached in the MySQL option group.
B. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and set up replication to a MySQL DB instance running on Amazon EC2.
C. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and add a read replica.
D. Add a read replica and promote it to an Amazon Aurora MySQL DB cluster master. Then enable Amazon Aurora Serverless.
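
For context, converting to Multi-AZ and adding a read replica (the combination in option C) are each a single RDS API call against the existing instance. A minimal boto3 sketch with placeholder identifiers:

    import boto3

    rds = boto3.client("rds")

    # Multi-AZ shortens patching and failover downtime via the standby.
    rds.modify_db_instance(
        DBInstanceIdentifier="ecommerce-mysql",  # placeholder
        MultiAZ=True,
        ApplyImmediately=True,
    )

    # A read replica offloads the report queries from the write-heavy primary.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="ecommerce-mysql-reports",  # placeholder
        SourceDBInstanceIdentifier="ecommerce-mysql",
    )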

AWS Certified Database – Specialty – Question172

A company is using Amazon DynamoDB global tables for an online gaming application. The game has players around the world. As the game has become more popular, the volume of requests to DynamoDB has increased significantly. Recently, players have reported that the game state is inconsistent between players in different countries. A database specialist observes that the ReplicationLatency metric for some of the replica tables is too high.
Which approach will alleviate the problem?

A. Configure all replica tables to use DynamoDB auto scaling.
B. Configure a DynamoDB Accelerator (DAX) cluster on each of the replicas.
C. Configure the primary table to use DynamoDB auto scaling and the replica tables to use manually provisioned capacity.
D. Configure the table-level write throughput limit service quota to a higher value.

Correct Answer: A

Explanation:
Using DynamoDB auto scaling is the recommended way to manage throughput capacity settings for replica tables that use provisioned mode. When any replica table is under-provisioned for writes, replicated writes back up and ReplicationLatency rises; auto scaling keeps each replica's write capacity in step with actual demand.
Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/gl…
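
Auto scaling for a provisioned-mode table (and each of its replicas) is configured through Application Auto Scaling; a target-tracking policy then keeps consumed write capacity near the target utilization. A minimal boto3 sketch for one table's write capacity, with placeholder names and limits:

    import boto3

    aas = boto3.client("application-autoscaling")

    # Register the table's write capacity as a scalable target
    # (repeat for each replica Region, and for read capacity as needed).
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/game-state",  # placeholder
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        MinCapacity=5,
        MaxCapacity=4000,
    )

    aas.put_scaling_policy(
        PolicyName="game-state-wcu-target-tracking",
        ServiceNamespace="dynamodb",
        ResourceId="table/game-state",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,  # aim for 70% write utilization
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    )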