AWS Certified Database – Specialty – Question211

A company is launching a new Amazon RDS for MySQL Multi-AZ DB instance to be used as a data store for a custom-built application. After a series of tests with point-in-time recovery disabled, the company decides that it must have point-in-time recovery reenabled before using the DB instance to store production data.
What should a database specialist do so that point-in-time recovery can be successful?

A. Enable binary logging in the DB parameter group used by the DB instance.
B. Modify the DB instance and enable audit logs to be pushed to Amazon CloudWatch Logs.
C. Modify the DB instance and configure a backup retention period.
D. Set up a scheduled job to create manual DB instance snapshots.
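
Point-in-time recovery depends on automated backups, which exist only while the backup retention period is greater than zero (option C). A minimal boto3 sketch of that change, using a hypothetical instance identifier:

import boto3

rds = boto3.client("rds")

# PITR requires automated backups; setting a nonzero retention period
# (1-35 days) re-enables them. "mydb-instance" is a hypothetical name.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb-instance",
    BackupRetentionPeriod=7,
    ApplyImmediately=True,
)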

AWS Certified Database – Specialty – Question210

A software company uses an Amazon RDS for MySQL Multi-AZ DB instance as a data store for its critical applications. During an application upgrade process, a database specialist runs a custom SQL script that accidentally removes some of the default permissions of the master user.
What is the MOST operationally efficient way to restore the default permissions of the master user?

A. Modify the DB instance and set a new master user password.
B. Use AWS Secrets Manager to modify the master user password and restart the DB instance.
C. Create a new master user for the DB instance.
D. Review the IAM user that owns the DB instance, and add missing permissions.

Correct Answer: A

Explanation:
If you accidentally delete the permissions for the master user, you can restore them by modifying the DB instance and setting a new master user password.
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS…
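
A minimal boto3 sketch of this fix, using hypothetical identifier and password values:

import boto3

rds = boto3.client("rds")

# Resetting the master user password restores the master user's default
# privileges. Both values below are hypothetical placeholders.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-mysql-instance",
    MasterUserPassword="NewStrongPassword123!",
    ApplyImmediately=True,
)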

AWS Certified Database – Specialty – Question209

A database specialist is working on an Amazon RDS for PostgreSQL DB instance that is experiencing application performance issues due to the addition of new workloads. The database has 5 TB of storage space with Provisioned IOPS. Amazon CloudWatch metrics show that the average disk queue depth is greater than 200 and that the disk I/O response time is significantly higher than usual.
What should the database specialist do to improve the performance of the application immediately?

A. Increase the Provisioned IOPS rate on the storage.
B. Increase the available storage space.
C. Use General Purpose SSD (gp2) storage with burst credits.
D. Create a read replica to offload Read IOPS from the DB instance.

Correct Answer: A

Explanation:
The instance already uses Provisioned IOPS storage, and a sustained disk queue depth above 200 with elevated I/O latency means the workload needs more IOPS than are currently provisioned. Increasing the Provisioned IOPS rate raises the available I/O throughput immediately and without downtime. Moving to General Purpose SSD (gp2) would cap throughput at the volume's burst limit and would not relieve the bottleneck.
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage…
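
A boto3 sketch of the modification, with a hypothetical instance identifier and target IOPS value:

import boto3

rds = boto3.client("rds")

# Raising the Provisioned IOPS rate relieves the I/O bottleneck without
# downtime. The identifier and the 20000 IOPS target are hypothetical.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-postgres-instance",
    Iops=20000,
    ApplyImmediately=True,
)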

AWS Certified Database – Specialty – Question208

A company has a 4 TB on-premises Oracle Real Application Clusters (RAC) database. The company wants to migrate the database to AWS and reduce licensing costs. The company's application team wants to store JSON payloads that expire after 28 hours. The company has development capacity if code changes are required.
Which solution meets these requirements?

A. Use Amazon DynamoDB and leverage the Time to Live (TTL) feature to automatically expire the data.
B. Use Amazon RDS for Oracle with Multi-AZ. Create an AWS Lambda function to purge the expired data. Schedule the Lambda function to run daily using Amazon EventBridge.
C. Use Amazon DocumentDB with a read replica in a different Availability Zone. Use DocumentDB change streams to expire the data.
D. Use Amazon Aurora PostgreSQL with Multi-AZ and leverage the Time to Live (TTL) feature to automatically expire the data.

Correct Answer: A

Explanation:
Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput.
Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TT…
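
A short boto3 sketch of the TTL setup for the 28-hour expiry, with hypothetical table and attribute names:

import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table, pointing it at an epoch-seconds attribute.
dynamodb.update_time_to_live(
    TableName="json-payloads",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Store a JSON payload that expires 28 hours from now.
dynamodb.put_item(
    TableName="json-payloads",
    Item={
        "payload_id": {"S": "abc-123"},
        "payload": {"S": '{"status": "pending"}'},
        "expires_at": {"N": str(int(time.time()) + 28 * 3600)},
    },
)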

AWS Certified Database – Specialty – Question207

A finance company migrated its 3 TB on-premises PostgreSQL database to an Amazon Aurora PostgreSQL DB cluster. During a review after the migration, a database specialist discovers that the database is not encrypted at rest. The database must be encrypted at rest as soon as possible to meet security requirements. The database specialist must enable encryption for the DB cluster with minimal downtime.
Which solution will meet these requirements?

A. Modify the unencrypted DB cluster using the AWS Management Console. Enable encryption and choose to apply the change immediately.
B. Take a snapshot of the unencrypted DB cluster and restore it to a new DB cluster with encryption enabled. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.
C. Create an encrypted Aurora Replica of the unencrypted DB cluster. Promote the Aurora Replica as the new master.
D. Create a new DB cluster with encryption enabled and use the pg_dump and pg_restore utilities to load data to the new DB cluster. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster.

Correct Answer: B

Explanation:
You can't enable encryption on an existing unencrypted Aurora DB cluster; encryption can only be set when a cluster is created. Instead, take a snapshot of the unencrypted cluster, restore it to a new DB cluster with encryption enabled, point the application at the new cluster endpoint, and then delete the unencrypted cluster.
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overvi…
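
A boto3 sketch of the snapshot-and-restore path, with hypothetical cluster, snapshot, and KMS key names:

import boto3

rds = boto3.client("rds")

# 1. Snapshot the unencrypted cluster and wait for it to become available.
rds.create_db_cluster_snapshot(
    DBClusterSnapshotIdentifier="finance-db-snap",
    DBClusterIdentifier="finance-db",
)
rds.get_waiter("db_cluster_snapshot_available").wait(
    DBClusterSnapshotIdentifier="finance-db-snap"
)

# 2. Restore to a new cluster; supplying a KMS key enables encryption at rest.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="finance-db-encrypted",
    SnapshotIdentifier="finance-db-snap",
    Engine="aurora-postgresql",
    KmsKeyId="alias/aws/rds",
)

Note that for Aurora the restored cluster still needs DB instances created in it before it can serve traffic, and connection strings must be updated to the new endpoints.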

AWS Certified Database – Specialty – Question206

A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration.
Which solution will MOST improve the performance of the data migration?

A. Increase the number of tables that are loaded in parallel.
B. Drop all indexes on the source tables.
C. Change the processing mode from the batch optimized apply option to transactional mode.
D. Enable Multi-AZ on the target database while the full load task is in progress.

Correct Answer: A

Explanation:
The task is a full load with no ongoing changes, so change-processing settings such as batch optimized apply and transactional mode do not affect it. By default, AWS DMS loads eight tables in parallel (the MaxFullLoadSubTasks task setting). Increasing the number of tables loaded in parallel is the most effective way to speed up a full load, provided the replication instance has the capacity for it.
Reference: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices…
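
A boto3 sketch that raises the parallel-load setting on an existing task; the task ARN is a hypothetical placeholder:

import json
import boto3

dms = boto3.client("dms")

# Increase the number of tables loaded in parallel from the default of 8.
settings = {"FullLoadSettings": {"MaxFullLoadSubTasks": 16}}

dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",
    ReplicationTaskSettings=json.dumps(settings),
)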

AWS Certified Database – Specialty – Question205

A database specialist is designing an enterprise application for a large company. The application uses Amazon DynamoDB with DynamoDB Accelerator (DAX). The database specialist observes that most of the queries are not found in the DAX cache and that they still require DynamoDB table reads.
What should the database specialist review first to improve the utility of DAX?

A. The DynamoDB ConsumedReadCapacityUnits metric
B. The trust relationship to perform the DynamoDB API calls
C. The DAX cluster's TTL setting
D. The validity of customer-specified AWS Key Management Service (AWS KMS) keys for DAX encryption at rest
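
The item cache and query cache TTLs are set in the DAX cluster's parameter group; short TTLs cause entries to be evicted quickly, which lowers the cache hit rate. A boto3 sketch for inspecting them, assuming the standard DAX parameter names (record-ttl-millis and query-ttl-millis) and a hypothetical parameter group name:

import boto3

dax = boto3.client("dax")

# List the cluster's TTL-related parameters.
for param in dax.describe_parameters(ParameterGroupName="my-dax-params")["Parameters"]:
    if "ttl" in param["ParameterName"]:
        print(param["ParameterName"], param["ParameterValue"])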

AWS Certified Database – Specialty – Question204

A company conducted a security audit of its AWS infrastructure. The audit identified that data was not encrypted in transit between application servers and a MySQL database that is hosted in Amazon RDS.
After the audit, the company updated the application to use an encrypted connection. To prevent this problem from occurring again, the company's database team needs to configure the database to require in-transit encryption for all connections.
Which solution will meet this requirement?

A. Update the parameter group in use by the DB instance, and set the require_secure_transport parameter to ON.
B. Connect to the database, and use ALTER USER to enable the REQUIRE SSL option on the database user.
C. Update the security group in use by the DB instance, and remove port 80 to prevent unencrypted connections from being established.
D. Update the DB instance, and enable the Require Transport Layer Security option.

Correct Answer: A

Explanation:
You can set the require_secure_transport parameter to ON in the DB parameter group to require SSL/TLS for all connections to the DB instance.
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora…
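
A boto3 sketch of that parameter change, with a hypothetical parameter group name; require_secure_transport is dynamic, so it takes effect without a reboot:

import boto3

rds = boto3.client("rds")

# Require TLS for all connections. On RDS for MySQL the parameter takes 0/1.
rds.modify_db_parameter_group(
    DBParameterGroupName="prod-mysql-params",
    Parameters=[
        {
            "ParameterName": "require_secure_transport",
            "ParameterValue": "1",
            "ApplyMethod": "immediate",
        }
    ],
)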

AWS Certified Database – Specialty – Question203

A company stores session history for its users in an Amazon DynamoDB table. The company has a large user base and generates large amounts of session data. Teams analyze the session data for 1 week, and then the data is no longer needed. A database specialist needs to design an automated solution to purge session data that is more than 1 week old.
Which strategy meets these requirements with the MOST operational efficiency?

A. Create an AWS Step Functions state machine with a DynamoDB DeleteItem operation that uses the ConditionExpression parameter to delete items older than a week. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule that runs the Step Functions state machine on a weekly basis.
B. Create an AWS Lambda function to delete items older than a week from the DynamoDB table. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule that triggers the Lambda function on a weekly basis.
C. Enable Amazon DynamoDB Streams on the table. Use a stream to invoke an AWS Lambda function to delete items older than a week from the DynamoDB table.
D. Enable TTL on the DynamoDB table and set a Number data type as the TTL attribute. DynamoDB will automatically delete items that have a TTL that is less than the current time.
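
One caveat about TTL (option D): DynamoDB deletes expired items asynchronously, typically within a few days, so reads should filter out items that have expired but not yet been removed. A boto3 sketch with hypothetical table and attribute names:

import time
import boto3

dynamodb = boto3.client("dynamodb")

# Exclude expired-but-not-yet-deleted items from results.
response = dynamodb.scan(
    TableName="session-history",
    FilterExpression="expires_at > :now",
    ExpressionAttributeValues={":now": {"N": str(int(time.time()))}},
)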

AWS Certified Database – Specialty – Question202

A vehicle insurance company needs to choose a highly available database to track vehicle owners and their insurance details. The persisted data should be immutable in the database, including the complete and sequenced history of changes over time with all the owners and insurance transfer details for a vehicle. The data should be easily verifiable for the data lineage of an insurance claim.
Which approach meets these requirements with MINIMAL effort?

A. Create a blockchain to store the insurance details. Validate the data using a hash function to verify the data lineage of an insurance claim.
B. Create an Amazon DynamoDB table to store the insurance details. Validate the data using AWS DMS validation by moving the data to Amazon S3 to verify the data lineage of an insurance claim.
C. Create an Amazon QLDB ledger to store the insurance details. Validate the data by choosing the ledger name in the digest request to verify the data lineage of an insurance claim.
D. Create an Amazon Aurora database to store the insurance details. Validate the data using AWS DMS validation by moving the data to Amazon S3 to verify the data lineage of an insurance claim.
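
For context on option C, a QLDB digest is a cryptographic summary of the ledger's full change history that anchors verification of individual document revisions. A boto3 sketch of requesting one, with a hypothetical ledger name:

import boto3

qldb = boto3.client("qldb")

# Request the current digest for the ledger; it covers the entire journal
# and is the starting point for verifying any document revision.
digest = qldb.get_digest(Name="vehicle-insurance-ledger")
print(digest["Digest"], digest["DigestTipAddress"])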