AWS Certified Database – Specialty – Question191

A company's development team needs to have production data restored in a staging AWS account. The production database is running on an Amazon RDS for PostgreSQL Multi-AZ DB instance, which has AWS KMS encryption enabled using the default KMS key. A database specialist planned to share the most recent automated snapshot with the staging account, but discovered that the option to share snapshots is disabled in the AWS Management Console.
What should the database specialist do to resolve this?

A. Disable automated backups in the DB instance. Share both the automated snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account and enable automated backups.
B. Copy the automated snapshot specifying a custom KMS encryption key. Share both the copied snapshot and the custom KMS encryption key with the staging account. Restore the snapshot to the staging account within the same Region.
C. Modify the DB instance to use a custom KMS encryption key. Share both the automated snapshot and the custom KMS encryption key with the staging account. Restore the snapshot in the staging account.
D. Copy the automated snapshot while keeping the default KMS key. Share both the snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account.

Correct Answer: B

Explanation:
An automated snapshot cannot be shared directly, and a snapshot encrypted with the default AWS managed KMS key cannot be shared at all, because the default key cannot be granted to another account. Instead, copy the automated snapshot and specify a customer managed KMS key to encrypt the copy. You can then share both the copied snapshot and the custom key with the staging account, which can restore an encrypted DB instance from the shared snapshot.
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySna…
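As a sketch of the API sequence (the snapshot names, key ARN, and account IDs below are hypothetical; each dict mirrors the parameters of the corresponding boto3 RDS call, which is shown in comments rather than executed):

```python
# Sketch of the copy-and-share flow for an encrypted RDS snapshot.
# All identifiers are hypothetical. In practice each params dict would
# be passed to the matching boto3 call, e.g. rds.copy_db_snapshot(**copy_params).

def copy_snapshot_params(source_snapshot_id, target_snapshot_id, custom_kms_key_arn):
    """Parameters for copying an automated snapshot while re-encrypting it
    with a customer managed KMS key (the default key cannot be shared)."""
    return {
        "SourceDBSnapshotIdentifier": source_snapshot_id,
        "TargetDBSnapshotIdentifier": target_snapshot_id,
        "KmsKeyId": custom_kms_key_arn,
    }

def share_snapshot_params(snapshot_id, staging_account_id):
    """Parameters for sharing the copied snapshot with another account
    via rds.modify_db_snapshot_attribute."""
    return {
        "DBSnapshotIdentifier": snapshot_id,
        "AttributeName": "restore",
        "ValuesToAdd": [staging_account_id],
    }

copy_params = copy_snapshot_params(
    "rds:mydb-2024-01-01-00-00",  # hypothetical automated snapshot name
    "mydb-share-copy",
    "arn:aws:kms:us-east-1:111111111111:key/hypothetical-key-id",
)
share_params = share_snapshot_params("mydb-share-copy", "222222222222")
```

The customer managed key's policy must also grant the staging account permission to use the key (for example, `kms:Decrypt` and `kms:CreateGrant`) so the shared snapshot can actually be restored there.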

AWS Certified Database – Specialty – Question190

Application developers have reported that an application is running slower as more users are added. The application database is running on an Amazon Aurora DB cluster with an Aurora Replica. The application is written to take advantage of read scaling through reader endpoints. A database specialist looks at the performance metrics of the database and determines that, as new users were added to the database, the primary instance CPU utilization steadily increased while the Aurora Replica CPU utilization remained steady.
How can the database specialist improve database performance while ensuring minimal downtime?

A. Modify the Aurora DB cluster to add more replicas until the overall load stabilizes. Then, reduce the number of replicas once the application meets service level objectives.
B. Modify the primary instance to a larger instance size that offers more CPU capacity.
C. Modify a replica to a larger instance size that has more CPU capacity. Then, promote the modified replica.
D. Restore the Aurora DB cluster to one that has an instance size with more CPU capacity. Then, swap the names of the old and new DB clusters.

Correct Answer: B

Explanation:
As users were added, the primary instance CPU utilization rose while the Aurora Replica CPU utilization stayed flat, so the additional load is landing on the writer rather than on reads served through the reader endpoint. Adding more replicas would not relieve a writer bottleneck. Modifying the primary instance to a larger instance class with more CPU capacity addresses the bottleneck directly with minimal downtime.
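The triage logic behind this answer can be sketched as a toy helper (the function name, trend labels, and return strings are illustrative, not an AWS API):

```python
def scaling_recommendation(primary_cpu_trend, replica_cpu_trend):
    """Toy triage of Aurora CPU metrics: decide whether to scale the writer
    up or add readers. Each trend is 'rising' or 'steady'."""
    if primary_cpu_trend == "rising" and replica_cpu_trend == "steady":
        # Load is landing on the writer, so adding replicas will not help.
        return "scale up the primary instance"
    if replica_cpu_trend == "rising":
        # Read traffic is the bottleneck; add replicas behind the reader endpoint.
        return "add Aurora Replicas"
    return "no change needed"
```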

AWS Certified Database – Specialty – Question189

A company wants to build a new invoicing service for its cloud-native application on AWS. The company has a small development team and wants to focus on service feature development and minimize operations and maintenance as much as possible. The company expects the service to handle billions of requests and millions of new records every day. The service feature requirements, including data access patterns, are well defined. The service has an availability target of 99.99% with a millisecond-level latency requirement. The database for the service will be the system of record for invoicing data.
Which database solution meets these requirements at the LOWEST cost?

A. Amazon Neptune
B. Amazon Aurora PostgreSQL Serverless
C. Amazon RDS for PostgreSQL
D. Amazon DynamoDB

Correct Answer: D

Explanation:
Amazon DynamoDB is fully managed, so it minimizes operations and maintenance, and it is built for well-defined access patterns at massive scale, delivering single-digit millisecond latency with a 99.99% availability SLA. Amazon Neptune is a graph database and is not a fit for an invoicing system of record, while the relational options require more operational effort and cost more at this request volume.

AWS Certified Database – Specialty – Question188

A company has an application that uses an Amazon DynamoDB table as its data store. During normal business days, the throughput requirements from the application are uniform and consist of 5 standard write calls per second to the DynamoDB table. Each write call has 2 KB of data.
For 1 hour each day, the company runs an additional automated job on the DynamoDB table that makes 20 write requests per second. No other application writes to the DynamoDB table. The DynamoDB table does not have to meet any additional capacity requirements.
How should a database specialist configure the DynamoDB table's capacity to meet these requirements MOST cost-effectively?

A. Use DynamoDB provisioned capacity with 5 WCUs and auto scaling.
B. Use DynamoDB provisioned capacity with 5 WCUs and a write-through cache that DynamoDB Accelerator (DAX) provides.
C. Use DynamoDB provisioned capacity with 10 WCUs and auto scaling.
D. Use DynamoDB provisioned capacity with 10 WCUs and no auto scaling.
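The WCU arithmetic behind these options can be checked directly. One WCU covers one standard write per second of an item up to 1 KB, so a 2 KB item consumes 2 WCUs per write:

```python
import math

def required_wcus(item_size_kb, writes_per_second):
    """WCUs needed for standard (non-transactional) writes: each write
    consumes ceil(item_size / 1 KB) write capacity units."""
    return math.ceil(item_size_kb) * writes_per_second

baseline = required_wcus(2, 5)       # steady daytime load: 10 WCUs
peak = required_wcus(2, 5 + 20)      # during the 1-hour daily job: 50 WCUs
```

The 5-WCU options cannot even cover the baseline, and a flat 10 WCUs would throttle the hourly job, which points to provisioning 10 WCUs with auto scaling to absorb the predictable daily peak.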

AWS Certified Database – Specialty – Question187

A company with 500,000 employees needs to supply its employee list to an application used by human resources. Every 30 minutes, the data is exported using the LDAP service to load into a new Amazon DynamoDB table. The data model has a base table with Employee ID for the partition key and a global secondary index with Organization ID as the partition key.
While importing the data, a database specialist receives ProvisionedThroughputExceededException errors.
After increasing the provisioned write capacity units (WCUs) to 50,000, the specialist receives the same errors. Amazon CloudWatch metrics show a consumption of 1,500 WCUs.
What should the database specialist do to address the issue?

A. Change the data model to avoid hot partitions in the global secondary index.
B. Enable auto scaling for the table to automatically increase write capacity during bulk imports.
C. Modify the table to use on-demand capacity instead of provisioned capacity.
D. Increase the number of retries on the bulk loading application.
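The symptoms here (throttling despite 50,000 provisioned WCUs while only 1,500 are consumed) indicate a hot partition on the low-cardinality Organization ID key of the global secondary index. A common data-model remedy is write sharding: append a deterministic suffix so writes for one organization spread across many partitions. The shard count and key format below are illustrative:

```python
import zlib

SHARD_COUNT = 10  # illustrative; size this to the write throughput needed

def sharded_gsi_key(organization_id, employee_id):
    """Spread GSI writes for one organization across SHARD_COUNT partitions.
    The suffix is derived from the item's own key, so it is deterministic;
    a reader can fan out over all suffixes to fetch one organization."""
    suffix = zlib.crc32(employee_id.encode()) % SHARD_COUNT
    return f"{organization_id}#{suffix}"
```

A query for one organization then issues SHARD_COUNT parallel Query calls, one per suffix, and merges the results.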

AWS Certified Database – Specialty – Question186

A financial services company runs an on-premises MySQL database for a critical application. The company is dissatisfied with its current database disaster recovery (DR) solution. The application experiences a significant amount of downtime whenever the database fails over to its DR facility. The application also experiences slower response times when reports are processed on the same database. To minimize the downtime in DR situations, the company has decided to migrate the database to AWS. The company requires a solution that is highly available and the most cost-effective.
Which solution meets these requirements?

A. Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the replica instance endpoint and report queries to reference the primary DB instance endpoint.
B. Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint.
C. Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the cluster endpoint and report queries to reference the reader endpoint.
D. Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint.

Correct Answer: C

Explanation:
An Aurora DB cluster with an Aurora Replica in a second Availability Zone typically fails over in under a minute, which addresses the downtime problem. Pointing the application at the cluster endpoint keeps it connected to the writer across failovers, and sending report queries to the reader endpoint offloads reporting from the writer, which addresses the slow response times. Option A reverses the endpoints, and an Aurora cluster with two instances is more cost-effective than an RDS Multi-AZ deployment plus a separate read replica.

AWS Certified Database – Specialty – Question185

A database specialist is designing an application to answer one-time queries. The application will query complex customer data and provide reports to end users. These reports can include many fields. The database specialist wants to give users the ability to query the database by using any of the provided fields.
The database's traffic volume will be high but variable during peak times. However, the database will not have much traffic at other times during the day.
Which solution will meet these requirements MOST cost-effectively?

A. Amazon DynamoDB with provisioned capacity mode and auto scaling
B. Amazon DynamoDB with on-demand capacity mode
C. Amazon Aurora with auto scaling enabled
D. Amazon Aurora in a serverless mode

AWS Certified Database – Specialty – Question184

A financial company is hosting its web application on AWS. The application's database is hosted on Amazon RDS for MySQL with automated backups enabled. The application has caused a logical corruption of the database, which is causing the application to become unresponsive. The specific time of the corruption has been identified, and it was within the backup retention period.
How should a database specialist recover the database to the most recent point before corruption?

A. Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.
B. Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.
C. Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.
D. Restore using the appropriate automated backup. No changes to the application connection string are required.

Correct Answer: B

Explanation:
Point-in-time restore creates a new DB instance with a new endpoint; it never restores in place. After restoring the DB instance to the identified time just before the corruption, the application connection string must be updated to point to the new instance, so options A and D are not possible. Restoring the latest automated backup (option C) would lose all changes made since that backup rather than recovering to the most recent point before the corruption.
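A sketch of why the connection string must change (the instance identifiers and timestamp are hypothetical; the dict mirrors the parameters of the boto3 `restore_db_instance_to_point_in_time` call, which always creates a new instance):

```python
from datetime import datetime, timezone

def pitr_params(source_instance_id, new_instance_id, restore_time):
    """Point-in-time restore never overwrites the source instance: it
    requires a *new* target identifier, so the restored instance gets a
    new endpoint and the application connection string must be updated."""
    return {
        "SourceDBInstanceIdentifier": source_instance_id,
        "TargetDBInstanceIdentifier": new_instance_id,  # must differ from source
        "RestoreTime": restore_time,
    }

params = pitr_params(
    "prod-mysql",
    "prod-mysql-restored",  # new endpoint -> update the connection string
    datetime(2024, 1, 1, 11, 59, tzinfo=timezone.utc),  # just before corruption
)
```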

AWS Certified Database – Specialty – Question183

A company is migrating a database in an Amazon RDS for SQL Server DB instance from one AWS Region to another. The company wants to minimize database downtime during the migration.
Which strategy should the company choose for this cross-Region migration?

A. Back up the source database using native backup to an Amazon S3 bucket in the same Region. Then restore the backup in the target Region.
B. Back up the source database using native backup to an Amazon S3 bucket in the same Region. Use Amazon S3 Cross-Region Replication to copy the backup to an S3 bucket in the target Region. Then restore the backup in the target Region.
C. Configure AWS Database Migration Service (AWS DMS) to replicate data between the source and the target databases. Once the replication is in sync, terminate the DMS task.
D. Add an RDS for SQL Server cross-Region read replica in the target Region. Once the replication is in sync, promote the read replica to master.

Correct Answer: C

Explanation:
AWS DMS can continuously replicate changes from the source database to the target Region while the application keeps using the source, so the only downtime is the brief cutover once replication is in sync. Native backup and restore (options A and B) require a window in which writes must stop or changes made after the backup are lost, and a native restore can only read from an S3 bucket in the same Region as the DB instance, so option A does not work at all. RDS for SQL Server does not support cross-Region read replicas, ruling out option D.

AWS Certified Database – Specialty – Question182

A company uses a single-node Amazon RDS for MySQL DB instance for its production database. The DB instance runs in an AWS Region in the United States.
A week before a big sales event, a new maintenance update is available for the DB instance. The maintenance update is marked as required. The company wants to minimize downtime for the DB instance and asks a database specialist to make the DB instance highly available until the sales event ends.
Which solution will meet these requirements?

A. Defer the maintenance update until the sales event is over.
B. Create a read replica with the latest update. Initiate a failover before the sales event.
C. Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.
D. Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.

Correct Answer: D

Explanation:
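The conversion in answer D maps to a single modification (the instance identifier is hypothetical; the dict mirrors the parameters of the boto3 `modify_db_instance` call). With Multi-AZ in place, RDS applies most maintenance to the standby first, fails over, and then updates the old primary, keeping downtime to the short failover window:

```python
def enable_multi_az_params(instance_id):
    """Parameters to convert a single-node RDS instance to Multi-AZ.
    RDS provisions and synchronizes a standby while the current primary
    keeps serving traffic, so the conversion itself avoids downtime."""
    return {
        "DBInstanceIdentifier": instance_id,
        "MultiAZ": True,
        "ApplyImmediately": True,  # do not wait for the next maintenance window
    }

params = enable_multi_az_params("prod-mysql")  # hypothetical identifier
```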