AWS Certified Database – Specialty – Question201

An ecommerce company uses Amazon DynamoDB as the backend for its payments system. A new regulation requires the company to log all data access requests for financial audits. For this purpose, the company plans to use AWS logging and save the logs to Amazon S3.
How can a database specialist activate logging on the database?

A. Use AWS CloudTrail to monitor DynamoDB control-plane operations. Create a DynamoDB stream to monitor data-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.
B. Use AWS CloudTrail to monitor DynamoDB data-plane operations. Create a DynamoDB stream to monitor control-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.
C. Create two trails in AWS CloudTrail. Use Trail1 to monitor DynamoDB control-plane operations. Use Trail2 to monitor DynamoDB data-plane operations.
D. Use AWS CloudTrail to monitor DynamoDB data-plane and control-plane operations.
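
For context, CloudTrail can capture DynamoDB data-plane (item-level) activity through data event selectors, in addition to the control-plane management events it logs by default, and deliver both to an S3 bucket. A minimal boto3 sketch, with a hypothetical trail name and bucket:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Hypothetical names; the S3 bucket must already exist with a
# bucket policy that allows CloudTrail delivery.
TRAIL_NAME = "payments-audit-trail"
BUCKET_NAME = "payments-audit-logs"

# Management (control-plane) events are logged by default;
# advanced event selectors add DynamoDB data-plane events.
cloudtrail.create_trail(
    Name=TRAIL_NAME,
    S3BucketName=BUCKET_NAME,
    IsMultiRegionTrail=True,
)
cloudtrail.put_event_selectors(
    TrailName=TRAIL_NAME,
    AdvancedEventSelectors=[
        {
            "Name": "DynamoDB data-plane events",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::DynamoDB::Table"]},
            ],
        }
    ],
)
cloudtrail.start_logging(Name=TRAIL_NAME)
```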

AWS Certified Database – Specialty – Question200

A company is using Amazon Aurora MySQL as the database for its retail application on AWS. The company receives a notification of a pending database upgrade and wants to ensure upgrades do not occur before or during the most critical time of year. Company leadership is concerned that an Amazon RDS maintenance window will cause an outage during data ingestion.
Which step can be taken to ensure that the application is not interrupted?

A. Disable weekly maintenance on the DB cluster.
B. Clone the DB cluster and migrate it to a new copy of the database.
C. Choose to defer the upgrade and then find an appropriate down time for patching.
D. Set up an Aurora Replica and promote it to primary at the time of patching.
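
For context, RDS exposes pending maintenance through its API, and deferring an item corresponds to removing the next-maintenance opt-in. A minimal boto3 sketch of inspecting and deferring a pending upgrade, with a hypothetical cluster ARN:

```python
import boto3

rds = boto3.client("rds")

# Hypothetical cluster ARN.
CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:retail-aurora"

# List the maintenance actions pending for the cluster.
pending = rds.describe_pending_maintenance_actions(
    ResourceIdentifier=CLUSTER_ARN
)
for resource in pending["PendingMaintenanceActions"]:
    for action in resource["PendingMaintenanceActionDetails"]:
        print(action["Action"], action.get("CurrentApplyDate"))

# Defer the upgrade; "undo-opt-in" removes the next-maintenance
# opt-in, which is what the console's "defer" option does.
rds.apply_pending_maintenance_action(
    ResourceIdentifier=CLUSTER_ARN,
    ApplyAction="db-upgrade",
    OptInType="undo-opt-in",
)
```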

AWS Certified Database – Specialty – Question199

A company has deployed an application that uses an Amazon RDS for MySQL DB cluster. The DB cluster uses three read replicas. The primary DB instance is an 8XL-sized instance, and the read replicas are each XL-sized instances.
Users report that database queries are returning stale data. The replication lag indicates that the replicas are 5 minutes behind the primary DB instance. Status queries on the replicas show that the SQL_THREAD is 10 binlogs behind the IO_THREAD and that the IO_THREAD is 1 binlog behind the primary.
Which changes will reduce the lag? (Choose two.)

A. Deploy two additional read replicas matching the existing replica DB instance size.
B. Migrate the primary DB instance to an Amazon Aurora MySQL DB cluster and add three Aurora Replicas.
C. Move the read replicas to the same Availability Zone as the primary DB instance.
D. Increase the instance size of the primary DB instance within the same instance class.
E. Increase the instance size of the read replicas to the same size and class as the primary DB instance.

Correct Answer: BE
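
The status queries point at the replicas' apply capacity: the IO_THREAD is nearly caught up, but the undersized XL replicas cannot apply binlogs as fast as the 8XL primary writes them. A minimal boto3 sketch of scaling the replicas to match the primary (identifiers and instance class are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Hypothetical replica identifiers; the replicas are scaled up to
# the same size and class as the 8XL primary (option E).
REPLICAS = ["app-replica-1", "app-replica-2", "app-replica-3"]

for replica_id in REPLICAS:
    rds.modify_db_instance(
        DBInstanceIdentifier=replica_id,
        DBInstanceClass="db.r5.8xlarge",  # match the primary's size
        ApplyImmediately=True,            # resize now, not in the window
    )
```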

AWS Certified Database – Specialty – Question198

A database specialist needs to delete user data and sensor data 1 year after it was loaded in an Amazon DynamoDB table. TTL is enabled on one of the attributes. The database specialist monitors TTL rates on the Amazon CloudWatch metrics for the table and observes that items are not being deleted as expected.
What is the MOST likely reason that the items are not being deleted?

A. The TTL attribute's value is set as a Number data type.
B. The TTL attribute's value is set as a Binary data type.
C. The TTL attribute's value is a timestamp in the Unix epoch time format in seconds.
D. The TTL attribute's value is set with an expiration of 1 year.

Correct Answer: B

Explanation:
DynamoDB TTL requires the designated attribute to be of the Number data type, holding a Unix epoch timestamp in seconds. An attribute stored as a Binary type is ignored by TTL, so those items are never deleted.
Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ho…
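
For contrast with the failing setup, a working TTL configuration designates a Number attribute holding an epoch timestamp in seconds. A minimal boto3 sketch, with hypothetical table and attribute names:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "sensor-data"  # hypothetical table name

# TTL must point at a Number attribute holding a Unix epoch
# timestamp in seconds.
dynamodb.update_time_to_live(
    TableName=TABLE,
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Write an item that expires one year after it is loaded.
one_year = 365 * 24 * 60 * 60
dynamodb.put_item(
    TableName=TABLE,
    Item={
        "sensor_id": {"S": "sensor-001"},
        "reading": {"N": "42"},
        # Stored with the "N" (Number) type; a "B" (Binary) value
        # here would be ignored by TTL and never deleted.
        "expires_at": {"N": str(int(time.time()) + one_year)},
    },
)
```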

AWS Certified Database – Specialty – Question197

A company has an on-premises production Microsoft SQL Server with 250 GB of data in one database. A database specialist needs to migrate this on-premises SQL Server to Amazon RDS for SQL Server. The nightly native SQL Server backup file is approximately 120 GB in size. The application can be down for an extended period of time to complete the migration. Connectivity between the on-premises environment and AWS can be initiated from on-premises only.
How can the database be migrated from on-premises to Amazon RDS with the LEAST amount of effort?

A. Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Download the backup files on an Amazon EC2 instance and restore them from the EC2 instance into the new production RDS instance.
B. Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Restore the backup files from the S3 bucket into the new production RDS instance.
C. Provision and configure AWS DMS. Set up replication between the on-premises SQL Server environment to replicate the database to the new production RDS instance.
D. Back up the SQL Server database using AWS Backup. Once the backup is complete, restore the completed backup to an Amazon EC2 instance and move it to the new production RDS instance.
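
For context, RDS for SQL Server can restore a native backup directly from S3 through the rds_restore_database stored procedure, provided the SQLSERVER_BACKUP_RESTORE option is attached to the instance with an IAM role that can read the bucket. A sketch of the two steps in Python (boto3 plus pyodbc; all names and the connection string are hypothetical):

```python
import boto3
import pyodbc

# Upload the nightly native backup to S3 (names are hypothetical).
s3 = boto3.client("s3")
s3.upload_file("nightly_backup.bak",
               "sqlserver-migration-bucket", "nightly_backup.bak")

# Trigger the restore on the RDS instance; rds_restore_database is
# provided by the SQLSERVER_BACKUP_RESTORE option.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydb.example123.us-east-1.rds.amazonaws.com;"
    "DATABASE=master;UID=admin;PWD=example-password",
    autocommit=True,
)
conn.cursor().execute(
    "exec msdb.dbo.rds_restore_database "
    "@restore_db_name='production', "
    "@s3_arn_to_restore_from="
    "'arn:aws:s3:::sqlserver-migration-bucket/nightly_backup.bak';"
)
```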

AWS Certified Database – Specialty – Question196

A company is loading sensitive data into an Amazon Aurora MySQL database. To meet compliance requirements, the company needs to enable audit logging on the Aurora MySQL DB cluster to audit database activity. This logging will include events such as connections, disconnections, queries, and tables queried. The company also needs to publish the DB logs to Amazon CloudWatch to perform real-time data analysis.
Which solution meets these requirements?

A. Modify the default option group parameters to enable Advanced Auditing. Restart the database for the changes to take effect.
B. Create a custom DB cluster parameter group. Modify the parameters for Advanced Auditing. Modify the cluster to associate the new custom DB parameter group with the Aurora MySQL DB cluster.
C. Take a snapshot of the database. Create a new DB instance, and enable custom auditing and logging to CloudWatch. Deactivate the DB instance that has no logging.
D. Enable AWS CloudTrail for the DB instance. Create a filter that provides only connections, disconnections, queries, and tables queried.
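
For context, Advanced Auditing is controlled by cluster-level parameters, which cannot be changed on the default group, so a custom DB cluster parameter group is required. A minimal boto3 sketch that also publishes the audit log to CloudWatch Logs (group and cluster names are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Hypothetical group and cluster names.
PG_NAME = "aurora-mysql-audit-pg"
CLUSTER = "sensitive-data-cluster"

# Create a custom *cluster* parameter group and enable the
# Advanced Auditing parameters on it.
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName=PG_NAME,
    DBParameterGroupFamily="aurora-mysql5.7",
    Description="Advanced Auditing for compliance",
)
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName=PG_NAME,
    Parameters=[
        {"ParameterName": "server_audit_logging",
         "ParameterValue": "1", "ApplyMethod": "immediate"},
        {"ParameterName": "server_audit_events",
         "ParameterValue": "CONNECT,QUERY,TABLE",
         "ApplyMethod": "immediate"},
    ],
)
# Attach the group and publish the audit log to CloudWatch Logs.
rds.modify_db_cluster(
    DBClusterIdentifier=CLUSTER,
    DBClusterParameterGroupName=PG_NAME,
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
)
```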

AWS Certified Database – Specialty – Question195

An application reads and writes data to an Amazon RDS for MySQL DB instance. A new reporting dashboard needs read-only access to the database. When the application and reports are both under heavy load, the database experiences performance degradation. A database specialist needs to improve the database performance.
What should the database specialist do to meet these requirements?

A. Create a read replica of the DB instance. Configure the reports to connect to the read replica's endpoint.
B. Create a read replica of the DB instance. Configure the application and reports to connect to the cluster endpoint.
C. Enable Multi-AZ deployment. Configure the reports to connect to the standby replica.
D. Enable Multi-AZ deployment. Configure the application and reports to connect to the cluster endpoint.

Correct Answer: A

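A Multi-AZ standby in RDS for MySQL cannot serve reads, so offloading the reporting workload calls for a read replica with its own endpoint. A minimal boto3 sketch, with hypothetical instance identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create the replica (hypothetical identifiers).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-reports-replica",
    SourceDBInstanceIdentifier="app-db",
)

# Once available, point the reporting dashboard at the replica's
# own endpoint; the application keeps using the primary endpoint.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="app-db-reports-replica")
replica = rds.describe_db_instances(
    DBInstanceIdentifier="app-db-reports-replica"
)["DBInstances"][0]
print(replica["Endpoint"]["Address"])
```
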
AWS Certified Database – Specialty – Question194

A company's applications store data in Amazon Aurora MySQL DB clusters. The company has separate AWS accounts for its production, test, and development environments. To test new functionality in the test environment, the company's development team requires a copy of the production database four times a day.
Which solution meets this requirement with the MOST operational efficiency?

A. Take a manual snapshot in the production account. Share the snapshot with the test account. Restore the database from the snapshot.
B. Take a manual snapshot in the production account. Export the snapshot to Amazon S3. Copy the snapshot to an S3 bucket in the test account. Restore the database from the snapshot.
C. Share the Aurora DB cluster with the test account. Create a snapshot of the production database in the test account. Restore the database from the snapshot.
D. Share the Aurora DB cluster with the test account. Create a clone of the production database in the test account.

Correct Answer: D

Explanation:
Aurora cloning uses a copy-on-write protocol, so a clone is available in minutes without copying the full data set. Sharing the production DB cluster with the test account through AWS RAM lets the development team create clones there on demand, which is far more operationally efficient than taking, sharing, and restoring snapshots four times a day.
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora…
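
A minimal boto3 sketch of the clone step, run from the test account after the production cluster has been shared through AWS RAM (the ARN and identifiers are hypothetical):

```python
import boto3

rds = boto3.client("rds")  # run in the test account

# Hypothetical ARN of the production cluster shared via AWS RAM.
SOURCE_ARN = "arn:aws:rds:us-east-1:111111111111:cluster:prod-aurora"

# "copy-on-write" creates an Aurora clone rather than a full restore.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="prod-clone-for-testing",
    SourceDBClusterIdentifier=SOURCE_ARN,
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)
```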

AWS Certified Database – Specialty – Question193

A retail company uses Amazon Redshift Spectrum to run complex analytical queries on objects that are stored in an Amazon S3 bucket. The objects are joined with multiple dimension tables that are stored in an Amazon Redshift database. The company uses the database to create monthly and quarterly aggregated reports. Users who attempt to run queries are reporting the following error message:
error: Spectrum Scan Error: Access throttled
Which solution will resolve this error?

A. Check file sizes of fact tables in Amazon S3, and look for large files. Break up large files into smaller files of equal size between 100 MB and 1 GB.
B. Reduce the number of queries that users can run in parallel.
C. Check file sizes of fact tables in Amazon S3, and look for small files. Merge the small files into larger files of at least 64 MB in size.
D. Review and optimize queries that submit a large aggregation step to Redshift Spectrum.
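
For context, this error typically means Amazon S3 (or AWS KMS) is throttling Spectrum's requests, and a large population of small objects is a common cause because every object adds request overhead. A boto3 sketch that merely counts undersized objects under a hypothetical bucket and prefix:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/prefix holding the Spectrum fact data.
BUCKET, PREFIX = "retail-spectrum-data", "facts/sales/"
THRESHOLD = 64 * 1024 * 1024  # 64 MB

# Count objects below the recommended minimum size; many small
# files drive up S3 request rates and can trigger throttling of
# Redshift Spectrum scans.
small = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        if obj["Size"] < THRESHOLD:
            small += 1
print(f"{small} objects under 64 MB; consider merging them.")
```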

AWS Certified Database – Specialty – Question192

A software-as-a-service (SaaS) company is using an Amazon Aurora Serverless DB cluster for its production MySQL database. The DB cluster has general logs and slow query logs enabled. A database engineer must use the most operationally efficient solution with minimal resource utilization to retain the logs and facilitate interactive search and analysis.
Which solution meets these requirements?

A. Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.
B. Download the logs from the DB cluster and store them in Amazon S3 by using manual scripts. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.
C. Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Elasticsearch Service (Amazon ES) and Kibana to search and analyze the logs.
D. Use Amazon CloudWatch Logs Insights to search and analyze the logs when the logs are automatically uploaded by the DB cluster.
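
For context, when log exports are enabled, Aurora delivers the general and slow query logs to CloudWatch Logs automatically, where CloudWatch Logs Insights can search them with no extra infrastructure. A minimal boto3 sketch against a hypothetical cluster's slow query log group:

```python
import time
import boto3

logs = boto3.client("logs")

# Aurora delivers exported logs to log groups named like this;
# the cluster name is hypothetical.
LOG_GROUP = "/aws/rds/cluster/saas-prod/slowquery"

now = int(time.time())
query_id = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=now - 3600,  # search the last hour
    endTime=now,
    queryString=(
        "fields @timestamp, @message "
        "| sort @timestamp desc | limit 20"
    ),
)["queryId"]

# Poll until the query finishes, then print the matching log lines.
while True:
    response = logs.get_query_results(queryId=query_id)
    if response["status"] not in ("Scheduled", "Running"):
        break
    time.sleep(1)
for row in response["results"]:
    print(row)
```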