AWS Certified Data Analytics – Specialty DAS-C01 – Question 134

A data architect at a large financial institution is building a data platform on AWS with the intent of implementing fraud detection by identifying duplicate customer accounts. The fraud detection algorithm will run in batch mode to identify when a newly created account matches an account for a user who was previously flagged as fraudulent.
Which approach MOST cost-effectively meets these requirements?

A. Build a custom deduplication script by using Apache Spark on an Amazon EMR cluster. Use PySpark to compare the DataFrames that represent the new customers and the fraudulent customer set to identify matches.
B. Load the data to an Amazon Redshift cluster. Use custom SQL to build deduplication logic.
C. Load the data to Amazon S3 to form the basis of a data lake. Use Amazon Athena to build a deduplication script.
D. Load the data to Amazon S3. Use the AWS Glue FindMatches transform to implement deduplication logic.

Correct Answer: D
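
Explanation: AWS Glue FindMatches is a managed ML transform built for exactly this kind of fuzzy record matching, so it avoids the cost of maintaining an EMR cluster, a Redshift cluster, or hand-written matching SQL. Below is a minimal sketch of how the transform might be invoked from a Glue ETL job once a FindMatches ML transform has been created and trained in the Glue console. The catalog database, table name, transform ID, and S3 path are hypothetical placeholders, not values from the question.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from awsglueml.transforms import FindMatches
from pyspark.context import SparkContext

# Standard Glue job setup.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the customer records (new accounts plus known fraudulent accounts)
# catalogued from the S3 data lake. Database and table names are hypothetical.
customers = glue_context.create_dynamic_frame.from_catalog(
    database="customer_db",
    table_name="customer_accounts",
)

# Apply the pre-trained FindMatches ML transform. The transform ID below is a
# placeholder; the real ID comes from the transform created in the Glue console.
# Records the model considers duplicates are grouped under a shared match ID.
matched = FindMatches.apply(
    frame=customers,
    transformId="tfm-0123456789abcdef",
    transformation_ctx="find_matches",
)

# Write the labeled output back to S3 for the batch fraud-detection step.
glue_context.write_dynamic_frame.from_options(
    frame=matched,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/matched-accounts/"},
    format="parquet",
)

job.commit()
```

A downstream batch job can then flag any new account that shares a match group with a previously fraudulent account, with no cluster infrastructure to manage beyond the serverless Glue job itself.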