AWS Certified Data Analytics – Specialty DAS-C01 – Question026

A company is running Apache Spark on an Amazon EMR cluster. The Spark job writes to an Amazon S3 bucket. The job fails and returns an HTTP 503 "Slow Down" AmazonS3Exception error.
Which actions will resolve this error? (Choose two.)

A. Add additional prefixes to the S3 bucket
B. Reduce the number of prefixes in the S3 bucket
C. Increase the EMR File System (EMRFS) retry limit
D. Disable dynamic partition pruning in the Spark configuration for the cluster
E. Add more partitions in the Spark configuration for the cluster

Correct Answer: AC

Explanation:
An HTTP 503 "Slow Down" error means the request rate to the S3 bucket exceeds what S3 can currently serve. Amazon S3 supports at least 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix, so two actions resolve the error:
Add more prefixes to the S3 bucket. Because request-rate limits apply per prefix, spreading the Spark job's writes across additional prefixes multiplies the aggregate throughput the bucket can absorb.
Increase the EMR File System (EMRFS) retry limit. A higher retry limit lets EMRFS retry throttled requests with backoff instead of failing the job on transient 503 responses.
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/emr-s3-503-slow-down/
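The EMRFS retry limit can be raised through the `emrfs-site` configuration classification when creating or reconfiguring the cluster. A minimal sketch follows; the `fs.s3.maxRetries` property comes from the EMRFS documentation, while the value 20 is an illustrative choice, not a recommended setting:

```json
[
  {
    "Classification": "emrfs-site",
    "Properties": {
      "fs.s3.maxRetries": "20"
    }
  }
]
```

This JSON can be supplied via the `--configurations` option of `aws emr create-cluster`, or applied to a running cluster's instance groups through a reconfiguration request.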