AWS Certified Solutions Architect – Professional SAP-C01 – Question089

You need persistent and durable storage to trace call activity of an IVR (Interactive Voice Response) system. Call duration is mostly in the 2-3 minute range. Each traced call can be either active or terminated. Every minute, an external application needs the list of currently active calls. Usually there are a few calls per second, but once per month there is a periodic peak of up to 1,000 calls per second for a few hours.
The system is open 24/7 and any downtime should be avoided. Historical data is periodically archived to files. Cost saving is a priority for this project.
Which database implementation would best fit this scenario, keeping costs as low as possible?

A. Use DynamoDB with a "Calls" table and a Global Secondary Index on a "State" attribute that can be equal to "active" or "terminated". In this way the Global Secondary Index can be used for all items in the table.
B. Use RDS Multi-AZ with a "CALLS" table and an indexed "STATE" field that can be equal to "ACTIVE" or "TERMINATED". In this way the SQL query is optimized by the use of the index.
C. Use RDS Multi-AZ with two tables, one for "ACTIVE_CALLS" and one for "TERMINATED_CALLS". In this way the "ACTIVE_CALLS" table is always small and effective to access.
D. Use DynamoDB with a "Calls" table and a Global Secondary Index on an "IsActive" attribute that is present for active calls only. In this way the Global Secondary Index is sparse and more effective.

Correct Answer: D

Explanation:

Q: Can a global secondary index key be defined on non-unique attributes? Yes. Unlike the primary key on a table, a GSI does not require the indexed attributes to be unique.
Q: Are GSI key attributes required in all items of a DynamoDB table? No. GSIs are sparse indexes. Unlike the requirement of having a primary key, an item in a DynamoDB table does not have to contain any of the GSI keys. If a GSI key has both hash and range elements, and a table item omits either of them, then that item will not be indexed by the corresponding GSI. In such cases, a GSI can be very useful in efficiently locating items that have an uncommon attribute.
Reference: https://aws.amazon.com/dynamodb/faqs/
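
A minimal sketch of how option D could work in practice with boto3. The table name "Calls", the attribute names "CallId" and "IsActive", the index name, and the throughput values are all assumptions for illustration; when a call terminates, the attribute is removed so the item drops out of the sparse index.

import boto3

dynamodb = boto3.client("dynamodb")

# Create the table with a sparse GSI keyed on "IsActive" (assumed names).
dynamodb.create_table(
    TableName="Calls",
    AttributeDefinitions=[
        {"AttributeName": "CallId", "AttributeType": "S"},
        {"AttributeName": "IsActive", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "CallId", "KeyType": "HASH"}],
    GlobalSecondaryIndexes=[{
        "IndexName": "ActiveCallsIndex",
        "KeySchema": [{"AttributeName": "IsActive", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Active call: the item carries IsActive, so it appears in the sparse index.
dynamodb.put_item(TableName="Calls",
                  Item={"CallId": {"S": "call-123"}, "IsActive": {"S": "true"}})

# Terminated call: removing IsActive silently drops the item from the index.
dynamodb.update_item(TableName="Calls",
                     Key={"CallId": {"S": "call-123"}},
                     UpdateExpression="REMOVE IsActive")

# The external application lists active calls each minute by querying the small sparse index.
active = dynamodb.query(
    TableName="Calls",
    IndexName="ActiveCallsIndex",
    KeyConditionExpression="IsActive = :a",
    ExpressionAttributeValues={":a": {"S": "true"}},
)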

AWS Certified Solutions Architect – Professional SAP-C01 – Question088

Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and undetermined amount of traffic that will create many database writes, and you need to be certain that you do not drop any writes to a database hosted on AWS.
Which service should you use?

A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database.
C. Amazon ElastiCache to store the writes until the writes are committed to the database.
D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.

Correct Answer: B

Explanation:

Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. By using Amazon SQS, developers can simply move data between distributed application components performing different tasks, without losing messages or requiring each component to be always available. Amazon SQS makes it easy to build a distributed, decoupled application, working in close conjunction with the Amazon Elastic Compute Cloud (Amazon EC2) and the other AWS infrastructure web services.

What can I do with Amazon SQS? Amazon SQS is a web service that gives you access to a message queue that can be used to store messages while waiting for a computer to process them. This allows you to quickly build message queuing applications that can be run on any computer on the internet. Since Amazon SQS is highly scalable and you only pay for what you use, you can start small and grow your application as you wish, with no compromise on performance or reliability. This lets you focus on building sophisticated message-based applications, without worrying about how the messages are stored and managed.

You can use Amazon SQS with software applications in various ways. For example, you can:

  • Integrate Amazon SQS with other AWS infrastructure web services to make applications more reliable and flexible.
  • Use Amazon SQS to create a queue of work where each message is a task that needs to be completed by a process. One or many computers can read tasks from the queue and perform them.
  • Build a microservices architecture, using queues to connect your microservices.
  • Keep notifications of significant events in a business process in an Amazon SQS queue. Each event can have a corresponding message in a queue, and applications that need to be aware of the event can read and process the messages.
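
A hedged sketch of the pattern option B describes, using boto3; the queue URL and the write_to_database callback are placeholders for illustration. Writes are buffered in SQS by the web tier, and a separate worker drains the queue into the database at a pace the database can sustain, so no write is dropped.

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/donation-writes"  # assumed

def enqueue_write(payload: str) -> None:
    """Web tier: capture the write in SQS instead of hitting the database directly."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=payload)

def drain_queue(write_to_database) -> None:
    """Worker tier: pull messages and commit them to the database at a controlled rate."""
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)  # long polling
        for msg in resp.get("Messages", []):
            write_to_database(msg["Body"])          # commit to the database
            sqs.delete_message(QueueUrl=QUEUE_URL,  # delete only after a successful write
                               ReceiptHandle=msg["ReceiptHandle"])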

AWS Certified Solutions Architect – Professional SAP-C01 – Question087

A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server tier currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirements of the application?

A. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
B. Web servers: store read-only data in an EC2 NFS server; mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
C. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
D. Web servers: store read-only data in S3, and copy from S3 to root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

Correct Answer: C

Explanation:

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

Benefits

Enhanced Durability
Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology. Both approaches safeguard your data in the event of a DB Instance failure or loss of an Availability Zone.
If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment: in case of a Single-AZ database failure, a user-initiated point-in-time-restore operation will be required. This operation can take several hours to complete, and any data updates that occurred after the latest restorable time (typically within the last five minutes) will not be available. Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built for database workloads. Amazon Aurora automatically replicates your volume six ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and replaced automatically.
Increased Availability
You also benefit from enhanced database availability when running Multi-AZ deployments. If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two minutes for other database engines (see the RDS FAQ for details). The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete. Unlike Single-AZ deployments, I/O activity is not suspended on your primary during backup for Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines, because the backup is taken from the standby. However, note that you may still experience elevated latencies for a few minutes during backups for Multi-AZ deployments. On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.

No Administrative Intervention
DB Instance failover is fully automatic and requires no administrative intervention. Amazon RDS monitors the health of your primary and standbys, and initiates a failover automatically in response to a variety of failure conditions.

Failover conditions
Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following:

  • Loss of availability in primary Availability Zone
  • Loss of network connectivity to primary
  • Compute unit failure on primary
  • Storage failure on primary

Note: When operations such as DB Instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, for enhanced availability, they are applied first on the standby prior to an automatic failover. As a result, your availability impact is limited only to the time required for automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not fail over automatically in response to database operations such as long-running queries, deadlocks, or database corruption errors.
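
As a rough sketch of how the Multi-AZ database and snapshot backup in option C might be provisioned with boto3 (the identifier, instance class, storage size, and credentials below are illustrative assumptions, not part of the question):

import boto3

rds = boto3.client("rds")

# Multi-AZ primary: RDS provisions a synchronous standby in another AZ automatically.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-db",   # assumed identifier
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="CHANGE_ME",
    MultiAZ=True,
)

# Weekly DB snapshot, matching the backup approach described in option C.
rds.create_db_snapshot(
    DBSnapshotIdentifier="ecommerce-db-weekly",
    DBInstanceIdentifier="ecommerce-db",
)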

AWS Certified Solutions Architect – Professional SAP-C01 – Question086

Your system recently experienced downtime. During the troubleshooting process, you found that a new administrator had mistakenly terminated several production EC2 instances.
Which of the following strategies will help prevent a similar situation in the future?
The administrator still must be able to:

  • launch, start, stop, and terminate development resources.
  • launch and start production instances.


A. Create an IAM user, which is not allowed to terminate instances by leveraging production EC2 termination protection.
B. Leverage resource-based tagging, along with an IAM user which can prevent specific users from terminating production EC2 resources.
C. Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances.
D. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.

Correct Answer: B

Explanation:

Working with volumes
When an API action requires a caller to specify multiple resources, you must create a policy statement that allows users to access all required resources. If you need to use a Condition element with one or more of these resources, you must create multiple statements as shown in this example.

The following policy allows users to attach volumes with the tag "volume_user=iam-user-name" to instances with the tag "department=dev", and to detach those volumes from those instances. If you attach this policy to an IAM group, the aws:username policy variable gives each IAM user in the group permission to attach or detach volumes that have a tag named volume_user with his or her IAM user name as its value.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/department": "dev" }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/volume_user": "${aws:username}" }
      }
    }
  ]
}

Launching instances (RunInstances)
The RunInstances API action launches one or more instances. RunInstances requires an AMI and creates an instance; users can specify a key pair and security group in the request. Launching into EC2-VPC requires a subnet, and creates a network interface. Launching from an Amazon EBS-backed AMI creates a volume. Therefore, the user must have permission to use these Amazon EC2 resources. The caller can also configure the instance using optional parameters to RunInstances, such as the instance type and a subnet. You can create a policy statement that requires users to specify an optional parameter, or restricts users to particular values for a parameter. The examples in this section demonstrate some of the many possible ways that you can control the configuration of an instance that a user can launch.

Note that by default, users don't have permission to describe, start, stop, or terminate the resulting instances. One way to grant the users permission to manage the resulting instances is to create a specific tag for each instance, and then create a statement that enables them to manage instances with that tag. For more information, see 2: Working with instances.
a. AMI
The following policy allows users to launch instances using only the AMIs that have the specified tag, "department=dev", associated with them. The users can't launch instances using other AMIs because the Condition element of the first statement requires that users specify an AMI that has this tag. The users also can't launch into a subnet, as the policy does not grant permissions for the subnet and network interface resources. They can, however, launch into EC2-Classic. The second statement uses a wildcard to enable users to create instance resources, and requires users to specify the key pair project_keypair and the security group sg-1a2b3c4d. Users are still able to launch instances without a key pair.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*"
      ],
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/department": "dev" }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/project_keypair",
        "arn:aws:ec2:region:account:security-group/sg-1a2b3c4d"
      ]
    }
  ]
}

Alternatively, the following policy allows users to launch instances using only the specified AMIs, ami-9e1670f7 and ami-45cf5c3c. The users can't launch an instance using other AMIs (unless another statement grants the users permission to do so), and the users can't launch an instance into a subnet.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-9e1670f7",
        "arn:aws:ec2:region::image/ami-45cf5c3c",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}

Alternatively, the following policy allows users to launch instances from all AMIs owned by Amazon. The Condition element of the first statement tests whether ec2:Owner is amazon. The users can't launch an instance using other AMIs (unless another statement grants the users permission to do so). The users are able to launch an instance into a subnet.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*"
      ],
      "Condition": {
        "StringEquals": { "ec2:Owner": "amazon" }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}
b. Instance type
The following policy allows users to launch instances using only the t2.micro or t2.small instance type, which you might do to control costs. The users can't launch larger instances because the Condition element of the first statement tests whether ec2:InstanceType is either t2.micro or t2.small.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*"
      ],
      "Condition": {
        "StringEquals": { "ec2:InstanceType": ["t2.micro", "t2.small"] }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}

Alternatively, you can create a policy that denies users permission to launch any instances except t2.micro and t2.small instance types.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*"
      ],
      "Condition": {
        "StringNotEquals": { "ec2:InstanceType": ["t2.micro", "t2.small"] }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}
c. Subnet
The following policy allows users to launch instances using only the specified subnet, subnet-12345678. The group can't launch instances into any other subnet (unless another statement grants the users permission to do so). Users are still able to launch instances into EC2-Classic.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:subnet/subnet-12345678",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}

Alternatively, you could create a policy that denies users permission to launch an instance into any other subnet. The statement does this by denying permission to create a network interface, except where subnet subnet-12345678 is specified. This denial overrides any other policies that are created to allow launching instances into other subnets. Users are still able to launch instances into EC2-Classic.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:network-interface/*"
      ],
      "Condition": {
        "ArnNotEquals": { "ec2:Subnet": "arn:aws:ec2:region:account:subnet/subnet-12345678" }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}
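
Following the same resource-tag pattern, here is a minimal sketch of the kind of group policy option B implies, covering the start/stop/terminate side (launch permissions would be granted separately). The tag key "environment", the tag values, the group name, and the account/region in the ARN are assumptions for illustration only.

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Full lifecycle control over development instances.
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:TerminateInstances"],
            "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/environment": "development"}},
        },
        {   # The administrator may still start production instances.
            "Effect": "Allow",
            "Action": "ec2:StartInstances",
            "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/environment": "production"}},
        },
        {   # Explicitly deny terminating anything tagged as production.
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/environment": "production"}},
        },
    ],
}

iam.put_group_policy(
    GroupName="junior-admins",            # assumed group name
    PolicyName="protect-production-instances",
    PolicyDocument=json.dumps(policy),
)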

AWS Certified Solutions Architect – Professional SAP-C01 – Question085

You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3.
How should the application use AWS credentials to access the S3 bucket securely?

A. Use the AWS account access keys; the application retrieves the credentials from the source code of the application.
B. Create an IAM role for EC2 that allows list access to objects in the S3 bucket; launch the instance with the role, and retrieve the role's credentials from the EC2 instance metadata.
C. Create an IAM user for the application with permissions that allow list access to the S3 bucket; the application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
D. Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user, and retrieve the IAM user's credentials from the EC2 instance user data.
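
For context, option B reflects AWS's recommended pattern of attaching an IAM role to the instance: on EC2, boto3 picks up the role's temporary credentials from instance metadata automatically, so no keys are stored in code or on disk. A rough sketch, where the bucket and key names are made up for illustration:

import boto3
from botocore.exceptions import ClientError

# No explicit keys: boto3 resolves credentials from the instance profile metadata.
s3 = boto3.client("s3")

BUCKET = "example-private-bucket"   # assumed
KEY = "reports/summary.pdf"         # assumed

def presign_if_exists(bucket: str, key: str, expires: int = 300):
    try:
        s3.head_object(Bucket=bucket, Key=key)   # verify the object exists first
    except ClientError:
        return None                              # missing object (or no permission)
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )

url = presign_if_exists(BUCKET, KEY)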

AWS Certified Solutions Architect – Professional SAP-C01 – Question084

You are designing a data leak prevention solution for your VPC environment. You want your VPC instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third-party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the Internet.
Which of the following options would you consider?

A. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.
B. Implement security groups and configure outbound rules to only permit traffic to software depots.
C. Move all your instances into private VPC subnets, remove default routes from all routing tables, and add specific routes to the software depots and distributions only.
D. Implement network access control lists to all specific destinations, with an implicit deny all rule.

Correct Answer: A

Explanation:

Organizations usually implement proxy solutions to provide URL and web content filtering, IDS/IPS, data loss prevention, monitoring, and advanced threat protection. Reference: https://d0.awsstatic.com/aws-answers/Controlling_VPC_Egress_Traffic…

AWS Certified Solutions Architect – Professional SAP-C01 – Question083

You are running a successful multitier web application on AWS, and your marketing department has asked you to add a reporting tier to the application. The reporting tier will aggregate and publish status reports every 30 minutes from user-generated information that is being stored in your web application's database.
You are currently running a Multi-AZ RDS MySQL instance for the database tier. You have also implemented ElastiCache as a database caching layer between the application tier and the database tier.
Please select the answer that will allow you to successfully implement the reporting tier with as little impact as possible to your database.

A. Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte range requests.
B. Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ.
C. Launch an RDS Read Replica connected to your Multi-AZ master database and generate reports by querying the Read Replica.
D. Generate the reports by querying the ElastiCache database caching tier.

Correct Answer: C

Explanation:

Amazon RDS allows you to use read replicas with Multi-AZ deployments. In Multi-AZ deployments for MySQL, Oracle, SQL Server, and PostgreSQL, the data in your primary DB Instance is synchronously replicated to a standby instance in a different Availability Zone (AZ). Because of their synchronous replication, Multi-AZ deployments for these engines offer greater data durability benefits than do read replicas. (In all Amazon RDS for Aurora deployments, your data is automatically replicated across 3 Availability Zones.)

You can use Multi-AZ deployments and read replicas in conjunction to enjoy the complementary benefits of each. You can simply specify that a given Multi-AZ deployment is the source DB Instance for your read replicas. That way you gain both the data durability and availability benefits of Multi-AZ deployments and the read scaling benefits of read replicas. Note that for Multi-AZ deployments, you have the option to create your read replica in an AZ other than that of the primary and the standby for even more redundancy. You can identify the AZ corresponding to your standby by looking at the "Secondary Zone" field of your DB Instance in the AWS Management Console.
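
A small sketch of option C with boto3, using made-up instance identifiers; the replica can even be placed in a third AZ, distinct from both the primary and the Multi-AZ standby, and the reporting tier connects only to the replica's endpoint so the primary is untouched.

import boto3

rds = boto3.client("rds")

# Asynchronous read replica sourced from the Multi-AZ primary; reporting queries go here.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-db-reporting",      # assumed replica name
    SourceDBInstanceIdentifier="webapp-db",          # assumed Multi-AZ source
    DBInstanceClass="db.m5.large",
    AvailabilityZone="us-east-1c",                   # optionally a third AZ
)

# Wait until the replica is available, then look up its dedicated endpoint.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="webapp-db-reporting")
endpoint = rds.describe_db_instances(
    DBInstanceIdentifier="webapp-db-reporting"
)["DBInstances"][0]["Endpoint"]["Address"]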

AWS Certified Solutions Architect – Professional SAP-C01 – Question082

A company is running a batch analysis every hour on their main transactional DB, running on an RDS MySQL instance, to populate their central data warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data. The dashboard is produced by another system running on-premises that is currently started when a manually sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team.
How would you optimize this scenario to solve performance issues and automate the process as much as possible?

A. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard.
B. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.
C. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard.
D. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.

Correct Answer: C
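
A hedged sketch of the notification half of option C: when the hourly batch on the read replica finishes, publish to an SNS topic whose subscription (for example, an HTTP endpoint or an email address the on-premises team already watches) replaces the manual email. The topic ARN and message contents below are assumptions.

import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:dashboard-refresh"  # assumed

def notify_dashboard_update(batch_id: str) -> None:
    """Called at the end of the hourly batch job instead of sending a manual email."""
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="Data warehouse refreshed",
        Message=f"Batch {batch_id} completed; dashboard data is ready for update.",
    )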

AWS Certified Solutions Architect – Professional SAP-C01 – Question081

Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents.
Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose three.)

A. Setting up a federation proxy or identity provider
B. Using AWS Security Token Service to generate temporary tokens
C. Tagging each folder in the bucket
D. Configuring IAM role
E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket

Correct Answer: ABD
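
A rough sketch of how the three selected pieces fit together: a federation proxy authenticates the user against corporate AD/LDAP, then calls STS to obtain temporary credentials for an IAM role whose effective permissions are scoped to that user's folder in the bucket. The role ARN, bucket name, and the use of a session policy with assume_role are illustrative assumptions.

import json
import boto3

sts = boto3.client("sts")

def credentials_for(username: str):
    """Federation proxy: after the user authenticates against AD/LDAP, hand back
    temporary credentials limited to that user's folder in the bucket."""
    scope = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::corp-user-docs/{username}/*",   # assumed bucket
        }],
    }
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/S3UserDocsRole",       # assumed role
        RoleSessionName=username,
        Policy=json.dumps(scope),    # session policy further restricts the role
        DurationSeconds=3600,
    )
    return resp["Credentials"]       # temporary AccessKeyId / SecretAccessKey / SessionToken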

AWS Certified Solutions Architect – Professional SAP-C01 – Question080

Dave is the main administrator in Example Corp., and he decides to use paths to help delineate the users in the company and set up a separate administrator group for each path-based division. Following is a subset of the full list of paths he plans to use:

  • /marketing
  • /sales
  • /legal

Dave creates an administrator group for the marketing part of the company and calls it Marketing_Admin. He assigns it the /marketing path. The group's ARN is arn:aws:iam::123456789012:group/marketing/Marketing_Admin. Dave assigns the following policy to the Marketing_Admin group that gives the group permission to use all IAM actions with all groups and users in the /marketing path. The policy also gives the Marketing_Admin group permission to perform any Amazon S3 actions on the objects in the marketing portion of the corporate bucket.
{
"Version": "2012-10-17",
"Statement":
[
{
"Effect": "Deny",
"Action": "iam:*",
"Resource":
[
"arn:aws:iam::123456789012:group/marketing/*",
"arn:aws:iam::123456789012:user/marketing/*"
]
},
{ "Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::example_bucket/marketing/*"
},
{
"Effect": "Allow",
"Action": "s3:ListBucket*", "
Resource": "arn:aws:s3:::example_bucket",
"Condition":{"StringLike":{"s3:prefix": "marketing/*"}}
}
]
}

Is this statement true or false?

A. True
B. False

Correct Answer: B

Explanation:

The first statement in the policy uses "Effect": "Deny" rather than "Allow". Instead of granting the Marketing_Admin group permission to use IAM actions on the groups and users under the /marketing path, it explicitly denies those actions, so the policy does not do what the scenario describes. The statement is therefore false.
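
For comparison, a minimal sketch of how the first statement would have to read for the description to hold (only the Effect changes; everything else stays as in the question):

corrected_first_statement = {
    "Effect": "Allow",   # "Deny" is what makes the original policy fail to grant anything
    "Action": "iam:*",
    "Resource": [
        "arn:aws:iam::123456789012:group/marketing/*",
        "arn:aws:iam::123456789012:user/marketing/*",
    ],
}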