Launched back in 2006, AWS has grown into the leading provider of on-demand cloud computing services, holding a staggering 32% of the cloud computing market as of the last quarter of 2018.
Every aspiring developer looking to make it big in the cloud computing ecosystem needs a firm grasp of AWS. If you're eyeing the role of an AWS Developer, these 20 most important AWS interview questions will take you a step closer to your desired job.
AWS Interview Questions
Q: Please explain the difference between stopping and terminating an instance.
A: Both stopping and terminating are states in an EC2 instance:
- Stopping – As soon as an instance is stopped, it performs a normal shutdown and transitions to a stopped state. You can start the instance at a later time and all of its Amazon EBS volumes remain attached. While the instance is in a stopped state, no additional instance hours are incurred.
- Terminating – As soon as an instance is terminated, it performs a normal shutdown and transitions to the terminated state. The attached Amazon EBS volumes are deleted unless a volume’s deleteOnTermination attribute is set to false. Because the instance itself is deleted, it cannot be started again at a later time.
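The lifecycle rules above can be sketched as a toy model. The classes and attribute names below are purely illustrative, not part of any AWS SDK:

```python
# Toy model of the EC2 stop/terminate lifecycle described above.
class Volume:
    def __init__(self, delete_on_termination=True):
        self.delete_on_termination = delete_on_termination
        self.deleted = False

class Instance:
    def __init__(self, volumes):
        self.state = "running"
        self.volumes = volumes

    def stop(self):
        # Normal shutdown; EBS volumes stay attached, instance can restart.
        self.state = "stopped"

    def start(self):
        if self.state == "terminated":
            raise RuntimeError("a terminated instance cannot be started again")
        self.state = "running"

    def terminate(self):
        # Volumes are deleted unless deleteOnTermination is false.
        self.state = "terminated"
        for v in self.volumes:
            if v.delete_on_termination:
                v.deleted = True

root = Volume(delete_on_termination=True)
data = Volume(delete_on_termination=False)
inst = Instance([root, data])

inst.stop()
inst.start()       # a stopped instance can be started again
inst.terminate()
print(root.deleted, data.deleted)  # root volume is gone, data volume survives
```

Note how the `deleteOnTermination` flag is the only thing standing between an attached volume and deletion once `terminate()` runs.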
Q: How will you use the processor state control feature available on the c4.8xlarge instance?
A: The processor state control has 2 states, namely:
- The C State – Represents sleep state. Varies from c0 to c6, where c6 is the deepest sleep state for a processor.
- The P State – Represents performance state. Varies from p0 to p15, where p15 is the lowest possible frequency.
A processor has multiple cores, and each of them requires thermal headroom for gaining a boost in performance. Hence, the temperature needs to be kept at an optimal level so that the cores can perform at their highest.
Putting a core into a sleep state reduces the overall temperature of the processor, which gives the remaining cores room to deliver better performance. Hence, a strategy can be devised of putting some cores to sleep while keeping others in a performance state to get an overall performance boost from the processor.
Instances like the c4.8xlarge allow you to customize the C and P states so that processor performance can be tuned to the workload.
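As a rough illustration of the trade-off described above, the toy model below treats the processor as having a fixed thermal budget; the budget figures and costs are made up for illustration and do not reflect real hardware:

```python
# Illustrative toy model (not AWS code): putting some cores into a deep
# C-state frees thermal headroom that the remaining cores can spend on
# higher P-state (turbo) frequencies.
THERMAL_BUDGET = 100.0  # arbitrary units for the whole processor
ACTIVE_COST = 10.0      # headroom consumed by a core that is awake in c0

def max_boost_per_core(total_cores, sleeping_cores):
    """Spare headroom left over, split among the cores still awake."""
    active = total_cores - sleeping_cores
    if active == 0:
        return 0.0
    spare = THERMAL_BUDGET - active * ACTIVE_COST
    return spare / active

# With all 8 cores awake there is little headroom per core;
# sleeping 4 of them leaves far more for each remaining core.
print(max_boost_per_core(8, 0))  # 2.5
print(max_boost_per_core(8, 4))  # 15.0
```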
Q: Which instance type can be used for deploying a 4 node cluster of Hadoop in AWS?
A: While the c4.8xlarge instance is preferred for the master machine, the i2.xlarge instance is a good fit for the slave machines. Another option is to launch an Amazon EMR cluster, which configures the servers automatically.
Hence, you need not manually configure the instances and install the Hadoop cluster when using Amazon EMR. Simply dump the data to be processed into S3; EMR picks it up from there, processes it, and dumps the results back into S3.
Q: Can you differentiate between a Spot instance and an On-Demand instance?
A: Both spot instances and on-demand instances are pricing models. A spot instance allows customers to purchase compute capacity with no upfront commitment. Moreover, the hourly rates for a spot instance are usually lower than what has been set for on-demand instances.
The bidding price for a spot instance is known as the spot price. It fluctuates based on the supply of and demand for spot instances. If the spot price rises above a customer’s maximum specified price, EC2 will automatically interrupt (shut down) the instance.
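The interruption rule above can be sketched in a few lines; the function name and prices below are illustrative only:

```python
# Minimal sketch of the spot-pricing rule: the instance keeps running
# only while the fluctuating spot price stays at or below the
# customer's maximum price.
def spot_instance_running(spot_price, max_price):
    """EC2 interrupts the instance once the spot price exceeds the bid."""
    return spot_price <= max_price

price_history = [0.031, 0.035, 0.042, 0.051]  # $/hour over time
max_price = 0.040                             # customer's maximum price

states = [spot_instance_running(p, max_price) for p in price_history]
print(states)  # [True, True, False, False]
```

Once the price climbs past 0.040, the instance is interrupted for the rest of the window.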
Q: Please enumerate some of the best practices to enhance security in Amazon EC2.
- Allow only trusted hosts or networks to access ports on your instance
- Control access to the AWS resources with AWS Identity and Access Management (IAM)
- Disable password-based logins for instances launched from the AMI
- Frequently review rules in the security groups
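The "review your security group rules" practice above can be sketched as a small audit check. The rule dictionaries below merely mimic the shape of security group rules for illustration; they are not the real EC2 API format:

```python
# Flag ingress rules that leave an administrative port open to the world.
ADMIN_PORTS = {22, 3389}          # SSH and RDP
OPEN_TO_WORLD = "0.0.0.0/0"

def risky_rules(ingress_rules):
    """Return rules that expose an admin port to any source address."""
    return [r for r in ingress_rules
            if r["port"] in ADMIN_PORTS and r["cidr"] == OPEN_TO_WORLD]

rules = [
    {"port": 22,  "cidr": "0.0.0.0/0"},       # risky: SSH open to everyone
    {"port": 22,  "cidr": "203.0.113.0/24"},  # fine: trusted network only
    {"port": 443, "cidr": "0.0.0.0/0"},       # fine: public HTTPS
]
print(risky_rules(rules))  # only the first rule is flagged
```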
Q: Is it possible to use Amazon S3 with EC2 instances? Please elaborate.
A: Yes, it is possible to use Amazon S3 with EC2 instances. It can be used for instances with root devices backed by local instance storage. Amazon provides an array of tools for loading AMIs into Amazon S3 and moving them between Amazon S3 and Amazon EC2 instances.
With Amazon S3, AWS developers get access to the same fast, highly reliable, inexpensive, and scalable data storage infrastructure that Amazon uses to run its own global network of websites and services.
Q: How will you speed up data transfer in Amazon Snowball?
A: Data transfer in Amazon Snowball can be enhanced by:
- Copying from different workstations to the same snowball
- Batching small files together, or transferring large files, to reduce the per-file encryption overhead
- Eliminating needless hops
- Performing multiple copy operations simultaneously
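A back-of-the-envelope calculation shows why batching small files helps: each copy operation pays a fixed per-file overhead, so archiving many small files first pays it only once. The overhead and throughput figures below are made up purely for illustration:

```python
# Why batching helps a Snowball transfer: fixed per-operation overhead
# dominates when copying thousands of small files one by one.
PER_FILE_OVERHEAD_S = 0.05   # fixed cost per copy operation, seconds
THROUGHPUT_MB_S = 250.0      # sustained copy throughput

def transfer_time(file_sizes_mb, batched=False):
    data_time = sum(file_sizes_mb) / THROUGHPUT_MB_S
    ops = 1 if batched else len(file_sizes_mb)
    return data_time + ops * PER_FILE_OVERHEAD_S

small_files = [1.0] * 10_000   # ten thousand 1 MB files
print(transfer_time(small_files))                # ~540 s, mostly overhead
print(transfer_time(small_files, batched=True))  # ~40 s as one archive
```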
Q: Can you explain the difference between Amazon RDS and Amazon DynamoDB?
A: Amazon RDS is a database management service for relational databases. It allows automating several relational database-related operations like backup, patching, and upgrading. The service deals with structured data only.
Amazon DynamoDB, on the other hand, is a NoSQL database service. Contrary to Amazon RDS, it is designed for unstructured and semi-structured data. Check out this detailed explanation of NoSQL vs SQL to learn more about the important differences between SQL and NoSQL databases.
Q: What AWS services will you choose to collect and process eCommerce data for real-time analysis?
A: DynamoDB is appropriate for collecting the eCommerce data, since it will arrive in an unstructured form. Real-time analysis of the collected data can then be carried out using Amazon Redshift.
Q: Could you tell us what happens to the backups and DB Snapshots if a DB instance is deleted?
A: While deleting a DB instance, there is an option for creating a final DB snapshot. It can be used later for restoring the database.
Amazon RDS retains this final user-created DB snapshot, along with all other manually created DB snapshots, after the instance is deleted. All automated backups, however, are deleted along with the instance.
Q: How will you load data to Amazon Redshift from different data sources such as Amazon EC2, DynamoDB, and Amazon RDS?
A: There are two ways of loading data to Amazon Redshift from different data sources, namely:
- Using AWS Data Pipeline – Offers a high-performance, fault-tolerant, and reliable way of loading data from a range of AWS data sources. It allows specifying the data source and the required data transformations, and then executing a pre-written import script to load the data
- Using the COPY command – Load data in parallel directly from Amazon DynamoDB, Amazon EMR, or any other SSH-enabled host
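As a sketch of the second approach, a Redshift COPY statement for a DynamoDB source generally takes the form assembled below. The table names and IAM role ARN are placeholders; only the overall COPY ... FROM 'dynamodb://...' syntax is Redshift's own:

```python
# Illustrative helper that assembles a Redshift COPY statement for a
# DynamoDB source table. This builds the SQL string only; executing it
# would require a connection to a real Redshift cluster.
def redshift_copy_from_dynamodb(target_table, dynamodb_table,
                                iam_role_arn, read_ratio=50):
    return (
        f"COPY {target_table} "
        f"FROM 'dynamodb://{dynamodb_table}' "
        f"IAM_ROLE '{iam_role_arn}' "
        f"READRATIO {read_ratio};"
    )

sql = redshift_copy_from_dynamodb(
    "orders", "OrdersTable",
    "arn:aws:iam::123456789012:role/RedshiftCopyRole")
print(sql)
```

READRATIO caps how much of the DynamoDB table's provisioned read capacity the load may consume, so the copy doesn't starve production traffic.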
Q: Can you explain how elasticity differs from scalability?
A: Elasticity is the ability of a system to handle an increase in workload by adding hardware resources when demand rises, and to release those resources again when they are no longer needed.
Scalability, on the other hand, is the ability of a system to increase the hardware resources for handling an increase in demand. It can be achieved by either increasing the hardware specs or increasing the processing nodes.
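The contrast can be shown with a toy autoscaler: scaling out adds nodes to meet demand, while elasticity also shrinks the fleet when demand drops. The capacity figure below is an arbitrary illustration value:

```python
# Toy autoscaler: compute the node count that covers the current load.
CAPACITY_PER_NODE = 100  # requests/sec one node can serve (made up)

def nodes_needed(load):
    """Smallest node count that covers the load; shrinks as load falls."""
    return max(1, -(-load // CAPACITY_PER_NODE))  # ceiling division

loads = [150, 420, 90]  # demand rises, then falls
print([nodes_needed(l) for l in loads])  # [2, 5, 1]
```

A merely scalable system would stop at 5 nodes; an elastic one drops back to 1 when the load subsides.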
Q: What do you understand by Connection draining?
A: Connection draining is an ELB feature that continuously monitors the health of instances. When an instance is about to be updated or fails a health check, connection draining stops routing new traffic to it, re-directing requests to other available instances while allowing in-flight requests to complete.
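The behaviour can be sketched as follows; the classes and routing logic are an illustrative toy, not the actual ELB implementation:

```python
# Sketch of connection draining: a draining instance gets no new
# requests, but requests already in flight are allowed to finish.
class Backend:
    def __init__(self, name):
        self.name = name
        self.draining = False
        self.in_flight = 0

def route(instances):
    """Send a new request to the first instance still accepting traffic."""
    for inst in instances:
        if not inst.draining:
            inst.in_flight += 1
            return inst.name
    return None

a, b = Backend("a"), Backend("b")
route([a, b])        # request 1 goes to "a"
a.draining = True    # "a" failed a health check / is being updated
target = route([a, b])
print(target, a.in_flight)  # new traffic goes to "b"; "a" finishes request 1
```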
Q: Suppose a user has set up an Auto Scaling group but due to some reason the group fails to launch a single instance for over 24 hours. In this condition, what will happen to Auto Scaling?
A: In such a case, Auto Scaling will suspend the scaling process. Auto Scaling allows suspending and resuming one or many of the scaling processes belonging to an Auto Scaling group.
This capability is immensely useful when a web application needs to be investigated for a configuration or some other issue.
Q: How will you transfer an existing domain name registration to Amazon Route 53 without disrupting the extant web traffic?
- Get a list of the DNS record data for the domain name. It is typically available in the form of a zone file that can be obtained from the current DNS provider.
- After receiving the DNS record data, use the Route 53 Management Console or the simple web-services interface to create a hosted zone that stores the DNS records for the domain name, and continue the transfer process. At this stage you can also perform other, non-essential steps, such as updating the nameservers for the domain name to the ones associated with the hosted zone.
- Contact the registrar with whom you have registered the domain name and then follow the transfer process. The DNS queries will start getting answered as soon as the registrar propagates the new name server delegations.
Q: What are the ideal cases for using the Classic Load Balancer and the Application Load Balancer?
A: The Classic Load Balancer is the befitting option for simple load balancing of traffic across several EC2 instances.
On the contrary, the Application Load Balancer is suitable for container-based or microservices architecture where there is either a requirement for routing traffic to different services or carrying out load balancing across multiple ports on the same EC2 instance.
Q: Can you explain how AWS Elastic Beanstalk applies updates?
A: Before updating the original instance, AWS Elastic Beanstalk prepares a duplicate copy of it. It then routes traffic to the duplicate instance, so that a failed update does not take the application down.
If the update process fails, AWS Elastic Beanstalk switches back to the original instance using the duplicate copy it created before beginning the update.
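The flow can be condensed into a toy function: clone, update the clone, and switch traffic only if the update succeeds. This mimics the idea only, not the actual Elastic Beanstalk mechanism:

```python
# Toy version of the update flow: traffic moves to the updated copy on
# success; on failure the untouched original keeps serving.
def deploy(environment, new_version, update_ok=True):
    duplicate = dict(environment)  # Beanstalk-style duplicate copy
    duplicate["version"] = new_version
    if update_ok:
        return duplicate           # traffic moves to the updated copy
    return environment             # failed update: original serves on

env = {"name": "web", "version": "v1"}
print(deploy(env, "v2"))                   # serving v2 after a good update
print(deploy(env, "v3", update_ok=False))  # original v1 keeps serving
```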
Q: Please explain what happens if an application stops responding to requests in AWS Elastic Beanstalk.
A: Even though the underlying infrastructure may appear healthy, Beanstalk is able to detect that the application isn’t responding on the custom health-check URL. It logs the situation as an environment event, which can then be examined in detail and acted upon.
AWS Elastic Beanstalk apps also have a built-in mechanism for surviving underlying infrastructure failures: Beanstalk uses the Auto Scaling feature to automatically launch a new instance if an Amazon EC2 instance fails.
Q: How is the AWS CloudFormation different from AWS OpsWorks?
A: Although both AWS CloudFormation and AWS OpsWorks provide support for application modeling, deployment, configuration, and management activities, the two differ in terms of the abstraction level and the areas of focus.
AWS CloudFormation is a building-block service that allows managing almost any AWS resource via a JSON-based domain-specific language. Without prescribing a particular model for development and operations, CloudFormation offers foundational capabilities across all of AWS.
With AWS CloudFormation, customers define templates and then use them to provision and manage AWS application code, resources, and operating systems.
AWS OpsWorks, on the other hand, is a higher-level service focused on providing a highly productive and reliable DevOps experience for IT admins and ops-oriented developers.
OpsWorks features a configuration management model and offers integrated experiences for activities like auto-scaling, automation, deployment, and monitoring.
Compared to CloudFormation, OpsWorks supports a narrower range of application-oriented AWS resource types, including Amazon CloudWatch metrics, EBS volumes, EC2 instances, and Elastic IPs.
Q: Can you tell us what happens when one of the resources in a stack can’t be created successfully in AWS OpsWorks?
A: When one of the resources in a stack can’t be created successfully in AWS OpsWorks, the automatic rollback on error feature kicks in. It deletes all the AWS resources that were successfully created up to the point where the error occurred.
Doing so ensures that no error-causing, partially created state is left behind, and abides by the principle that stacks are either created completely or not created at all.
The automatic rollback on error feature is especially useful in cases where one might unknowingly exceed the limit on the total number of Elastic IP addresses, or lack access to the EC2 AMI.
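The all-or-nothing principle above can be sketched in a few lines; the resource names and status strings below are illustrative only:

```python
# Minimal sketch of "automatic rollback on error": resources are created
# in order, and the first failure deletes everything created so far, so
# the stack ends up either complete or absent.
def create_stack(resources, fails_at=None):
    created = []
    for name in resources:
        if name == fails_at:
            # Roll back: delete every resource created before the error.
            created.clear()
            return created, "ROLLBACK_COMPLETE"
        created.append(name)
    return created, "CREATE_COMPLETE"

print(create_stack(["vpc", "subnet", "eip"]))
print(create_stack(["vpc", "subnet", "eip"], fails_at="eip"))
# Second call: the vpc and subnet that were already created are removed.
```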
That sums up our list of the 20 most important AWS interview questions. They will surely help you tighten up your AWS interview preparation.
Do you have some other AWS queries not covered in the list? Ask us via the dedicated comments window below. We’ll try our best to provide you with a relevant answer. Also, don’t forget to check out these best AWS tutorials to refine and enhance your AWS knowledge.