In a traditional web hosting architecture, sharp traffic peaks and wide swings in traffic patterns result in low utilization of expensive hardware. This drives up operating costs for maintaining idle hardware and ties up capital in underused capacity.
The ability to adapt to traffic and scale in real time is a key selling point of cloud computing platforms.
A typical web hosting architecture is separated into presentation, application, and persistence layers. An equivalent architecture in AWS looks as follows, with each tier able to scale independently.
There is another diagram with more details in the next section.
AWS Web Hosting Best Practices
Probably the most important shift in how you might architect your AWS application is that Amazon EC2 hosts should be considered ephemeral and dynamic. Any application built for the AWS Cloud should not assume that a host will always be available and should be designed with the knowledge that any data that is not on an EBS volume will be lost if an EC2 instance fails. Additionally, when a new host is brought up, you shouldn't make assumptions about the IP address or location within an Availability Zone of the host.
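Treating hosts as ephemeral means a new instance must be able to configure itself at boot, with durable data kept on EBS rather than the instance's root disk. The sketch below builds a boto3-style `run_instances` request that follows this pattern; the AMI ID, S3 bucket, and bootstrap commands are hypothetical placeholders, and the dict is only constructed, not sent.

```python
# Sketch: launch parameters for an ephemeral EC2 host (hypothetical AMI ID
# and bootstrap script -- substitute your own values). The host bootstraps
# itself from user data and keeps durable state on an attached EBS volume,
# so nothing important is lost if the instance fails.

def build_launch_params(ami_id, instance_type="t3.micro"):
    """Build a boto3-style run_instances request for a stateless web host."""
    user_data = "\n".join([
        "#!/bin/bash",
        # Pull configuration at boot instead of baking in IPs or hostnames.
        "aws s3 cp s3://example-config-bucket/webapp.conf /etc/webapp.conf",
        "systemctl start webapp",
    ])
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "UserData": user_data,
        # Durable data (logs, database partitions) lives on an EBS volume
        # that survives instance termination.
        "BlockDeviceMappings": [{
            "DeviceName": "/dev/sdf",
            "Ebs": {"VolumeSize": 100, "VolumeType": "gp3",
                    "DeleteOnTermination": False},
        }],
    }

params = build_launch_params("ami-0123456789abcdef0")
# To launch for real: ec2_client.run_instances(**params)
```

Note that nothing in the request assumes a particular IP address or Availability Zone; the instance discovers its environment at boot time.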
- Inbound network traffic filtering should not be confined to the edge; it should also be applied at the host level, using security groups.
- Elastic Load Balancing (ELB) also supports sticky sessions for applications that require session affinity.
- Elastic IP addresses are static IP addresses designed for dynamic cloud computing that you can move from one instance to another.
- In the traditional web hosting architecture, most of the hosts have static IP addresses. In the cloud, most of the hosts will have dynamic IP addresses. Although every EC2 instance can have both public and private DNS entries and will be addressable over the internet, the DNS entries and the IP addresses are assigned dynamically when you launch the instance. They cannot be manually assigned.
- Elastic IP addresses should be used for instances and services that require consistent endpoints, such as master databases, central file servers, and EC2-hosted load balancers.
- Server roles that can easily scale out and in, such as web servers, should be made discoverable at their dynamic endpoints by registering their IP addresses with a central repository. Because most web application architectures include a database that is always on, the database server is a common repository for discovery information.
- It is recommended that databases running on Amazon EC2 use Amazon Elastic Block Store (Amazon EBS) volumes, which are similar to network-attached storage.
- Amazon S3 is a great storage solution for static or slow-changing objects, such as images, videos, and other static media. Amazon S3 also supports edge caching and streaming of these assets through Amazon CloudFront.
- Amazon EBS is great for data that needs to be accessed as block storage and that requires persistence beyond the life of the running instance, such as database partitions and application logs.
- One of the key differences between the AWS Cloud architecture and the traditional hosting model is that AWS can automatically scale the web application fleet on-demand to handle changes in traffic.
- It is recommended that you deploy EC2 hosts across multiple Availability Zones to make your web application more fault-tolerant. (Think of Availability Zones within an AWS Region as multiple data centers.)
- One of the steps in planning an AWS deployment is the analysis of traffic between hosts. The use of network access control lists within Amazon VPC can help lock down your network at the subnet level.
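The host-level filtering rule above can be sketched as a security group ingress request. The group IDs and CIDR range below are hypothetical: the web tier accepts HTTP/HTTPS only from the load balancer's security group, and SSH only from an admin network. The dict is only constructed here, not submitted to AWS.

```python
# Sketch: host-level inbound filtering with a security group (hypothetical
# group IDs and CIDR range -- substitute your own).

def web_tier_ingress(web_sg_id, elb_sg_id, admin_cidr):
    """Build a boto3-style authorize_security_group_ingress request."""
    return {
        "GroupId": web_sg_id,
        "IpPermissions": [
            # HTTP/HTTPS only from the load balancer's security group,
            # not from the internet at large.
            {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
             "UserIdGroupPairs": [{"GroupId": elb_sg_id}]},
            {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
             "UserIdGroupPairs": [{"GroupId": elb_sg_id}]},
            # SSH restricted to an admin CIDR block.
            {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": admin_cidr}]},
        ],
    }

request = web_tier_ingress("sg-0webexample", "sg-0elbexample",
                           "203.0.113.0/24")
# To apply for real: ec2_client.authorize_security_group_ingress(**request)
```

Referencing the load balancer's security group instead of an IP range keeps the rule valid even as load balancer nodes change addresses.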
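The discovery pattern described above, in which hosts with dynamic endpoints register themselves with a central repository, can be sketched with a minimal role registry. The repository here is an in-memory dict for illustration; in practice it could be a database table, and all names below are illustrative.

```python
# Sketch of role-based service discovery: scale-out hosts register their
# dynamic private IPs with a central repository keyed by server role.

class RoleRegistry:
    def __init__(self):
        self._roles = {}  # role -> {instance_id: private_ip}

    def register(self, role, instance_id, private_ip):
        """Called by a host at boot: announce its dynamic endpoint."""
        self._roles.setdefault(role, {})[instance_id] = private_ip

    def deregister(self, role, instance_id):
        """Called at shutdown, or by a health checker when a host dies."""
        self._roles.get(role, {}).pop(instance_id, None)

    def endpoints(self, role):
        """Current endpoints for a role, e.g. all live web servers."""
        return sorted(self._roles.get(role, {}).values())

registry = RoleRegistry()
registry.register("web", "i-0aaa", "10.0.1.15")
registry.register("web", "i-0bbb", "10.0.2.27")
registry.deregister("web", "i-0aaa")
print(registry.endpoints("web"))  # → ['10.0.2.27']
```

Because lookups go through the registry rather than hardcoded addresses, replacing a failed host is just one deregister and one register call.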
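The on-demand scaling and multi-AZ fault tolerance recommendations above come together in an Auto Scaling group. The sketch below builds a boto3-style `create_auto_scaling_group` request; the group name, launch template name, and subnet IDs are hypothetical, one subnet per Availability Zone, and the dict is only constructed, not submitted.

```python
# Sketch: Auto Scaling group parameters spanning multiple Availability
# Zones (hypothetical names and subnet IDs -- substitute your own).

def build_asg_params(subnet_ids, min_size=2, max_size=10):
    """Build a boto3-style create_auto_scaling_group request."""
    return {
        "AutoScalingGroupName": "webapp-asg",
        "LaunchTemplate": {"LaunchTemplateName": "webapp-template",
                           "Version": "$Latest"},
        "MinSize": min_size,
        "MaxSize": max_size,
        "DesiredCapacity": min_size,
        # One subnet per AZ spreads instances across data centers,
        # so losing a single AZ does not take the site down.
        "VPCZoneIdentifier": ",".join(subnet_ids),
        # Replace instances that fail the load balancer's health check.
        "HealthCheckType": "ELB",
        "HealthCheckGracePeriod": 300,
    }

params = build_asg_params(["subnet-0aaa", "subnet-0bbb"])
# To create for real: autoscaling_client.create_auto_scaling_group(**params)
```

Keeping `MinSize` at two or more across two subnets gives the fleet a baseline of fault tolerance even before any scale-out occurs.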
----- END -----
©2019 - 2022 all rights reserved