AWS Basics for DevOps Engineers

Cloud computing with AWS

Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.

Most functionality

AWS has significantly more services and features within those services than any other cloud provider, from infrastructure technologies like computing, storage, and databases to emerging technologies such as machine learning and artificial intelligence, data lakes and analytics, and the Internet of Things. This makes it faster, easier, and more cost-effective to move your existing applications to the cloud and build nearly anything you can imagine.

AWS also has the deepest functionality within those services. For example, AWS offers the widest variety of databases that are purpose-built for different types of applications, so you can choose the right tool for the job to get the best cost and performance.

Largest community of customers and partners

AWS has the largest and most dynamic community, with millions of active customers and tens of thousands of partners globally. Customers across virtually every industry and of every size, including startups, enterprises, and public sector organizations, are running every imaginable use case on AWS. The AWS Partner Network (APN) includes thousands of systems integrators who specialize in AWS services and tens of thousands of independent software vendors (ISVs) who adapt their technology to work on AWS.

Most secure

AWS is architected to be the most flexible and secure cloud computing environment available today. Our core infrastructure is built to satisfy the security requirements of the military, global banks, and other high-sensitivity organizations. This is backed by a deep set of cloud security tools, with over 300 security, compliance, and governance services and features. AWS supports 143 security standards and compliance certifications, and all 117 AWS services that store customer data offer the ability to encrypt that data.

Featured Services

Amazon EC2: Virtual servers in the cloud

Amazon Simple Storage Service (S3): Scalable storage in the cloud

Amazon Aurora: High-performance, managed relational database with full MySQL and PostgreSQL compatibility.

Amazon DynamoDB: Managed NoSQL database.

Amazon RDS: Managed relational database service for MySQL, PostgreSQL, Oracle, SQL Server, and MariaDB.

AWS Lambda: Run code without thinking about servers

Amazon VPC: Isolated cloud resources

Amazon Lightsail: Launch and manage virtual private servers.

Amazon SageMaker: Build, train, and deploy machine learning models at scale.

AWS Global Infrastructure

Availability Zone: One or more discrete data centers, such as a building or large server rooms. There are 87 Availability Zones.

Region: Contains at least two Availability Zones. There are 32 Regions, all connected through the AWS global infrastructure.

AWS Pricing

Pay as you go (PAYG): Cost depends on the number of resources used and the amount of time they run.

Free-tier example: EC2 (t2.micro) for 750 hours per month.

Storage: Billed by the amount of data stored. Inbound vs. outbound data: Inbound transfer is generally free, while outbound transfer (e.g., when data is downloaded from AWS) is billed.

Pricing Calculator: https://calculator.aws/#/

We can set up AWS billing alerts using Amazon CloudWatch and Amazon SNS.
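A minimal sketch of such an alert with the AWS CLI, assuming billing alerts are enabled in the account preferences (the account ID, email address, and $10 threshold are placeholders; billing metrics live in us-east-1):

# Create an SNS topic and subscribe an email address to it.
aws sns create-topic --name billing-alerts --region us-east-1
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:billing-alerts \
  --protocol email --notification-endpoint you@example.com --region us-east-1
# Alarm when estimated monthly charges exceed 10 USD.
aws cloudwatch put-metric-alarm --alarm-name billing-over-10-usd \
  --namespace AWS/Billing --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum --period 21600 --evaluation-periods 1 \
  --threshold 10 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts \
  --region us-east-1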

AWS IAM

Identity and Access Management (IAM) gives you two types of access, programmatic and console-based, for users, groups, and roles, governed by policies.

Policies define the rights to read, write, or otherwise access a particular service.

AWS Infrastructure

Link: https://aws.amazon.com/about-aws/global-infrastructure/

Compute Services

  1. AWS EC2 (Amazon Elastic Compute Cloud)

  2. AWS ECS (Amazon Elastic Container Service)

  3. AWS Lambda

1. AWS EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a core web service provided by Amazon Web Services (AWS) that offers scalable computing capacity in the cloud. It enables users to easily provision and manage virtual servers, known as instances, in a flexible and cost-effective manner. EC2 instances can be used for a wide range of applications, including hosting websites, running applications, processing data, and more.

Key features and concepts of Amazon EC2 include:

  1. Instances: These are virtual machines that you can launch in the AWS cloud. Each instance type has different computing, memory, and storage capabilities, allowing you to choose the resources that best match your application's requirements.

  2. Amazon Machine Images (AMIs): AMIs are pre-configured templates that contain the necessary operating system, software, and configurations needed to launch an instance. You can choose from a variety of AWS-provided AMIs or create your own.

  3. Instance Types: EC2 offers a wide range of instance types optimized for various use cases, such as general-purpose computing, memory-intensive tasks, compute-intensive workloads, GPU acceleration, and more.

  4. Elastic Block Store (EBS): EBS provides block-level storage volumes that can be attached to EC2 instances. It offers various types of storage options, including standard magnetic, SSD-backed, and high-performance SSD storage.

  5. Security Groups: These act as virtual firewalls for your instances, controlling inbound and outbound traffic. You can define rules that allow or deny specific types of traffic to and from instances.

  6. Key Pairs: Key pairs are used to securely log in to your instances. You create a key pair and then use the corresponding private key to connect to your instances, using SSH for Linux or Remote Desktop Protocol (RDP) for Windows.

  7. Auto Scaling: This feature allows you to automatically scale the number of instances up or down based on demand. It helps maintain performance and reduce costs by only provisioning resources when needed.

  8. Load Balancing: EC2 instances can be placed behind a load balancer to distribute incoming traffic across multiple instances. This improves the availability and fault tolerance of your application.

  9. Amazon CloudWatch: CloudWatch provides monitoring and management capabilities for your EC2 instances. You can collect and analyze metrics, set up alarms, and automate actions based on events.

  10. Virtual Private Cloud (VPC) Integration: EC2 instances can be launched in a specified VPC, allowing you to control network settings, security, and connectivity.

  11. Instance Metadata: EC2 instances have metadata associated with them that can be accessed from within the instance. This metadata provides information about the instance, such as its instance type, public IP address, security groups, and more.

Amazon EC2 is a foundational service in AWS that enables businesses to rapidly scale their compute resources up or down based on their needs without making an upfront investment in physical hardware.
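For example, the instance metadata mentioned in point 11 can be queried from inside an instance; a minimal sketch using IMDSv2, the token-based metadata service:

# Request a session token, then use it to read metadata.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id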

User Data

#!/bin/bash
# EC2 user data: runs once at first boot (Amazon Linux, yum-based).
yum update -y                                  # apply the latest patches
yum install httpd -y                           # install the Apache web server
systemctl start httpd                          # start it now
systemctl enable httpd                         # start it on every boot
cd /var/www/html/
echo "New instance set-up done" > index.html   # simple test page

Note: httpd (Apache) is a web server like NGINX. With it, you can host a site on the internet.

Launch Template

  • Go to the launch template option and create a template.

Give the template a name and a version.

Now go to the Application and OS Images (Amazon Machine Image) option and select the AMI you are currently using.

Now go to the instance type and select t2.micro. The rest of the setup is the same as a regular instance launch, which we have done several times before. In the User data section, write the httpd script from above, then launch the template. (The CLI equivalent is sketched below.)
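A minimal sketch of the same template via the CLI, where the template name is a placeholder, userdata.sh is the httpd script above, and the AMI ID is the one used later in this document (user data must be base64-encoded):

# Encode the user data script (-w0 disables line wrapping; GNU coreutils).
USERDATA=$(base64 -w0 userdata.sh)
aws ec2 create-launch-template --launch-template-name web-template \
  --version-description v1 \
  --launch-template-data "{\"ImageId\":\"ami-053b0d53c279acc90\",\"InstanceType\":\"t2.micro\",\"UserData\":\"$USERDATA\"}"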

Amazon EC2 Auto Scaling

Amazon EC2 Auto Scaling automatically balances EC2 instances across zones when you configure multiple zones in your EC2 Auto Scaling group settings. Amazon EC2 Auto Scaling always launches new instances such that they are balanced between zones as evenly as possible across the entire fleet.

How it works

An auto-scaling group is a collection of Amazon EC2 instances that are treated as a logical unit. You configure settings for a group and its instances, as well as define the group’s minimum, maximum, and desired capacity. Setting different minimum and maximum capacity values forms the bounds of the group, which allows the group to scale as the load on your application spikes higher or lower, based on demand. To scale the Auto Scaling group, you can either make manual adjustments to the desired capacity or let Amazon EC2 Auto Scaling automatically add and remove capacity to meet changes in demand.

When launching fleets of instances, you can specify what percentage of your capacity should be fulfilled by On-Demand instances, and what percentage with Spot Instances, to save up to 90% on EC2 costs. Amazon EC2 Auto Scaling lets you provision and balance capacity across Availability Zones to optimize availability. It also provides lifecycle hooks, instance health checks, and scheduled scaling to automate capacity management.

  • Launch an Auto Scaling group.

Note: You can select any subnet and VPC.

Note: If you want, you can attach a load balancer or other services.

Note: A desired capacity of 2 means two servers are always running; the minimum should be 1.

Note: We can change the metric type according to what we want the scaling to depend on.

Here, if CPU utilization is greater than 50%, the scaling policy kicks in.

Note: Under Add notifications, we can register an email address to receive notifications.

Attach this Auto Scaling group to an instance:

Now go to the instance page.

Note: The two running instances without a name are our desired capacity. (A CLI sketch of the same setup follows.)
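A minimal sketch of the same setup with the CLI (the group name, template, and subnet IDs are placeholders):

# Create the group: min 1, max 3, desired 2, spread over two subnets/AZs.
aws autoscaling create-auto-scaling-group --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template,Version=1 \
  --min-size 1 --max-size 3 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-0abc,subnet-0def"
# Target-tracking policy: add/remove instances to keep average CPU near 50%.
aws autoscaling put-scaling-policy --auto-scaling-group-name web-asg \
  --policy-name cpu-50 --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'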

Benefits and features

Fault tolerance

Detect when an instance is unhealthy, terminate it, and launch an instance to replace it.

Cost Management

Save money by dynamically launching instances when they are needed and terminating them when they aren't.

Availability

Ensure that your application always has the right amount of capacity to handle the current traffic demand.

Amazon EC2

Use Amazon EC2 to create and run virtual machines in the cloud. EC2 instance types offer varying combinations of CPU, memory, storage, and networking capacity for all workloads.

Elastic Load Balancing

Use Elastic Load Balancing to automatically distribute incoming application traffic across the instances in your Auto Scaling group.

Amazon CloudWatch

Use Amazon CloudWatch to enable scaling policies and monitor metrics for your Auto Scaling groups and EC2 instances.

2. Amazon ECS

Note: Linux commonly runs on x86 processors, while macOS on Apple silicon (the M1 chip) runs on ARM, Apple's own processor design.

Linux and macOS architectures are not the same: an image built on an ARM Mac will not run on an x86 Linux server. There are many differences between the two architectures.

So, as DevOps engineers, we have to keep in mind how to make cross-architecture builds between them; one way is sketched below.
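One common approach (not AWS-specific) is Docker Buildx, which builds a single image for multiple architectures; a minimal sketch, with the registry and tag as placeholders:

# One-time: create and select a Buildx builder.
docker buildx create --use
# Build for both x86 and ARM and push to a registry.
docker buildx build --platform linux/amd64,linux/arm64 \
  -t myregistry/demo-app:latest --push .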

For this, AWS provides two services:

  1. ECR = Elastic Container Registry (like Docker Hub); a push example follows this list.

  2. ECS = Elastic Container Service (like docker run).
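A minimal sketch of pushing an image to ECR (the account ID, region, and repository name are placeholders):

aws ecr create-repository --repository-name demo-app
# Authenticate Docker against the private registry.
aws ecr get-login-password --region us-east-1 | docker login --username AWS \
  --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag demo-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-app:latest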

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service provided by Amazon Web Services (AWS). It is designed to simplify the deployment, management, and scaling of containerized applications using Docker containers. Amazon ECS allows you to run containers on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances or AWS Fargate, a serverless compute engine for containers.

Key features and concepts of Amazon ECS include:

  1. Task Definition: A task definition is a blueprint for your containerized application. It defines various aspects of the containers, such as the Docker image, CPU and memory requirements, environment variables, ports to expose, and more.

  2. Service: An ECS service is a long-running task definition that ensures that a specified number of tasks (containers) are running and healthy in your cluster. It allows you to define the desired state of your application and automatically deploys and manages the necessary tasks to meet that state.

  3. Cluster: An ECS cluster is a group of EC2 instances or Fargate tasks that serve as the compute resources for running your containers. You can have multiple clusters, each serving different purposes or environments.

  4. Container Instance: In the context of EC2-backed ECS, a container instance is an EC2 instance that is part of an ECS cluster and has the Amazon ECS agent running on it. This agent is responsible for communication between the cluster and the ECS service.

  5. Task: A task is a running instance of a container in your cluster. It represents a single instantiation of your application. You can run multiple tasks from the same task definition within a cluster.

  6. ECS Task Scheduling: ECS supports both manual and automatic task scheduling. In manual mode, you can place tasks on specific container instances in your cluster, while automatic mode allows ECS to manage task placement based on resource availability and constraints.

  7. AWS Fargate: Fargate is a serverless compute engine for containers, which means you don't need to manage the underlying EC2 instances. With Fargate, you can focus solely on defining and running your containers without worrying about infrastructure management.

Amazon ECS is often used in conjunction with other AWS services such as Amazon Elastic Load Balancing (ELB), Amazon Virtual Private Cloud (VPC), AWS Identity and Access Management (IAM), and AWS CloudFormation to create scalable and highly available containerized applications. It is a popular choice for deploying microservices and modern applications in a containerized environment on AWS.
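A minimal Fargate sketch of these concepts (the cluster, service, and task names, the subnet ID, and taskdef.json are placeholders):

aws ecs create-cluster --cluster-name demo-cluster
# taskdef.json holds the task definition: image, CPU/memory, ports, etc.
aws ecs register-task-definition --cli-input-json file://taskdef.json
# A service keeps two copies of the task running on Fargate.
aws ecs create-service --cluster demo-cluster --service-name web \
  --task-definition web:1 --desired-count 2 --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc],assignPublicIp=ENABLED}"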

3. AWS Lambda

AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS) that allows you to run code without provisioning or managing servers. With Lambda, you can upload your code and specify the events that trigger its execution, and AWS automatically manages the scaling and deployment of your code in response to those events. It enables you to build and deploy applications that respond to events in real-time without the need to manage infrastructure.

Key features and concepts of AWS Lambda include:

  1. Event-Driven Execution: Lambda functions are triggered by various events within AWS services or custom events. For example, events can be generated by changes in data in Amazon S3, updates to a DynamoDB table, HTTP requests via Amazon API Gateway, and more.

  2. No Server Management: Lambda abstracts the underlying infrastructure, so you don't need to worry about server provisioning, scaling, or patching. AWS automatically manages the infrastructure for you.

  3. Stateless Execution: Lambda functions are stateless, meaning each invocation is independent of previous invocations. If you need to store state, you can use external storage services like DynamoDB or S3.

  4. Scaling: Lambda automatically scales your functions in response to the number of incoming events. It can run thousands of instances of your function in parallel to handle high loads.

  5. Supported Runtimes: Lambda supports various programming languages, including Node.js, Python, Java, Go, Ruby, .NET Core, and custom runtimes through custom Docker images.

  6. Pay-as-You-Go: You are charged based on the number of invocations and the execution time of your functions. There is no charge when your code is not running.

  7. Trigger Integration: Lambda can be integrated with other AWS services through triggers. For example, you can configure S3 events to trigger Lambda functions whenever new files are uploaded.

  8. Concurrency and Throttling: AWS Lambda controls the concurrency of function executions to prevent overload. You can set concurrency limits and define error-handling strategies.

  9. Versioning and Aliases: Lambda supports versioning, allowing you to publish different versions of your functions. Aliases let you point to specific versions, making it easier to deploy and manage different stages of your application.

  10. Logging and Monitoring: Lambda integrates with Amazon CloudWatch to provide logging and monitoring of your functions' performance and execution metrics.

  11. VPC Integration: Lambda functions can be configured to run within a Virtual Private Cloud (VPC), allowing you to access resources within your VPC while still benefiting from serverless execution.

AWS Lambda is a powerful tool for building event-driven, scalable, and cost-efficient applications. It enables developers to focus on writing code and creating features without getting bogged down by infrastructure management.
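A minimal sketch of creating and invoking a function from the CLI, assuming an existing IAM role with the basic Lambda execution policy (the role ARN and function name are placeholders):

# Write a trivial handler and package it.
cat > lambda_function.py <<'EOF'
def lambda_handler(event, context):
    return {"statusCode": 200, "body": "Hello from Lambda"}
EOF
zip function.zip lambda_function.py
# Create the function, then invoke it once.
aws lambda create-function --function-name hello-lambda \
  --runtime python3.12 --handler lambda_function.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-basic-role
aws lambda invoke --function-name hello-lambda response.json && cat response.json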

Security and Networking Services

  1. AWS IAM (AWS Identity and Access Management)

  2. AWS VPC (Amazon Virtual Private Cloud)

  3. AWS KMS (AWS Key Management Service)

1. AWS IAM

AWS Identity and Access Management (IAM) is a web service provided by Amazon Web Services (AWS) that enables you to manage access to AWS resources securely. IAM allows you to create and control users, groups, and permissions to grant or deny access to various AWS services and resources. It helps you ensure that only authorized individuals or applications can interact with your AWS environment.

Key features and concepts of AWS IAM include:

  1. Users: IAM allows you to create users, which are identities for people or applications that can be authenticated to access your AWS resources.

  2. Groups: You can organize users into groups and assign permissions to groups rather than individual users. This simplifies access management, as you can apply permissions to multiple users simultaneously.

  3. Roles: Roles are used to grant permissions to entities outside of your AWS account, such as applications running on EC2 instances or AWS Lambda functions. Roles are assumed by trusted entities to gain temporary access to resources.

  4. Permissions and Policies: IAM uses permission policies to define what actions are allowed or denied on AWS resources. Policies are JSON documents that specify the actions, resources, and conditions under which permissions are granted.

  5. Principle of Least Privilege: IAM encourages the principle of least privilege, which means granting only the minimum permissions necessary to perform a specific task. This improves security by reducing the potential impact of a security breach.

  6. Multi-Factor Authentication (MFA): IAM supports MFA, which adds an extra layer of security by requiring users to provide additional authentication factors (beyond just a password) when accessing AWS resources.

  7. Identity Federation: IAM supports identity federation, allowing you to use external identity providers (such as Active Directory or Facebook) to grant temporary, limited access to AWS resources.

  8. Access Analyzer: The IAM Access Analyzer helps you identify unintended access permissions by analyzing resource policies and access control policies.

  9. Credential Management: IAM users can generate access keys for programmatic access to AWS services. These access keys are used in API requests and are separate from the AWS Management Console credentials.

  10. Integration with AWS Services: IAM integrates with various AWS services, allowing you to secure access to services like Amazon S3, Amazon EC2, AWS Lambda, and more.

  11. Service Control Policies (SCP): In the context of AWS Organizations, SCPs are used to manage permissions across member accounts by controlling what actions are allowed or denied at the organizational level.

  12. Audit and Logging: AWS CloudTrail can be used to track IAM actions, providing an audit trail of who accessed or modified resources and when.

AWS IAM is a foundational service for securing your AWS environment. It enables you to implement strong security practices, manage access effectively, and maintain control over your resources while collaborating with others in a secure manner.
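As noted in point 4, policies are JSON documents; a minimal sketch of creating a read-only S3 policy from the CLI (the policy and bucket names are placeholders):

aws iam create-policy --policy-name demo-s3-read --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::demo-bucket", "arn:aws:s3:::demo-bucket/*"]
  }]
}'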

  • Go to the console and search for IAM.

Now, we can make a user group to access all of the services.

Here we can select what kind of services and what kind of permission we want to give this group. After selecting the permissions, click Create User Group.

Now, copy the console sign-in details and keep them safe. Then copy the console sign-in URL and open it.

Here, create a new password and confirm the settings. Then log in, and you can see the management console.

Now go to the user and select Security credentials.

Now click the Create access key option to create an access key. We can use this key for Terraform, for connections from a Windows machine, or in a Python script.

Now create your access key and note the access key ID and secret access key.

Now, go to your command prompt and install the AWS CLI (command-line interface).

Follow the link: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

Now run the following command:

# Configure AWS from your local server. 
aws configure

It will prompt for your AWS access key ID, secret access key, default region, and output format.

With that, the CLI configuration is done.

Creating instances with the AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/cli-services-ec2-instances.html

Use this command (the AMI, key pair, security group, and subnet IDs are specific to your account):

aws ec2 run-instances --image-id ami-053b0d53c279acc90 --count 1 \
  --instance-type t2.micro --key-name mohammad \
  --security-group-ids sg-05fea0ed99ff43a3b --subnet-id subnet-0d7d1cfacea3276d2

2. AWS VPC

Amazon Virtual Private Cloud (Amazon VPC) is a web service provided by Amazon Web Services (AWS) that allows you to create and manage isolated virtual networks within the AWS cloud environment. It enables you to launch AWS resources, such as Amazon EC2 instances, RDS databases, and more, in a logically isolated section of the AWS cloud.

Key features and components of the AWS VPC include:

  1. Isolation: With VPC, you can create isolated networks that are logically separated from each other and the public internet. This allows you to segment your resources and control communication between them.

  2. Subnets: VPCs are divided into subnets, which are smaller address ranges within the VPC. Subnets can be public or private, determining whether they have direct access to the internet or not.

  3. IP Address Management: You can define IP address ranges for your VPC and subnets, giving you control over the IP addresses assigned to your resources.

  4. Internet Gateway: An internet gateway allows resources in your VPC to connect to the public internet. It acts as a gateway between your VPC and the internet.

  5. NAT Gateways or NAT Instances: Network Address Translation (NAT) gateways or NAT instances enable resources in private subnets to initiate outbound traffic to the internet while keeping the incoming traffic blocked.

  6. Route Tables: Route tables define the routes for network traffic within your VPC. They determine how traffic is directed between subnets and to the internet.

  7. Security Groups and Network ACLs: These are used to control inbound and outbound traffic to and from resources within your VPC. Security groups are stateful and act as firewalls at the instance level, while Network ACLs are stateless and provide subnet-level control.

  8. VPC Peering: This feature allows you to connect multiple VPCs together, enabling resources in different VPCs to communicate as if they were on the same network.

  9. VPN and Direct Connect: VPC supports virtual private network (VPN) connections and dedicated network connections (Direct Connect) to extend your on-premises network into your VPC.

  10. VPC Endpoints: These enable private connectivity to AWS services from within your VPC without requiring internet access.

AWS VPC offers a high degree of control, security, and customization for network architecture in the cloud. It's a fundamental building block that allows you to design and deploy complex network topologies to meet your specific requirements.

Note: For a better understanding of networking, see: https://www.digitalocean.com/community/tutorials/understanding-ip-addresses-subnets-and-cidr-notation-for-networking

Types of VPC

  • Default VPC: When we create our account, AWS automatically creates a VPC for us. This is called the default VPC.

  • Custom VPC: You design this VPC yourself, according to your needs; some of its components (such as NAT gateways) are chargeable.

Practical

  1. Go to your AWS console account and search for VPC.

Note: Here you will find all of the VPC services. If you want to create your own VPC, that is also possible; a CLI sketch follows.
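A minimal sketch of wiring up a custom VPC from the CLI (the CIDR ranges are examples, and the vpc-/subnet-/igw-/rtb- IDs are placeholders returned by the earlier commands):

aws ec2 create-vpc --cidr-block 10.0.0.0/16                       # returns a VpcId
aws ec2 create-subnet --vpc-id vpc-0abc --cidr-block 10.0.1.0/24  # a subnet inside it
aws ec2 create-internet-gateway                                   # returns an InternetGatewayId
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc --vpc-id vpc-0abc
aws ec2 create-route-table --vpc-id vpc-0abc                      # returns a RouteTableId
# A default route to the internet makes the subnet "public" once associated.
aws ec2 create-route --route-table-id rtb-0abc \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc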

3. AWS KMS

AWS Key Management Service (KMS) is a fully managed encryption service offered by Amazon Web Services (AWS) that helps you protect your data by creating and controlling encryption keys. It enables you to easily create, manage, and use cryptographic keys for various AWS services and for your own applications.

Here are some key aspects and features of AWS KMS:

  1. Key Creation: AWS KMS allows you to create symmetric and asymmetric cryptographic keys. Symmetric keys are used for encryption and decryption, while asymmetric keys are used for encryption and digital signing. You can create these keys within KMS or import your own keys securely.

  2. Integration with AWS Services: KMS seamlessly integrates with a wide range of AWS services, including Amazon S3, Amazon EBS, Amazon RDS, Amazon Redshift, AWS Lambda, and more. You can use KMS to encrypt data at rest and in transit for these services.

  3. Key Policies: You can define key policies in KMS to specify who can use the keys and what actions they can perform. These policies are expressed in JSON and allow fine-grained control over key access.

  4. Key Rotation: KMS supports automatic key rotation, which helps enhance security by periodically replacing old keys with new ones. This is especially important for long-lived encryption keys.

  5. Audit Logging: AWS KMS provides detailed audit logs through AWS CloudTrail, which can be used to track key usage and changes to key policies. This helps in monitoring and compliance with security requirements.

  6. Custom Key Stores: You can create custom key stores to have more control over the location and management of your keys. This is useful in scenarios where you have specific regulatory or compliance requirements.

  7. Envelope Encryption: KMS uses a technique called envelope encryption, where data is encrypted with a unique data key, and this data key is then encrypted using the master key stored in KMS. This provides an additional layer of security.

  8. Multi-Region Support: AWS KMS allows you to replicate keys to different AWS regions for improved disaster recovery and cross-region data access.

  9. Integrated Encryption SDKs: AWS provides SDKs for various programming languages that make it easier to use KMS for encryption and decryption in your applications.

  10. Cost Control: KMS offers a free tier for key management operations, and you pay only for the keys you create and the requests you make, which can help you control costs.

AWS KMS plays a crucial role in securing data in AWS environments and is often used in conjunction with other AWS services to ensure data confidentiality and compliance with security standards. It is particularly valuable for customers who need to meet regulatory requirements for data encryption and protection.
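A minimal sketch of encrypting and decrypting a small file with a KMS key (the alias and file names are placeholders; AWS CLI v2 expects binary input via fileb:// and returns base64-encoded output):

aws kms create-key --description "demo key"        # returns a KeyId
aws kms create-alias --alias-name alias/demo --target-key-id <key-id-from-above>
# Encrypt: the CiphertextBlob comes back base64-encoded, so decode it to a file.
aws kms encrypt --key-id alias/demo --plaintext fileb://secret.txt \
  --query CiphertextBlob --output text | base64 --decode > secret.enc
# Decrypt: the Plaintext also comes back base64-encoded.
aws kms decrypt --ciphertext-blob fileb://secret.enc \
  --query Plaintext --output text | base64 --decode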

Storage Services

  1. AWS S3 (Amazon Simple Storage Service)

  2. AWS RDS (Amazon Relational Database Service)

  3. AWS EBS (Amazon Elastic Block Store)

1. AWS S3

Amazon Simple Storage Service (Amazon S3) is a widely used object storage service provided by Amazon Web Services (AWS). It offers scalable, durable, and highly available storage for a wide variety of data types, ranging from simple text files to large multimedia files, backups, logs, and more. S3 is designed to be simple to use while providing advanced features and capabilities for managing and accessing your data.

Key features and concepts of Amazon S3 include:

  1. Objects: In S3, data is stored as objects. Each object consists of data, a unique key (which serves as an identifier), and optional metadata. Objects can range in size from a few bytes to multiple terabytes.

  2. Buckets: Objects are stored in containers called buckets. Each bucket has a globally unique name within the S3 namespace. Buckets are used to organize and manage objects.

  3. Object Storage Classes: S3 offers different storage classes with varying levels of durability, availability, and cost. These include Standard, Intelligent-Tiering, One Zone-IA, Glacier, and Glacier Deep Archive, among others.

  4. Durability and Availability: S3 provides high durability by automatically replicating objects across multiple Availability Zones (data centers) within a region. This ensures that your data is highly available and protected against hardware failures.

  5. Data Lifecycle Management: S3 allows you to define lifecycle policies to automatically transition objects to different storage classes or delete them after a specified period. This helps optimize costs based on data usage patterns.

  6. Versioning: S3 supports object versioning, which allows you to keep multiple versions of an object in the same bucket. This can be useful for data protection and recovery.

  7. Data Encryption: S3 supports both server-side encryption (SSE) and client-side encryption. SSE automatically encrypts data at rest, while client-side encryption allows you to encrypt data before uploading it to S3.

  8. Access Control: S3 provides fine-grained access control through access policies and bucket policies. You can control who can access your objects and how they can access them.

  9. Cross-Region Replication: This feature enables automatic replication of objects from one S3 bucket to another in a different region. It can be used for disaster recovery or to reduce latency for users in different geographic regions.

  10. Event Notifications: S3 can trigger events based on actions performed on objects, such as object creation, deletion, or restoration. These events can be used to automate workflows or notifications.

  11. Data Transfer Acceleration: S3 Transfer Acceleration uses Amazon CloudFront's globally distributed edge locations to accelerate uploads and downloads of large objects.

  12. Query and Analytics: S3 Select and S3 Glacier Select allow you to run queries directly on data stored in S3. This can help you retrieve specific data without having to download and process the entire object.

Amazon S3 is widely used by organizations of all sizes to store and manage their data, serving as a scalable and cost-effective solution for a wide range of storage needs.

Now go to the AWS Management Console and search for S3.

Now go to your bucket and upload any file: a text file, a movie file, anything you want.

Note: So far we have used console access. Programmatic access is also possible, as sketched below.
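A minimal sketch of that programmatic access with the AWS CLI (the bucket name is a placeholder and must be globally unique):

aws s3 mb s3://demo-bucket-12345               # make a bucket
aws s3 cp index.html s3://demo-bucket-12345/   # upload a file
aws s3 ls s3://demo-bucket-12345               # list its contents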

2. AWS RDS

Amazon Relational Database Service (Amazon RDS) is a managed database service provided by Amazon Web Services (AWS) that makes it easier to set up, operate, and scale a relational database in the cloud. It supports several popular database engines, including MySQL, PostgreSQL, MariaDB, Oracle Database, and Microsoft SQL Server. With Amazon RDS, you can offload the administrative tasks of database management, such as patching, backups, and scaling, allowing you to focus on your application development.

Key features and concepts of Amazon RDS include:

  1. Managed Database Instances: Amazon RDS provisions and manages the underlying infrastructure for your database instances. You can choose the database engine, instance type, and storage capacity that suit your application's needs.

  2. Automated Backups: RDS provides automated backup and recovery features, including daily automated backups and the ability to create manual backups. Backups are stored on Amazon S3.

  3. High Availability: RDS offers Multi-AZ deployments that replicate your database instance to a standby instance in a different Availability Zone (data center). This provides high availability and automatic failover in the event of a primary instance failure.

  4. Scalability: You can easily scale your RDS instance vertically by changing the instance type or horizontally by adding read replicas to offload read traffic from the primary instance.

  5. Security: RDS provides features like encryption at rest and in transit, IAM-based authentication, and network security through Amazon VPC integration.

  6. Monitoring and Metrics: RDS integrates with Amazon CloudWatch to provide monitoring and performance metrics for your database instances. You can set up alarms and visualize performance data.

  7. Maintenance: RDS automatically applies minor database engine updates and security patches. You can schedule a maintenance window for your preferred update time.

  8. Read Replicas: RDS allows you to create read replicas of your database instance, which can be used to offload read traffic and improve performance.

  9. Database Engine Options: RDS supports multiple database engines, each with its own set of features and capabilities, including MySQL, PostgreSQL, MariaDB, Oracle Database, and Microsoft SQL Server.

  10. Database Snapshots: RDS enables you to create manual database snapshots, which can be used to create new database instances or restore data to a specific point in time.

  11. Database Migration: RDS provides tools and features to simplify the process of migrating your existing on-premises databases or other cloud databases to RDS.

  12. Global Databases: For MySQL and PostgreSQL, RDS offers the capability to create cross-region read replicas and global databases for low-latency, high-performance data access in different geographic regions.

Amazon RDS reduces the operational burden of database management, making it easier for developers and administrators to deploy and manage relational databases in a cloud environment. It's suitable for a wide range of applications, from small-scale projects to large-scale enterprise solutions.

AWS Relational Database Services

In AWS's fully managed relational DB engine service, AWS is responsible for:

  • Security and Patching

  • Automated Backup

  • Software updates for the DB engine

  • If selected, multi-AZ with synchronous replication between the active and standby DB instances

  • Automatic failover if the multi-AZ option was selected.

  • By default, every DB has a weekly maintenance window; automated backups can be retained for up to 35 days

Settings managed by the users:

  • Managing DB settings

  • Creating a relational database schema

  • Database performance tuning.

Relational Database Engine Options

  • MS SQL Server

  • MySQL: supports databases up to 64 TB

  • Oracle

  • AWS Aurora: high throughput

  • PostgreSQL: highly reliable and stable

  • MariaDB: MySQL-compatible, databases up to 64 TB

There are two Licensing Options:

  1. BYOL: Bring your own license.

  2. License from AWS on an hourly basis.
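Before the console walkthrough below, note that the same database can be created from the CLI; a minimal sketch (the identifier, instance class, and password are placeholders):

aws rds create-db-instance --db-instance-identifier database-1 \
  --engine mysql --db-instance-class db.t3.micro \
  --master-username admin --master-user-password 'ChangeMe123!' \
  --allocated-storage 20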

Now go to your AWS Management Console, search for RDS, and click on it.

Now we can see our database has been created.

Now our RDS database is reachable from our instance. Go to our instance and connect to it.

Once we connect, we need to install the MySQL client.

Run the following command:

sudo apt install mysql-client-core-8.0

Now go to the RDS DB and copy the endpoint to connect with the MySQL client we just installed.

Now run the following command:

mysql -u admin -p -h database-1.cchikftse4xs.us-east-1.rds.amazonaws.com -P 3306

Note: It will require a password.

To list the databases, run the following command:

show databases;

Create a database:

create database test_db;

Switch to the database:

use test_db;

3. AWS EBS

Amazon Elastic Block Store (Amazon EBS) is a block storage service provided by Amazon Web Services (AWS) that offers scalable and durable storage volumes that can be attached to Amazon EC2 instances. EBS volumes provide persistent and high-performance storage that is independent of the lifecycle of EC2 instances, allowing data to persist even after an instance is stopped or terminated.

Key features and concepts of Amazon EBS include:

  1. EBS Volume Types: EBS offers different types of volumes optimized for various use cases, including:

    • General Purpose (SSD): Balanced performance for a wide range of workloads

    • Provisioned IOPS (SSD): High-performance storage with consistent and predictable I/O performance

    • Cold HDD: Cost-effective storage for infrequently accessed data

    • Throughput Optimized HDD: High throughput for frequently accessed data.

    • EBS Magnetic: Older, lower-performance magnetic storage option.

  2. EBS Snapshots: EBS snapshots allow you to create point-in-time backups of your EBS volumes. Snapshots are stored in Amazon S3 and can be used to create new volumes, migrate data, or restore data in case of data loss.

  3. Data Durability: EBS volumes are designed for durability, with the ability to replicate data across multiple Availability Zones (AZs) within a region.

  4. Volume Encryption: EBS volumes support encryption at rest using AWS Key Management Service (KMS) keys. This helps protect your data from unauthorized access.

  5. Volume Resizing: You can easily increase the size of an EBS volume without creating a new volume (EBS volumes cannot be shrunk). This is useful as your storage needs grow over time.

  6. Elastic Volumes: With elastic volumes, you can dynamically adjust the size and performance characteristics of an EBS volume while it's in use.

  7. Multi-Attach: Certain EBS volume types support being attached to multiple EC2 instances simultaneously. This can be useful for shared storage scenarios.

  8. Lifecycle Management: EBS provides lifecycle management features such as EBS Lifecycle Policies, which automate the process of creating and deleting EBS snapshots.

  9. EBS-Optimized Instances: Some EC2 instance types provide additional dedicated network bandwidth for EBS I/O, ensuring consistent performance for applications that require high I/O rates.

  10. EBS and EC2 Instances: EBS volumes are typically attached to EC2 instances as block devices, providing a reliable and persistent storage solution for applications and data.

Amazon EBS is a crucial component for running data-intensive applications, databases, and file systems in the AWS cloud. It offers flexibility in terms of storage performance, durability, and capacity, allowing you to choose the appropriate storage solution based on your application's requirements.
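A minimal sketch of the volume lifecycle from the CLI (the AZ and the vol-/i- IDs are placeholders; the volume must be in the same AZ as the instance):

aws ec2 create-volume --availability-zone us-east-1a --size 10 --volume-type gp3
aws ec2 attach-volume --volume-id vol-0abc --instance-id i-0abc --device /dev/sdf
aws ec2 create-snapshot --volume-id vol-0abc --description "point-in-time backup"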

AWS EFS

Amazon Elastic File System (Amazon EFS) is a scalable and fully managed file storage service provided by Amazon Web Services (AWS). It is designed to provide shared file storage that can be accessed by multiple Amazon EC2 instances and on-premises servers concurrently. Amazon EFS supports the Network File System version 4 (NFSv4) protocol, making it compatible with a wide range of applications and workloads.

Key features and concepts of Amazon EFS include:

  1. Shared File Storage: Amazon EFS allows multiple EC2 instances and servers to access the same file system concurrently, making it suitable for applications that require shared data storage.

  2. Scalability: EFS automatically scales its capacity up or down based on the amount of data stored in the file system. There's no need to provision storage in advance.

  3. Performance: EFS is designed for low-latency and high-throughput access. It can handle a wide range of workloads, including content management systems, web serving, big data processing, and more.

  4. Availability and Durability: EFS provides high availability by storing data across multiple Availability Zones within a region. Data is also redundantly stored within each Availability Zone for durability.

  5. EFS File System Lifecycle Management: EFS supports lifecycle management policies that can automatically move files that haven't been accessed in a while to a lower-cost storage class.

  6. Security: EFS supports Amazon VPC access and provides security features such as encryption at rest and in transit. Access can be controlled using AWS Identity and Access Management (IAM) policies and Network Access Control Lists (NACLs).

  7. Performance Modes: EFS offers two performance modes:

    • General Purpose: Suitable for most workloads, it balances performance and cost.

    • Max I/O: Designed for high-intensity workloads with a need for higher levels of performance.

  8. Mount Targets: EFS is accessed through mount targets that are created in Amazon VPC subnets. Each mount target provides an IP address that EC2 instances use to access the file system.

  9. Data Sharing: EFS can be used to share data between on-premises servers and AWS instances, making it useful for hybrid cloud scenarios.

  10. Integration with Other AWS Services: Amazon EFS can be used as shared storage for applications like Amazon ECS, Amazon EKS, and AWS Lambda. It can also be used as shared storage for containers running on AWS Fargate.

Amazon EFS is a valuable solution for scenarios where multiple instances need to share and access data simultaneously. It simplifies the management of shared file storage, eliminates the need for complex storage management tasks, and helps ensure data consistency and availability across applications and instances.
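A minimal sketch of creating and mounting a file system (the fs-/subnet-/sg- IDs and the region are placeholders; NFS port 2049 must be open in the security group):

aws efs create-file-system --performance-mode generalPurpose   # returns a FileSystemId
aws efs create-mount-target --file-system-id fs-0abc \
  --subnet-id subnet-0abc --security-groups sg-0abc
# From an EC2 instance in the same VPC:
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-0abc.efs.us-east-1.amazonaws.com:/ /mnt/efs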