If you are an AWS Solutions Architect who often scratches your head over AWS designs for certain use cases, you may or may not have come across the following two links. If you have not, they are a gold mine for AWS architects.
The two links are as follows:
https://aws.amazon.com/architecture – For all AWS reference architectures
https://aws.amazon.com/solutions – For all AWS solution architecture related materials
For example, if you are looking for something related to AWS image handling and how to optimize it using serverless technologies on the web, you can just search “Serverless Image” (under the Solutions link) and see how it goes. You will get a page with the URL https://aws.amazon.com/solutions/serverless-image-handler/
This link will give you immense knowledge for getting your use case up and running. It gives you the complete solution along with a working CloudFormation template to get you going in a few minutes. What more could you ask for?
Enjoy your Journey with AWS!
In the modern technology landscape, working with cloud-enabled applications is a must-have skill for any IT professional in the market. Most of us in cloud-based development are biased primarily towards one or two cloud vendors (AWS, Azure, Google, etc.). Can we be blamed for that? We simply cannot. Anyone can have his/her own preferences and liking for certain platforms.
But are we heading in the right direction? The main cloud vendors have abstracted their cloud services so much that cloud users now have many managed services at their disposal. Most enterprise applications are heading towards serverless computing architectures and becoming more and more abstract.
On the flip side, some cloud practitioners are working towards more open Kubernetes development and more Cloud Native Computing environments (i.e. the CNCF Landscape). They say they want the cloud to be more open! That is really cool. But then again, you need to learn the Kubernetes-related frameworks and a plethora of products around them, and you can get stuck with those too (just look at the CNCF Landscape!). Isn’t that lock-in as well?
I recently read a nice article written by Gregor Hohpe (the well-known author of the book Enterprise Integration Patterns) on this critical aspect of cloud vendor lock-in. I am sure you can get some good insights on the subject if you read it. Good luck!
1. Gregor Hohpe Blog: https://architectelevator.com/
2. Martin Fowler Blog (The article on Cloud-Lock-in by Gregor Hohpe): https://martinfowler.com/articles/oss-lockin.html
3. CNCF Landscape: https://www.cncf.io/
At the AWS re:Invent 2018 conference, AWS Security Hub was launched. It is a security tool that provides AWS users with a comprehensive view of their applications hosted on AWS. It further helps you check your security compliance against well-established security standards such as CIS.
I recently wrote an article on this subject and on my experience with one of the key projects at the company I work for. The article gives some insight and some important links to get this going for your future projects. AWS Security Hub is not a fully mature tool yet; it is still going through alterations and keeps adding standards for future compliance. For example, the Quick Start script mentioned in the blog has recently been removed from the AWS documentation. However, if you browse my GitHub repository, which is linked from the blog, you should be able to get some insight into this work, and I am confident you can get it going without a failure.
All the best!
What is a Bastion Host?
Bastion hosts are instances that sit within your public subnet and are typically accessed using SSH (for Linux) or RDP (for Windows). A bastion acts as a ‘jump’ server, allowing you to use SSH or RDP to log in to other instances in private subnets.
High Availability (HA) can be ensured for bastion hosts by having a bastion host in each availability zone, with each bastion host mapped to an Auto Scaling group.
A NAT instance, like a bastion host, lives in your public subnet. A NAT instance, however, allows your private instances outgoing connectivity to the Internet (e.g. to get updates), while at the same time blocking inbound traffic from the Internet.
It is recommended to use Elastic IP addresses for bastion hosts, especially in high availability scenarios.
The following are the best practices for configuring a bastion host:
1. Never place your SSH private keys on a bastion host/server. Instead, use SSH agent forwarding to connect first to the bastion host and then to the other instances in the private subnets. This lets you keep the private keys only on your own machine.
2. Make sure the security group on the bastion host allows SSH (port 22) only from your trusted hosts, and never from the 0.0.0.0/0 mask.
3. Always have more than one bastion host. For example, have a bastion host in each Availability Zone (AZ).
4. Make sure to configure security groups on private subnets to accept SSH traffic only from the bastion hosts.
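Best practices 2 and 4 can also be expressed as infrastructure code. As a sketch, a CloudFormation fragment for the two security groups might look like the following (the resource names, the `VpcId` parameter, and the trusted CIDR range are placeholders, not from the article):

```yaml
# Hypothetical CloudFormation fragment for bastion security
BastionSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow SSH to the bastion from trusted hosts only
    VpcId: !Ref VpcId
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 203.0.113.0/24   # a trusted range - never 0.0.0.0/0

PrivateInstanceSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Accept SSH only from the bastion security group
    VpcId: !Ref VpcId
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        SourceSecurityGroupId: !Ref BastionSecurityGroup
```

Referencing the bastion security group (rather than an IP) in the private instances’ ingress rule keeps the rule valid even when bastions are replaced by an Auto Scaling group.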
How to handle Bastion hosts via SSH Agent Forwarding?
The SSH agent handles signing of authentication data for you. When authenticating to a server, you are required to sign some data using your private key to prove who you are. As a security measure, most people sensibly protect their private keys with a passphrase, so any authentication attempt would require you to enter this passphrase. This can be undesirable, so the SSH agent caches the key for you, and you only need to enter the passphrase once, when the agent decrypts it.
The SSH agent never hands these keys to client programs, but merely presents a socket over which clients can send it data and over which it responds with signed data. A side benefit of this is that you can use your private key even with programs you don’t fully trust.
Another benefit of the SSH agent is that it can be forwarded over SSH. So when you ssh to host A, while forwarding your agent, you can then ssh from A to another host B without needing your key present (not even in encrypted form) on host A.
These SSH agents are not only useful when a passphrase is in use. They can also be used successfully with bastion hosts. Rather than copying the PEM file (i.e. the private key) to the bastion host, it is more secure to hand this process over to the SSH agent. That is both more secure and easier! So here are the simple steps to follow for this task. However, if you are running this in a heavily secured environment with well-designed security groups and NACLs, it is always good to have a complete picture before executing it; otherwise you will end up with a lot of confusion. If all is well, this works like a charm!
Step 1: Add the private key (PEM file) to the keychain. This allows the user to access the private instances without copying the key to the bastion host, which adds an additional layer of security.
$ ssh-add -k <PEM_file_name>
Step 2: Check whether the private key is properly added to the key chain
$ ssh-add -L
The above will list all the keys added to the chain. Check whether the key you added is listed there.
Step 3: Access the Bastion Host (Public instance)
$ ssh -A ec2-user@<bastion-host-elastic-ip>
(Here ec2-user is the user for the Linux instance.)
Step 4: Access the private instance
$ ssh ec2-user@<private-instance-ip>
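If you do this often, the same two-hop setup can be captured in your local ~/.ssh/config so it becomes a single command. This is a sketch; the host aliases, addresses, and user name are placeholders, and the ProxyJump directive requires OpenSSH 7.3 or later:

```
# ~/.ssh/config -- host aliases and addresses are placeholders
Host bastion
    HostName <bastion-host-elastic-ip>
    User ec2-user
    ForwardAgent yes

Host private-app
    HostName <private-instance-ip>
    User ec2-user
    ProxyJump bastion
```

With this in place, `ssh private-app` performs both hops, forwarding your agent through the bastion.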
Today (08/07/2018) I had the privilege of doing a one-hour presentation on “Towards a Cloud Enabled Data Intensive Digital Transformation” for Jaffna University IT students. I hope they were able to learn something from this presentation. You can reach the slide deck using the following link:
With the advent of Big Data, enterprise applications nowadays follow a data-intensive, microservices-based enterprise application architecture, deviating from the monolithic architectures we have been used to for decades.
These data-intensive applications should meet a set of requirements:
1. Ingest Data at Scale without a loss
2. Analyze data in real-time
3. Trigger action based on the analyzed data
4. Store the data at cloud-scale.
5. Need to run in a distributed and highly resilient cloud platform
SMACK is such a stack, which can be used for building modern enterprise applications because it meets each of the above objectives with a loosely coupled tool chain of technologies that are all open source and production-proven at scale.
(S – Spark, M – Mesos, A – Akka, C – Cassandra, K – Kafka)
- Spark – A general engine for large-scale data processing, enabling analytics from SQL queries to machine learning, graph analytics, and stream processing
- Mesos – Distributed systems kernel that provides resourcing and isolation across all the other SMACK stack components. Mesos is the foundation on which other SMACK stack components run.
- Akka – A toolkit and runtime to easily create concurrent and distributed apps that are responsive to messages.
- Cassandra – Distributed database management system that can handle large amounts of data across servers with high availability.
- Kafka – A high throughput, low-latency platform for handling real-time data feeds with no data loss.
Developer Tools for Serverless Applications on AWS
AWS and its ecosystem provide frameworks/ tools, which help you develop serverless applications on AWS Lambda and other AWS services. These will help you rapidly build, test, deploy, and monitor serverless applications.
There are multiple AWS / open source frameworks available in the market today to simplify serverless application development and deployment.
1. AWS Serverless Application Model (SAM)
2. Open Source third-party frameworks (Apex, Chalice, Claudia.js, Serverless Express, Serverless Framework, Serverless Java Container, Sparta, Zappa)
1.) AWS SAM
For simple applications it is fine to use the normal Lambda console. However, for complex applications, it is recommended to use AWS SAM. AWS SAM is an “abstraction” of CloudFormation (Infrastructure as Code) optimized for serverless applications. It supports anything that CloudFormation supports, and it is an open specification under the Apache 2.0 license.
AWS SAM Local Client
AWS SAM Local is a complementary CLI tool that lets you locally test Lambda functions defined by AWS SAM templates. You can plug this client tool into your favorite IDE for higher-fidelity testing and debugging.
AWS has now introduced a new IDE for serverless development called AWS Cloud9. It integrates all the required components for serverless development and testing without relying on any other tool/IDE.
However, the deployment aspect was initially missing from AWS SAM; recently it was added as well, to automate incremental deployments to AWS Lambda. This further allows you to roll out new versions to production incrementally.
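For illustration, a minimal SAM template for a Python function might look like the following sketch (the logical name, handler, and CodeUri are hypothetical):

```yaml
# A minimal, illustrative SAM template
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler.hello
      Runtime: python2.7
      CodeUri: ./src
```

The `Transform` line is what tells CloudFormation to expand the short `AWS::Serverless::*` resource types into full CloudFormation resources at deploy time.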
2.) Open Source third-party frameworks (Serverless Framework)
Please do have a look at my previous blog for an article on the Serverless Framework.
1. Developer Tools for Serverless Applications – https://aws.amazon.com/serverless/developer-tools/
2. Comparing AWS SAM with the Serverless Framework – https://sanderknape.com/2018/02/comparing-aws-sam-with-serverless-framework/
3. AWS SAM Local – Build and Test Serverless Applications Locally – https://aws.amazon.com/blogs/aws/new-aws-sam-local-beta-build-and-test-serverless-applications-locally/
1. Authoring and Deploying Serverless Applications with AWS SAM: – https://www.youtube.com/watch?v=pMyniSCOJdA
2. Serverless Architecture Patterns and Best Practices – https://www.youtube.com/watch?v=_mB1JVlhScs
3. Building CI/CD Pipelines for Serverless Applications – https://www.youtube.com/watch?v=9uOl3B88bcY
4. AWS Serverless Application Model (SAM) Implementation is Now Open-Source – Apr 10, 2018 – AWS Launchpad San Francisco – https://www.youtube.com/watch?v=uxv1dOExq5U
The Default Security – (Permissions)
By default, Lambda functions are “not” authorized to access other AWS services. Hence, it is required to explicitly grant access (permissions) to each and every AWS service (e.g. accessing S3 to store images, accessing databases such as DynamoDB, etc.). These permissions are managed by AWS IAM roles.
Changing the Default Security – (Permissions)
If you are using the Serverless Framework you can customize the default settings by changing the serverless.yml file (in the “iamRoleStatements:” block).
iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "lambda:*"
    Resource:
      - "*"
The above will “Allow” the function’s role to perform all Lambda actions (“lambda:*”) on any resource (“*”).
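In practice you would rarely grant “*” on everything. A hedged variant scoping the role down to a single (hypothetical) DynamoDB table might look like this:

```yaml
iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "dynamodb:GetItem"
      - "dynamodb:PutItem"
    Resource:
      - "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"  # hypothetical table ARN
```

Listing only the actions and resources the function actually needs follows the least-privilege principle.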
The Default Security – (Network)
By default, Lambda functions are not launched in a VPC. You can change this by creating a Lambda function within a VPC. Furthermore, you can extend this by applying “Security Groups” as an additional layer of security within the VPC.
Changing the Default Security – (Network)
If you are using the Serverless Framework you can customize the default settings by changing the serverless.yml file. Here is a code snippet that you might use for this.
provider:
  name: aws
  runtime: python2.7
  profile: serverless-admin
  region: us-east-1
  vpc:
    securityGroupIds:
      - <security-group-id>
    subnetIds:
      - <subnet-1>
      - <subnet-2>
The Serverless Framework (https://serverless.com/framework/) is an open-source CLI for building serverless architectures on cloud providers (AWS, Microsoft Azure, IBM OpenWhisk, Google Cloud Platform, etc.).
This article will brief you on the important steps you need to get going on the AWS platform. The framework works well with CI/CD tools and has full support for AWS CloudFormation. With this it can provision your AWS Lambda functions, events, and infrastructure resources.
Step 1: Installing NodeJS
Serverless is a Node.js CLI tool, so the first thing you need to do is install Node.js on your machine. Refer to the official Node.js web site, download it, and follow the instructions to install.
Serverless Framework runs on Node v6.5.0 or higher. You can verify that NodeJS is installed successfully by executing node -v in your terminal.
If all is fine, we may proceed to the second step.
Step 2: Installing Serverless Framework
$ npm install -g serverless
Once installed, you may verify it.
$ serverless --version
Step 3: Setting up Cloud Provider (AWS) Credentials
The Serverless Framework needs access to your cloud provider’s account so that it can create and manage resources on your behalf. You may set it up with this YouTube link.
Once the above is completed, you may add the AWS credentials to your client machine to work with the CLI. You may use the following command to do that.
$ serverless config credentials --provider aws --key XXXXXXXXXXXXXXXXX --secret XXXXXXXXXXXXXXXXX --profile serverless-admin
This will basically add an entry to the credentials file, which is located in the <home-folder>/.aws folder (assuming the AWS user is serverless-admin).
[serverless-admin]
aws_access_key_id = XXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXX
If all of the above is OK, you are ready to create your first serverless function (Lambda function) with AWS.
Step 4: Creating your Serverless Project
You may build your projects based on the templates/ archetypes given by the framework.
By default, multiple templates/archetypes are provided (e.g. “aws-nodejs”, “aws-python”, “aws-python3”, “aws-groovy-gradle”, “aws-java-maven”, “aws-java-gradle”, “aws-scala-sbt”, “aws-csharp”, etc.).
So let’s create an “aws-python” project for fun…
$ serverless create --template aws-python --path hello-world-python
The above will create a folder named “hello-world-python”.
Just browse the folder. You will see two files.
1. handler.py – (This is the Serverless Function. Your Business Logic goes here)
Here, just edit handler.py to produce a simple output.
def hello(event, context):
    print "Hello Crishantha"
    return "Hello World!"
2. serverless.yml – (The Serverless Function Configuration.)
P.Note: You may want to check the following configuration before executing the rest of the key commands.
If you are new to YAML and know JSON well, you may use https://www.jason2yaml.com link to convert JSON to YAML and vice versa.
provider:
  name: aws
  runtime: python2.7
  profile: serverless-admin
  region: us-east-1
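Incidentally, the simple handler above returns a plain string, which is fine for direct invocation. If you later put the function behind Amazon API Gateway with the Lambda proxy integration, the handler is instead expected to return a response object. Here is a hedged sketch (the logged event and message text are illustrative, not from the original handler):

```python
import json

def hello(event, context):
    # Sketch only: assumes an API Gateway (Lambda proxy) integration,
    # which expects a dict with statusCode, headers and a JSON string body.
    print("Received event: %s" % json.dumps(event))
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello World!"}),
    }
```

The `print` output ends up in the function’s CloudWatch log group, which is handy when debugging the deployed function.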
If all the above is OK, you are good to go and deploy the function on AWS. So let’s move to the next step.
Step 5: Deploy the Serverless Function
As explained, move to the “hello-world-python” folder and execute the following command.
$ serverless deploy -v
The above will run the automated script, creating all the background artifacts, including the CloudFormation scripts, to deploy the application. It is pretty awesome!
Step 6: Invoke the Serverless Function
Use the following to see the output.
$ serverless invoke -f hello -l
The above will return the simple “hello” output you specified in handler.py.
It is that simple!!!
Step 7: Verify
If you want to verify all this, you can log in to the AWS console and see that what you have done is reflected in the AWS Lambda area. Sure you will.
Step 8: Remove All
OK. We just did some testing. So you probably want to remove the serverless function and all its dependencies (IAM roles, CloudWatch log groups, etc.).
- Move to the folder of the function that you want to delete.
- Execute the following
$ serverless remove
The above will clean the whole thing up!…
So, if you are an AWS developer, you may find it as useful as I do at the moment. Happy coding!
1. Serverless Framework Page – https://serverless.com/framework/docs/providers/aws/guide/services/
2. AWS Provider Documentation – https://serverless.com/framework/docs/providers/aws/
3. Serverless AWS Lambda Guide – https://serverless.com/framework/docs/providers/aws/guide/
4. Serverless Framework GitHub – https://github.com/serverless/serverless
5. YAML to JSON tool – https://www.jason2yaml.com
6. The Serverless Framework: A deep overview of the best AWS Lambda + API Gateway Automation Solution – https://cloudacademy.com/blog/serverless-framework-aws-lambda-api-gateway-python/
You may have a “self-managed” MySQL EC2 instance that needs to be reachable from other EC2 instances in the same VPC or even from other remote machines. In order to do this, there are a few configuration changes you need to carry out.
Here are the steps:
1. Connect to the remote MySQL EC2 instance. – By default you can access MySQL using the “root” user. However, it is not advisable to access a MySQL instance remotely using the “root” user, for security reasons.
[P.Note: Please make sure port 3306 is added to the inbound rules of the EC2 security group prior to attempting this.]
2. Change the <bind-address> parameter to 0.0.0.0, allowing access from all remote addresses. This needs to be changed in the /etc/mysql/mysql.conf.d/my.cnf file.
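The relevant fragment of that configuration file would look like the following (the file path follows the one given above and may vary by distribution and MySQL version):

```
[mysqld]
bind-address = 0.0.0.0
```

Note that 0.0.0.0 makes MySQL listen on all interfaces, so the EC2 security group rules become your main access control at the network level.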
3. Restart the MySQL instance
mysql-ec2-instance>> sudo /etc/init.d/mysqld restart
4. Next, create a new MySQL user. – For this, you are required to sign in to MySQL and execute the following command(s).
mysql-ec2-instance>> mysql -u root -p<root-password>
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CREATE USER 'user'@'localhost' IDENTIFIED BY 'user123';
mysql> CREATE USER 'user'@'%' IDENTIFIED BY 'user123';
mysql> GRANT ALL PRIVILEGES ON *.* TO user@localhost IDENTIFIED BY 'user123' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO user@'%' IDENTIFIED BY 'user123' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
mysql> EXIT;
5. Now exit from the MySQL EC2 instance and try to log in to it from your local machine.
your-local-machine>> mysql -h <ec2-public-dns-name> -u user -puser123
If all is fine, you should be able to sign in to the remote EC2 instance without any issue!