Nowadays, whether we like it or not, we are all biased towards some vendor technology in our own specialized area. But we should never forget the concepts, the architecture and the core. However much you know about any particular technology or its vendors, you still cannot be a complete software person unless you cover your bases.
So finally we can see some hope in this area in terms of building competency. It is another CERTIFICATION! A lot of people might not know about this, since it is quite new to the field. But I know the people who provide this service are pioneers in the subject.
So it is another industry certification, but this one is not about any vendor. It is about software design and patterns. Wow! I am waiting to get on board. Hope you will too.
Here is the link : https://patterns.arcitura.com/
Hats off to the creators. A timely initiative when we are all vendor-locked.
Serverless Architecture and Microservices Architecture are two architectures we often talk about together in our architectural designs. If you do architectural designs on AWS, Azure or any other cloud vendor, these are two common architectural patterns that you follow.
We all know a “design pattern” is a software solution for a recurring design problem. Since we started doing software design in our early days, we have been learning multiple sets of design patterns.
1. Object Oriented Design Patterns (GoF Patterns)
2. Enterprise Application Design Patterns (Mainly on Monolithic Three Tier Architectures)
3. Enterprise Integration Patterns (Mainly on SOA related Hub-Spoke models)
Since the inception of MSA (Microservices Architecture), we have been following multiple design patterns for it as well. Though there is no single definitive bible (a book) for this area, multiple books have been published on microservices design.
Later, we saw cloud vendors too adopting microservices architecture in a big way. Serverless architecture is one of those heavily adopted architectures nowadays.
I recently came across the following article on a Serverless Microservices design pattern set on AWS, which basically encompasses the complete set of patterns for this beautiful piece of architecture. The more you adopt it, the more you will enjoy it. I know there is coupling to AWS, just as with any other cloud vendor, but it is admirably easy to use and easy on your purse.
Please find the following link for all of this, and kudos to the author for putting it together for all of us!
In modern application development, Microservices Architecture (MSA) design has slowly become the de facto standard for many enterprise-level development projects. Along with MSA, we can see the emergence of the 12-factor application development model, which leverages most of the MSA features coupled with certain DevOps (CI/CD) aspects.
Hence, it is always good to know what this model is all about, so that we have a single vocabulary encompassing many of the good features produced by MSA, DevOps and other best practices in modern enterprise application development.
The twelve factors are as follows:
1. Codebase – One codebase tracked in revision control, many deploys
2. Dependencies – Explicitly declare and isolate dependencies
3. Config – Store config in the environment
4. Backing Services – Treat backing services as attached resources
5. Build, Release, Run – Strictly separate build and run stages
6. Processes – Execute the app as one or more stateless processes
7. Port Binding – Export services via port binding
8. Concurrency – Scale out via the process model
9. Disposability – Maximize robustness with fast startup and graceful shutdown
10. Dev/Prod Parity – Keep development, staging, and production as similar as possible
11. Logs – Treat logs as event streams
12. Admin Processes – Run admin/management tasks as one-off processes
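Factor 3 (Config) is perhaps the easiest one to see in practice. Here is a minimal shell sketch of keeping config in the environment; the variable names (DB_HOST, DB_PORT) are hypothetical, used purely for illustration:

```shell
# Factor 3 (Config): the app reads its settings from environment
# variables rather than from a file baked into the codebase.
# DB_HOST and DB_PORT are hypothetical names for illustration.
export DB_HOST="localhost"
export DB_PORT="5432"

# The application assembles its connection string at runtime, so the
# same build runs unchanged in dev, staging and production.
DB_URL="postgres://${DB_HOST}:${DB_PORT}/mydb"
echo "$DB_URL"   # prints postgres://localhost:5432/mydb
```

Changing a deploy then means changing only the environment, never the code.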
1. Twelve Factor Model: https://www.12factor.net/
2. Building Microservices with a 12-factor application pattern on AWS – https://www.youtube.com/watch?v=2SxKKDXKrXQ&t=18s
3. Twelve Factor app methodology sets guidelines for modern apps – https://sdtimes.com/webdev/twelve-factor-app-methodology-sets-guidelines-modern-apps/
If you are an AWS solution architect who is always scratching your head over certain AWS designs for certain use cases, you may or may not have come across the following two links. If you have not, they are a gold mine for all AWS architects.
The two links are as follows:
https://aws.amazon.com/architecture – For all AWS reference architectures
https://aws.amazon.com/solutions – For all AWS solution architecture related materials
For example, if you are looking for something related to AWS image handling and how to optimize it using serverless technologies on the web, you can just search for “Serverless Image” (under the solutions link) and see how it goes. You will get a page with the URL https://aws.amazon.com/solutions/serverless-image-handler/
This link will give you immense knowledge in terms of getting your use case up and running. It gives you the total solution, along with a working CloudFormation template, to get your solution going in a few minutes. What more could you ask for?
Enjoy your Journey with AWS!
In the modern technology landscape, working with cloud-enabled applications is a must-have skill for any IT professional in the market. Most of us in cloud-based development are biased primarily towards one or two cloud vendors (AWS, Azure, Google, etc.). Can we be blamed for that? We simply cannot. Anyone can have his/her own preferences and liking for certain platforms.
But are we heading in the right direction? Most of the main cloud vendors have abstracted their cloud services so much that cloud users can consume a great many managed services nowadays. Most of your enterprise applications are heading towards serverless computing architectures and becoming more and more abstract.
On the flip side, some cloud practitioners are working towards more open Kubernetes development and more cloud native computing environments (i.e. the CNCF Landscape). They say they want the cloud to be more open! That is really cool. But then again, you need to learn the Kubernetes-related frameworks and the plethora of products around them, and you end up stuck with those instead (you must see the CNCF landscape!). Isn’t that lock-in too?
I recently read a nice article by Gregor Hohpe (the well-known author of the book Enterprise Integration Patterns) on this critical aspect of cloud vendor lock-in. I am sure you can get some good insights on the subject if you read it. Good luck!
1. Gregor Hohpe Blog: https://architectelevator.com/
2. Martin Fowler Blog (The article on Cloud-Lock-in by Gregor Hohpe): https://martinfowler.com/articles/oss-lockin.html
3. CNCF Landscape: https://www.cncf.io/
At the re:Invent 2018 conference, AWS Security Hub was launched. It is a security tool which gives AWS users/clients a comprehensive view of their applications (hosted on AWS). It further helps you check your security compliance against well-established security standards such as CIS.
I recently wrote an article on this subject and my experience on one of our key projects for the company I work for. The article will give you some insight and some important links to get this going for your future projects. AWS Security Hub is not a fully mature tool yet; it is still going through some alterations and keeps adding standards for future compliance. For example, the Quick Start script mentioned in the blog has recently been removed from the AWS documentation. However, if you browse my GitHub repository, which is mentioned in the blog, you may be able to get some insight into this work, and I am confident that you can get this going without a failure.
All the best!
Tripitaka is the traditional term for the Buddhist scriptures. The version canonical to Theravada Buddhism is generally referred to in English as the Pali Canon. Mahayana Buddhism also holds the Tripiṭaka to be authoritative but, unlike Theravadins, it also includes in its canon various derivative literature and commentaries that were composed much later. [Wikipedia]
Whoever is looking for the great bible of Buddha Dhamma can now reach it online using the following link.
Cellery is an open source framework which facilitates implementing the Cell-Based Architecture that was drafted by WSO2.
Cellery is a code-first approach to building, integrating, running and managing composite microservice applications on Kubernetes.
Using Cellery, we can create composite microservice apps with code, push and pull them from Docker Hub and other registries, deploy them into Kubernetes (k8s), and monitor them. Cells also get automatic API security and web SSO.
1. Cellery – The Cell Based Architecture Implementation (WSO2) – https://wso2-cellery.github.io/
2. Cell Based Reference Architecture (WSO2) – https://github.com/wso2/reference-architecture
If you are using VS Code for your development, the ENOSPC error can be a common one (especially if you are a Linux/Debian user).
This basically happens because the VS Code file watcher runs out of handles when the workspace is large and contains many files.
You can see the current limit by executing the following command:
$ cat /proc/sys/fs/inotify/max_user_watches
The limit can be increased to its maximum by editing /etc/sysctl.conf and adding this line to the end of the file:
fs.inotify.max_user_watches=524288
The new value can then be loaded into the system by executing the following:
$ sudo sysctl -p
If all goes well, you will have no issues with the watches on VS Code.
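As a quick sanity check before (or after) editing /etc/sysctl.conf, the sketch below reads the current limit and compares it against 524288, the value commonly recommended in the VS Code documentation:

```shell
# Compare the current inotify watch limit against the value commonly
# recommended for VS Code (524288).
recommended=524288
current=$(cat /proc/sys/fs/inotify/max_user_watches)
echo "current inotify watch limit: $current"
if [ "$current" -lt "$recommended" ]; then
  echo "limit is below $recommended; consider raising it in /etc/sysctl.conf"
fi
```

If the first echo already prints 524288, the sysctl change has taken effect and VS Code should stop complaining.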
What is a Bastion Host?
Bastion hosts are instances that sit within your public subnet and are typically accessed using SSH (for Linux) or RDP (for Windows). A bastion host acts as a ‘jump’ server, allowing you to use SSH or RDP to log in to other instances in private subnets.
High Availability (HA) can be ensured for bastion hosts by having a bastion host in each availability zone, with each bastion host mapped to an Auto Scaling group.
A NAT instance, like a bastion host, lives in your public subnet. A NAT instance, however, allows your private instances outgoing connectivity to the Internet (for example, to get updates), while at the same time blocking inbound traffic from the Internet.
It is advisable to use Elastic IP addresses for bastion hosts, mainly in high availability scenarios.
The following are best practices when configuring a bastion host:
1. Never place your SSH private keys on a bastion host/server. Instead, use SSH agent forwarding to connect first to the bastion host and then to the other instances in the private subnets. This lets you keep the private keys on your own machine only.
2. Make sure the security group on the bastion host allows SSH (port 22) only from your trusted hosts, and never from the 0.0.0.0/0 mask.
3. Always have more than one bastion host, for example one bastion host per Availability Zone (AZ).
4. Make sure the security groups on the private subnets accept SSH traffic only from the bastion hosts.
How to handle Bastion hosts via SSH Agent Forwarding?
The SSH agent handles signing of authentication data for you. When authenticating to a server, you are required to sign some data using your private key, to prove that you are who you claim to be. As a security measure, most people sensibly protect their private keys with a pass-phrase, so any authentication attempt would require you to enter this pass-phrase. This can be undesirable, so the SSH agent caches the key for you, and you only need to enter the pass-phrase once, when the agent decrypts it.
The SSH agent never hands these keys to client programs, but merely presents a socket over which clients can send it data and over which it responds with signed data. A side benefit of this is that you can use your private key even with programs you don’t fully trust.
Another benefit of the SSH agent is that it can be forwarded over SSH. So when you ssh to host A, while forwarding your agent, you can then ssh from A to another host B without needing your key present (not even in encrypted form) on host A.
These SSH agents are useful not only when a pass-phrase is in use; they can also be put to good use with bastion hosts. Rather than copying the PEM file (that is, the private key) to the bastion host, it is more secure to hand this process over to the SSH agent. That is both more secure and easier! So here are the simple steps to follow if you want to do this. However, if you are running this in a heavily secured environment with well-designed security groups and NACLs, it is always good to have the complete picture before executing it; otherwise you will end up with a lot of confusion. If all goes well, this works like a charm!
Step 1: Add the private key (PEM file) to the agent's key chain. This allows the user to access the private instances without copying the key to the bastion host, which adds an additional layer of security.
$ ssh-add -k <PEM_file_name>
Step 2: Check whether the private key is properly added to the key chain
$ ssh-add -L
The above will list all the keys added to the chain. Check whether the key you added is listed there.
Step 3: Access the Bastion Host (Public instance)
$ ssh -A ec2-user@<bastion-host-elastic-ip> [Here ec2-user is the user for the Linux instance]
Step 4: Access the private instance
$ ssh ec2-user@<private-instance-ip>
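As a side note, recent OpenSSH versions (7.3 and later) can express the same two-hop flow declaratively with the ProxyJump option. This is a sketch only; the host alias private-app and the IP address are hypothetical placeholders, and the bastion address stays the same placeholder used above:

```
# ~/.ssh/config — hop through the bastion transparently
Host private-app
    HostName 10.0.2.15          # hypothetical private-subnet IP
    User ec2-user
    ProxyJump ec2-user@<bastion-host-elastic-ip>
```

With this in place, a plain `ssh private-app` (or the equivalent one-liner `ssh -J ec2-user@<bastion-host-elastic-ip> ec2-user@<private-instance-ip>`) performs both hops in one command, still using the keys held by your local SSH agent.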