
Are You Vendor Neutral?

Nowadays, whether we like it or not, we are all biased towards some vendor technology in our own specialized area. But we should never forget the concepts, the architecture and the core. However much you know about any particular technology or its vendors, you still cannot be a complete software person unless you cover your bases.

So finally we can see some hope in this area in terms of building that competency. It is another CERTIFICATION! A lot of people might not know about this, since it is quite new to the field. But I know that the people who provide this service are pioneers in the subject.

So it is another set of industry certifications. But this one is not about any vendor. It is about software design and patterns. Wow! I am waiting to get on board. I hope you will too.

Here is the link: https://patterns.arcitura.com/

Hats off to the creators. A timely initiative when we are all vendor-locked.


Serverless Microservices Patterns (on AWS)

Serverless Architecture and Microservices Architecture are two architectures we often talk about together in our architectural designs. If you do architectural designs on AWS, Azure or any other cloud vendor, these are two common architectural patterns that you follow.

We all know that a “design pattern” is a software solution to a recurring design problem. Since we started software design in our early days, we have been learning multiple design pattern sets:

1. Object Oriented Design Patterns (GoF Patterns)

2. Enterprise Application Design Patterns (Mainly on Monolithic Three Tier Architectures)

3. Enterprise Integration Patterns (Mainly on SOA related Hub-Spoke models)

Since the inception of MSA (Microservices Architecture), we have been following multiple design patterns as well. Though there is no single definitive “Bible” (a book) for this area, there are multiple books published on microservices design.

Later, we saw cloud vendors too adopting Microservices Architecture in a big way. Serverless architecture is one of those heavily adopted architectures nowadays.

I recently came across the following article on a Serverless Microservices design pattern set for AWS, which basically encompasses the complete set of patterns for this beautiful piece of architecture. The more you adopt it, the more you will enjoy it. I know there is coupling to AWS, just like with any other cloud vendor, but it is remarkably easy to use and easy on your purse.

Please find the link below, and kudos to the author for putting this together for all of us!

https://www.jeremydaly.com/serverless-microservice-patterns-for-aws/


12-Factor Application Development Model

In modern application development, Microservices Architecture (MSA) has slowly become the de-facto standard for many enterprise-level development projects. Along with MSA, we can see the emergence of the 12-factor application development model, which leverages most of the MSA features coupled with certain DevOps (CI/CD) aspects.

Hence, it is always good to know what this model is all about, so that we have a single vocabulary encompassing many of the good practices produced by MSA, DevOps and other best practices in modern enterprise application development.

They are as follows:

1. Code Base – One Code base tracked in revision control, many deployments

2. Dependencies – Explicitly declare and isolate dependencies

3. Config – Store config in the environment (see the short example after this list)

4. Backing Services – Treat backing services as attached resources

5. Build, Release, Run – Strictly separate build and run stages

6. Processes – Execute the application as one or more stateless processes.

7. Port Binding – Export services via port binding

8. Concurrency – Scale out via the process model

9. Disposability – Maximize robustness with fast startup and graceful shutdown

10. Dev/Prod Parity – Keep development, staging, and production as similar as possible

11. Logs – Treat logs as event streams

12. Admin Processes – Run admin/ management tasks as one-off processes.
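
As a minimal sketch of factor 3, the same build reads its settings from the environment, so only the deployment environment changes between stages. The variable and script names below (DB_HOST, DB_PORT, start-service.sh) are hypothetical and purely for illustration:

# Inside the service: read config from the environment, not from the code base
export DB_HOST=${DB_HOST:-localhost}
export DB_PORT=${DB_PORT:-5432}
echo "Connecting to ${DB_HOST}:${DB_PORT}"

# At deployment time: the same artifact, only the environment differs
$ DB_HOST=db.staging.internal ./start-service.sh
$ DB_HOST=db.prod.internal ./start-service.sh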

References

1. Twelve Factor Model: https://www.12factor.net/

2. Building Microservices with a 12-factor application pattern on AWS – https://www.youtube.com/watch?v=2SxKKDXKrXQ&t=18s

3. Twelve Factor app methodology sets guidelines for modern apps – https://sdtimes.com/webdev/twelve-factor-app-methodology-sets-guidelines-modern-apps/


Tripitaka – On-Line Version

Tripitaka is the traditional term for the Buddhist scriptures. The version canonical to Theravada Buddhism is generally referred to in English as the Pali Canon. Mahayana Buddhism also holds the Tripiṭaka to be authoritative but, unlike Theravadins, it also includes in its canon various derivative literature and commentaries that were composed much later. [Wikipedia]

Whoever is looking for the great bible of Buddha Dhamma can now reach it on-line using the following link.

https://pitaka.lk

Theruwan Saranai


Cellery – The implementation of Cell based Architecture

Cellery [1] is an open source framework that facilitates implementing the Cell-based Architecture [2], which was drafted by WSO2.

Cellery is a code-first approach to building, integrating, running and managing composite microservice applications on Kubernetes.

Using Cellery, we are able to create and compose microservice applications with code, push and pull them from Docker Hub and other registries, deploy them into Kubernetes and monitor them. Cells have automatic API security and web SSO.

References:

[1]. Cellery – The Cell based Architecture Implementation (WSO2) – https://wso2-cellery.github.io/

[2]. Cell Based Reference Architecture (WSO2) – https://github.com/wso2/reference-architecture


Handling ENOSPC Error on VS Code

If you are using VS Code for your development, the ENOSPC error can be a common one, especially if you are a Debian-based Linux user.

This basically happens because the VS Code file watcher runs out of handles when the workspace is large and contains many files [1].

You can see the current limit by executing the following command:

$ cat /proc/sys/fs/inotify/max_user_watches

The limit can be increased to its maximum by editing /etc/sysctl.conf and adding this line to the end of the file:

fs.inotify.max_user_watches=524288

The above value then can be loaded to the system by executing the following:

$ sudo sysctl -p

If all goes well, you will have no issues with the watches on VS Code.
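
If you prefer to do this from the shell instead of editing the file manually, here is a minimal sketch of the same steps (note that tee -a appends, so make sure the line is not already present in /etc/sysctl.conf):

# Check the current inotify watch limit
$ cat /proc/sys/fs/inotify/max_user_watches

# Raise the limit persistently and reload the sysctl settings
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p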

References:

1. https://code.visualstudio.com/docs/setup/linux#_error-enospc


Expectations of the IT Industry

Today (17/08/2018) I had the privilege of doing a 2-hour guest lecture on “Expectations of the IT Industry” for the BSc IT students at the SLIIT Matara branch. I hope they were able to learn something from the presentation. You can reach the slide deck using the following link:

https://www.slideshare.net/crishantha/expectaions-in-it-industry


Hadoop 2.6 (Part 01) – Installing on Ubuntu 16.04 (Single-Node Cluster)

Some time ago I wrote a blog post on “Setting up Hadoop 1.x on Ubuntu 12.04“. Since it covers the 1.x version, it is no longer the right post to refer to. So I thought I would update it to the 2.x version running on the latest Ubuntu LTS release, 16.04.

Prerequisites

1. Make sure that you have Java installed.

(Version 2.7 and later of Apache Hadoop requires Java 7. It is built and tested on both OpenJDK and Oracle JDK/JRE. Earlier versions (2.6 and earlier) support Java 6.)

You may visit the Hadoop Wiki for more information.
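
A quick way to verify the Java installation is shown below. The /opt/jdk1.8.0_66 path is simply the JDK location used later in this post; adjust it to your own installation.

# Check that a JDK is available on the PATH
$ java -version

# Or point to the JDK installation directly
$ /opt/jdk1.8.0_66/bin/java -version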

2. Add a separate user and a group dedicated to Hadoop work. Here the group is called “hadoop” and the user is called “hduser”.

Adding hduser to the “sudo” group gives it super user privileges.

$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
$ sudo adduser hduser sudo

3. Enable SSH access to localhost for the hduser. (Hadoop requires SSH to manage its nodes, hence you are required to enable it. Since this is a single-node setup, you only need to enable SSH to localhost.)

// Though Ubuntu ships with an SSH client, you need to install the ssh package
// to get sshd (the server daemon) as well
$ sudo apt-get install ssh

$ su - hduser

// Create a key pair for the instance.
$ ssh-keygen -t rsa -P ""

// Append the public key to authorized_keys to allow password-less logins via SSH
$ cat /home/hduser/.ssh/id_rsa.pub >> /home/hduser/.ssh/authorized_keys

// Now you can check SSH on localhost
$ ssh localhost

The above creates a public/private key pair for secure communication via SSH. Executing these commands creates the directory ‘/home/hduser/.ssh’, with the private key stored in ‘/home/hduser/.ssh/id_rsa’ and the public key in ‘/home/hduser/.ssh/id_rsa.pub’.

Installing Hadoop

1. Download Hadoop from the Apache Hadoop mirrors and store it in a folder of your choice. I am using the hadoop-2.6.1.tar.gz distribution here.
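
If you prefer to download it from the command line, something like the following should work (the archive URL below is an assumption; any Apache mirror carrying the 2.6.1 release will do):

# Download the Hadoop 2.6.1 binary distribution from the Apache archive
$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.1/hadoop-2.6.1.tar.gz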

// Now copy the hadoop tar to /usr/local and execute the following commands
$ cd /usr/local
$ sudo tar xzf hadoop-2.6.1.tar.gz
$ sudo mv hadoop-2.6.1 hadoop
$ sudo chown -R hduser:hadoop hadoop

2. Set the environment in /home/hduser/.bashrc

# Set JAVA_HOME and HADOOP_HOME
export JAVA_HOME=/opt/jdk1.8.0_66
export HADOOP_HOME=/usr/local/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

Once you have edited .bashrc, you may log out and log back in and type:

$ hadoop version
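
Alternatively, you can load the new settings into the current shell without logging out:

# Re-read .bashrc in the current shell
$ source ~/.bashrc
$ hadoop version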

3. Configure Hadoop -

After setting up the prerequisites, you are required to set up the environment for Hadoop. There is a whole set of configuration files that can be edited; however, the files below are the minimum configuration you are required to edit to get one Hadoop instance up and running with HDFS.

- $HADOOP_HOME/etc/hadoop/hadoop-env.sh

- $HADOOP_HOME/etc/hadoop/core-site.xml

- $HADOOP_HOME/etc/hadoop/mapred-site.xml

- $HADOOP_HOME/etc/hadoop/hdfs-site.xml

(i) hadoop-env.sh

You are required to set JAVA_HOME here:

# The java implementation to use.
export JAVA_HOME=/opt/jdk1.8.0_66

(ii) core-site.xml

It is required to set the HDFS temporary folder (hadoop.tmp.dir) and the default file system (fs.default.name) in this configuration. These properties should be positioned within the <configuration>.. </configuration> tags.

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

Then you are required to create the temporary directory as mentioned in the parameters.

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp

(iii). mapred-site.xml

The following should be inserted within the <configuration>.. </configuration> tags.

The mapred-site.xml file is not present in the folder by default. You have to copy (or rename) mapred-site.xml.template to mapred-site.xml before inserting the above.
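
For example, using the HADOOP_HOME variable set earlier:

# Create mapred-site.xml from the bundled template
$ cd $HADOOP_HOME/etc/hadoop
$ cp mapred-site.xml.template mapred-site.xml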

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

(iv). hdfs-site.xml

The following should be inserted within the <configuration>.. </configuration> tags.

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>

After editing the above, it is required to create two directories, which will be used by the NameNode and the DataNode on the host.

$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
$ sudo chown -R hduser:hadoop /usr/local/hadoop_store

4. Formatting the HDFS -

When you first set up Hadoop along with HDFS, you are required to format the HDFS file system. This is like formatting a normal file system that comes with an OS. However, you should not format it again once the HDFS is in use, mainly because doing so will erase all the data on it.

$ hadoop namenode -format
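
In Hadoop 2.x the hdfs command is the preferred entry point for this; the older hadoop namenode form above still works but prints a deprecation notice:

$ hdfs namenode -format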

5. Start the Single Node Hadoop Cluster

$ /usr/local/hadoop/sbin/start-all.sh

OR

$ /usr/local/hadoop/sbin/start-dfs.sh
$ /usr/local/hadoop/sbin/start-yarn.sh

This will basically start a NameNode, a DataNode, a SecondaryNameNode, a ResourceManager and a NodeManager on your machine. (In Hadoop 2.x the YARN ResourceManager and NodeManager take over the roles of the old JobTracker and TaskTracker.)

Once you execute the above, if all is OK, you will see the following output on the console:

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-crishantha-HP-ProBook-6470b.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-crishantha-HP-ProBook-6470b.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-crishantha-HP-ProBook-6470b.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-crishantha-HP-ProBook-6470b.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-crishantha-HP-ProBook-6470b.out

Execute the following to see the active ports after starting the Hadoop cluster

$ netstat -plten | grep java

tcp        0      0 127.0.0.1:54310         0.0.0.0:*               LISTEN      1002       48587       5439/java
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      1002       49408       5756/java
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      1002       46080       5439/java
tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      1002       51317       5579/java
tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN      1002       51323       5579/java
tcp        0      0 0.0.0.0:50020           0.0.0.0:*               LISTEN      1002       51328       5579/java
tcp6       0      0 :::8040                 :::*                    LISTEN      1002       56335       6028/java
tcp6       0      0 :::8042                 :::*                    LISTEN      1002       54502       6028/java
tcp6       0      0 :::8088                 :::*                    LISTEN      1002       49681       5909/java
tcp6       0      0 :::39673                :::*                    LISTEN      1002       56327       6028/java
tcp6       0      0 :::8030                 :::*                    LISTEN      1002       49678       5909/java
tcp6       0      0 :::8031                 :::*                    LISTEN      1002       49671       5909/java
tcp6       0      0 :::8032                 :::*                    LISTEN      1002       52457       5909/java
tcp6       0      0 :::8033                 :::*                    LISTEN      1002       55528       5909/java

6. Verify the Hadoop Cluster – You can check that the above daemons have started by using the following command.

$ jps

3744 NameNode
4050 SecondaryNameNode
4310 NodeManager
3879 DataNode
4200 ResourceManager
4606 Jps

You may use the Web Interface provided to check the running nodes:

http://localhost:50070 - NameNode web UI (also lists the attached DataNodes)

http://localhost:50090 - SecondaryNameNode web UI

7. Stopping the Hadoop Cluster

You are required to execute the following command.

$ /usr/local/hadoop/sbin/stop-all.sh
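
Since stop-all.sh is likewise deprecated in Hadoop 2.x, you can alternatively stop HDFS and YARN separately:

$ /usr/local/hadoop/sbin/stop-dfs.sh
$ /usr/local/hadoop/sbin/stop-yarn.sh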

If you were able to complete all the above steps, you have successfully set up a single-node Hadoop cluster!


Cloud based SOA e-Government infrastructure in Sri Lanka

My talk on “Cloud based SOA e-Government Infrastructure in Sri Lanka”, which I did at WSO2Con 2014, can be found at the following URL:

http://youtu.be/f3pjtsX8mm0


Experiencing RedHat via Ubuntu

In the recent past, virtualization has opened up many avenues for most IT people with its dynamic capabilities and architecture. Cloud computing is one of the strong concepts built upon it.

As IT people, we too get added benefits from this dynamic feature. Almost all OSs now give you the ability to experience other OSs from your native OS (I know this is nothing new to most of you). You do need some virtualization software running on your machine to get this working. KVM and VirtualBox are a couple of the simple software packages that can make this work for you.

Recently, I started to get the hang of Red Hat (I had been using Debian/Ubuntu for some time) after starting to follow a course at OpenEd. Since my laptop was giving me trouble with creating a separate partition for RHEL6 (Red Hat Enterprise Linux 6), mainly due to device driver issues, I opted to run a virtualization manager like KVM on top of my Ubuntu and have RHEL6 running as a virtual machine. Temporarily, that basically solved my problem.

If you google the above requirement, I am sure you will find a lot of resources that solve the matter. I did that too and found the following link, which I am sure can speed up your experience:

http://www.howtogeek.com/117635/how-to-install-kvm-and-create-virtual-machines-on-ubuntu/

Prerequisites:

A system with a 64-bit CPU that supports hardware virtualization (Intel VT-x or AMD-V) is required to run KVM.
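
As a minimal sketch of the setup on the Ubuntu releases of that era (the package names are assumptions for those releases; newer Ubuntu versions ship libvirt-daemon-system instead of libvirt-bin):

# Check that the CPU exposes hardware virtualization (a count greater than 0 is good)
$ egrep -c '(vmx|svm)' /proc/cpuinfo

# Install KVM, libvirt and the graphical Virtual Machine Manager
$ sudo apt-get install qemu-kvm libvirt-bin bridge-utils virt-manager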
