Cloud Computing

Docker on Ubuntu 16.04 LTS – [Part 04] Docker Compose

Currently, Docker is the most popular and widely used container management system. In most enterprise applications nowadays, we tend to have components running in separate containers. In such an architecture, “container orchestration” (starting/shutting down containers and setting up inter-container linkages) is an important factor, and the Docker community came up with a solution called Fig, which handled this requirement using a single YAML file to orchestrate all your Docker containers and configurations. The popularity of Fig led Docker to fold it into its own code base as a separate component called “Docker Compose”.

1. Installing Docker Compose

Download the docker-compose binary from its GitHub releases page (substitute the release you want for 1.18.0 below):

$ sudo curl -L "https://github.com/docker/compose/releases/download/1.18.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Set the permissions:

$ sudo chmod +x /usr/local/bin/docker-compose

Now check whether it is installed properly:

$ docker-compose -v

2. Running a Container with Docker Compose

Create a directory called “ubuntu” for this exercise. We will use it to pull an image from Docker Hub; this will download the latest Ubuntu distribution as an image to the local machine.

$ mkdir ubuntu
$ cd ubuntu

Once you do the above, create a configuration file (docker-compose.yml) that tells Compose which image to run. Here the service is named docker-compose-test, which matches the output below:

docker-compose-test:
  image: ubuntu

Now execute the following:

$ docker-compose up      # as an interactive job
$ docker-compose up -d   # as a daemon job

The above reads the docker-compose.yml, pulls the relevant images, and starts the respective container.

Pulling docker-compose-test (ubuntu:latest)...
latest: Pulling from library/ubuntu
e0a742c2abfd: Pull complete
486cb8339a27: Pull complete
dc6f0d824617: Pull complete
4f7a5649a30e: Pull complete
672363445ad2: Pull complete
Digest: sha256:84c334414e2bfdcae99509a6add166bbb4fa4041dc3fa6af08046a66fed3005f
Status: Downloaded newer image for ubuntu:latest
Creating ubuntu_docker-compose-test_1
Attaching to ubuntu_docker-compose-test_1
ubuntu_docker-compose-test_1 exited with code 0

Now execute the following to see whether the ubuntu:latest image was downloaded and the container was created.

$ docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             SIZE
ubuntu                        latest              14f60031763d        4 days ago          120 MB
$ docker ps -a
CONTAINER ID        IMAGE                         COMMAND                  CREATED             STATUS                     PORTS                    NAMES
5705871fe7ed        ubuntu                        "/bin/bash"              2 minutes ago       Exited (0) 2 minutes ago                            ubuntu_docker-compose-test_1
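A single-image file like this barely scratches what Compose can orchestrate. As a sketch, the same v1 YAML syntax can describe several linked services; the web/redis service names, the port mapping, and the crishantha/static_web image below are hypothetical:

```yaml
# docker-compose.yml - a hypothetical two-service stack (v1 syntax)
web:
  image: crishantha/static_web
  ports:
    - "8080:80"   # host 8080 -> container 80
  links:
    - redis       # inter-container linkage: adds a "redis" host entry inside web
redis:
  image: redis
```

Running docker-compose up -d would then start both containers and wire them together.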


1. How to install Docker Compose on Ubuntu 16.04 LTS

2. How to install and use Docker Compose on Ubuntu 14.04 LTS

3. How To Configure a Continuous Integration Testing Environment with Docker and Docker Compose on Ubuntu 16.04

4. How To Install WordPress and PhpMyAdmin with Docker Compose on Ubuntu 14.04

5. Docker Compose (Official Web URL)


Docker on Ubuntu 16.04 LTS – [Part 03] Docker Networking

In my previous post on Docker images, we were able to run certain containers in the foreground. To recall it, here it is:

$ docker run -d -p 80 --name static_web crishantha/static_web  /usr/sbin/apache2ctl -D FOREGROUND

However, this container is not directly reachable on a well-known port from outside, since with -p 80 Docker publishes container port 80 on a random high host port. If you want to serve it publicly on a port you choose, you are required to bind the container's port 80 to a specific host port. For example, to map host port 80 to container port 80, we should execute the above command as follows:

$ docker run -d -p 80:80 --name static-web crishantha/static-web  /usr/sbin/apache2ctl -D FOREGROUND

Once you do the above, the container is reachable from outside on the host's IP. Hope this is clear now!


Docker on Ubuntu 16.04 LTS – [Part 02] – Images

In my previous article, I stopped at Docker container management. In this article I will touch on Docker images.

To run, a typical traditional Linux system basically needs two file systems:

  1. boot file system (bootfs)
  2. root file system (rootfs)

The bootfs contains the boot loader and the kernel. The user never makes any changes to the boot file system. In fact, soon after the boot process is complete, the entire kernel is in memory, and the boot file system is unmounted to free up the RAM associated with the initrd disk image.

The rootfs includes the typical directory structure we associate with Unix-like operating systems: /dev, /proc, /bin, /etc, /lib, /usr, and /tmp plus all the configuration files, binaries and libraries required to run user applications.

Here the root file system is mounted read-only and then switched to read-write after boot. In Docker, the root file system stays in read-only mode, and Docker takes advantage of a union mount to stack more read-only file systems onto the root file system so that they appear as a single file system. This gives complete control over all the file systems added to the Docker container. Finally, when a container is created/launched, Docker mounts a read-write file system on top of all the other file-system image layers. All the changes made to the underlying images are stored in this read-write layer, while the original copies are retained in the layers underneath without any changes written to them. This read-write layer + the other layers underneath + the base layer together form a Docker container.
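The lookup behaviour of that layered stack can be imitated with plain directories: a read walks the layers from top to bottom and the topmost copy of a file wins. A minimal sketch, with made-up layer names, not actual Docker internals:

```shell
# Simulate union-mount lookup order with ordinary directories
mkdir -p /tmp/layers/base /tmp/layers/rw
echo "from base image"      > /tmp/layers/base/a.txt
echo "from base image"      > /tmp/layers/base/b.txt
echo "changed in rw layer"  > /tmp/layers/rw/b.txt   # the copy-on-write copy

# Resolve a file the way a union mount would: topmost layer wins
resolve() {
  for layer in rw base; do
    [ -f "/tmp/layers/$layer/$1" ] && { cat "/tmp/layers/$layer/$1"; return; }
  done
  return 1
}

resolve a.txt   # falls through to the base layer
resolve b.txt   # served from the read-write layer
```

Unchanged files fall through to the base layer; modified files are answered by the read-write layer, which is exactly why the read-only layers stay pristine.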

In Part 01 of this article, we created a container with an ubuntu image. You can see all the available images by,

$ sudo docker images

ubuntu     latest 07f8e8c5e660 4 weeks ago 188.3 MB

Now you have the “latest” ubuntu image with you. If you want a specific version of the image, you need to specify it as a TAG, i.e. ubuntu:12.04. So let's try that now.

$ sudo docker run -t -i --name new_container ubuntu:12.04 /bin/bash

Now, check the image status

$ sudo docker images

ubuntu     latest 07f8e8c5e660 4 weeks ago 188.3 MB
ubuntu     12.04  ac6b0eaa3203 4 weeks ago 132.5 MB

Further, if you want to delete one of the created images you can use,

$ sudo docker rmi <image-id>

While interacting with multiple images, many unnamed and unwanted (dangling) images can get created. These can take a lot of disk space, hence it is required to purge them from the system periodically. Use the following to do the trick:

$ docker rmi $(docker images -q -f dangling=true)

Up to now, we used the docker run command to create containers. While creating a container, it downloads the given image from Docker Hub, and this download takes some time. To save this time when creating the container, you can take the alternate route of first pulling the required image from Docker Hub and then creating the container using the downloaded image. So here are the steps:

# Pulling the image from Docker Hub
$ sudo docker pull fedora

# Creating a Docker container using the pulled image
$ sudo docker run -i -t fedora /bin/bash

Now, if you check, you will have 3 containers.

$ sudo docker ps -a

86476cec9907 fedora:latest ---
4d8b96d1f8b1 ubuntu:12.04  ---
c607547adce2 ubuntu:latest ---

Building your own Docker Images

There are two ways to do this.

Method (1). Via docker commit command

Method (2). Via docker build command with a Dockerfile (This is the recommended method)

To test method (1), first create a container using an already pulled image and then do some alteration to the image and then execute docker commit.

# Creating a Docker container using an image
$ sudo docker run -i -t ubuntu:14.04 /bin/bash

# Alter the image (inside the container)
$ apt-get -yqq update
$ apt-get -y install apache2

# Committing the changes to the image.
# Here, crishantha is the account created in the Docker Hub
# repository (you may use Docker Hub or any other Docker repo)
# and 9b48a2b8850f is the container ID of the container.

$ sudo docker commit 9b48a2b8850f crishantha/apache2

# List the Docker images
# Here the altered image ID is shown
$ sudo docker images crishantha/apache2
crishantha/apache2 latest 0a33454e78e4 ....

To test method (2), create a Dockerfile in a given directory and specify the required changes for the image. For example, for an Ubuntu 14.04 image the Dockerfile can have the following lines: FROM pulls the ubuntu:14.04 base image, the RUN commands execute and add more layers on top of it, and EXPOSE exposes port 80 from the container.

Before running the build, it is good to create a new directory and create the Dockerfile within that directory. Here the directory is called static_web.

FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y apache2
EXPOSE 80

Once this is done, you can build the image from the Dockerfile by,

$ sudo docker build -t crishantha/static_web .

If all is successful, it will return an image ID, and you can further inspect it using docker images crishantha/static_web

Checking the Docker Image History

You can further check the history of the image by executing docker history <image Name/ image ID>

$ sudo docker history crishantha/static_web

Now you can execute the container by,

$ sudo docker run -d -p 80 --name static_web crishantha/static_web  /usr/sbin/apache2ctl -D FOREGROUND

The above will run as a detached process; executing docker ps will show it running in the background as a Docker process.

If you use Nginx instead of Apache2 as the web server, you may add nginx -g “daemon off;” to the command. The daemon off; directive tells Nginx to stay in the foreground. For containers this is useful, as the best practice is one container = one process: one server (container) runs only one service.
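As a sketch, a Dockerfile for such an Nginx image might look like the following (the base image choice here is an assumption, not from the original post):

```dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
EXPOSE 80
# Keep Nginx in the foreground so the container stays alive
CMD ["nginx", "-g", "daemon off;"]
```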

Pushing Docker Images

Once an image is created, we can always push it to a Docker repository. If you are registered with Docker Hub, it is quite easy to push your image to it. Since it is a public repository, anyone interested can then pull it to his/her own Docker host.

# If you have not already logged in; the username is
# the one you registered with Docker Hub
$ sudo docker login
Username: crishantha
Password: xxxxxxxxxx

# If login is successful
$ sudo docker push crishantha/static_web

If all successful, you may see it is available in the Docker Hub.

Pulling Docker Images

Once it is pushed to Docker Hub, you may pull it to any other instance which runs Docker.

$ sudo docker pull crishantha/static_web

Automated Builds in Docker Hub Repositories

In addition to pushing our images from our own setups to Docker Hub, it allows us to automate Docker image builds within Docker Hub by connecting to external repositories (private or public).

You can test this out by connecting your GitHub repository or Bitbucket repositories to Docker Hub. (Use the Add Repository –> Automated Build option in the Docker Hub to follow this process)

However, the Docker Hub automated builds should have a Dockerfile attached to it in the specific build folder. The build will go through based on the Dockerfile build that you specify here. Once the build is completed you can see the build log as well.


Docker on Ubuntu 16.04 LTS – [Part 01] – Installation and Containers

Docker is an open-source engine, released under the Apache 2 License, that automates the deployment of applications into containers. It adds an application deployment engine on top of a virtualized container execution environment. Docker aims to reduce the cycle time between code being written and code being tested, deployed, and used.

Core components of Docker:

  1. The Docker client and server
  2. Docker images
  3. Registries
  4. Docker Containers

Docker has a client-server architecture. The docker binary acts as both the client and the server. As a client, the docker binary sends requests to the Docker daemon, which processes them and returns the results.

Docker images are the building blocks, or the packaging aspect, of Docker. Basically, containers are launched from Docker images. These Docker images can be shared, stored, and updated easily and are considered highly portable.

Registries store the Docker images that you create. There are two types of Docker registries: 1) private and 2) public. Docker Hub is the public Docker registry maintained by Docker Inc.

Containers are the running and execution aspect of Docker.

Docker does not care what software resides within the container; each container is loaded in the same way as any other container. You can compare this to a shipping container, which is not much bothered about what it carries inside: it treats all the goods inside in the same way.

So the Docker containers are interchangeable, stackable, portable, and as generic as possible.

Docker can be run on any x64 host that is running a modern Linux kernel (version 3.10 or later is recommended).

The native Linux container format that Docker uses is libcontainer.

The Linux kernel namespaces provide the isolation (file system, processes, network) that Docker containers require.

  • File System Isolation – Each container is running its own “root” file system
  • Process Isolation – Each container is running its own process environment
  • Network Isolation – Separate virtual interfaces and IP addressing

Resources like CPU and memory allocation for each container are managed using cgroups (a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes).

Installing Docker on Ubuntu

Currently Docker is supported on a wide variety of Linux platforms including Ubuntu, RedHat (RHEL), Debian, CentOS, Fedora, Oracle Linux, etc.


The prerequisites are:

1. A 64-bit architecture (x86_64 and amd64 only); 32-bit is not supported.

2. Linux 3.8 Kernel or later version.

3. Kernel features such as cgroups and namespaces should be enabled.

Step 1 – Checking the Linux Kernel

In order to check the current Linux kernel version:

$ uname -r


So my Linux kernel is 4.4 and x86_64; since it is more recent than 3.8, it should support Docker easily.

But if your Ubuntu Linux Kernel is less than 3.8 you may try to install 3.8.

$ sudo apt-get update
$ sudo apt-get install linux-headers-3.8.0-27-generic linux-image-3.8.0-27-generic linux-headers-3.8.0-27

If the above headers are not available, you can refer to the Docker manuals on the web.

Once this is done, you are required to update grub and reboot the system:

$ sudo update-grub
$ sudo reboot

After rebooting, please check the Linux kernel version by typing uname -a or uname -r.

Step 2 – Installing Docker

Make sure APT can work with “https” repositories and that the CA certificates are installed:

$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates

Add the official Docker GPG key:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker APT repository to the /etc/apt/sources.list.d/docker.list file:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Now update the APT sources

$ sudo apt-get update
# Make sure you are installing from the Docker repositories and not from the Ubuntu repositories
$ apt-cache policy docker-ce

Finally, you are now in a position to install Docker and any additional packages using

$ sudo apt-get install -y docker-ce

Docker is now installed, the daemon is started, and the process is enabled to start on reboot. You may check its status using,

$ sudo systemctl status docker

You may get rid of typing “sudo” in every command by adding your user to the docker group (membership of which effectively grants root-level privileges through the Docker daemon).

$ sudo usermod -aG docker $(whoami)

Once you do the above, you have to log out and log in to the system again.

If all is OK, you should now check whether Docker was installed properly using

$ docker info

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 17.03.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 977c511eda0925a723debdc94d09459af49d082a
runc version: a01dafd48bc1c7cc12bdb01206f9fea7dd6feb70
init version: 949e6fa
Security Options:
 Profile: default
Kernel Version: 4.4.0-66-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 990.6 MiB

Step 6 – Creating a Docker Container

There are two types of Docker containers.

1. Interactive Docker Containers

2. Daemonized Docker Containers.

1. Interactive Docker Container

Once Docker is installed successfully, we can try and create a Docker container instance. Prior to that, it is good to check the Docker daemon status with the sudo systemctl status docker command.

If everything is alright, you can go ahead and create the Docker “interactive” container instance.

$ sudo docker run --name crish_container -i -t ubuntu /bin/bash

root@c607547adce2:/#

The above will create a container named crish_container from the ubuntu image. If you do not specify a name, the system will generate a name along with the unique container ID attached to it. Once created, you will be given an “interactive shell” like below.


Here c607547adce2 is the container ID. You can type exit to leave the container's interactive session. Once you have exited, you can see that the container is stopped: the container only runs as long as the interactive session (/bin/bash) is running. That is why these are called “interactive” Docker containers.

Now again you can check the docker status by,

$ docker info

Containers: 1
Images: 4
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 6
 Dirperm1 Supported: false
Execution Driver: native-0.2
Kernel Version: 3.13.0-32-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 2
Total Memory: 1.955 GiB
Name: crishantha
WARNING: No swap limit support

2. Daemonized Docker Container

Other than the interactive Docker instance, there is another type called “daemonized containers”. These can be utilized to execute long-running jobs, in which you will not get an interactive session.

You can create a daemonized container by giving it a long-running command (a plain /bin/sh would exit immediately, so a looping command is used here):

$ docker run --rm --name crish_daemon -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"

However, these daemonized sessions stay in the background, and you may not be able to reattach to them as an “interactive” Docker session.

Deviating from the above example, if you want to run an “nginx” container on port 8080 you can execute the following:

$ docker run -d -p 8080:80 --restart=always nginx

Now try docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
65438050ceb8        nginx               "nginx -g 'daemon of…"   5 seconds ago       Up 2 seconds        0.0.0.0:8080->80/tcp     fervent_gates

You can see that container port 80 is mapped to host port 8080, and this container is running as a daemon (background).

The above will be reachable on port 8080 in your web browser; try localhost:8080.

You can run another container from the same image on any other port you define as well. For example, you can run one on 8000 using the following:

$ docker run -d -p 8000:80 --restart=always nginx

Both URLs now serve nginx, from two containers created from the same image. You can again try docker ps and see; you will see two entries, one per container.

P.Note: --restart=always allows the Docker daemon to restart the container once the system is restarted after a shutdown. If you do not specify this, you are required to restart the container manually.

Step 7 – Display the Container List

To show all containers in the system (running and not running)

$ docker ps -a

If you feel that the containers are not required to stay around after running, you can use the --rm flag while executing the docker run command.

To show all the running containers,

$ docker ps

Step 8 – Attach to a container

A container created with the docker run command keeps the same options that you specified when you later start it and reattach to it. The interactive session simply waits on the running container. You may use either of the following to reattach:

$ docker attach crish_container
$ docker attach c607547adce2

Here c607547adce2 is the <container_ID>

Note: Sometimes you are required to press ENTER key to show the bash shell once you execute the attach command.

Step 9 – Extract the Container IP

There is no straightforward single-purpose command to get the IP of the container you are running, but you may use the following:

$ docker inspect <CONTAINER ID> | grep -w "IPAddress" | awk '{ print $2 }' | head -n 1 | cut -d "," -f1
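To see what that pipeline is doing, here it is run against a trimmed, hypothetical fragment of docker inspect output (no Docker daemon needed):

```shell
# A two-line sample in the shape of `docker inspect` JSON output
cat > /tmp/inspect-sample.txt <<'EOF'
        "IPAddress": "172.17.0.2",
        "IPPrefixLen": 16,
EOF

# Same pipeline as above: keep the value field of the first IPAddress line
grep -w "IPAddress" /tmp/inspect-sample.txt | awk '{ print $2 }' | head -n 1 | cut -d "," -f1
```

Note that docker inspect also supports a --format/-f Go-template flag, which can extract the address directly without the grep/awk chain.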

Step 10 – Starting and Stopping a container

To start

$ docker start crish_container

To stop

$ docker stop crish_container

Step 11 – Deleting a container

$ docker rm crish_container

Step 12 – Deleting all stopped containers

$ docker system prune

Note that docker system prune also removes unused networks and dangling images; use docker container prune to remove only the stopped containers.

Building SaaS based Cloud Applications


According to NIST, “cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” (NIST, 2011).

The traditional approach incurs a huge upfront capital expenditure along with too much excess capacity, since capacity cannot easily be predicted from market demand. The cloud computing approach can provision computing capabilities without requiring human interaction with the service provider. In addition, its broad network access, resource pooling, elasticity, and measured services are a few of the characteristics that overpower the traditional hardware approach. As benefits, it can drastically cut down procurement lead time and produce better scalability and substantial cost savings (no capital cost, pay for what you use) with fewer management headaches in terms of operational costs.

Cloud Service Models

There are three (03) basic cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

SaaS Service Model

In the SaaS cloud service model, the consumer does not manage or control the underlying cloud infrastructure (network, servers, operating systems, storage, or even individual application capabilities), with the possible exception of limited user-specific application configuration settings. A typical SaaS-based application primarily provides multi-tenancy, scalability, customization, and resource pooling features to the client.


Multi-tenancy is the ability for multiple customers (tenants) to share the same applications and/or compute resources. A single software application can be used and customized by different organizations as if they each had a separate instance, yet a single shared stack of software or hardware is used. Further, it ensures that each tenant's data and customizations remain secure and insulated from the activity of all other tenants.

Multi-tenancy Models

There are three basic multi-tenancy models.

1. Separate applications and separate databases

2. One shared application and separate databases

3. One shared application and one shared database

Figure 1 – Multi-tenancy Models

Multi-tenancy Data Architectures

According to Figure 1, there are three types of multi-tenancy data architectures based on the way data is being stored.

1. Separate Databases

In this approach, each tenant's data is stored in a separate database, ensuring that a tenant can access only its own database. A separate database connection pool should be set up per tenant, and the pool is selected based on the tenant ID associated with the logged-in user.

2. Shared Database – Separate Schemas

In this approach, the data is stored in separate schemas in a single database, one for each tenant. Similar to the first approach, separate connection pools can be created for each database schema. Alternatively, a single connection pool can be used, and the relevant schema is selected based on the connection's tenant ID (i.e. using the SET SCHEMA SQL command).

3. Shared Database – Shared Schema (Horizontally Partitioned)

In this approach, the data of all tenants is stored in a single database schema. Tenants are separated by a tenant ID, which is represented by a separate column in each table in the schema. Only one connection pool is configured at the application level. Based on the tenant ID, the tables should be partitioned (horizontally) or indexed to speed up performance.
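The effect of that discriminator column can be sketched with a flat file standing in for the shared table (the table contents and tenant names below are made up); every read is scoped by the tenant_id column, just as every SQL query would carry a WHERE tenant_id = ? clause:

```shell
# orders "table": the first column (tenant_id) is the shared discriminator
cat > /tmp/orders.csv <<'EOF'
t1,1001,250
t2,1002,75
t1,1003,120
EOF

# A "query" scoped to tenant t1 (awk plays the WHERE tenant_id = 't1' clause)
awk -F',' '$1 == "t1" { print $2 }' /tmp/orders.csv
```

Only the order IDs belonging to tenant t1 come back; tenant t2's row never leaves the filter.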

Figure 2 – Multi-tenancy Data Architecture


1. NIST (2011), The NIST Definition of Cloud Computing, NIST Special Publication 800-145.


About Mesosphere



The Mesosphere DCOS (Data Center Operating System) is a new kind of operating system that organizes all of your machines, VMs, and cloud instances into a single pool of intelligently and dynamically shared resources. It runs on top of and enhances any modern version of Linux.

The Mesosphere DCOS is highly-available and fault-tolerant, and runs in both private and public clouds (in Data centers).

The Mesosphere DCOS includes a distributed systems kernel with enterprise-grade security, based on Apache Mesos. Data center services are available through both public and private repositories. Build your own data center services or use data center services built and supported by Mesosphere and third parties. Data center services include Apache Spark, Apache Cassandra, Apache Kafka, Apache Hadoop, Apache YARN, Apache HDFS, Google’s Kubernetes, and more.

The Mesosphere DCOS runs containerized workloads at scale, managing resource isolation (including memory, processor, and network) and optimizing resource utilization. It is highly adaptable, employing plug-ins for native Linux containers, Docker containers, and other emerging container systems. Automating standard operations, the Mesosphere DCOS is highly-elastic, highly-available and fault-tolerant.

Apache Mesos, on which the DCOS is based, was invented at UC Berkeley’s AMPLab and is used at large scale in production at companies like Twitter, Netflix and Airbnb.

Apache Mesos


Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to easily be built and run effectively.


1. Scale like Twitter with Apache Mesos

2. An Introduction to Mesosphere

3. How to configure a production ready Mesosphere cluster on Ubuntu 14.04

4. Mesosphere Introductory Course


Cloud Computing Patterns – Part 2

In one of my previous blogs on this subject, I mentioned a web site maintained by the well-known technical writer Mr. Gregor Hohpe. As you might expect, I have now found another web site on the same subject.

FYI: As far as I know, there is no global reference related to this subject, but it is always good to be mindful of what is already available.


Generating public-private key pairs with ssh-keygen

If you are into enterprise application development, you must have come across the requirement of “generating public-private key pairs”. The purpose can be multiple. Mostly you need this in order to authenticate a desired client.

Usually, for this task you tend to fall back on tools like keytool or OpenSSL. However, these have capabilities well beyond creating “public-private key pairs”: they can also manage generated keys and certificates in production systems.

However, if you have a simple authentication requirement, the above tools may not be the ideal choices, mainly because there are other tools which can generate “public-private key pairs” without much hassle. One popular choice is the ssh-keygen tool, which is available on Unix/Linux distributions.

For example, if you want to authenticate yourself to a remote server without username-password authentication, you are required to create a “public-private key pair” to authenticate between the two entities (authenticating an AWS user to an EC2 instance is a good example). In these scenarios, ssh-keygen can be very handy. After creating the key pair, just send the public key to any party who is willing to authenticate your machine. That party adds your public key to the server instance (on Unix-like systems, append the public key to the “authorized_keys” file in the .ssh directory of the home folder). Thereafter, just “ssh” to the particular remote server/instance specifying the private key as an argument. That's it!
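The flow above can be sketched end to end with a throwaway key pair (the /tmp paths and the remote host name are hypothetical):

```shell
# Start clean so ssh-keygen does not prompt to overwrite
rm -f /tmp/demo_key /tmp/demo_key.pub /tmp/authorized_keys

# 1. Generate an RSA key pair with no passphrase (-N "") at /tmp/demo_key
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/demo_key -q

# 2. The server-side step: append the public half to authorized_keys
cat /tmp/demo_key.pub >> /tmp/authorized_keys

# 3. The client-side step: authenticate using the private half
#    ssh -i /tmp/demo_key user@remote-host
```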

However, on Windows systems puttygen is the tool widely used, and the public keys extracted from this tool will not work properly during user authentication. The workaround is to change the format of the public key in authorized_keys.

For example, the initial public key generated by puttygen is a multi-line SSH2-format block whose comment line looks like the following,

Comment: "rsa-key-20110829"

In this example, just remove the header, footer, and comment lines and the EOL characters, and add an “ssh-rsa ” string to the beginning.

ssh-rsa AB3NzaC1yc2EAAAABJQAAAIEAknX1EwDVO826fSyAxVOkruwwG8AWNjsw4FXzXrN6FClXU7BegOziTlL1jG0oPOHMrxx9ciJ38RMQwWUn3UvsrHKMu6qetf1kbP0b77Md4fJvxgPnxAM6yVYZrt7Nw/Q0MtObYdqFVS/4kx+JM= <user-name>

This will eliminate the authentication issue that you probably had.
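The conversion can be scripted. Here is a sketch against a shortened, hypothetical puttygen export (a real key body is much longer):

```shell
# A shortened SSH2-format export, as puttygen produces it
cat > /tmp/putty_key.pub <<'EOF'
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "rsa-key-20110829"
AB3NzaC1yc2EAAAABJQAAAIEAknX1
EwDVO826fSyAxVOkruwwG8AWNjsw
---- END SSH2 PUBLIC KEY ----
EOF

# Drop header/footer/comment lines, join the body, prefix "ssh-rsa "
body=$(grep -v -e '^----' -e '^Comment:' /tmp/putty_key.pub | tr -d '\n')
printf 'ssh-rsa %s <user-name>\n' "$body"
```

The printed line is in the one-line OpenSSH format that authorized_keys expects.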


Amazon EC2 internal IP changes – How to overcome?

Anybody migrating existing enterprise applications to the cloud will come across several design-level or deployment issues compared to the normal server setups we are accustomed to. AWS EC2 is one of the popular enterprise cloud solutions in the market today, and while deploying an existing enterprise application to an AWS setup, the “dynamic IP issue” is a well-known problem. In a nutshell, all AWS EC2 instances have a dynamic internal IP attached to them: whenever an instance is shut down for some reason, the previously allocated IP is released and a new internal IP is assigned when it boots up. In a traditional enterprise environment this is not what you are used to, so it is required to overcome the issue with some other mechanism. This post covers that scenario; it is based on an article published on the web (see the reference) and adds certain deployment facts which were not mentioned there. The example here uses an Ubuntu 11 server AWS instance.

So here are the steps:

1. Create a normal user in AWS IAM (Identity and Access Management). (P.Note: It is not required to create a user group here. Just create a normal EC2 read-only user.)

2. Create a user policy as follows and attach it to the user. This user is then only able to access the instance details allowed by the policy.
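The policy JSON itself has not survived in this copy of the post. A minimal sketch matching that description (read-only EC2 Describe access; the exact statement is an assumption, not the original policy) would be:

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:Describe*"],
      "Resource": "*"
    }
  ]
}
```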


3. Remember the AccessKeyID and the Secret Access Key for the account. Now this user should be able to retrieve information about your running instances.

4. Now it is time to write the script. This script was extracted from the reference given below; all credit goes to its author for publishing it.

Add a script called getinternalips to the /usr/local/bin directory. The file looks like this:


  <?php
  require_once( 'AWSSDKforPHP/sdk.class.php' );

  // Credentials of the read-only IAM user created above
  $cred = array( 'key' => 'AGHDFSDDFGDGDSYKDA', 'secret' => 'H%#$%SFSF+sdsfsfgsTm/9CwEdF' );

  $oEC2 = new AmazonEC2( $cred );
  $oEC2->set_region( AmazonEC2::REGION_APAC_SE1 );

  $oResponse = $oEC2->describe_instances();
  if( !$oResponse->isOK() ) {
    exit( 1 );
  }

  foreach( $oResponse->body->reservationSet->item as $oReservationSet ) {
    foreach( $oReservationSet->instancesSet->item as $oInstance ) {

      $sInstanceState = $oInstance->instanceState->name;
      $sInstanceId    = $oInstance->instanceId;
      $alias          = $oInstance->tagSet->item->value;

      // Print "<private-ip><TAB><name-tag>" for every running instance
      if( !strcmp( $sInstanceState, 'running' ) ) {
        $sPrivateIpAddress = $oInstance->privateIpAddress;
        echo "$sPrivateIpAddress\t$alias\n";
      }
    }
  }
(Here you need to change the region of the instance (we use the Singapore region, REGION_APAC_SE1) and the Access Key ID and Secret Access Key in $cred according to your setup.)

5. Change the permissions of the above file:

chmod 755 /usr/local/bin/getinternalips

6. Change the hostname of the instance by editing the /etc/hostname file.
7. Then execute "/etc/init.d/hostname restart" to restart the hostname service.
8. Now reboot/stop the instance.
9. When you log in to the instance again, you will see the new hostname in the shell prompt.

10. Now the AWS SDK has to be pulled in through PEAR, so PEAR needs to be installed in the instance first.

sudo apt-get install php-pear

Then discover the AWS SDK channel using PEAR (at the time of writing, that channel was pear.amazonwebservices.com):

sudo pear channel-discover pear.amazonwebservices.com

11. Now it is time to install the AWS SDK. Again, PEAR is used for this task.

sudo pear install aws/sdk

12. Finally, it is time to execute the getinternalips script. Before that, install the php5-curl module if it is not already present, since the script requires it.

sudo apt-get install php5-curl

13. Copy the /etc/hosts file to a new file called /etc/hosts.internalipupdater. This keeps a pristine copy of /etc/hosts before any amendments are made to it.

cp /etc/hosts /etc/hosts.internalipupdater

14. Execute the /usr/local/bin/getinternalips script. You will now see how the internal IPs are mapped to their host names. However, if you want the script output to update the /etc/hosts file, you need to execute the following command.

cat /etc/hosts.internalipupdater > /etc/hosts && /usr/local/bin/getinternalips >> /etc/hosts 2> /dev/null

15. If you want to schedule this process, add it as a cron job or to rc.local.

16. If you use cron, add the following line by executing crontab -e:

*/2 * * * * cat /etc/hosts.internalipupdater > /etc/hosts && /usr/local/bin/getinternalips >> /etc/hosts 2> /dev/null

17. Now examine the /etc/hosts file by executing the tail -f /etc/hosts command. You will see that all the AWS instances are mapped and added to it. If the IP of any instance changes, the change is reflected here as well (the cron job repeats the process at the given interval, every 2 minutes in this example). Good luck!
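The refresh cycle in steps 13–16 can be dry-run with throwaway files; in the sketch below the paths, IPs, and host names are made up for illustration, and getinternalips is stubbed out with a shell function:

```shell
#!/bin/sh
# Dry run of the /etc/hosts refresh cycle using throwaway files.
set -e

base=$(mktemp)    # stands in for /etc/hosts.internalipupdater
hosts=$(mktemp)   # stands in for /etc/hosts

# the pristine copy taken in step 13
printf '127.0.0.1\tlocalhost\n' > "$base"

# stub standing in for /usr/local/bin/getinternalips
getinternalips() {
  printf '10.130.1.15\tapp-node\n'
  printf '10.130.2.7\tdb-node\n'
}

# same pattern as the cron job in step 16: restore the pristine
# copy first, then append the freshly fetched IP/name mappings
cat "$base" > "$hosts" && getinternalips >> "$hosts"

cat "$hosts"
```

Restoring from the pristine copy before appending is what keeps stale IP mappings from accumulating across runs.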



MySQL NDB Clustering

If you are a Linux user, you may be used to apt-getting a MySQL binary installation in a few seconds. However, if you have a clustering requirement, it is advisable to download the binaries and use them instead, mainly because nowadays the main MySQL binary package is not bundled with the clustering module. You can download the binaries from the official Oracle/MySQL site. (In this tutorial, we use the binary mysql-cluster-gpl-7.1.15-linux-i686-glibc23.tar.gz.)

Step 1:

Add the mysql group and user (if they are not already there):

shell>> groupadd mysql
shell>> useradd mysql
shell>> usermod -G mysql mysql

Step 2:

Extract the downloaded bundle, preferably to the /usr/local folder:

shell>> cd /usr/local
shell>> gunzip < /usr/local/mysql-cluster-gpl-7.1.15-linux-i686-glibc23.tar.gz | tar xvf -
shell>> ln -s mysql-cluster-gpl-7.1.15-linux-i686-glibc23 mysql

Step 3:

Change the ownership of the extracted folder so that the distribution contents are accessible to the mysql user. Execute the following commands as root in the installation directory:

shell>> cd mysql
shell>> chown -R mysql .
shell>> chgrp -R mysql .

Step 4:

If you have not installed MySQL before, you must create the MySQL data directory and initialize the grant tables:

shell>> scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --ldata=/usr/local/mysql/data

Step 5:

Most of the MySQL installation can be owned by root if you like. The exception is that the data directory must be owned by mysql. To accomplish this, run the following commands as root in the installation directory:

shell>> chown -R root .
shell>> chown -R mysql data

Step 6:

Copy the MySQL server configuration file to /etc/mysql/my.cnf

shell>> cp /usr/local/mysql/support-files/my-medium.cnf /etc/mysql/my.cnf

Step 7:

Now you are ready to start the MySQL server manually. Use the following command.

shell>> bin/mysqld_safe --user=mysql &

Step 8:

If you want MySQL to start as a service at boot, add the relevant script to the /etc/init.d folder:

shell>> cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql

However, in order to run it at startup, you need to add it to the appropriate run level as shown below. You can check the run level of your Linux distribution by simply entering the "runlevel" command.

Here, if your Linux distribution runs under run level 2, you need a link in the /etc/rc2.d folder (we name the link S98mysql):

shell>> ln -s /etc/init.d/mysql /etc/rc2.d/S98mysql

Now try to start MySQL by executing /etc/init.d/mysql start. The server should come up. Initially, the password of the "root" user is blank; if you need to change it, you can use the following command.

shell>> /usr/local/mysql/bin/mysqladmin -u root password 'new-password'

P.Note: you need to add the MySQL installation directory to your PATH to be able to execute the "mysql" command on the command line.

For example, on Ubuntu you would set the PATH variable as below (assuming the MySQL installation directory is /usr/local/mysql):

export PATH=/usr/local/mysql/bin:$PATH

OK! Your system now has a MySQL installation with clustering capability. However, it is still not configured to function as a cluster; for that, you need to set the proper configurations.

Configuring the clustered environment

Let's say you are going to set up a cluster between two instances (cluster-node-01 and cluster-node-02).

Required Setup:

Management Server Nodes – 2

Data Nodes – 2

MySQL Nodes – 2

Creating the config.ini file in each instance:

Create the directory /etc/mysql in both server instances and create a config.ini file with the following content. (You may use the sample config file shipped with the MySQL Cluster release, located at MySQL_Cluster_Home/support-files/config.medium.ini.)

In this scenario, both server instances have the same config.ini.

[ndbd default]
NoOfReplicas: 2
DataDir: /usr/local/mysql/mysql-cluster

# Data Memory, Index Memory, and String Memory
DataMemory: 800M
IndexMemory: 100M
BackupMemory: 64M

# Transaction Parameters
MaxNoOfConcurrentOperations: 100000
MaxNoOfLocalOperations: 100000

# Buffering and Logging
RedoBuffer: 16M

# Logging and Checkpointing
NoOfFragmentLogFiles: 200

# Metadata Objects
MaxNoOfAttributes: 500
MaxNoOfTables: 100

# Scans and Buffering
MaxNoOfConcurrentScans: 100

[ndb_mgmd default]
PortNumber: 1186
DataDir: /usr/local/mysql/mysql-cluster

[ndb_mgmd]
Id: 1
HostName: cluster-node-01
ArbitrationRank: 1

[ndb_mgmd]
Id: 2
HostName: cluster-node-02
ArbitrationRank: 1

[ndbd]
Id: 3
HostName: cluster-node-01

[ndbd]
Id: 4
HostName: cluster-node-02

[mysqld]
Id: 5
ArbitrationRank: 2

[mysqld]
Id: 6
ArbitrationRank: 2

Creating the my.cnf file in each instance:

Create a my.cnf file in /etc/mysql/ and provide the cluster-related extra configuration on top of the default my.cnf settings. (You may use the sample config file shipped with the MySQL Cluster release, located at MySQL_Cluster_Home/support-files/my-medium.cnf.)

instance 01:



# cluster-specific settings

# provide ndb_mgmd settings

# provide ndbd settings

instance 02:


# cluster-specific settings

# provide ndb_mgmd settings

# provide ndbd settings
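The cluster-specific settings themselves did not survive in this post; as a rough sketch of what those my.cnf additions typically look like (the connect-string host names are the two nodes from this setup — verify everything against the bundled my-medium.cnf):

```ini
[mysqld]
# enable the NDB storage engine and point the SQL node at the management nodes
ndbcluster
ndb-connectstring = cluster-node-01,cluster-node-02

[mysql_cluster]
# read by the ndbd and ndb_mgm client processes
ndb-connectstring = cluster-node-01,cluster-node-02

[ndb_mgmd]
# the management server reads the cluster layout from config.ini
config-file = /etc/mysql/config.ini
```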

Finally the nodes should be started in the following order:

1. Management Nodes (using “ndb_mgmd” command)

2. Data Nodes (using “ndbd” command)

3. MySQL nodes (using “mysqld”) / Use /etc/init.d/mysql restart

In order to monitor the above procedure, you can use the "ndb_mgm" command.
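The start order above can be captured in a small wrapper script; in the sketch below the commands are echoed rather than executed, so the sequence can be illustrated without a live cluster (the config-file path is the one used earlier):

```shell
#!/bin/sh
# Illustrate the cluster start order; commands are echoed, not executed.
seq=""
run() { seq="$seq $1"; echo "would run: $*"; }

run ndb_mgmd -f /etc/mysql/config.ini   # 1. management node(s) first
run ndbd                                # 2. then the data node(s)
run /etc/init.d/mysql start             # 3. finally the SQL node(s)
run ndb_mgm -e show                     # monitor the cluster state
```

Drop the run wrapper to execute the sequence for real on the respective nodes; the order matters because the data nodes register with the management node, and the SQL nodes in turn connect to the data nodes.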

Testing the cluster

1. Create a table with NDB clustering capability in one of the cluster instances (use engine=ndb). Let's say it is instance 01.

2. Insert some records to it.

3. Then check the other instance for the automatic replication.
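The three test steps boil down to statements like the following (the table and column names are made up for illustration; the ENGINE clause is the part that matters):

```sql
-- On instance 01: any table created with the NDB engine is clustered
CREATE TABLE cluster_test (
  id   INT NOT NULL PRIMARY KEY,
  note VARCHAR(64)
) ENGINE=NDBCLUSTER;

-- Step 2: insert some records
INSERT INTO cluster_test VALUES (1, 'hello'), (2, 'world');

-- Step 3: run this on instance 02; the rows should already be there
SELECT * FROM cluster_test;
```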

If it works, then congratulations!

