Building SaaS-based Cloud Applications

Introduction

According to NIST, “cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” (NIST, 2011).

The traditional approach incurs a huge upfront capital expenditure and usually ends up with excess capacity, because it is difficult to predict capacity based on market demand. The cloud computing approach can provision computing capabilities without requiring human interaction with the service provider. In addition, broad network access, resource pooling, elasticity and measured service are a few of the characteristics that give it the edge over the traditional hardware approach. As benefits, it can drastically cut procurement lead time, scale better, and deliver substantial cost savings (no capital cost, pay for what you use) with fewer operational management headaches.

Cloud Service Models

There are three basic cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).

SaaS Service Model

In the SaaS cloud service model, the consumer does not manage or control the underlying cloud infrastructure, including the network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. A typical SaaS-based application primarily provides multi-tenancy, scalability, customization and resource pooling features to the client.

Multi-tenancy

Multi-tenancy is the ability for multiple customers (tenants) to share the same application and/or compute resources. A single application can be used and customized by different organizations as if each had its own separate instance, yet a single shared stack of software and hardware is used. Further, multi-tenancy ensures that each tenant's data and customizations remain secure and insulated from the activity of all other tenants.

Multi-tenancy Models

There are three basic multi-tenancy models.

1. Separate applications and separate databases

2. One shared application and separate databases

3. One shared application and one shared database

Figure 1 – Multi-tenancy Models

Multi-tenancy Data Architectures

According to Figure 1, there are three types of multi-tenancy data architectures based on the way data is being stored.

1. Separate Databases

In this approach, each tenant's data is stored in a separate database, ensuring that a tenant can access only its own database. A separate database connection pool is set up per tenant, and the application selects the pool based on the tenant ID associated with the logged-in user.

2. Shared Database – Separate Schemas

In this approach, the data for each tenant is stored in a separate schema within a single database. As in the first approach, a separate connection pool can be created for each schema. Alternatively, a single connection pool can be used, with the relevant schema selected per connection based on the tenant ID (e.g. using the SET SCHEMA SQL command).

3. Shared Database – Shared Schema (Horizontally Partitioned)

In this approach, the data for all tenants is stored in a single database schema. Tenants are separated by a tenant ID, represented as a dedicated column in each table of the schema. Only one connection pool is configured at the application level. The tables should be partitioned (horizontally) or indexed on the tenant ID to keep query performance acceptable. A short sketch of all three variants follows.
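Assuming PostgreSQL, a database named saas_db, a tenant ID of 42 and an invoices table (all hypothetical names, not from the text above), the three variants could look like this at the SQL level:

# Approach 1 - separate database per tenant: pick the database (and its connection pool) by tenant ID
$ psql -d tenant_42_db -c "SELECT * FROM invoices;"

# Approach 2 - one database, one schema per tenant: switch the schema on the connection
$ psql -d saas_db -c "SET search_path TO tenant_42; SELECT * FROM invoices;"

# Approach 3 - one shared schema: every table carries a tenant_id column and every query filters on it
$ psql -d saas_db -c "SELECT * FROM invoices WHERE tenant_id = 42;"

In PostgreSQL, the SET SCHEMA command mentioned above is an alias for SET search_path; other databases expose a similar per-connection schema switch.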

Figure 2 – Multi-tenancy Data Architecture

References

1. NIST (2011). The NIST Definition of Cloud Computing. https://www.nist.gov/news-events/news/2011/10/final-version-nist-cloud-computing-definition-published


Robotic Process Automation (RPA)

Introduction

Robotic Process Automation (RPA) refers to the use of software “robots” that are trained to mimic human actions on the user interface of applications, and then reenact these actions automatically.

RPA technology can be applied to a wide range of areas:

  • Process Automation
    • Mimics the steps of a rules-based process without compromising the existing IT architecture; such robots carry out “prescribed” functions and easily scale up or down based on demand
    • Back-office processes (Finance, Procurement, SCM, Accounting, Customer Service, HR) can be expedited through automation of data entry, purchase order issuing, creation of online access credentials, and other complex, connected business processes
  • IT Support and Management
    • Processes in IT infrastructure management such as Service Desk Management, Network Monitoring
  • Automated Assistant
    • Call center software

Benefits

There are several benefits that RPA can bring in.

1. Cost and Speed – On average, an RPA robot costs about a third of an FTE (Full Time Equivalent)

2. Scalability – Additional robots can be deployed quickly with minimal expenditure; tens, hundreds or thousands of robots can be trained at the same time through workflow creation

3. Accuracy – Eliminates human error

4. Analytics – RPA tools can provide process analytics with ease

Steps Involved

1. Identify the business processes involved in your organization

2. Constantly map how employees interact with the current business processes (there is no process re-engineering involved)

3. Start with the business processes that require simple data transfers, then move to complex ones (data manipulation plus data transfer)

RPA Solutions

1. No proper “Open Source” solutions so far.

2. UiPath (http://www.uipath.com/community) – Community Edition (free for non-commercial use)

3. Python based Automation – https://automatetheboringstuff.com/

4. Robot Framework – http://robotframework.org/ – open source, but still maturing

5. Pega Open Span – Commercial Version - https://www.pega.com/products/pega-7-platform/openspan


Citrix ICAClient (Client Receiver) 13 on Ubuntu 14.04

If you are planning to use Citrix ICAClient (Client Receiver) on Ubuntu 14.04 LTS, you are sure to encounter a few issues, at the very least an SSL error.

I too encountered many issues and finally ended up resolving them thanks to the Ubuntu Community Help page. Generally, Citrix ICAClient works pretty well on Windows, but not so smoothly on Linux and Macs.

If you are an Ubuntu 14.04 LTS user, I am sure the following link will help you resolve most of those issues. Just execute the commands listed there and you should be able to sort it out.

Link: https://help.ubuntu.com/community/CitrixICAClientHowTo


Securing Apache with SSL on Ubuntu 14

Prerequisites
$ sudo apt-get update
$ sudo apt-get install apache2
Activate the SSL Module
$ sudo a2enmod ssl
$ sudo service apache2 restart
Create a Self Signed SSL Certificate
You are required to create a self-signed certificate and reference it from the Apache SSL configuration. You may create it at any preferred location; here it is kept in a new directory, /etc/apache2/ssl.
$ sudo mkdir /etc/apache2/ssl
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt
Configure Apache to use SSL
Edit default-ssl.conf (in /etc/apache2/sites-available), the file that contains the default SSL virtual host configuration.
<IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerAdmin admin@example.com
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/html
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        SSLEngine on
        SSLCertificateFile /etc/apache2/ssl/apache.crt
        SSLCertificateKeyFile /etc/apache2/ssl/apache.key
        <FilesMatch "\.(cgi|shtml|phtml|php)$">
                        SSLOptions +StdEnvVars
        </FilesMatch>
        <Directory /usr/lib/cgi-bin>
                        SSLOptions +StdEnvVars
        </Directory>
        BrowserMatch "MSIE [2-6]" \
                        nokeepalive ssl-unclean-shutdown \
                        downgrade-1.0 force-response-1.0
        BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
    </VirtualHost>
</IfModule>
Activate the SSL Virtual Host

$ sudo a2ensite default-ssl.conf
$ sudo service apache2 restart

Test the Virtual Host with SSL
Now you can test the application with https://<your-domain> and it should work. Since the certificate is self-signed, your browser will warn you before proceeding.
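For example, from the server itself you could verify the handshake with curl or openssl (the -k flag is needed because the self-signed certificate is not trusted by default):

$ curl -k https://localhost
$ openssl s_client -connect localhost:443 < /dev/null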

Tomcat Startup Script – Ubuntu 14.04 LTS

Environment : Ubuntu 14.04 LTS

Prerequisites: Java and Tomcat installed on your machine/instance. JAVA_HOME should already be set before you start.
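A quick way to confirm the prerequisite (the path shown is only an example and will differ on your machine):

$ echo $JAVA_HOME
/usr/lib/jvm/java-7-openjdk-amd64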

Step 1: Create a file called “tomcat” under the /etc/init.d folder with the contents below.

#!/bin/bash
#
# tomcat
#
# chkconfig:
# description:  Start up the Tomcat servlet engine.

# Tomcat installation directory (adjust to match your installation)
TOMCAT_DIR=/home/crishantha/lib/apache-tomcat-7.0.63

case "$1" in
 start)
   $TOMCAT_DIR/bin/startup.sh
   ;;
 stop)
   $TOMCAT_DIR/bin/shutdown.sh
   sleep 10
   ;;
 restart)
   $TOMCAT_DIR/bin/shutdown.sh
   sleep 20
   $TOMCAT_DIR/bin/startup.sh
   ;;
 *)
   echo "Usage: tomcat {start|stop|restart}" >&2
   exit 3
   ;;
esac

Step 2: Make the script executable

sudo chmod a+x tomcat

Step 3: Test the above script by executing the commands below

sudo ./tomcat start
sudo ./tomcat stop

Step 4: Register the above script as an init script. The following makes sure "start" or "stop" is executed at the appropriate system run levels. By default, start happens on run levels 2, 3, 4 and 5, and stop happens on run levels 0, 1 and 6.

sudo update-rc.d tomcat defaults

Step 5: Now reboot the machine/instance to verify that everything works.


Why Apache Thrift?

Thrift is an interface definition language (IDL) and binary communication protocol used as an RPC framework. It was developed by Facebook and is now maintained as an open-source project by the Apache Software Foundation.

Thrift is used by many popular projects and companies, including Facebook, Hadoop, Cassandra, HBase and Hypertable.

Why Thrift over JSON?

Thrift is lightweight and language independent, and supports data transport/serialization for building cross-language services.

It has libraries supporting languages such as C++, Java, Python, Ruby, Erlang, Perl, Haskell, C#, JavaScript and Node.js.

1) Strong Typing

JSON is great if you are working with scripting languages like Python, Ruby, PHP, Javascript, etc. However, if you’re building significant portions of your application in a strongly-typed language like C++ or Java, JSON often becomes a bit of a headache to work with. Thrift lets you transparently work with strong, native types and also provides a mechanism for throwing application-level exceptions across the wire.

2) Performance

Performance is one of Thrift’s main design considerations. JSON is much more geared towards human readability, which comes at the cost of being more CPU intensive to work with.

3) Serialization

If you are serializing large amounts of data, Thrift’s binary protocol is more efficient than JSON.

4) Versioning Support

Thrift has inbuilt mechanisms for versioning data. This can be very helpful in a distributed environment where your service interfaces may change, but you cannot atomically update all your client and server code.

5) Server Implementation

Thrift includes RPC server implementations for a number of languages. Because they are streamlined to support just Thrift requests, they are lighter weight and higher performance than a typical HTTP server stack serving JSON.

Advantages

Thrift generates both the server and client interfaces for a given service, so client calls are more consistent and generally less error prone. Thrift also supports various protocols and transports, not just HTTP; if you are dealing with large volumes of service calls, or have bandwidth requirements, the client and server can transparently switch to more efficient transports.
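For example, assuming the Thrift compiler is installed and a hypothetical calculator.thrift IDL file describes the service, a single definition generates matching stubs for several languages:

# generate client and server stubs from one IDL definition
$ thrift --gen java calculator.thrift
$ thrift --gen py calculator.thrift

$ ls
calculator.thrift  gen-java  gen-py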

Disadvantages

It is more work to get started on the client side when clients have to build the calling code themselves; it is less work if the service owner provides client libraries. The bottom line: if you are providing a simple service and API, Thrift is probably not the right tool.

References

1. Thrift Tutorial - http://thrift-tutorial.readthedocs.org/en/latest/intro.html

2. The programmers guide to Apache Thrift (MEAP) – Chapter 01

3. Writing your first Thrift Service - http://srinathsview.blogspot.com/2011/09/writing-your-first-thrift-service.html


Hadoop Ecosystem

As you may be aware, the Hadoop ecosystem consists of many open source tools. There is a lot of research going on in this area, and almost every day you see a new version of an existing framework, or a new framework altogether, gaining popularity over the existing ones. Hence, if you are a Hadoop developer, you need to keep up with the technological advancements happening around you.

As a start to understanding these frameworks, I sketched a diagram summarizing some of the key open source frameworks, their relationships and their usage. I will keep evolving this diagram as I learn more, and I will share updates here as well.


Steps

1. Feeding RDBMS data to HDFS via Sqoop

2. Cleansing imported data via Pig

3. Loading HDFS data into Hive using Hive scripts. This can be done by manually running the Hive scripts or by scheduling them through the Oozie workflow scheduler (a sample Sqoop and Hive invocation follows this list)

4. Hive data warehouse schemas are stored separately in an RDBMS schema (the Hive metastore)

5. In Hadoop 1.x, Spark and Shark need to be installed separately to run near-real-time queries over Hive data. In Hadoop 2.x, Spark (and Shark) can run directly on YARN

6. Batch queries can be executed directly via Hive
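As a rough sketch of steps 1 and 3, assuming a hypothetical MySQL database named sales, an orders table and a load_orders.hql Hive script:

# Step 1: import an RDBMS table into HDFS via Sqoop
$ sqoop import --connect jdbc:mysql://dbhost/sales --table orders \
      --username etl_user -P --target-dir /data/raw/orders

# Step 3: load the cleansed data into Hive by running a Hive script
$ hive -f load_orders.hql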


Getting Started with Docker on Ubuntu 14 LTS – Part 02

In my previous article, I stopped at Docker container management. In this article I will mainly be covering Docker images.

For a typical traditional Linux system to run, it needs two file systems:

  1. boot file system (bootfs)
  2. root file system (rootfs)

The bootfs contains the boot loader and the kernel. The user never makes any changes to the boot file system. In fact, soon after the boot process is complete, the entire kernel is in memory, and the boot file system is unmounted to free up the RAM associated with the initrd disk image.

The rootfs includes the typical directory structure we associate with Unix-like operating systems: /dev, /proc, /bin, /etc, /lib, /usr, and /tmp plus all the configuration files, binaries and libraries required to run user applications.

Here the root file system is mounted read-only and then switched to read-write after boot. In Docker, however, the root file system stays in read-only mode, and Docker takes advantage of a union mount to add more read-only file systems onto the root file system so that they appear as a single file system. This gives Docker complete control of all the file systems added to a Docker container. Finally, when a container is created/launched, Docker mounts a read-write file system on top of all the other file system image layers. All the changes made to the underlying images are stored in this read-write layer; the original copies are retained in the underlying layers without any changes written to them. This read-write layer, plus the other layers underneath, plus the base layer, together form a Docker container.
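You can see what has been written to a container's top read-write layer with docker diff; the container name and the output below are only illustrative:

$ sudo docker diff new_container
A /tmp/testfile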

In Part 01 of this article, we created a container with an ubuntu image. You can see all the available images by,

$ sudo docker images

REPOSITORY TAG    IMAGE ID     CREATED     VIRTUAL SIZE
ubuntu     latest 07f8e8c5e660 4 weeks ago 188.3 MB

It seems you now have the “latest” ubuntu image. If you want a specific version, you need to specify it as a tag, e.g. ubuntu:12.04. So let's try that now.

$ sudo docker run -t -i --name new_container ubuntu:12.04 /bin/bash

Now, check the image status

$ sudo docker images

REPOSITORY TAG    IMAGE ID     CREATED     VIRTUAL SIZE
ubuntu     latest 07f8e8c5e660 4 weeks ago 188.3 MB
ubuntu     12.04  ac6b0eaa3203 4 weeks ago 132.5 MB

Up to now, we used the docker run command to create containers. While creating a container, it downloads the given image from Docker Hub, and this download takes some time. If you want to save this time at container creation, you can take an alternate route: first pull the required image from Docker Hub, then create the container using the downloaded image. Here are the steps:

// Pulling the image from Docker Hub
$ sudo docker pull fedora

// Creating a Docker Container using the pulled image
$ sudo docker run -i -t fedora /bin/bash

Now if you see, you will have 3 containers.

$ sudo docker ps -a

86476cec9907 fedora:latest ---
4d8b96d1f8b1 ubuntu:12.04  ---
c607547adce2 ubuntu:latest ---
Building our own Docker Images

There are two ways to do this.

Method (1). Via docker commit command

Method (2). Via docker build command with a Dockerfile (This is the recommended method)

To test method (1), first create a container using an already pulled image, make some alterations inside the container, and then execute docker commit to save them as a new image.

// Creating a Docker Container using an image
$ sudo docker run -i -t ubuntu /bin/bash

// Alter the image (these commands run inside the container)
$ apt-get -yqq update
$ apt-get -y install apache2

// Committing the changes to the image
// Here, crishantha is the account created
// in the Docker Hub repository
// you may use Docker Hub or any other Docker repo
// 9b48a2b8850f is the ID of the container being committed

$ sudo docker commit 9b48a2b8850f crishantha/apache2

// List the Docker images
// Here the Docker altered image ID is shown
$ sudo docker images crishantha/apache2
crishantha/apache2 latest 0a33454e78e4 ....

To test method (2), you may create a Dockerfile in a given directory and specify the required changes for the image. For example, for an Ubuntu 14.04 image, the Dockerfile can have the following lines. FROM pulls the ubuntu:14.04 base image, each RUN command executes and adds another layer to the image, and EXPOSE exposes port 80 from the container.

Before building, it is good to create a new directory and create the Dockerfile within that directory. Here the directory is called static_web.

FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y nginx
EXPOSE 80

Once this is done, you can build the image from the Dockerfile by,

$ sudo docker build -t="crishantha/static_web" .

If the build is successful, it will return an image ID, and you can see the new image using docker images crishantha/static_web.

Checking the Docker Image History

You can further check the history of the image by executing docker history <image Name/ image ID>

$ sudo docker history crishantha/static_web

Now you can execute the container by,

$ sudo docker run -d -p 80 --name static_web crishantha/static_web  nginx -g "daemon off;"

The above runs the container as a detached process; execute docker ps and you will see it running in the background as a Docker process.

Pushing Docker Images

Once an image is created, we can always push it to a Docker repository. If you are registered with Docker Hub, it is quite easy to push your image there. Since it is a public repository, anyone interested can then pull it into his/her own Docker environment.

// If you have not already logged in,
// Here the username is the one you registered
// with Docker Hub
$ sudo docker login
Username: crishantha
Password: xxxxxxxxxx

// If login is successful
$ sudo docker push crishantha/static_web

If all goes well, you will see the image available on Docker Hub.

Pulling Docker Images

Once it is pushed to Docker Hub, you may pull it to any other instance that runs Docker.

$ sudo docker pull crishantha/static_web
Automated Builds in Docker Hub Repositories

In addition to pushing images from our own setups to Docker Hub, Docker Hub allows us to automate image builds by connecting to external repositories (private or public).

You can test this out by connecting your GitHub repository or Bitbucket repositories to Docker Hub. (Use the Add Repository –> Automated Build option in the Docker Hub to follow this process)

However, a Docker Hub automated build requires a Dockerfile in the specific build folder of the connected repository. The build runs based on the Dockerfile you specify there, and once the build is completed you can see the build log as well.


Getting Started with Docker on Ubuntu 14 LTS – Part 01

Docker is an open-source engine, released under the Apache 2 License, that automates the deployment of applications into containers. It adds an application deployment engine on top of a virtualized container execution environment. Docker aims to reduce the cycle time between code being written and code being tested, deployed, and used.

Core components of Docker:

  1. The Docker client and server
  2. Docker images
  3. Registries
  4. Docker Containers

Docker has a client-server architecture. The docker binary acts as both the client and the server. As a client, the docker binary sends requests to the docker daemon, which processes them and returns the results.

Docker images are the building blocks, or the packaging aspect, of Docker. Containers are launched from Docker images. Docker images can be shared, stored and updated easily, and are considered highly portable.

Registries store the Docker images you create. There are two types of Docker registries: private and public. Docker Hub is the public Docker registry maintained by Docker Inc.

Containers are the running and execution aspect of Docker.

Docker does not care what software resides within a container; each container is loaded in the same way as any other. You can compare this to a shipping container, which is not much bothered about what it carries inside and treats all the goods inside in the same way.

So the Docker containers are interchangeable, stackable, portable, and as generic as possible.

Docker can be run on any x64 host that is running a modern Linux kernel (the recommended kernel version is 3.10 or later).

The native Linux container format that Docker uses is libcontainer

The Linux Kernel Namespaces provides the isolation (file system, processes, network) which is required by Docker containers.

  • File System Isolation – Each container is running its own “root” file system
  • Process Isolation – Each container is running its own process environment
  • Network Isolation – Separate virtual interfaces and IP addressing

Resource allocation (CPU, memory) for each container happens using cgroups. (cgroups is a Linux kernel feature that limits, accounts for and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes.)
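For example, once Docker is installed (see the steps below), a container's memory and CPU shares can be capped through these cgroup controls; the limits shown here are arbitrary:

$ sudo docker run -m 256m --cpu-shares 512 -i -t ubuntu /bin/bash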

Installing Docker on Ubuntu

Currently Docker is supported on a wide variety of Linux platforms including Ubuntu, Red Hat (RHEL), Debian, CentOS, Fedora, Oracle Linux, etc.

Prerequisites

1. A 64-bit architecture (x86_64 and amd64 only); 32-bit is not supported.

2. Linux 3.8 Kernel or later version.

3. Kernel features such as cgroups and namespaces should be enabled.

Step 1 – Checking the Linux Kernel

In order to check the current Linux Kernel

$ uname -a
Linux crishantha 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

So my Linux kernel is 3.13 (greater than 3.8) and the architecture is x86_64, so it should support Docker without issues.

But if your Ubuntu Linux Kernel is less than 3.8 you may try to install 3.8.

$ sudo apt-get update
$ sudo apt-get install linux-headers-3.8.0-27-generic \
    linux-image-3.8.0-27-generic linux-headers-3.8.0-27

If above headers are not available, you can try referring the Docker manuals on the web. (https://docs.docker.com/engine/installation/ubuntulinux/)

Once this is done you are required to update the grub and reboot the system

$ sudo update-grub
$ sudo reboot

After rebooting, please check the Linux kernel version again by typing uname -a.

Step 2 – Checking the Storage Driver

There are several Linux storage drivers available, such as Device Mapper, AUFS, vfs and btrfs. The default is the Device Mapper storage driver, which we are going to use here. Device Mapper has been in the Linux kernel since 2.6.9, so it should be available on your system as well. However, you can confirm it using the following commands.

$ ls -l /sys/class/misc/device-mapper
lrwxrwxrwx 1 root root 0 May 18 21:12 /sys/class/misc/device-mapper
-> ../../devices/virtual/misc/device-mapper
OR
$ sudo grep device-mapper /proc/devices
252 device-mapper

If neither is available then you can load the module using

sudo modprobe dm_mod
Step 3 – Installing Docker

Make sure the APT works fine with the “https” and the CA certificates are installed

$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates

Add the new GPG Key

$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Add Docker APT repositories to /etc/apt/sources.list.d/docker.list file

$ sudo sh -c "echo deb https://apt.dockerproject.org/repo ubuntu-trusty main >
/etc/apt/sources.list.d/docker.list"

Now update the APT sources

$ sudo apt-get update

Install some additional drivers as well.

$ sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

Finally, now you are in a position to install Docker and other additional packages using

$ sudo apt-get install docker-engine

If all OK, now you should check whether Docker was installed properly using

$ sudo docker info
Containers: 0
Images: 0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 0
 Dirperm1 Supported: false
Execution Driver: native-0.2
Kernel Version: 3.13.0-32-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 2
Total Memory: 1.955 GiB
Name: crishantha
ID: QHAR:VN5E:SYKX:5LW4:YOW7:SKUB:SD6I:S4ZG:GEEI:IMU7:6MNI:WYR3
WARNING: No swap limit support
Step 4 – Checking Docker Daemon Status

If you want to stop/ start the daemon, use

$ sudo stop docker
docker stop/waiting

$ sudo start docker
docker start/running, process 9672
Step 5 – Upgrading Docker

Finally, if you ever need to upgrade the Docker version, you can do so by:

$ sudo apt-get update
$ sudo apt-get upgrade docker-engine
Step 6 – Creating a Docker Container

There are two types of Docker containers.

1. Interactive Docker Containers

2. Daemonized Docker Containers

1. Interactive Docker Container

Once Docker is installed successfully, we can try and create a Docker container instance. Prior to that, it is good to check the Docker status by typing the sudo docker info command.

If everything is alright, you can go ahead and create the Docker “interactive” container instance.

$ sudo docker run --name crish_container -i -t ubuntu /bin/bash

root@c607547adce2:/#

The above will create a container named crish_container from the ubuntu image. If you do not specify a name, the system will generate a random name along with a unique container ID. Once created, you will be given an “interactive shell” like below.

root@c607547adce2:/#

Here c607547adce2 is the container ID. You can type exit to leave the container’s interactive session. Once you exit, you will see that the container has stopped: the container only runs as long as the interactive session (/bin/bash) is running. That is why these are called “interactive” docker containers.

Now again you can check the docker status by,

$ sudo docker info

Containers: 1
Images: 4
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 6
 Dirperm1 Supported: false
Execution Driver: native-0.2
Kernel Version: 3.13.0-32-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 2
Total Memory: 1.955 GiB
Name: crishantha
ID: QHAR:VN5E:SYKX:5LW4:YOW7:SKUB:SD6I:S4ZG:GEEI:IMU7:6MNI:WYR3
WARNING: No swap limit support

2. Daemonized Docker Container

Other than the interactive docker container, there is another type called “daemonized containers”. These can be utilized to execute long-running jobs, and you do not get an interactive session.

You can create a daemonized container by,

$ sudo docker run --name crish_daemon -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 60; done"

These daemonized containers keep running in the background, and you do not normally reattach to them as an “interactive” docker session.
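You can still inspect a daemonized container from the outside, for example:

# show the output produced by the daemonized container
$ sudo docker logs crish_daemon

# show the processes running inside it
$ sudo docker top crish_daemon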

Step 7 – Display the Container List

To show all containers in the system

$ sudo docker ps -a

To show all the running containers,

$ sudo docker ps
Step 8 – Attach to a container

A container created with the docker run command restarts with the same options we originally specified; attaching simply connects you to the interactive session of the running container. You may use the following to reattach:

$ sudo docker attach crish_container
OR
$ sudo docker attach c607547adce2

Here c607547adce2 is the <container_ID>

Note: Sometimes you are required to press ENTER key to show the bash shell once you execute the attach command.

Step 9 – Starting and Stopping a container

To start

$ sudo docker start crish_container

To stop

$ sudo docker stop crish_container
Step 10 – Deleting a container
$ sudo docker rm crish_container

Resources

1. Docker Home Page – https://www.docker.com/

2. Docker Hub – http://hub.docker.com

3. Docker Blog – http://blog.docker.com/

4. Docker Documentation – http://docs.docker.com

5. Docker Getting Started Guide – https://www.docker.com/tryit/

6. The Docker Book by James Turnbull

7. Introduction to Docker – http://www.programering.com/a/MDMzAjMwATk.html


Creating LXC instances on Ubuntu 14 LTS

Hypervisor Virtualization vs. OS-level / Container Virtualization

Unlike hypervisor virtualization, where one or more independent machines run virtually on physical hardware via an intermediation layer, containers instead run user space on top of an operating system’s kernel. As a result, container virtualization is often called operating system-level virtualization.

Container/OS-level virtualization provides multiple isolated Linux environments on a single Linux host. Containers share the host OS kernel and make use of the guest OS system libraries to provide the required OS capabilities. This allows containers to have very low overhead and much faster startup times compared to VMs.

As a limitation, containers are also considered less secure than hypervisor virtualization. Countering this argument, containers expose a smaller attack surface than the full operating systems deployed under hypervisor virtualization.

Well-known OS-level virtualization/container technologies include OpenVZ, Oracle Solaris Zones and Linux Containers (LXC).

LXC Containers

Linux Containers (LXC) is a fast, lightweight OS-level virtualization technology that allows us to host multiple isolated Linux systems on a single host.

Installing LXC on Ubuntu 14 LTS

LXC is available in the default Ubuntu repositories. Simply type the following for a complete installation.

sudo apt-get install lxc lxctl lxc-templates

To check the successful completion, type

sudo lxc-checkconfig

If everything is fine, it will show something similar to the following

Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.13.0-32-generic
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
Creating LXC
sudo lxc-create -n <container-name> -t <template>

The <template> can be found in the  /usr/share/lxc/templates/ folder.

For example, if you need to create an Ubuntu container, you may execute,

sudo lxc-create -n ubuntu01 -t ubuntu

If you want to create an OpenSUSE container you may execute,

sudo lxc-create -n opensuse1 -t opensuse

If you want to create a Centos container, you may execute,

sudo apt-get install yum   # required as a prerequisite for the centos template
sudo lxc-create -n centos01 -t centos

Once created you should be able to list all the LXCs created.

sudo lxc-ls

To list down the complete container information,

sudo lxc-info -n ubuntu01
Starting LXC

Execute following command to start the created containers.

sudo lxc-start -n ubuntu01 -d

Now use the following to log in to the started containers.

sudo lxc-console -n ubuntu01

The default userid/password is ubuntu/ubuntu.

[To exit from the console, press “Ctrl+a” followed by the letter “a”.]

If you need to see the assigned IP address and the state of any created instance,

sudo lxc-ls --fancy ubuntu01
Stopping LXC
sudo lxc-stop -n ubuntu01
Cloning LXC
sudo lxc-stop -n ubuntu01
sudo lxc-clone ubuntu01 ubuntu02
sudo lxc-start -n ubuntu02
Deleting LXC

sudo lxc-destroy -n ubuntu01

Managing LXC using a Web Console
sudo wget http://lxc-webpanel.github.io/tools/install.sh -O - | bash

Then, access the LXC web panel using URL: http://<ip-address>:5000. The default username/password is admin/admin

References:

1. Setting up Multiple Linux System Containers using Ubuntu 14 LTS – http://www.unixmen.com/setting-multiple-isolated-linux-systems-containers-using-lxc-ubuntu-14-04/

2. LXC Complete Guide – https://help.ubuntu.com/12.04/serverguide/lxc.html

3. The Evolution of Linux Containers and Future – https://dzone.com/articles/evolution-of-linux-containers-future

4. Can containers really ship software –  https://dzone.com/articles/can-containers-really-ship-software
