The kubelet is the primary “node agent” that runs on each node. The kubelet works in terms of a PodSpec.

The kubelet is a Go process that runs on each worker node of a Kubernetes cluster. Its core function is to take a set of PodSpecs, provided through various mechanisms (primarily through the apiserver), and ensure that the containers described in those PodSpecs are running and healthy. A PodSpec is a YAML or JSON object that describes a pod. The kubelet doesn't manage containers that were not created by Kubernetes.

Besides PodSpecs from the apiserver, there are three other ways a container manifest can be provided to the kubelet.

File: Path passed as a flag on the command line. Files under this path will be monitored periodically for updates. The monitoring period is 20s by default and is configurable via a flag.

HTTP endpoint: HTTP endpoint passed as a parameter on the command line. This endpoint is checked every 20 seconds (also configurable with a flag).

HTTP server: The kubelet can also listen for HTTP and respond to a simple API (underspec’d currently) to submit a new manifest.

Getting started with Android and Kotlin

Android and Kotlin

At Google I/O 2017, Google announced that the Kotlin programming language is now officially supported for Android app development in Android Studio.

Installing Android Studio 3.0 and the Kotlin Plugin

  • Download and install Android Studio 3.0 from developer.android.com (current version is 3.0.1)
  • Check that the Kotlin plugin is installed by selecting File > Settings


A simple program

package com.example.admin.myfirstkotlinapp

import android.support.v7.app.AppCompatActivity
import android.os.Bundle
import kotlinx.android.synthetic.main.activity_main.*

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        message.text = "Hello My Kotlin App in Android Studio 3.0"
    }
}

WHY GO? Go has first-class support for concurrency.

The primary motive for designing a new language was to solve software engineering issues at Google. The designers also mention that Go was developed as an alternative to C++.

Rob Pike mentions the purpose for the Go programming language:

“Go’s purpose is therefore not to do research into programming language design; it is to improve the working environment for its designers and their coworkers. Go is more about software engineering than programming language research. Or to rephrase, it is about language design in the service of software engineering.”

I- Concurrency:

Concurrency is one of the major selling points of Go.

The language designers have designed the concurrency model around the ‘Communicating Sequential Processes’ paper by Tony Hoare.

The Go runtime allows you to run hundreds of thousands of concurrent goroutines on a machine.

A goroutine is a lightweight thread of execution. The Go runtime multiplexes those goroutines over operating system threads, which means that multiple goroutines can run concurrently on a single OS thread. The Go runtime has a scheduler whose job is to schedule these goroutines for execution.

There are two benefits of this approach:

i) A goroutine starts with a tiny stack (a few kilobytes; 2 KB in recent Go releases). This is really small compared to the stack of an OS thread, which is generally 1 MB. This number matters when you need hundreds of thousands of goroutines running concurrently; with that many OS threads, RAM would quickly become the bottleneck.

ii) Go could have followed the model of languages like Java, which map their threads directly onto OS threads. But in that case, the cost of a context switch between OS threads is much higher than the cost of a context switch between goroutines.

Since I'm referring to "concurrency" multiple times in this article, I would advise you to check out Rob Pike's talk 'Concurrency is not parallelism'. In programming, concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of (possibly related) computations. Unless you have a processor with multiple cores, or multiple processors, you can't really have parallelism, since a CPU core can only execute one thing at a time. On a single-core machine, it's just concurrency doing its job behind the scenes. The OS scheduler schedules different threads (every process has at least a main thread) for different timeslices on the processor, so at any one moment only one thread is actually running. Because the instructions execute so quickly, we get the feeling that multiple things are running, but it's really just one thing at a time.

Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.
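To make the goroutine model above concrete, here is a minimal sketch (not from the article): several goroutines run concurrently, send their results over a channel, and a WaitGroup signals when all are done.

```go
package main

import (
	"fmt"
	"sync"
)

// sumOfSquares launches one goroutine per input value and sums the
// results collected from a channel. Illustrative only.
func sumOfSquares(n int) int {
	out := make(chan int, n) // buffered so senders never block
	var wg sync.WaitGroup

	// Launch n concurrent goroutines; the Go runtime multiplexes
	// them over OS threads.
	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			out <- v * v
		}(i)
	}

	wg.Wait()  // block until every goroutine has finished
	close(out) // so the range loop below terminates

	sum := 0
	for v := range out {
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(sumOfSquares(5)) // 1+4+9+16+25 = 55
}
```

The channel carries the data and the WaitGroup carries the "we're done" signal, so no explicit locks are needed.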

f) Interfaces: Interfaces enable loosely coupled systems. An interface type in Go is simply a set of method signatures. That's it. Any type that implements those methods implicitly implements the interface, i.e. you don't need to declare that a type implements the interface. The compiler checks this automatically at compile time.

This means that a part of your code can rely only on an interface type, without caring who implements the interface or how it is actually implemented. Your main/controller function can then supply a dependency that satisfies the interface (implements all the functions in the interface) to that code. This also enables a really clean architecture for unit testing (through dependency injection): your test code can simply inject a mock implementation of the interface to check whether the code under test is doing its job correctly.

While this is great for decoupling, the other benefit is that you then start thinking about your architecture as different microservices. Even if your application resides on a single server (if you’re just starting out), you architect different functionalities required in your application as different microservices, each implementing an interface it promises. So other services/controllers just call the methods in your interface not actually caring about how they are implemented behind the scenes.
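A minimal sketch of this idea (the Notifier interface, the concrete type, and the mock below are all hypothetical names invented for illustration):

```go
package main

import "fmt"

// Notifier is satisfied implicitly by any type with a matching Send
// method; there is no "implements" declaration in Go.
type Notifier interface {
	Send(msg string) string
}

// EmailNotifier is one concrete implementation.
type EmailNotifier struct{}

func (EmailNotifier) Send(msg string) string {
	return "email: " + msg
}

// MockNotifier is the kind of test double you would inject in unit tests.
type MockNotifier struct{ Last string }

func (m *MockNotifier) Send(msg string) string {
	m.Last = msg
	return "mock: " + msg
}

// Alert depends only on the interface, not on any concrete type.
func Alert(n Notifier, msg string) string {
	return n.Send(msg)
}

func main() {
	fmt.Println(Alert(EmailNotifier{}, "disk full")) // email: disk full
	fmt.Println(Alert(&MockNotifier{}, "disk full")) // mock: disk full
}
```

Production code passes `EmailNotifier`; a unit test passes `*MockNotifier` and inspects what was recorded, with no change to `Alert` itself.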

g) Garbage collection: Unlike C, you don’t need to remember to free up pointers or worry about dangling pointers in Go. The garbage collector automatically does this job.

h) No exceptions, handle errors yourself: I love the fact that Go doesn't have the standard exception logic other languages have. Go forces developers to handle basic errors like "couldn't open file" rather than letting them wrap all of their code in a try/catch block. This also pushes developers to actually think about what needs to be done in these failure scenarios.
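A small sketch of that style (the config path and helper name are hypothetical): errors are ordinary return values, so the caller must decide what to do at the call site.

```go
package main

import (
	"fmt"
	"os"
)

// readConfig returns the file contents or an error; callers cannot
// ignore the second return value without being deliberate about it.
func readConfig(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// Wrap the error with context instead of throwing an exception.
		return "", fmt.Errorf("couldn't open file %s: %w", path, err)
	}
	return string(data), nil
}

func main() {
	if _, err := readConfig("/no/such/config.yaml"); err != nil {
		// The failure is handled right here, explicitly.
		fmt.Println("handled:", err)
	}
}
```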

i) Amazing tooling: One of the best aspects about Go is its tooling. It has tools like:

i) Gofmt: It automatically formats and indents your code so that it looks the same as every other Go developer's code on the planet. This has a huge effect on code readability.

ii) Go run: This compiles your code and runs it, in one step :). So even though Go needs to be compiled, this tool makes it feel like an interpreted language, because compilation is so fast you barely notice it happening.

iii) Go get: This downloads the library from GitHub and copies it to your GoPath so that you can import the library in your project.

iv) Godoc: Godoc parses your Go source code — including comments — and produces its documentation in HTML or plain text format. Through godoc’s web interface, you can then see documentation tightly coupled with the code it documents. You can navigate from a function’s documentation to its implementation with one click.

You can check more tools here.

j) Great built-in libraries: Go has great built-in libraries to aid modern development. Some of them are:

a) net/http — Provides HTTP client and server implementations

b) database/sql — For interaction with SQL databases

c) encoding/json — JSON is treated as a first class member of the standard language 🙂

d) html/templates — HTML templating library

e) io/ioutil — Implements I/O utility functions

There is a lot of development going on in the Go horizon. You can find all Go libraries and frameworks for all sorts of tools and use cases here.



Deep learning frameworks :: TensorFlow, Theano, Caffe, PyTorch, CNTK, MXNet, Torch, Deeplearning4j, Caffe2, among many others.

Deep Learning is a branch of AI that uses neural networks for machine learning.

Deep Learning has been a household name among AI engineers since 2012, when Alex Krizhevsky and his team won the ImageNet challenge. ImageNet is a computer vision competition in which the computer must correctly classify the image of an object into one of 1000 categories. The objects include different types of animals, plants, instruments, furniture, and vehicles, to name a few.

This attracted a lot of attention from the Computer vision community and almost everyone started working on Neural Networks. But at that time, there were not many tools available to get you started in this new domain. A lot of effort has been put in by the community of researchers to create useful libraries making it easy to work in this emerging field. Some popular deep learning frameworks at present are Tensorflow, Theano, Caffe, Pytorch, CNTK, MXNet, Torch, deeplearning4j, Caffe2 among many others.

Docker :: Docker-compose example

One example with a Standard Web application

1- Install Docker Compose (not included in Docker)

  • curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
  • chmod +x /usr/local/bin/docker-compose


2- Launch Docker Compose with the command

docker-compose up -d


3- The file docker-compose.yml

# Adopt version 3 syntax:
# https://docs.docker.com/compose/compose-file/#/versioning
version: '3'

# Note: the original file was missing the services structure; the
# service names below are inferred from the depends_on entries.
services:

  # Setup the Apache container
  apache:
    container_name: apache_docker
    restart: always
    image: httpd:2.4.38
    ports:
      - 80:80
    volumes:
      - ./apache/httpd.conf:/usr/local/apache2/conf/httpd.conf
      - ./apache/vhosts/:/usr/local/apache2/conf/vhosts
    depends_on:
      - php

  # Setup the PHP container (with a Dockerfile)
  php:
    container_name: php_docker
    restart: always
    build: ./php/
    expose:
      - 9000
    volumes:
      - ./www/:/usr/local/apache2/htdocs
      - ./php/ssmtp.conf:/etc/ssmtp/ssmtp.conf:ro
      - ./php/php-mail.conf:/usr/local/etc/php/conf.d/mail.ini:ro
    depends_on:
      - mysql

  # Setup the MySQL container
  mysql:
    container_name: mysql_docker
    restart: always
    image: mysql:8.0.15
    ports:
      - 3306:3306
    volumes:
      - ./mysql/data2:/var/lib/mysql
      - ./mysql/conf-mysql.cnf:/etc/mysql/mysql.conf.d/conf-mysql.cnf:ro
    environment:
      MYSQL_USER: project

  # Setup phpMyAdmin
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: z_phpmyadmin
    ports:
      - 8080:80
    depends_on:
      - mysql


4- The Dockerfile for PHP

FROM php:7.1-apache

# Get repository and install wget and vim
RUN apt-get update && apt-get install --no-install-recommends -y \
wget \
gnupg \
git

# Install PHP extensions deps
RUN apt-get update \
&& apt-get install --no-install-recommends -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
zlib1g-dev \
libicu-dev \
g++ \
unixodbc-dev \
libxml2-dev \
libaio-dev \
libmemcached-dev \
freetds-dev \
libssl-dev

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- \
--install-dir=/usr/local/bin

# Install PHP extensions
RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-configure pdo_dblib --with-libdir=/lib/x86_64-linux-gnu \
&& pecl install sqlsrv \
&& pecl install pdo_sqlsrv \
&& pecl install redis \
&& pecl install memcached \
&& pecl install xdebug \
&& docker-php-ext-install \
iconv \
mbstring \
intl \
mcrypt \
gd \
mysqli \
pdo_mysql \
pdo_dblib \
soap \
sockets \
zip \
pcntl \
ftp \
&& docker-php-ext-enable \
sqlsrv \
pdo_sqlsrv \
redis \
memcached \
opcache

# Install APCu and APC backward compatibility
RUN pecl install apcu \
&& pecl install apcu_bc-1.0.3 \
&& docker-php-ext-enable apcu --ini-name 10-docker-php-ext-apcu.ini \
&& docker-php-ext-enable apc --ini-name 20-docker-php-ext-apc.ini

# Clean repository
RUN apt-get clean \
&& rm -rf /var/lib/apt/lists/*

# The path that will be used to make Apache run under that user
ENV VOLUME_PATH /var/www/html/public

# Move files
COPY src/ /var/www/html/

COPY vhost.cnf /etc/apache2/sites-available/000-default.conf

WORKDIR /var/www/html/

RUN chown -R www-data:www-data /var/www/html/ \
&& a2enmod rewrite


5- Some useful commands:

docker-compose down
docker rm $(docker ps -a -q)


Docker: Modify the current value of Apache LogLevel directive within the configuration file

1. Run the Apache web server container

Running the httpd container
$ docker run -d httpd:latest


2. Get the id of the running container

Output from `docker ps`
$ docker ps
801e3b4a29bd httpd:latest "httpd-foreground" 16 seconds ago Up 14 seconds 80/tcp condescending_easley
35b5493e239c rancher/server "/usr/bin/entry /u..." 2 months ago Up 15 hours 3306/tcp, 8080/tcp keen_agnesi

A simple docker ps will list the running containers and from there you can get the container id.
You can see that the id of the container we want begins with 801e…

3. Open a shell in the running container

Open a bash shell
# docker exec -it 801e /bin/bash

Once the command is executed you enter a root shell within the container (shown by the presence of ‘root’ and ‘#’).

4. Check the current value of the LogLevel directive within the configuration file

Check the LogLevel
root@801e3b4a29bd:/usr/local/apache2# cat conf/httpd.conf | grep -i loglevel
# LogLevel: Control the number of messages logged to the error_log.
LogLevel warn

Explanation of the command:
cat conf/httpd.conf | grep -i loglevel.
cat conf/httpd.conf prints the content of the file conf/httpd.conf.
The | (pipe) redirects the output printed by cat to the next command which is grep.
grep is called with the argument loglevel and the flag -i.
The -i flag makes sure the loglevel string is treated in a case-insensitive way by grep.
The command results in all lines with any occurrence of loglevel (case-insensitive) being printed to the screen.
From the output we can see that the LogLevel is set to warn – whereas we would like it set to debug.

5. Use sed to search and replace the line with what we want

root@801e3b4a29bd:/usr/local/apache2# sed 's/LogLevel warn/LogLevel debug/' conf/httpd.conf > conf/httpd.conf.changed && mv conf/httpd.conf.changed conf/httpd.conf

Most likely, as in this case, the container will not have a text editor installed; even vi and nano will not be present. As a result we use sed (which is installed) to make the change.
An alternative would be to use your container's package manager to install an editor. For example, if the container is Debian-based you could run apt-get update && apt-get install vim to install vim. Once installed you could use vim to edit the file.
Explanation of the command:
sed 's/LogLevel warn/LogLevel debug/' conf/httpd.conf …
Substitute LogLevel warn with LogLevel debug in the contents of the file conf/httpd.conf. Note sed just streams this substitution to stdout so it still needs to be written to disk to persist.
… > conf/httpd.conf.changed && mv conf/httpd.conf.changed conf/httpd.conf
Write the output from the substitution to the file conf/httpd.conf.changed (it could be called anything) and then rename (move) the file so it overwrites the original. This is done to avoid the issue of creating an empty file as output (as would be the case if we wrote to the same file we read in from).
Essentially this just replaces the occurrence of LogLevel warn with LogLevel debug in the conf/httpd.conf file .

6. Check that your file change has been made correctly

LogLevel has been changed
root@801e3b4a29bd:/usr/local/apache2# cat conf/httpd.conf | grep -i LogLevel
# LogLevel: Control the number of messages logged to the error_log.
LogLevel debug

We can see from the output the LogLevel has been changed from warn to debug as we wanted.

7. Exit the shell (also exits the container)

Exit the shell and container
root@801e3b4a29bd:/usr/local/apache2# exit

8. Restart the container for the changes to take effect

Restart the container
$ docker restart 801e

The actual file change occurs immediately within the container but the global configuration file for the Apache server (httpd.conf) is only read when the server starts.
As a result we need to restart the server/container for the changes to take effect.

Docker :: How to work with your favorite editor in a container

Java developers often don't like the vi editor. Here are two easy options for working with NetBeans or Eclipse, and Git.

Option 1: Using Shared Volumes

Docker allows for mounting local directories into containers using the shared volumes feature.

Just use the -v switch to specify the local directory path that you wish to mount,  along with the location where it should be mounted within the running container:

docker run -d -P --name <name of your container> -v /path/to/local/directory:/path/to/container/directory <image name> …

Using this command, the host’s directory becomes accessible to the container under the path you specify.
This is particularly useful when developing locally, as you can use your favorite editor to work locally, commit code to Git, and pull the latest code from remote branches.

Your application will run inside a container, isolating it from any processes you have running on your development laptop. The container instance will also have access to other instances, such as those providing databases, message brokers and other services.

You can read more about how volumes work from the Docker user guide.

In this scenario, all containers on the same host would share the same shared codebase and binaries at the same time.
Versioning of code should occur within the Docker image, not at the host volume.
Therefore, it’s not recommended to use shared volumes in production.

Option 2: Using the ADD or COPY command

You can use the COPY command within a Dockerfile to copy files from the local filesystem into a specific directory within the container.

The following Dockerfile example would recursively add the current working directory into the /app directory of the container image:

# Dockerfile for a Ruby 2.6 container

FROM ruby:2.6

RUN mkdir /app
COPY . /app

The ADD command is similar to the COPY command, but has the added advantage of fetching remote URLs and extracting tarballs.


To access port 81 —> http://localhost:81/

docker run -d -p 81:80 -v /path/to/local/directory:/path/to/container/directory <image name> ...

To access port 82 —> http://localhost:82/

docker run -d -p 82:80 -v /path/to/local/directory:/path/to/container/directory <image name> ...


Pull the DataStax Image

The DataStax Server Image is the DataStax distribution of Apache Cassandra with additional capabilities of Search Engine, Spark Analytics and Graph Components (configurable at the docker run step). For quality and simplicity, this is your best bet.

$> docker pull datastax/dse-server:latest


Pull DataStax Studio Image (Notebook)

The DataStax Studio is a notebook based development tool for data exploration, data modeling, data visualization, and query profiling. Studio also has the ability to save, import and export notebooks. This allows you to share your findings with your team as you go. (Awesome!)

$> docker pull datastax/dse-studio:latest

Run The Containers

We will execute the docker run command to create new containers from the pulled images. Once a container is created, you won't have to run docker run again; use docker start/stop with the container name instead.

Start the DataStax Server Container

The --name parameter provides a human-readable reference for container operations, and can also be used as a resolvable hostname for communication between containers (required for later steps).

As stated before, the DataStax distribution comes with some additional integrations for building different models, making it highly sought after for implementing domain-driven design patterns.

  • The -g flag starts a Node with Graph Model enabled
  • The -s flag starts a Node with Search Engine enabled
  • The -k flag starts a Node with Spark Analytics enabled
$> docker run -e DS_LICENSE=accept --memory 4g --name my-dse -d datastax/dse-server -g -s -k

Start DataStax Studio Container

The --link parameter provides a way to map a hostname to a container IP address. In this example, we link the database container to the Studio container by providing its name, 'my-dse'. Now Studio can connect to the database using the container name instead of an IP address. (You can also use a user-defined bridge network.)

The -p flag maps ports between the container and the host. Port 9091 is Studio's default port.

$> docker run -e DS_LICENSE=accept --link my-dse -p 9091:9091 --memory 1g --name my-studio -d datastax/dse-studio

Connecting Studio

Visit the Studio page that is now hosted on your docker container by entering http://localhost:9091 in your browser.


Optimize Docker

Your Docker containers are already fast, at least compared to virtual machines. But what if you want to make them even faster? Here are strategies for optimizing Docker container speed and performance.

If you’re using Docker, it’s probably at least partly because you want your applications to start and run faster. Out of the box, containers offer significant performance advantages over infrastructure built using virtual machines.

1- Making Containers Even Faster

  • Make your container images lean and mean.
  • Host Docker on bare metal.
  • Use a minimalist host operating system. A bare-bones Linux distribution (such as Alpine Linux or RancherOS) for hosting Docker, rather than a full-featured system, will deliver better performance.
  • Use microservices. There are several advantages to migrating your app to microservices. Speed is one of them. Containers that host just a microservice rather than an entire monolithic app will start faster because they have less code to run.
  • Use a build cache.

2- Portainer :: a lightweight management UI that allows you to easily manage your different Docker environments (Docker hosts or Swarm clusters).


Portainer was developed to help customers adopt Docker container technology and accelerate time-to-value.

It has never been so easy to build, manage and maintain your Docker environments. Portainer is easy-to-use software that provides an intuitive interface for both software developers and IT operations.

Portainer gives you a detailed overview of your Docker environments and allows you to manage your containers, images, networks and volumes.

Portainer is built to run on Docker and is simple to deploy: you are just one Docker command away from running Portainer anywhere.

Portainer deployment scenarios can be executed on any platform unless specified.

Quick start

Deploying Portainer is as simple as:

$ docker volume create portainer_data
$ docker run -d -p 9000:9000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

Voilà, you can now use Portainer by accessing port 9000 on the server where Portainer is running.

Inside a Swarm cluster

Use our agent setup to deploy Portainer inside a Swarm cluster.

Note: This setup will assume that you’re executing the following instructions on a Swarm manager node.

$ curl -L https://downloads.portainer.io/portainer-agent-stack.yml -o portainer-agent-stack.yml
$ docker stack deploy --compose-file=portainer-agent-stack.yml portainer