10 Gradient Descent Optimisation Algorithms

Gradient descent is an optimisation method for finding the minimum of a function.

It is commonly used in deep learning models to update the weights of the neural network through backpropagation.

In this article, we will summarise the common gradient descent optimisation algorithms used in popular deep learning frameworks (e.g. TensorFlow, Keras, PyTorch, Caffe). This post aims to be easy to read and digest (using consistent nomenclature), since there aren’t many such summaries out there, and to serve as a cheat sheet if you want to implement any of them from scratch.
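As a minimal illustration of the update rule that all of these optimisers refine (a plain-Python sketch on a toy quadratic, not any particular framework's API):

```python
# Plain gradient descent on f(w) = (w - 3)^2, whose gradient is
# f'(w) = 2 * (w - 3).  The update w <- w - lr * grad is the core
# step that momentum, RMSprop, Adam, etc. all build upon.
def minimise(lr=0.1, steps=100):
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)
        w -= lr * grad
    return w

print(round(minimise(), 4))  # → 3.0, the minimum of f
```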


A skeleton of logistic regression in R:


#load required packages
library(caTools)   # sample.split
library(ROCR)      # prediction, performance
library(ggplot2)

#load data
train <- read.csv('Train_Old.csv')

#create training and validation data from given data
split <- sample.split(train$Recommended, SplitRatio = 0.75)

#get training and test data
dresstrain <- subset(train, split == TRUE)
dresstest <- subset(train, split == FALSE)

#logistic regression model
model <- glm(Recommended ~ . - ID, data = dresstrain, family = binomial)

#predicted probabilities on the training data
predictions <- predict(model, type = 'response')

#confusion matrix at a 0.5 threshold
table(dresstrain$Recommended, predictions > 0.5)

#ROCR curve
ROCRpred <- prediction(predictions, dresstrain$Recommended)
ROCRperf <- performance(ROCRpred, 'tpr', 'fpr')
plot(ROCRperf, colorize = TRUE, text.adj = c(-0.2, 1.7))

#plot glm fit
ggplot(dresstrain, aes(x = Rating, y = Recommended)) + geom_point() +
  stat_smooth(method = "glm", method.args = list(family = "binomial"), se = FALSE)

OpenCV news

OpenCV (Open Source Computer Vision Library) is released under a BSD license and is hence free for both academic and commercial use. It has C++, Python and Java interfaces and supports Windows, Linux, macOS, iOS and Android. OpenCV was designed for computational efficiency and with a strong focus on real-time applications.

Written in optimized C/C++, the library can take advantage of multi-core processing. Enabled with OpenCL, it can take advantage of the hardware acceleration of the underlying heterogeneous compute platform.

Usage ranges from interactive art, to mine inspection, to stitching maps on the web, to advanced robotics.

There are many different ways to do image recognition.
Google recently released a new TensorFlow Object Detection API to give computer vision everywhere a boost.
The TensorFlow Object Detection API is a very powerful resource for quickly building object detection models.

Understanding the API

The models in the API have been trained on the COCO (Common Objects in Context) dataset,
a dataset of roughly 300k images covering 90 commonly found object categories.
Example categories include person, car, dog and chair.
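Object detectors trained on COCO are typically scored by intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch in Python (the `[x1, y1, x2, y2]` corner format used here is an illustrative assumption; frameworks differ in their box conventions):

```python
def iou(box_a, box_b):
    # Boxes as [x1, y1, x2, y2] with x1 < x2 and y1 < y2.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # intersection 1, union 7 → ≈ 0.143
```

A detection usually counts as correct when its IoU with a ground-truth box exceeds some threshold (0.5 is a common choice).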


OpenStack in a good cloud-managed infrastructure

OpenStack is a cloud computing system written in Python.

The project is 8 years old.

During that time we have had 18 major releases, and to give you an idea of the size of the community, during 2017, we had 2,500 people contribute more than 65,000 patches. The age and size of the project means our community has been through several transitions and challenges that other smaller projects may not yet have encountered, and my hope is that by thinking about them early, you can prepare for them, and your communities will be able to grow in a healthier way.

In 2019, a good cloud-managed infrastructure has grown by a factor of 10 compared to the resources available in 2013. This has been achieved in collaboration with the many open source communities we have worked with over the past years.


What’s new in 4.2.0?

  • Fix a bug with empty word lists (contributed by FabioRosado)
  • Update dependency management to use setuptools extras
  • Document how to create multiple wordfiles (contributed by FabioRosado)
  • Note that PyEnchant is unmaintained and fix links (contributed by Marti Raudsepp)
  • Don’t use mutable default argument (contributed by Daniele Tricoli)

MongoDB and Hadoop … a powerful combination

MongoDB and Hadoop are a powerful combination and can be used together to deliver complex analytics and data processing for data stored in MongoDB. The following guide shows how you can start working with the MongoDB Connector for Hadoop. Once you become familiar with the connector, you can use it to pull your MongoDB data into Hadoop Map-Reduce jobs, process the data and return results back to a MongoDB collection.
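The connector's read-process-write flow can be illustrated with a toy map-reduce in plain Python. Here the collection is mocked as a list of dicts, whereas a real job would read documents through the connector's MongoInputFormat and write results back via MongoOutputFormat:

```python
from collections import defaultdict

# Toy stand-in for a MongoDB collection; with the real connector the
# documents would be supplied by MongoInputFormat and the reduced
# results written back to a collection through MongoOutputFormat.
collection = [
    {"customer": "a", "amount": 10},
    {"customer": "b", "amount": 5},
    {"customer": "a", "amount": 7},
]

def map_phase(docs):
    # Emit (key, value) pairs, one per document.
    for doc in docs:
        yield doc["customer"], doc["amount"]

def reduce_phase(pairs):
    # Aggregate values per key.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

print(reduce_phase(map_phase(collection)))  # {'a': 17, 'b': 5}
```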



In order to use the following guide, you should already have Hadoop up and running. This can range from a deployed cluster containing multiple nodes to a single-node pseudo-distributed Hadoop installation running locally. As long as you are able to run any of the examples on your Hadoop installation, you should be all set. The Hadoop connector supports all Apache Hadoop versions 2.X and up, including distributions based on these versions such as CDH4, CDH5, and HDP 2.0 and up.


Install and run the latest version of MongoDB. In addition, the MongoDB commands should be in your system search path (i.e. $PATH).

If your mongod requires authorization [1], the user account used by the Hadoop connector must be able to run the splitVector command on the input database. You can either give the user the clusterManager role, or create a custom role for this:

db.createRole({
  role: "hadoopSplitVector",
  privileges: [{
    resource: {
      db: "myDatabase",
      collection: "myCollection"
    },
    actions: ["splitVector"]
  }],
  roles: []
})

db.createUser({
  user: "hadoopUser",
  pwd: "secret",
  roles: ["hadoopSplitVector", "readWrite"]
})

Note that the splitVector command cannot be run through a mongos, so the Hadoop connector will automatically connect to the primary shard in order to run the command. If your input collection is unsharded and the connector reads through a mongos, make sure that your MongoDB Hadoop user is created on the primary shard as well.

Move docker’s default /var/lib/docker and clean up

How to move Docker’s default /var/lib/docker to another directory on Linux.

Stop the docker daemon:

# systemctl stop docker

Edit /lib/systemd/system/docker.service and replace the following line, where /new/path/docker is the location of your chosen new docker directory:

ExecStart=/usr/bin/docker daemon -H fd://

with:

ExecStart=/usr/bin/docker daemon -g /new/path/docker -H fd://

(On newer Docker releases the daemon binary is dockerd and the data directory flag is --data-root rather than -g.)

Reload the systemd daemon:

# systemctl daemon-reload

Once this is done, create the new directory you specified above and optionally rsync the current docker data to it:

# mkdir /new/path/docker
# rsync -aqxP /var/lib/docker/ /new/path/docker

At this stage we can safely start the docker daemon:

# systemctl start docker

Confirm that docker runs within a new data directory:

# ps aux | grep -i docker | grep -v grep

Commands to clean up

Kill all running containers:
# docker kill $(docker ps -q)

Delete all stopped containers:
# docker rm $(docker ps -a -q)

Delete all images:
# docker rmi $(docker images -q)

Remove unused data:
# docker system prune

A more aggressive clean-up (also removes all unused images, without prompting):
# docker system prune -af

A little script to clean volumes:

for vol in $(docker volume ls | awk '{print $2}' | grep -v VOLUME); do
  docker volume rm $vol
done


Inspecting docker activity with socat


The docker daemon listens on the unix socket /var/run/docker.sock, which we can’t directly sniff since we don’t really control this socket.

Instead, we first create a fake unix socket, say /tmp/socatproxy.sock, and relay all of its traffic to /var/run/docker.sock. In this way, regular interactions remain undisturbed, but the redirect allows socat to inspect the traffic:

$ socat -v UNIX-LISTEN:/tmp/socatproxy.sock,fork UNIX-CONNECT:/var/run/docker.sock

-v : writes the traffic to stderr as text in addition to relaying it. Some conversions are made for the sake of readability, so if certain sequences aren’t being interpreted properly, try -x (hex output) instead.
UNIX-LISTEN : listen for connections on the given unix socket (in our case, /tmp/socatproxy.sock)
fork : create a separate subprocess to handle each new connection so the main process can keep listening
UNIX-CONNECT : connect to the specified unix socket (in our case, /var/run/docker.sock)





Now, to list all containers with Docker through the proxy:

$ docker -H unix:///tmp/socatproxy.sock ps -a

Factorial Analysis with R

Exploratory factor analysis with R can be performed using the factanal function. In addition to this standard function, some additional facilities are provided by the fa function in the psych package.

I. Sample with the fa function

#install the package
install.packages("psych")
#load the package
library(psych)

#calculate the correlation matrix (data is your numeric data frame of observed variables)
corMat <- cor(data)
#display the correlation matrix
corMat

#use fa() to conduct an oblique principal-axis exploratory factor analysis
#save the solution to an R variable
solution <- fa(r = corMat, nfactors = 2, rotate = "oblimin", fm = "pa")
#display the solution output
solution

II. Sample with the factanal function

# Required packages: foreign (SPSS import) and psych (scree plot, VSS).
# install.packages(c("foreign", "psych"))

# Import data from SPSS data file.
personality <- foreign::read.spss("spss\\personality.sav", 
    to.data.frame = TRUE)

# Factor analysis.
items <- paste0("ipip", 1:50);

# Descriptive Statistics.
itemDescriptiveStatistics <- sapply(personality[items], 
    function(x) c(mean=mean(x), sd=sd(x), n = length(x)));
cbind(attr(personality, "variable.labels")[items], 
    round(t(itemDescriptiveStatistics), 2) );

# Scree plot.
psych::scree(cor(personality[items]), factors = TRUE);

# Some other indicators of the number of factors.
psych::VSS(cor(personality[items]), 10, 
    n.obs = nrow(personality), rotate = "promax");

# Communalities (from an unrotated five-factor solution).
fitForCommunalities <- factanal(personality[items], 
    factors = 5, rotation = "none");
itemCommunalities <- 1 - fitForCommunalities$uniquenesses;
round(cbind(itemCommunalities), 2);

# List items with low communalities.
itemsWithLowCommunalities <- names(itemCommunalities[
        itemCommunalities < .25]);
cat("Items with low communalities (< .25)\n");
problematicItemText <- attr(personality, 
    "variable.labels")[itemsWithLowCommunalities];
problematicItemCommunalities <- round(itemCommunalities[
    itemsWithLowCommunalities], 2);
data.frame(itemText = problematicItemText, 
    communality = problematicItemCommunalities);

# Variance explained by each factor before rotation. 
# (see Proportion Var)
factanal(personality[items], factors = 5, rotation = "none");

# Variance explained by each factor after rotation. 
# (see Proportion Var)
factanal(personality[items], factors = 5, rotation = "promax");

# Loadings prior to rotation.
fitNoRotation <- factanal(personality[items], 
    factors = 5, rotation = "none");
print(fitNoRotation$loadings, cutoff = .30, sort = TRUE);

# Loadings after rotation.
fitAfterRotation <- factanal(personality[items], 
    factors = 5, rotation = "promax");
print(fitAfterRotation$loadings, cutoff = .30, sort = TRUE);

# Correlations between factors 
# This assumes use of a correlated rotation method such as promax
factorCorrelationsRegression <- cor(factanal(
        personality[items],  factors = 5, 
        rotation = "promax", scores = "regression")$scores);

Ubuntu OpenCV TensorFlow: easy install locally

This is an easy way to install your own environment and compile programs for your machine locally. Just run the following commands in your console:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential cmake unzip pkg-config
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python3-dev
cd ~
wget -O opencv.zip https://github.com/opencv/opencv/archive/3.4.4.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/3.4.4.zip
unzip opencv.zip
unzip opencv_contrib.zip
mv opencv-3.4.4 opencv
mv opencv_contrib-3.4.4 opencv_contrib

wget https://bootstrap.pypa.io/get-pip.py
sudo python3 get-pip.py

sudo pip install virtualenv virtualenvwrapper
sudo rm -rf ~/get-pip.py ~/.cache/pip

# A virtual environment is very useful when you want to change the python version or other libraries

echo -e "\n# virtualenv and virtualenvwrapper" >> ~/.bashrc
echo "export WORKON_HOME=$HOME/.virtualenvs" >> ~/.bashrc
echo "export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3" >> ~/.bashrc
echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.bashrc
# here the virtual environment is called cv
mkvirtualenv cv -p python3
workon cv
pip3 install numpy
cd ~/opencv
mkdir build
cd build

# configure the build; OPENCV_EXTRA_MODULES_PATH assumes the mv above
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
    -D PYTHON_EXECUTABLE=~/.virtualenvs/cv/bin/python \
    -D BUILD_EXAMPLES=ON ..


# You can make the 'make' process leverage multiple cores by simply adding '-j' along with the number of logical cores your machine has.
# Example: dual-core: -j2; quad-core: -j4; octa-core: -j8, etc.
# If the process fails with the flag enabled, a race condition may be causing the error; run 'make clean' and then plain 'make' to build with a single core.

make -j4
# fall back to a single-core build if the parallel build failed
if [ ! $? -eq 0 ]; then
    make clean
    make
fi

# If compilation finishes without errors, you can install:
sudo make install
sudo ldconfig
cd /usr/local/python/cv2/python-3.5
sudo mv cv2.cpython-35m-x86_64-linux-gnu.so cv2.so
cd ~/.virtualenvs/cv/lib/python3.5/site-packages/
ln -s /usr/local/python/cv2/python-3.5/cv2.so cv2.so
# you can also complete your installation with tensorflow and keras to add deep learning functionalities

pip3 install jupyter
pip install numpy
pip install opencv-contrib-python
pip install scipy matplotlib pillow
pip install imutils h5py requests progressbar2
pip install scikit-learn scikit-image
pip install keras

# pick one of the following: CPU-only or GPU build
pip install tensorflow
# pip install tensorflow-gpu