Masks to contain the spread of COVID-19 and educating the public

COVID-19 has been spreading at a rapid rate around the globe, leading to loss of life and, in some cases, permanent lung damage in those infected. Masks, along with social distancing, are a very important tool in preventing the spread of COVID-19. What inspired me to write this post is the number of people not wearing masks.

Some people imagine that ordinary cloth masks do not protect them from contracting the virus, and others imagine that they are in a safe place with so few infections that wearing a mask is unnecessary. Both lines of thinking do significant harm by accelerating the spread of the virus, since most people show no symptoms even days after contracting it. The first outlook can seed infections in a community and harm that person or their loved ones. The latter is damaging because no place on Earth is immune to the spread, and a single infection can spread like wildfire if not contained.

Spread of COVID-19 through a sneeze or cough. Source:

Education plays a key role in changing the mindset of the general public. One of the most successful places on the planet in containing the spread was the state of Kerala in India. One of the important things that the government and doctors in Kerala did was to educate the general public. This led the people, the government, and the health machinery to work together to stop the spread. The state is now educating residents returning from other countries as soon as they land at the state’s airports, making them aware and at ease. Educating people, making them aware, and getting them to work with the healthcare system is the only way any place in the world can contain the spread.

The video below shows flow visualization using LaVision’s BOS (Background-Oriented Schlieren) system, demonstrating the effect of wearing masks.

Researchers have found that peak exhalation speeds of 10–30 m/s can generate a cloud extending up to 27 feet. A scientific study also suggests keeping a gap of at least 4–5 meters when walking in a single line, a minimum of 10 meters when slow biking or running, and at least 20 meters when biking hard.

Droplet spreading when running at a speed of 14.4 km/h: (a,b) running behind each other; (c) side by side; (d) in a staggered arrangement. Source:

The novel coronavirus

The novel coronavirus was identified in Wuhan, Hubei, China, and has been spreading across the world, disrupting every aspect of day-to-day life and exacting a large human cost.

To understand the spread and make calculated predictions, I have been exploring different models, data sources, and algorithms. Recently, I used sparse identification of nonlinear dynamics (SINDy) to make predictions for the next 20 days for a few countries from 20 March 2020, based on the Johns Hopkins Hubei data set up to 20 March 2020. Real-world numbers, however, will depend on many factors, including the availability of testing, self-quarantine/isolation, etc.
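
As a small illustration of the idea behind SINDy (a toy sketch, not the model I used for the country predictions above): generate data from a known growth law, build a library of candidate terms, and recover the sparse dynamics with sequential thresholded least squares. The logistic system and all parameters here are hypothetical.

```python
import numpy as np

# Hypothetical example: recover a logistic growth law
# dx/dt = r*x - (r/K)*x^2 from data, in the spirit of SINDy
# (sparse regression over a library of candidate terms).
r, K, dt = 0.4, 1000.0, 0.1
t = np.arange(0, 40, dt)
x = K / (1 + (K / 10 - 1) * np.exp(-r * t))          # exact logistic trajectory

dxdt = np.gradient(x, dt)                            # numerical derivative
theta = np.column_stack([np.ones_like(x), x, x**2])  # library: [1, x, x^2]

# Sequential thresholded least squares (STLSQ)
xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 1e-5
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]

print(xi)  # coefficients close to [0, r, -r/K] = [0, 0.4, -0.0004]
```

The recovered coefficients identify the active terms in the dynamics; the same sparse-regression machinery scales to richer libraries and multiple coupled variables.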

The following are the Hubei data and calculated data:

Some other dashboards for COVID-19:

Air pollution and its impacts today

Air pollution over Shanghai, China. Source:

Ever since the industrial revolution, industrialization has led the way to air pollution through the burning of fossil fuels. An EPA article indicates that current atmospheric concentrations of carbon dioxide, methane, and nitrous oxide are unprecedented compared with the past 800,000 years. Air pollution currently claims 7 million lives globally each year, many times more than AIDS, tuberculosis, and malaria combined, and over 15 times more than all wars and other forms of violence, as noted in The Lancet Commission report on pollution and health. About 1.2 million lives were lost to air pollution in India alone as of 2017, and the life expectancy of an average Indian is now lower by 2.6 years. An article titled “These Photos Document the Most Polluted Cities in India (and the World)” documents the most polluted cities in India and their effects.

In addition to the global problem of air pollution, there is pollution that affects humans regionally and in a concentrated manner. A classic example is the “Asian brown cloud,” which occurs regionally and seasonally as a result of environmental pollution at very high concentrations. This pollution generates a haze of about 10 million square kilometers covering countries including China, India, and Nepal.

On World Environment Day, the United Nations released a campaign to make the citizens of the world aware of the dangers of air pollution and the dire need to beat it, and in the news release the UN Secretary-General stated that action to fight climate change is “the battle of our lives”.

Adopting Linux as a primary OS


Linux distros are used by researchers, engineers, scientists, and doctors, among many other professionals. Linux is omnipresent: in servers providing critical cloud services, in desktops running scientific computations, and in day-to-day use. Recently, the Indian state of Kerala saved $400 million by choosing Linux as the operating system for school computers under a state-wide project.

Recently, more and more people, especially those with backgrounds in data science and machine learning, are using Linux distros as their primary desktops. Newer distros with more advanced features are being developed to enable better performance and ease of use, for faster adoption and transition. The most recent of these is the Clear Linux project developed by Intel, which gives developers and data scientists the capability to develop cutting-edge machine learning applications while also serving as a regular desktop OS. Users of Ubuntu, Fedora, elementary OS, and Linux Mint will find a similar environment, but with a different way of managing packages, in bundles rather than through the traditional package managers used in Debian- or Red Hat-based systems.

The surge in Linux usage can be attributed to its zero or low cost and the ability to develop software and perform complex scientific computations with ease.

Management – Changing landscape and its vitality for a professional

Photo by Hunters Race

One of the significant realizations that inspired me to write this post is that most professionals become leaders, such as managers, at some point in their careers. Many of them, regardless of age, cannot manage people in the changing landscape. What was considered effective a decade ago may not be effective today.

Modern-day team leadership is driven by an inherently participative model, unlike the hierarchical model of the industrial age. Organizations are conducting research into the effective management of a workplace now dominated by millennials. A hierarchical model in which followers are asked to do specific things without critique or critical thinking is a thing of the past. The stick-and-carrot model of the past can do more harm in the modern day than it benefits the company or the leader.

Prediction of leader–follower relationship quality began in the 1980s with the vertical dyad linkage theory. The theory investigates how the relationship builds over a period, predicting how good a leader can be. We can broadly classify followers’ relationships with the leader as in-group and out-group. In-group members have a higher affinity with the leader, with better communication and mutual appreciation. Out-group members are imposed upon or simply told what to do, without two-way communication. Leader–member exchange (LMX) theory is an improved version of the dyadic theory, in which there is a continuum of relationships between leaders and followers: there are cases where followers interact closely with the leader and situations where the leader keeps communication formal. As a follower, a person can take steps to move from the out-group into the in-group; it is not always up to the leader.

A leader can also influence followers in ways that benefit the team and the organization, which is known as influencing organizational citizenship behavior (OCB). This leads followers to take on more than the job description states and ensures high-quality exchange between the leader and the followers. A leader has a certain span of control that can be used to influence followers to behave this way. The more OCB there is in the high-quality network, the higher the performance of the company. There is also a social identity component, when people identify as part of the company: the way the organization behaves becomes part of who they are and how they describe themselves, which in turn leads them to perform better.

As a team leader, to bring followers from the out-group into the in-group, one must try building bonds through open communication and appreciation of their efforts. Influencing organizational citizenship behavior within the leader’s span of control leads followers to perform better, and providing them with social identity and a higher purpose in their work inspires them to work closely with the team and the leader. In addition, researchers have found that when a leader, or a set of people, believes that a group can work effectively and sets high expectations, the group in turn becomes high performing. This is called the Pygmalion effect: the phenomenon whereby higher expectations lead to an increase in performance. It is hence vital to hold high expectations of colleagues or followers; otherwise, low expectations will produce low-performing employees. Psychological safety is one of the most critical factors of effective, top-performing teams, as identified by Google. A leader must ensure psychological safety, providing an open line of communication and ensuring that opinions are valued and that ideas and critiques carry no repercussions. Disengaged people can be brought to show their potential through active participation and expression of their cognitive and emotional selves. Acknowledging uncertainty and interdependence, encouraging critique and idea exchange, and modeling curiosity through questions should be given importance in the workplace.

Four of the most valuable things for ensuring success in the future workplace are psychological safety, creativity, critical thinking, and cognitive flexibility (the ability to see a problem from multiple viewpoints).

The lack of large reliable data sets in fluid dynamics for deep learning

Fluid flow prediction using the regression forest method of Ladicky et al. (Source)

Deep learning has gained prominence in varied sectors. Over the past few years, it has also been explored to improve predictions in fluid dynamics simulations and experiments.

Ladicky et al. explored a novel idea combining machine learning with fluid dynamics simulation. They cast physics-based simulation as a regression problem, estimating the acceleration of every particle in every frame. They created a feature vector directly modeling the individual forces and constraints from the Navier–Stokes equations, reliably predicting the positions and velocities of particles over large time steps on an unseen test video. A large training set of simulations obtained with traditional solvers was used to train a regression forest to approximate the behavior of the particles. Simulations like these reduce the computational resources needed for high-resolution real-time simulation.
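
To make the regression-as-simulation idea concrete, here is a minimal, hypothetical sketch (a stand-in, not Ladicky et al.’s features or forest): learn a particle’s acceleration from its state using data generated by a “traditional” small-step solver, then roll the learned model forward at a much larger time step.

```python
import numpy as np

# Hypothetical toy system: a falling particle with linear drag, a = g - k*v.
# Plain least squares stands in for the regression forest.
g, k, dt = -9.81, 0.5, 0.01
v = 0.0
states, accels = [], []
for _ in range(2000):                      # "traditional solver" rollout
    a = g - k * v
    states.append([v, 1.0])                # features: velocity and a bias term
    accels.append(a)
    v += a * dt                            # explicit Euler step

X, y = np.array(states), np.array(accels)
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # learned acceleration model

# Roll out with the learned model at a 10x larger time step
v_ml, big_dt = 0.0, 0.1
for _ in range(200):
    v_ml += (w @ np.array([v_ml, 1.0])) * big_dt

print(v_ml)  # approaches the terminal velocity g/k ≈ -19.6
```

The learned model recovers the true force law from data, so the large-step rollout stays accurate; the real method does this per particle with far richer features and a forest instead of a linear fit.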

“Given the inherent difficulties of long-term predictions, our vision for CFD in 2030 is grounded on a desired set of capabilities that must be present for a radical improvement in CFD predictions of critical flow phenomena associated with the key aerospace product/application categories, including commercial and military aircraft, engine propulsion, rotorcraft, space exploration systems, launch vehicle programs, air-breathing space-access configurations, and spacecraft entry, descent, and landing (EDL).” – NASA’s CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences (NASA/CR–2014-218178)

Modern-day simulations in fluid dynamics have reached their pinnacle with traditional turbulence modeling. According to NASA’s CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences (NASA/CR–2014-218178), the most critical area of CFD simulation capability, and one that will remain a pacing item through 2030, is the ability to adequately predict viscous turbulent flows with possible boundary-layer transition and flow separation present.

But modeling knowledge and predictive capabilities can be built only on the basis of good data under varied conditions. The single biggest thing the fluid dynamics community lacks is reliable data under different conditions and for different applications. Training, algorithms, modeling techniques, and machine learning approaches are secondary to the data needed to develop data-driven turbulence modeling capabilities. This necessitates a global repository or database of reliable data, both experimental and computational, where researchers from around the globe can pool data, benefiting those focused on data-driven fluid dynamic simulations. Such a repository would open up better capabilities for predicting fluid dynamics and understanding flow in varied areas of interest, unlike anything seen before.

Installing TensorFlow (GPU), OpenCV, and dlib on Ubuntu 18.04 Bionic Beaver


This is for those ready to do machine learning and computer vision with updated versions of OpenCV, dlib, and TensorFlow (GPU) on Bionic Beaver.

Install Synaptic and Atom from Ubuntu’s package manager:

sudo ubuntu-drivers autoinstall
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update

Search for and install nvidia-390 from Synaptic.

Download CUDA 9.0 (It has to be 9.0 for Tensorflow 1.8):

And Download cuDNN v7.1.3 for CUDA 9.0:

sudo chmod +x
sudo chmod +x
sudo chmod +x
sudo ./ --override
sudo ./
sudo ./

DO NOT INSTALL THE DRIVER AND SAMPLES IN THIS PART! Ignore the warning about the driver not being installed; the installer cannot detect the driver we installed earlier through Synaptic.

sudo apt-get install cuda-9
sudo apt-get upgrade
tar -zxvf cudnn-9.0-linux-x64-v7.1.tgz
sudo cp -P cuda/lib64/* /usr/local/cuda-9.0/lib64/
sudo cp  cuda/include/* /usr/local/cuda-9.0/include/
sudo chmod a+r /usr/local/cuda-9.0/include/cudnn.h
sudo apt-get install libcupti-dev
sudo atom ~/.bashrc

And add these lines:

export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

source ~/.bashrc
sudo apt-get update
sudo apt-get install build-essential cmake libopenblas-dev liblapack-dev libx11-dev libgtk-3-dev python python-dev python-pip python3 python3-dev python3-pip

Download and install Anaconda Python 3.6 (I use 3.6 universally and 2.7 for scientific computations)

Create environment using an environment name (envname)

conda create -n envname python=2.7
source activate envname
pip install numpy pillow lxml jupyter matplotlib dlib protobuf
sudo apt -y install python-opencv
conda install -c conda-forge opencv 
sudo snap install protobuf --classic
pip install --upgrade tensorflow-gpu

To kill a process and clear GPU memory, first list the processes using the GPU:

nvidia-smi

and then kill the process causing unwanted memory usage:

sudo kill -9 <PID>

Ubuntu 18.04 LTS (Bionic Beaver) – A powerful platform for a multitude of applications


The heavily revamped Ubuntu comes with the GNOME shell instead of Unity, Snap apps, and a software center where (finally!) we don’t have to search the web for an updated version of any software! The team at Canonical has added many features that make the platform easy to use and powerful for programming. Canonical’s LivePatch can apply critical updates without rebooting the computer. The only issue I found was a boot issue with my laptop post-installation; the following fix can be applied to solve the problem.

Fixing boot problems with Ubuntu 18.04 LTS (Bionic Beaver)

Intel Virtualization Technology: Enabled
VT-d: Enabled
Hyper-threading: Enabled
CPU C states: Disabled
Fast Boot: Enabled
Boot mode select: UEFI
Secure Boot: Disabled
Secure Boot Mode: Standard

In /etc/default/grub, change:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi= acpi_backlight=none acpi=ht pci=nomsi"

Then run: sudo update-grub

The original work of M.J. Boussinesq published in 1913

Annals of Chemistry and of Physics - 1913

This was a search for the original article, “The application of the formula for surface viscosity to the surface of a slowly falling droplet in the midst of a large unlimited amount of fluid which is at rest and possesses a smaller specific gravity,” by M.J. Boussinesq, originally published in French in the Annales de Chimie et de Physique (Annals of Chemistry and of Physics) in 1913. The paper formed the basis of the legendary Boussinesq–Scriven stress model. I found the volume of the journal in a French library repository and am sharing it here:

M.J. Boussinesq, “The application of the formula for surface viscosity to the
surface of a slowly falling droplet in the midst of a large unlimited amount
of fluid which is at rest and possesses a smaller specific gravity,” Ann. Chem.
Phys., vol. 29, pp. 357–364, 1913

Interesting publications in fluid dynamics using neural networks

Image source:  Oreilly – A look at deep learning for science

“Turbulence was probably invented by the Devil on the seventh day of Creation when the Good Lord wasn’t looking.” Peter Bradshaw (1994)

Recently, I came across a few interesting publications in the fluid dynamics domain focusing on the use of neural networks. Neural networks are popular in varied areas, including self-driving cars, and they have started leaving their footprints in fluid dynamics, especially in turbulence modeling. A recent article in the Journal of Fluid Mechanics by José Nathan Kutz provided an overview of deep learning in fluid dynamics.

The article mentions the publication by Ling et al., “Reynolds averaged turbulence modelling using deep neural networks with embedded invariance,” as one of the first to apply a true DNN architecture, specifically to Reynolds-averaged Navier–Stokes turbulence models. The paper by Ling et al. addresses the demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by, and can represent, a richer set of turbulence physics, and presents a method to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. The paper showed that the neural network architecture provided improved prediction accuracy, a significant improvement over baseline RANS linear eddy viscosity and nonlinear eddy viscosity models.
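
As a toy illustration of why a nonlinear learned closure can beat a linear one (a hypothetical scalar example, not the tensor-basis architecture of Ling et al.): fit a saturating, stress-like curve with a linear least-squares model, standing in for a linear eddy-viscosity closure, and with a small one-hidden-layer network trained by gradient descent.

```python
import numpy as np

# Hypothetical data: a saturating nonlinear response y(s) of a surrogate
# strain-rate variable s, which a linear model cannot capture.
rng = np.random.default_rng(0)
s = np.linspace(-2, 2, 200).reshape(-1, 1)
y = np.tanh(2 * s) - 0.3 * s

# Linear baseline via least squares
A = np.hstack([s, np.ones_like(s)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
mse_lin = np.mean((A @ coef - y) ** 2)

# One-hidden-layer network (16 tanh units) trained by full-batch gradient descent
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(s @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    g = 2 * (pred - y) / len(s)           # gradient of MSE w.r.t. pred
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)          # backprop through tanh
    gW1 = s.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
mse_net = np.mean((np.tanh(s @ W1 + b1) @ W2 + b2 - y) ** 2)

print(mse_lin, mse_net)  # the network should fit the saturating curve far better
```

The same gap between linear and learned nonlinear closures, in tensor form and with invariance built in, is what the Ling et al. results quantify.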

In May 2017, Gamahara et al. published a paper in Physical Review Fluids titled “Searching for turbulence models by artificial neural network.” The work demonstrated the use of an artificial neural network (ANN) as a tool for finding a new subgrid model of the subgrid-scale (SGS) stress in large-eddy simulation. The paper concludes that the results confirm the ANN is a promising tool for establishing a new subgrid model, with further improvement.

These research papers point to the growing importance of deep neural networks in fluid dynamics and turbulence modeling. It was rightly said by Peter Bradshaw that “Turbulence was probably invented by the Devil on the seventh day of Creation when the Good Lord wasn’t looking,” and perhaps the advances in machine learning and neural networks will provide at least some of the answers humans have been seeking for a long time.