JupyterLab on the Azure platform

We all love Jupyter notebooks for their intuitive, interactive interface. Jupyter is one of the best interactive platforms for data analysis and scientific computation. I have run numerous computations and analyses in Python with Jupyter notebooks, both offline and in the cloud on AWS (Amazon Web Services).

Recently, I started using Python with Jupyter notebooks on Microsoft's Azure platform, along with the newly released JupyterLab IDE. JupyterLab is a true interactive development environment for effective computation. It comprises a file browser, a terminal, and a code editor, which reminds me of the Rodeo IDE. The JupyterLab Google Drive extension also assists with real-time collaboration.

The ability to scale and deploy on the Azure platform, much as on AWS and Google Cloud, makes computationally intensive applications easy to run.


Check out the video below for an intro to JupyterLab:

Machine learning to improve experimental fluid flow analysis

Most analysis in experimental fluid dynamics tracks the flow by seeding it with particles. Humans, with our eyes and extraordinarily fascinating brains, are undoubtedly the best system for identifying and tracking moving objects or particles. The problem is that we are slow at tracking thousands of particles or objects over time, we get tired, and fatigue can lead to errors.

The most popular experimental techniques in fluid flow analysis are particle image velocimetry (PIV, used for a high density of particles) and particle tracking velocimetry (PTV). The accuracy of both methods depends on the algorithm used, the quality of the images, and validation against ground truth. Major vendors of PIV and PTV systems, such as Dantec Dynamics, ship pre-calibrated systems with robust methods that yield publication-ready results. Free and open-source software, however, is often untested for a given condition and set of particles, and can produce erroneous results. PTV has a further complication: particle tracking methods developed by surface chemists (e.g., to study Brownian motion) give inaccurate results when applied to fluid dynamic analysis. A classic example of this transpired when I used such a surface-chemistry tracking system on an oscillatory flow, and it produced erroneous results. Applying a correction called "drift," which adjusts particle velocities when the flow field varies because the fluid container itself moves (as with a shaking beaker), gave a much better result, but still not an accurate enough one. Such algorithms can therefore cause serious errors and misinterpretation of results when used for fluid dynamics analysis.
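The "drift" correction described above can be sketched in a few lines: compute the ensemble-average displacement per frame (the motion shared by all particles, e.g. from a shaking container) and subtract its cumulative sum from every trajectory. This is a minimal NumPy sketch of the idea, not the exact algorithm I used; the function name and array layout are my own.

```python
import numpy as np

def subtract_drift(positions):
    """Remove ensemble drift from particle trajectories.

    positions: array of shape (n_frames, n_particles, 2) holding the
    x, y coordinates of the same particles in every frame.
    Returns drift-corrected positions of the same shape.
    """
    # Per-frame displacement of each particle.
    steps = np.diff(positions, axis=0)
    # Ensemble-average displacement per frame = instantaneous drift.
    mean_step = steps.mean(axis=1)                      # (n_frames-1, 2)
    # Cumulative drift relative to the first frame.
    drift = np.vstack([[0.0, 0.0], np.cumsum(mean_step, axis=0)])
    # Subtract the common drift from every particle in every frame.
    return positions - drift[:, None, :]
```

If the whole container shakes, all particles share a common displacement; subtracting it leaves only the motion relative to the flow, which is what a fluid dynamic analysis actually needs.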

The advent of advanced machine learning techniques and state-of-the-art tools such as TensorFlow and the dlib C++ library can resolve such errors, producing highly accurate results and a better interpretation of the physics. These technologies can help us understand fluid dynamics better than ever before.

Machine learning and artificial intelligence to improve scientific research 

Title image source: http://www.esanosys.com/


Most scientists come up with mathematical models that can be used to simulate physics. As George Box rightly said, "all models are wrong, but some are useful." A mathematical model is mostly based on a few of the conditions that make up a physical process, and gives results to some degree of accuracy under a number of controlled physical parameters or conditions.

The advent of machine learning and artificial intelligence is going to change mathematical modeling, eventually making the traditional approach obsolete. The modeling method of the future will involve computational models generated in real time, considering all the physical parameters associated with a physical process. The result: a complex computational model, updated in real time, that can predict the quantities we are looking for with much greater accuracy.

This approach will put an end not only to multiple models of limited capability, but also to the ambiguity, bias, and limitations of current models. The newer computational models will not be restricted to a given condition but can be used under every condition.

Remote lab and 3D reconstruction of confocal images


The technological advancements of the age have enabled us to do things that were considered almost impossible just a few years ago.

For the first time, last month I set up remote access to lab experiments for monitoring and control. This allows experiments to be monitored and administered at night without being physically in the lab.

I have experimented with machine learning methods for image classification and tracking, though it is still in its infancy and the technology needs improvement.

3D reconstruction of the amyloid fibril structure obtained from confocal imagery will provide better information about the structure of the fibrils.
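The reconstruction step can be illustrated with a minimal sketch: stack aligned z-slices into a (z, y, x) volume and take projections to inspect it. This assumes the slices are already registered grayscale arrays; real pipelines typically use tools such as ImageJ or napari, and the function names here are illustrative.

```python
import numpy as np

def stack_to_volume(slices):
    """Assemble confocal z-slices into a 3D volume.

    slices: sequence of 2D grayscale images (same shape), ordered by depth.
    Returns a (z, y, x) ndarray.
    """
    return np.stack(slices, axis=0)

def max_intensity_projection(volume, axis=0):
    """Project the volume along one axis, keeping the brightest voxel.

    A maximum-intensity projection along z gives a quick 2D overview
    of a fibril's 3D structure before full volume rendering.
    """
    return volume.max(axis=axis)
```

From the (z, y, x) volume, a surface or volume renderer can then produce the reconstructed fibril views.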

Confocal Z-slice image of fibril
3D reconstructed fibril

Deep convolutional neural network for unbiased classification of amyloid fibrils from salt crystals

Image source: “New clues illuminate Alzheimer’s roots”- http://news.rice.edu/2013/07/19/new-clues-illuminate-alzheimers-roots-2/

I am using machine learning in Python with libraries such as TensorFlow and scikit-learn. Recently, I re-trained Google's Inception model to learn to classify microscopic images as amyloid fibrils or as what one might think are fibrils but are in fact salt crystals from an NaCl buffer solution. The system was designed to provide an unbiased classification (removing expert bias). The Inception v3 model was re-trained on thousands of images of amyloid fibrils and salt crystals taken from peer-reviewed journals.
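The retraining described above is standard transfer learning: freeze a pre-trained Inception v3 feature extractor and train a new two-class head on the fibril and salt-crystal images. A hedged Keras sketch of that setup, with an illustrative class ordering and weights=None to keep the snippet self-contained (in practice weights="imagenet" would be loaded):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Inception v3 without its ImageNet classification head; weights=None
# avoids a download here, but real retraining starts from "imagenet".
base = tf.keras.applications.InceptionV3(
    include_top=False, weights=None, input_shape=(299, 299, 3))
base.trainable = False  # freeze the pre-trained feature extractor

# New classification head for the two classes.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),  # [fibril, salt_crystal]
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training then amounts to calling model.fit on batches of labeled 299x299 images; only the small head learns, so a few thousand images suffice.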


Results of running sample cases


IoT devices and prototyping boards are changing the way we do research

Image source: http://techfactslive.com/


Internet of Things (IoT) devices have made data acquisition, remote real-time monitoring, and remote control commonplace. New and productive prototyping boards such as the Arduino, Raspberry Pi, and Particle Photon, all IoT-capable, have revolutionized the way modern research can be performed. The ease with which they can be customized to a project's specifications makes it simple for anyone with basic programming and electronics experience to develop cutting-edge solutions within a few hours.

The recent introduction of remote labs around the globe, where people can monitor research work remotely and control devices, has enabled researchers to control and monitor experiments in hazardous locations or while traveling. Such remote labs also enable collaboration: investigators around the globe can access data in real time and even take part in controlling experiments.

The simplicity and infrastructure of the current era also support the use of IoT devices almost anywhere in the world, at low cost. Most climate control systems, home appliances, and the like have become IoT devices. With integrated real-time data acquisition and analytics, we can draw conclusions that improve a variety of areas, from research methods to energy consumption, not just in labs but also in our homes.
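As a toy illustration of the acquisition loop such boards run, here is a hedged sketch that polls a simulated sensor and packages timestamped readings for publishing. The sensor function is a stand-in, since the real driver depends on the board and sensor used.

```python
import json
import random
import time

def read_temperature_c():
    """Stand-in for a real sensor driver (e.g. a 1-wire temperature
    probe on a Raspberry Pi); here we simulate a plausible reading."""
    return round(20.0 + random.uniform(-0.5, 0.5), 2)

def acquire(n_samples, interval_s=0.0):
    """Collect timestamped readings, ready to publish to a dashboard
    or append to a log for later analysis."""
    samples = []
    for _ in range(n_samples):
        samples.append({
            "timestamp": time.time(),
            "temperature_c": read_temperature_c(),
        })
        time.sleep(interval_s)
    return samples

# A JSON payload like this is what a board would POST to a server
# or publish over MQTT for remote, real-time monitoring.
payload = json.dumps(acquire(3))
```

The same loop, pointed at a real sensor and a real endpoint, is essentially all a remote-monitoring node needs.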




Ease, customizability, and security of using Linux OS

Most of us are familiar with Windows and the Mac OSes, and a growing number of us with Linux-based distros.

The sheer ease of use, customizability, and security have so far led mostly geeks to use Linux-based distros.

When Windows 10 came out, most of us believed it offered higher security than past releases. However, even Windows 10 had its flaws and was never free of viruses or spyware, despite its enhanced security features. Apple's OS X, based on UNIX, is secure compared to Windows, but it comes with expensive hardware and is closed source. Linux distros, however, have all the advantages of UNIX-based OSes and are open source, with a large number of contributors around the world, including software engineers at major tech firms.

There are various distros with specific applications, like Scientific Linux, developed by CERN for physics and other scientific research.

Linux is also so highly customizable that we can use a stripped-down version tailored to our own purposes. Most of today's closed-source applications also have Linux variants, like Skype, which makes Linux suitable for day-to-day use.


Moi linux desktop from Fayaz Rasheed on Vimeo.

Do we actually use all those publications in journals in real life?

Image source: spectrumcenter.umich.edu

It is quite an intriguing question I have been asking myself: despite numerous publications in high-quality journals on new research in science, technology, management, and psychology, we rarely use them, whether at the universities that submit the papers or among people in their daily lives.

A simple example from management: how a school manages, or mismanages, the student experience, or how a university team administers a conference. Do they use research in human psychology to avoid anxiety, stress, and harassment, and to train students well enough to contribute to society? Where they do not, such academic institutions become mouthpieces for research that they make no use of themselves.

A classic example of good management by an educational institution was one of last year's APS conferences, where over thirty thousand people were well managed and taken care of, without any source of anxiety.

I have also had the exact opposite experience at another conference.

In such cases, the belief that the productive use of research will benefit society is called into question.