In many industrial and biological processes, it is common to encounter a situation in which a fluid flows over another thin fluid layer, usually called a lubricant layer. Examples occur in living systems, including the flow of red blood cells (RBCs) through narrow capillaries and the flow of liquids in the lungs and eyes, as well as in engineering applications such as the heating of fluids, thin-film coating, electrical seals, and paints. For stretching flow with no-slip, one must solve nonlinear differential equations subject to linear boundary conditions. For boundary layer flow over a lubricated layer, by contrast, the nonlinear differential equations are subject to nonlinear boundary conditions as well. These nonlinearities make the system considerably more complicated and render analytic solutions very hard to obtain. In addition, the governing equations for non-Newtonian fluids are of higher order than the number of available boundary conditions and, in stretching and stagnation-point flows, the coefficient of the leading derivative vanishes at the initial point of the domain. As a consequence, a numerical solution cannot be obtained with a generic integration scheme, and researchers have used various methods to deal with these difficulties. This motivates us to study the flow and heat transfer over a lubricated layer of a nanofluid.
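To make the contrast concrete, a minimal illustrative case is the classical viscous stretching-sheet similarity problem; the exponent k and the parameter \lambda in the lubricated case below are generic placeholders whose exact form depends on the lubricant rheology and film thickness. The similarity equation is

f'''(\eta) + f(\eta)\,f''(\eta) - [f'(\eta)]^2 = 0,

with linear no-slip conditions

f(0) = 0, \quad f'(0) = 1, \quad f'(\eta) \to 0 \ \text{as} \ \eta \to \infty,

whereas a lubricated surface replaces the condition f'(0) = 1 by a nonlinear interfacial balance of the generic form

f''(0) = \lambda\,[f'(0)]^{k}.

The nonlinearity thus enters not only through the equation itself but also through the boundary condition at the wall, which is what rules out routine analytic and generic numerical treatments.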
The present article aims to analyze bullying behavior in the light of the Holy Quran and the sayings of the Holy Prophet. The Holy Quran and Hadiths were used for data collection, and the data collected from the original texts were verified and screened by the researchers.
Human action recognition (HAR) has emerged as a core research domain for video understanding and analysis, thus attracting many researchers. Although significant results have been achieved in simple scenarios, HAR is still a challenging task due to issues associated with view independence, occlusion, and inter-class variation observed in realistic scenarios. In previous research efforts, the classical Bag of Words (BoW) approach, along with its variations, has been widely used. In this dissertation, we propose a novel feature representation approach for action representation in complex and realistic scenarios, together with an approach to handle the inter- and intra-class variation challenge present in human action recognition. The primary focus of this research is to enhance the existing strengths of the BoW approach, such as view independence, scale invariance, and occlusion handling. The proposed Bag of Expressions (BoE) includes an independent pair of neighbors for building expressions; therefore it is tolerant to occlusion and capable of handling view independence to some extent in realistic scenarios. We apply a class-specific visual word extraction approach to establish relationships between the extracted visual words in both the space and time dimensions. To improve the classical BoW further, we propose a Dynamic Spatio-Temporal Bag of Expressions (D-STBoE) model for human action recognition without compromising the strengths of the classical bag of visual words approach. To handle inter-class variation, we use class-specific visual word representations for visual expression generation. The formation of visual expressions is based on the density of a spatio-temporal cube built around each visual word, since constructing neighborhoods with a fixed number of neighbors would include non-relevant information and make a visual expression less discriminative in scenarios with occlusion and changing viewpoints. Thus, the proposed approach makes our model more robust to the occlusion and changing-viewpoint challenges present in realistic scenarios. Comprehensive experiments on publicly available datasets show that the proposed approach outperforms existing state-of-the-art human action recognition approaches.
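As background, the classical BoW pipeline that BoE and D-STBoE build on can be sketched as follows. This is a generic illustration rather than the proposed model; the function names, the vocabulary size, and the use of k-means clustering are assumptions made only for the sketch.

import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(train_descriptors, n_words=200, seed=0):
    # train_descriptors: (N, d) array of local spatio-temporal descriptors
    # pooled over all training videos; the k-means centroids act as visual words.
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(train_descriptors)

def bow_histogram(video_descriptors, vocabulary):
    # Assign each local descriptor of one video to its nearest visual word
    # and return an L1-normalized word-count histogram as the video feature.
    words = vocabulary.predict(video_descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

A standard classifier (for example, an SVM) is then trained on these per-video histograms. In the proposed approach, this per-descriptor assignment is replaced by expressions built from neighboring visual words, with the neighborhood determined by the density of the spatio-temporal cube around each word rather than a fixed number of neighbors.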