Droplet impact on a liquid film is of critical importance in several industrial applications, such as inkjet printing and thermal sprays. Single-liquid systems (the same liquid for the droplet and the film) exhibit two outcomes for the impact of a droplet on a liquid film, namely bouncing and merging. The transition between the bouncing and merging regimes has been reported to be a function of the impact Weber number and the film thickness. Very often, in practical applications such as multi-layer 3D printing, the droplet and liquid film are composed of different liquids. Thus, a good understanding of the droplet impact dynamics in two-liquid systems (i.e., different liquids for the droplet and the film) is required to control these processes. However, very few studies in the literature have focused on two-liquid systems. In this thesis, we experimentally investigate the dynamics of droplet impact in a two-liquid system with contrasting liquid property ratios. Experimental observations from the two-liquid systems show a significant shift in the transitional boundaries, where droplet impact outcomes change from bouncing to merging, relative to the single-liquid system. In addition to the two types of merging of the droplet into the liquid film reported for single-liquid systems, early merging and late merging, we also observe a new type of merging in two-liquid systems. The experimental findings are also reproduced by theoretical analysis.
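The impact Weber number mentioned above compares inertial to capillary forces and is conventionally defined as $We = \rho V^2 D / \sigma$. As a minimal sketch of how it is evaluated (the property values below are illustrative, not taken from the thesis):

```python
def weber_number(rho, velocity, diameter, sigma):
    """Impact Weber number We = rho * V^2 * D / sigma.

    rho      -- liquid density [kg/m^3]
    velocity -- impact velocity [m/s]
    diameter -- droplet diameter [m]
    sigma    -- surface tension [N/m]
    """
    return rho * velocity**2 * diameter / sigma

# Example: a 2 mm water droplet impacting at 0.5 m/s
# (water: rho ~ 1000 kg/m^3, sigma ~ 0.072 N/m)
We = weber_number(rho=1000.0, velocity=0.5, diameter=2e-3, sigma=0.072)
print(f"We = {We:.2f}")  # prints "We = 6.94"
```

Because the Weber number depends on the droplet's own density and surface tension, a two-liquid system in effect introduces a second set of these properties for the film, which is one way to see why the transitional boundaries shift relative to the single-liquid case.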


In this paper we consider the problem of computing an $\epsilon$-optimal policy of a discounted Markov Decision Process (DMDP), provided we can only access its transition function through a generative sampling model that, given any state-action pair, samples from the transition function in $O(1)$ time. Given such a DMDP with states $S$, actions $A$, discount factor $\gamma\in(0,1)$, and rewards in the range $[0, 1]$, we provide an algorithm which computes an $\epsilon$-optimal policy with probability $1 - \delta$, where \emph{both} the time spent and the number of samples taken are upper bounded by \[ O\left[\frac{|S||A|}{(1-\gamma)^3 \epsilon^2} \log \left(\frac{|S||A|}{(1-\gamma)\delta \epsilon} \right) \log\left(\frac{1}{(1-\gamma)\epsilon}\right)\right] ~. \] For fixed values of $\epsilon$, this improves upon the previous best known bounds by a factor of $(1 - \gamma)^{-1}$ and matches the sample complexity lower bounds proved in Azar et al. (2013) up to logarithmic factors. We also extend our method to computing $\epsilon$-optimal policies for finite-horizon MDPs with a generative model, and provide a nearly matching sample complexity lower bound.
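To get a feel for how this bound scales, the expression inside the $O(\cdot)$ can be evaluated directly (a sketch only: the hidden constant is dropped, so the numbers indicate scaling, not an actual sample count):

```python
import math

def sample_bound(S, A, gamma, eps, delta):
    """Evaluate |S||A|/((1-gamma)^3 eps^2)
       * log(|S||A| / ((1-gamma) delta eps))
       * log(1 / ((1-gamma) eps)),
    i.e. the expression inside the O(.) with the constant dropped."""
    lead = S * A / ((1 - gamma) ** 3 * eps ** 2)
    log1 = math.log(S * A / ((1 - gamma) * delta * eps))
    log2 = math.log(1 / ((1 - gamma) * eps))
    return lead * log1 * log2

# Illustrative parameters: 100 states, 10 actions,
# gamma = 0.9 (effective horizon ~ 10), eps = 0.1, delta = 0.05.
print(f"{sample_bound(100, 10, 0.9, 0.1, 0.05):.3e}")
```

The cubic dependence on the effective horizon $(1-\gamma)^{-1}$ dominates: moving $\gamma$ from $0.9$ to $0.99$ multiplies the leading term by $10^3$ while the logarithmic factors grow only mildly.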