Multilinear integral inequalities, such as Hölder's inequality and Young's convolution inequality, play a central role in analysis. In [6] and [5], Bennett, Carbery, Christ, and Tao provide a classification of such inequalities of the form
\[
\int_{\mathbb{R}^d} \prod_{j=1}^{n} f_j(L_j(x))\,dx \le C \prod_{j=1}^{n} \|f_j\|_{p_j}
\]
for a constant $C > 0$, exponents $p_j \in [1,\infty]$, and surjective linear maps $L_j : \mathbb{R}^d \to \mathbb{R}^{d_j}$, with $f_j : \mathbb{R}^{d_j} \to \mathbb{R}$.
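For orientation, an illustrative special case (not drawn verbatim from [6] or [5]): taking each $d_j = d$ and each $L_j$ to be the identity map on $\mathbb{R}^d$, the inequality holds with a finite constant exactly when $\sum_{j=1}^{n} 1/p_j = 1$, in which case $C = 1$ suffices and one recovers Hölder's inequality
\[
\int_{\mathbb{R}^d} \prod_{j=1}^{n} f_j(x)\,dx \le \prod_{j=1}^{n} \|f_j\|_{p_j}.
\]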
In Chapter 2, we discuss a generalization of the above related to work by Ivanisvili and Volberg [23]. Specifically, we provide a classification of functions $B : \mathbb{R}^n \to [0,\infty)$ such that
\[
\int_{\mathbb{R}^d} B\big(f_1(L_1(x)), \ldots, f_n(L_n(x))\big)\,dx \le C\, B\!\left(\int f_1, \ldots, \int f_n\right).
\]
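To see how this generalizes the multilinear setting, here is an illustrative special case (not taken from [23], and assuming the right-hand side applies $B$ to the integrals $\int f_j$ as displayed above): taking $B(y_1, \ldots, y_n) = \prod_{j=1}^{n} y_j^{1/p_j}$ for $y_j \ge 0$ and substituting $f_j = g_j^{p_j}$ with $g_j \ge 0$, the left-hand side becomes $\int_{\mathbb{R}^d} \prod_{j} g_j(L_j(x))\,dx$ and the right-hand side becomes $C \prod_{j} \|g_j\|_{p_j}$, which is the Bennett-Carbery-Christ-Tao form above.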
In some cases, it will be shown that maximizers of the above inequality exist. Tuples of Gaussians are not always maximizers, in contrast with the usual multilinear theory. Chapter 3 focuses its attention on the trilinear form for twisted convolution:
\[
\int_{\mathbb{R}^d \times \mathbb{R}^d} f(x)\,g(y)\,h(x+y)\,e^{i\sigma(x,y)}\,dx\,dy.
\]
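Here $\sigma(x,y)$ is the phase producing the oscillation; in the standard twisted convolution setting it is an antisymmetric (symplectic) bilinear form, though the precise normalization is not fixed in this overview. For orientation, in the zero-oscillation case $\sigma \equiv 0$ the form reduces to the ordinary convolution form
\[
\int_{\mathbb{R}^d \times \mathbb{R}^d} f(x)\,g(y)\,h(x+y)\,dx\,dy = \int_{\mathbb{R}^d} (f * g)(z)\,h(z)\,dz.
\]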
While the existence of maximizers can shed light on the structure of an operator, sometimes it is useful to establish more refined information. For twisted convolution, we show a quantitative version of the statement that if a triple of functions nearly maximizes the form, then it must be close to a maximizing triple. Such a statement may be referred to as a sharpened inequality. Here, the proof of a sharpened inequality is complicated by the fact that no maximizers exist for twisted convolution; however, one may vary the amount of oscillation and compare to the case in which there is zero oscillation.

In Chapter 4, we establish a sharpened version of the following inequality due to
Baernstein and Taylor [3]:
\[
\int_{S^d \times S^d} f(x)\,g(y)\,h(x\cdot y)\,d\sigma(x)\,d\sigma(y) \le \int_{S^d \times S^d} f^*(x)\,g^*(y)\,h(x\cdot y)\,d\sigma(x)\,d\sigma(y)
\]
where $f$, $g$, $h$ are restricted to the class of indicator functions and $h$ is monotonic on $[-1, 1]$. In the above, $f^*$ refers to the symmetric decreasing rearrangement of $f$, and likewise for $g$ and $g^*$.
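Concretely, since the functions here are indicators, the rearrangement takes a simple form (a standard fact, recorded for orientation): if $f = \mathbf{1}_E$ for a measurable set $E \subseteq S^d$, then $f^* = \mathbf{1}_{E^*}$, where $E^*$ is the geodesic cap centered at a fixed pole with $\sigma(E^*) = \sigma(E)$, and similarly for $g^*$.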