Asymptotics of Permutations with Nearly Periodic Patterns of Rises and Falls
Edward A. Bender, Department of Mathematics, University of California, San Diego, La Jolla, CA 92093-0112, ebender@ucsd.edu
William J. Helton∗, Department of Mathematics, University of California, San Diego, La Jolla, CA 92093-0112, helton@ucsd.edu
L. Bruce Richmond, Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Ontario, CANADA N2L 3G1
the form C r^{−n} n!, and show how to compute the various constants. A reformulation in terms of iid random variables leads to an eigenvalue problem for a Fredholm integral equation. Tools from functional analysis establish the necessary properties.
∗Partially supported by the NSF and the Ford Motor Company.
1 Introduction
Definition 1 (words). A word is a sequence of symbols. If v and w are words, then vw is the concatenation and w^k is the concatenation of k copies of w. The length |w| of w is the number of symbols in the sequence.
The descent word of a sequence σ_1, …, σ_n of numbers is α = a_1 ⋯ a_{n−1} ∈ {d, u}^{n−1}, where a_i = d if σ_i > σ_{i+1} and a_i = u otherwise.
If a permutation has descent word α, then its run word is a sequence L of positive integers where L_i is the length of the ith run in α. The size ‖L‖ of a run word L is the sum of its parts plus 1. Thus its size is one more than the length of the corresponding descent word; in other words, it is the size of the set being permuted.
Let Run(N) be the number of permutations that begin with an ascent and have run word N.
For example, the descent and run words of the permutation 3, 2, 7, 5, 1, 4, 6 are dudduu and 1122, respectively, and ‖1122‖ = 7. Note that each run word corresponds to two descent words: just interchange the roles of d and u. Thus the total number of permutations with run word N is 2 Run(N).
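To make these definitions concrete, here is a small brute-force sketch (our own illustration; the helper names descent_word, run_word and Run are ours, not from the paper) that computes descent and run words and counts Run(N) for small run words by enumerating permutations:

```python
from itertools import permutations

def descent_word(seq):
    # "d" where the sequence falls, "u" where it rises
    return "".join("d" if a > b else "u" for a, b in zip(seq, seq[1:]))

def run_word(seq):
    # lengths of the maximal constant blocks of the descent word
    w = descent_word(seq)
    runs, i = [], 0
    while i < len(w):
        j = i
        while j < len(w) and w[j] == w[i]:
            j += 1
        runs.append(j - i)
        i = j
    return tuple(runs)

def Run(N):
    # permutations of size sum(N) + 1 that begin with an ascent and have run word N
    size = sum(N) + 1
    return sum(1 for p in permutations(range(size))
               if p[0] < p[1] and run_word(p) == N)

print(descent_word((3, 2, 7, 5, 1, 4, 6)))   # dudduu
print(run_word((3, 2, 7, 5, 1, 4, 6)))       # (1, 1, 2, 2), of size 7
print(Run((1, 1, 1)))                        # 5: up-down permutations of 4 elements
print(Run((1, 1, 2, 2)))                     # permutations counted by Run(1122)
```

Enumerating all ‖N‖! permutations is of course feasible only for tiny examples; the rest of the paper concerns asymptotics.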
We prove the following generalization of Ehrenborg’s Conjecture 7.1 [3].
Theorem 1. Let L_0, …, L_k be (possibly empty) run words and let M_1, …, M_k be nonempty run words. There are nonzero constants B_0, …, B_k such that
Theorem 2 [6]. For a run pattern L there are constants C(L) and λ(L) such that the fraction of permutations with run pattern L^n is asymptotic to C(L) λ(L)^n.
Since ‖L^n‖ − 1 = n(‖L‖ − 1), the theorem can be rewritten to say that the fraction of permutations with run pattern L^n is asymptotic to C^∗ (λ^∗)^{‖L^n‖}, where λ^∗ = λ^{1/(‖L‖−1)} and C^∗ = C/λ^∗.
When L = 1, Run(L^n) counts alternating permutations of size n + 1 and so we obtain the Euler numbers:¹ Run(1^n) = E_{n+1} ∼ 2(2/π)^{n+2} (n + 1)!. Thus
Definition 2 (ends of descent words). The lengths of the longest initial and final constant strings in a descent word α are denoted by A(α) and Z(α), respectively. These are the initial and final integers in the run word corresponding to α.
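In the same computational spirit (again our own helper names, not the paper’s), A and Z are easy to read off from the descent word:

```python
def A(alpha):
    # length of the longest initial constant block of the descent word alpha
    if not alpha:
        return 0
    i = 1
    while i < len(alpha) and alpha[i] == alpha[0]:
        i += 1
    return i

def Z(alpha):
    # length of the longest final constant block
    return A(alpha[::-1])

print(A("dudduu"), Z("dudduu"))   # 1 and 2: the first and last entries of the run word 1122
```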
We now define the probability distributions and a measure of deviation from independence that play a central role in our approach.
Definition 3 (some probability). If α ∈ {d, u}^{n−1}, then f(x, y, α) is the probability density function for the event that the sequence X_1, …, X_n of iid random variables with the uniform distribution on [0, 1] has X_1 = x, X_n = y and descent word α. Also, f(x, y | α) is the conditional density function. We replace x and/or y with ∗ to indicate marginal distributions. For example, f(x, ∗, α) = ∫_0^1 f(x, y, α) dy.
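Since the X_i are almost surely distinct, their descent word depends only on their relative order, so f(∗, ∗, α) is simply the fraction of permutations of n = |α| + 1 elements whose descent word is α. A Monte Carlo sketch (our own illustration; f_star_star is a hypothetical helper name) makes the probabilistic reformulation concrete:

```python
import random
from math import factorial
from itertools import permutations

def descent_word(seq):                      # as in the earlier sketch
    return "".join("d" if a > b else "u" for a, b in zip(seq, seq[1:]))

def f_star_star(alpha, trials=200_000, seed=1):
    # Monte Carlo estimate of f(*, *, alpha): the probability that X_1, ..., X_n
    # (n = |alpha| + 1, iid uniform on [0, 1]) realize descent word alpha
    rng = random.Random(seed)
    n = len(alpha) + 1
    hits = sum(descent_word([rng.random() for _ in range(n)]) == alpha
               for _ in range(trials))
    return hits / trials

alpha = "dudduu"
n = len(alpha) + 1
exact = sum(descent_word(p) == alpha for p in permutations(range(n))) / factorial(n)
print(f_star_star(alpha), exact)   # the two agree up to sampling error
```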
Let α_1, α_2, … be a sequence of descent words with |α_n| → ∞. We call the sequence asymptotically independent if either
¹ This works for both odd and even n since 1^{2k} corresponds to (ud)^k and 1^{2k+1} corresponds to (ud)^k u.
(a) lim_{n→∞} A(α_n) = ∞,
(1 +|α|)! f(∗, ∗, α).
Due to the lemma, we may study permutations via the probability distributions. Stability and asymptotic independence imply a result needed to prove Theorem 1:
Theorem 3. Fix k > 0. Suppose that, for each 1 ≤ i ≤ k, the sequence α_{i,1}, α_{i,2}, … is stable and asymptotically independent. Suppose that β_i are possibly empty descent words for 0 ≤ i ≤ k. Let δ_n = β_0 α_{1,n} β_1 ⋯ α_{k,n} β_k. Let a(β) and z(β) be the first and last letters in β, respectively. If β_i is not empty, assume both
• that Z(α_{i,n} a(β_i)) is bounded for all n when 0 < i ≤ k and
• that A(z(β_i) α_{i+1,n}) is bounded for all n when 0 ≤ i < k.
If β_i is empty and 0 < i < k, assume either
• that Z(α_{i,n}) and A(α_{i+1,n}) are bounded for all n or
• that z(α_{i,n}) ≠ a(α_{i+1,n}) for all n.
We now provide the tools for calculating the constants in Theorems 1 and 2.
Definition 4 (reversal of descent words). For any descent word α, define α^R to be α read in reverse order and ᾱ to be α with the roles of d and u reversed.
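A quick check of the effect of these two operations (our own sketch, reusing a one-line descent_word helper): reversing a permutation replaces its descent word α by the reversal of ᾱ, and complementing its values (σ_i → n + 1 − σ_i) replaces α by ᾱ, so all four variants of a descent word are realized by equally many permutations:

```python
from itertools import permutations

def descent_word(seq):                      # as in the earlier sketch
    return "".join("d" if a > b else "u" for a, b in zip(seq, seq[1:]))

def reverse(alpha):                         # alpha^R
    return alpha[::-1]

def complement(alpha):                      # alpha with d and u interchanged
    return alpha.translate(str.maketrans("du", "ud"))

def count(alpha):
    # number of permutations of |alpha| + 1 elements with descent word alpha
    n = len(alpha) + 1
    return sum(descent_word(p) == alpha for p in permutations(range(n)))

alpha = "duddu"
print([count(w) for w in (alpha, reverse(alpha), complement(alpha),
                          complement(reverse(alpha)))])   # all four counts are equal
```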
Lemma 2. Let α and β be arbitrary descent words. We have
Theorem 4. Let µ = m_1 ⋯ m_{|µ|} be a descent word containing both d and u. The sequence µ, µ², µ³, … is asymptotically independent and stable.
Let ω = e^{2πi/|µ|}. Define the |µ| × |µ| matrix M for 0 ≤ k, ℓ < |µ| by

M_{k,ℓ} = ω^{kℓ} exp(r ω^ℓ)   if m_{k+1} = u.
Let r be the smallest magnitude number for which the matrix M is not invertible. Let U(µ) be the number of u’s in µ. Then, uniformly for (x, y) ∈ [0, 1]²,
In particular, f(∗, ∗, µ^n) ∼ C(µ) λ(µ)^n.
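As a sanity check of this conclusion for the simplest periodic pattern (our own numerical check, not from the paper): for µ = ud, f(∗, ∗, (ud)^n) is the probability that 2n + 1 iid uniforms alternate rise/fall starting with a rise, which is E_{2n+1}/(2n + 1)!. By the Euler-number asymptotics quoted above, successive values should shrink by a factor approaching λ(ud) = (2/π)² ≈ 0.4053:

```python
from math import factorial, pi

def zigzag(m):
    # Euler (zigzag) numbers E_0, ..., E_m via the boustrophedon / Entringer recurrence
    E, row = [1], [1]
    for n in range(1, m + 1):
        new = [0]
        for k in range(n):
            new.append(new[-1] + row[n - 1 - k])
        row = new
        E.append(row[-1])
    return E

E = zigzag(13)
f = [E[2 * n + 1] / factorial(2 * n + 1) for n in range(1, 7)]   # f(*, *, (ud)^n)
print([f[i + 1] / f[i] for i in range(len(f) - 1)])              # ratios tend to lambda(ud)
print(4 / pi ** 2)                                               # 0.405284...
```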
and Vainshtein [6], including the same formulas for calculating C, λ and φ. Their method of proof differs from ours. If our Conjecture 1 were proved, then our Theorem 4 would follow from Theorem 2 [6].
“second largest eigenvalue” λ_2 = 1/|r_2|^{|µ|}, which is discussed in later sections. This can be used to obtain information about the rate of convergence because of (6.1). See also Section 8.
Using the lemma, one can compute f(x, y, α) for any particular descent word α. We use (2.4) to convert results for d into results for u and results for the left end of α into results for the right, generally without comment. To study the asymptotics of something like f(∗, ∗, α^k β µ^ℓ) as k, ℓ → ∞, one combines the lemma and theorem:
The matrix equation M D = 0 in Theorem 4 is written as |µ| separate equations in (8.11). With ω = e^{2πi/ℓ}, these are
∑_{t=0}^{ℓ−1} ω^{kt} D_t = 0   for 1 ≤ k ≤ ℓ − 1.
It is easily seen that these equations have the one-parameter solution given by D_0 = D_1 = ⋯ = D_{ℓ−1}. The condition for k = 0 is
the transcendental equation ∑_{t=0}^{ℓ−1} exp(r ω^t) = 0 for r. This can be simplified by using the Taylor series for e^z to expand exp(r ω^t) and then collecting terms according to powers of r:

∑_{m=0}^{∞} r^{ℓm}/(ℓm)! = 0,   (3.2)

since the sum of ω^{tn} over t vanishes when n is not a multiple of ℓ. This is the result of Leeming and MacLeod [5] mentioned after Theorem 2. In their notation, r = p_ℓ, the smallest magnitude zero of (3.2). By (2.8), we can rewrite (3.2) as
With E_t = ω^{−t} exp(r ω^t) D_t, these become ∑_{t=0}^{ℓ−1} ω^{jt} E_t = 0 for 1 ≤ j ≤ ℓ − 1 and so, as before, E_0 = E_1 = ⋯ = E_{ℓ−1}. For k = ℓ − 1 we have
The following table contains some values of λ(ud^{ℓ−1}) and C(ud^{ℓ−1}) as well as the denominator of (3.4) and λ^{1/ℓ}. The denominator is needed in computing φ and λ^{1/ℓ} is used
To further illustrate the calculation procedure, we compute asymptotics in Theorem 1 when the permutation alternates up/down, except for some internal cases of uu. To do this, we take all a_i to be even except possibly a_k, M_i = 1 for all i, L_i = 1 for 1 ≤ i ≤ k − 1, and L_0 and L_k empty. We need to compute B_i.
Ehrenborg’s Theorem 4.1 [3] gives the value. When a is even and b is odd, what he calls β(1^a, 2, 1^b) is the number of permutations with pattern (ud)^{a/2} u (ud)^{(b+1)/2}. He compares this with E_n, the number of alternating permutations of the same length n = ‖1^a 2 1^b‖. On the other hand, B_i compares it with E_{n−1}. Since the fraction of n-long permutations that alternate is asymptotic to C(1)λ(1)^n, we obtain an extra factor of λ(1) = 2/π:
B_i ∼ λ(1) β(1^a, 2, 1^b)
Thus B_i = 4/π². Ehrenborg also discusses computing β(1^a, L, 1^b).
To illustrate the use of our formulas, we now compute B_i without using Ehrenborg’s result. Note that Run(M_i^n M_{i+1}^n) is just counting alternating permutations. To evaluate Run(M_i^n L_i M_{i+1}^n), we apply (2.5) twice to compute f(x, y, (ud)^m u (ud)^m) and integrate this over x and y. Since
In computing B_i in Theorem 1, the formulas we are using are probabilities and so we will be estimating Run(P)/‖P‖! for patterns P. Remembering that ∫_0^1 φ(x, α) dx = 1,
Lemma 3. Let α and β be descent words.
(a) If |α| > 1, then f(x, y, α) is a monotonic uniformly continuous function of x and y on the unit square. In fact, it is increasing in x if and only if α begins with d and is increasing in y if and only if α ends with u.
U_1(Z(α)) ≥ f(∗, y | α) ≥ L_1(y, Z(α))
for functions U_1 and L_1, where L_1(x, k) is strictly positive for 0 < x < 1.
Proof. It is easily seen that (2.3) is monotonic. It follows by induction that f(x, y, α) is continuous if |α| > 1. Suppose α = uβ where β is not the empty word. By (2.5)
By (a), both f(x, t, αd) and f(t, y, uβ) are monotonic decreasing functions of t. By the integral form of Chebyshev’s integral inequality [7] (for two similarly monotone functions on [0, 1], the integral of their product is at least the product of their integrals),
This completes the proof of (b).
We now prove (c). Let B(m) be an upper bound for f(x, y, u^m). Suppose α = a^m β b^n. Let U(k) be the maximum of the left side over m, n ≤ k and let L(x, y, k) be the minimum of the right side over m, n ≤ k and a, b ∈ {d, u}. The last statement for (c) is proved in
with

∫_0^1 f(t_{i−1}, s_{i−1}, β_{i−1}) f(s_{i−1}, ∗ | α_{i,n}) f(∗, t_i | α_{i,n}) ds_{i−1}.   (4.3)
Since the f(t_i, s_i, β_i) are either uniformly continuous by Lemma 3(a) or a step function as in (2.3), we can rearrange limits and integrals to obtain (2.2), except for showing that the C_i exist and are nonzero. Note that this gives
for 0 < i < k and similar results for i = 0 and k. These C_i are easily seen to be equivalent to those in the theorem.
We distinguish cases according to whether or not A(α_{i,n}) and/or Z(α_{i,n}) are bounded. First suppose both A(α_{i,n}) and Z(α_{i,n}) are bounded. In this case, the definition of asymptotic independence gives us
f(s_{i−1}, t_i | α_{i,n}) = f(s_{i−1}, ∗ | α_{i,n}) f(∗, t_i | α_{i,n}) + o(1)
uniformly over the range of integration. Thus we can replace (4.2) with (4.3) plus ∫ f(t_{i−1}, s_{i−1}, β_{i−1}) o(1). The effect of this latter term is to add a term of products of C_j’s with C_{i−1} C_i replaced by o(1). Since the C_i will be shown to be nonzero, the asymptotics are unchanged.
Now suppose A(α_{i,n}) → ∞ and a(α_{i,n}) = u; the cases of Z and d are handled by (2.4). For simplicity, we drop the i subscripts. Write α_n = u^m γ where m → ∞ and a(γ) = d. By assumption, z(β) = d. Note that f(s, t, β), f(t, x, u^m) and f(x, y, γ) are decreasing functions of t and x. Also, for each fixed x > 0, f(t, x | u^m) approaches a delta function as m → ∞ and so, for 0 < s, x < 1,
The case of unbounded runs at the end of α_{i,n} can be handled as in the derivation of (4.5). Otherwise, stability guarantees that f(∗, s | α_{i,n}) and f(t, ∗ | α_{i+1,n}) approach a limit and Lemma 3(c) guarantees that the limits are bounded. Since f(s, t, β_i) is well behaved, C_i exists. Furthermore, it is positive because of the lower bound in Lemma 3(c).
Suppose µ is a descent word containing both d and u. Without loss of generality, we suppose that µ begins with d. Define K(x, y) = f(x, y, µ) and
left and right eigenvectors u and v. This is the discrete form of Theorem 4. We prove analogous results for the function K(x, y) using functional analysis.
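A numerical illustration of the kernel’s spectrum (our own sketch; the identification of λ(µ) with the largest eigenvalue of the integral operator with kernel K is our reading of this discussion and should be treated as an assumption of the example): for µ = du the descent word forces X_2 < min(x, y), so K(x, y) = f(x, y, du) = min(x, y). Discretizing the operator on a uniform grid gives a top eigenvalue close to 4/π², the geometric rate at which the fraction of alternating permutations of odd length decays by the Euler-number asymptotics quoted earlier.

```python
import numpy as np

N = 400
x = (np.arange(N) + 0.5) / N              # midpoint grid on [0, 1]
K = np.minimum.outer(x, x)                # K(x, y) = f(x, y, du) = min(x, y)
T = K / N                                 # discretized integral operator
top = np.linalg.eigvalsh(T)[-1]           # largest eigenvalue (T is symmetric)
print(top, 4 / np.pi ** 2)                # both approximately 0.40528
```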
We begin with some relevant properties of the kernel
containing u. Then we have the following.
(a) K(x, y) is uniformly continuous on the unit square [0, 1]² and strictly positive on (0, 1] × (0, 1).
There is a continuous strictly increasing function ẽ(x) on [0, 1] with ẽ(0) = 0 such that

(b) for every positive Borel measure ν on [0, 1] with ν((0, 1)) > 0, there is a number τ_ν > 0 such that

τ_ν ẽ(x) ≤ ∫_0^1 K(x, y) dν(y);
(c) there is a constant M_K such that, for every Borel measure ν_y on [0, 1],
(d) the function that equals ẽ(x)^{−1} K(x, y) for x > 0 and lim_{x→0⁺} ẽ(x)^{−1} K(x, y) for x = 0 is continuous on [0, 1]² and is strictly positive on [0, 1] × (0, 1).
Proof. Lemma 3 implies (a).
We now prove (b). Without loss of generality, µ = d^k uβ for some k > 0 and so, by Lemma 3(b), K(x, y) ≥ f(x, ∗, d^k) f(∗, y, uβ). Set

Note that f(x, ∗, δ) > 0 on (0, 1) for any δ, so ẽ(x) is also positive there. Also note that f(∗, y, δ) is strictly positive and continuous on (0, 1), so τ_ν > 0.
We now prove (c). Since f(x, y, µ) is nonnegative and (uniformly) continuous in the unit square, there is a constant M_K such that f(x, y, µ) ≤ M_K f(x, ∗, µ). Combine this with the fact that
We now prove (d). Since ẽ(x) is continuous and strictly positive on (0, 1] and K(x, y) is continuous on [0, 1]², the claim holds on (0, 1] × [0, 1]. Since K(x, y) is monotonic in y, so is ẽ(x)^{−1} K(x, y). It suffices to study the limit of this ratio as x → 0. We claim that

f(x, y, d^k) = (x − y)^{k−1}/(k − 1)!   for x > y, and 0 otherwise.   (5.4)
To see this, consider the sequence of independent, identically distributed random variables X_1, …, X_{k+1} conditioned on X_1 = x > y = X_{k+1}. The probability that X_2, …, X_k all lie in [y, x] is (x − y)^{k−1} and the probability that they are in decreasing order is 1/(k − 1)!, since there are (k − 1)! ways to arrange them. Since these two events are independent, (5.4) follows.
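As a consistency check of (5.4) (our own remark), integrating the claimed density over the triangle x > y recovers the probability that k + 1 iid uniforms are strictly decreasing:

∫_0^1 ∫_0^x (x − y)^{k−1}/(k − 1)! dy dx = ∫_0^1 x^k/k! dx = 1/(k + 1)!,

which is exactly f(∗, ∗, d^k) = Pr(X_1 > ⋯ > X_{k+1}).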
6 Operators Which Preserve Cones
Before considering integral operators like K(x, y), we develop some general properties of linear operators that are needed for our proof. We follow the terminology in [2] and try to keep the exposition reasonably self-contained.
Banach space. A cone P is a closed convex set with
A real Banach space can be complexified to a (unique) complex Banach space B_c and an operator T on B extends uniquely to B_c. (See [2], Chapter 9.8.)
Here is the Krein-Rutman Theorem as stated in Theorem 19.3 of [2]. It plays a central role in our analysis of K(x, y).
maps P, except for 0, into its interior, denoted P^0, then the maximum magnitude eigenvalue λ_1 of T extended to the complexification B_c is real and positive. The eigenvector φ corresponding to λ_1 is unique (up to a scalar multiple) and lies in P^0. Any other eigenvector of T does not lie in P.
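A finite-dimensional analogue of this situation (our own illustration, not from the paper) is the Perron-Frobenius setting: a matrix with strictly positive entries maps the cone of nonnegative vectors, minus the origin, into its interior, and power iteration exposes the real positive dominant eigenvalue together with its strictly positive eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.uniform(0.1, 1.0, size=(5, 5))        # strictly positive entries
b = np.ones(5)
for _ in range(200):                          # power iteration inside the cone P = {b >= 0}
    b = T @ b
    b /= np.linalg.norm(b)
lam1 = b @ T @ b                              # Rayleigh quotient approximates lambda_1
print(lam1, max(abs(np.linalg.eigvals(T))))   # lambda_1 equals the spectral radius
print((b > 0).all())                          # the eigenvector is strictly positive
```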
As is often the case with Krein-Rutman applications, we shall find that our map T maps a cone P into itself, but not into its interior. We now describe a standard patch which allows one to still use the theorem.
partial order of Definition 5. Pick e ∈ P, set

B_e = ⋃_{t>0} t[−e, e]
and define a norm on B_e by

‖b‖_e = inf{t > 0 : b ∈ t[−e, e]}   for b ∈ B_e.

Define a cone P_e by

P_e = B_e ∩ P = {b ∈ P : te − b ∈ P for some t > 0}.
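A finite-dimensional toy example of this construction (ours, not the paper’s): take B = R^n with the componentwise order and P the nonnegative orthant. For e with strictly positive entries, [−e, e] is the box {b : −e ≤ b ≤ e}, and the norm works out to ‖b‖_e = max_i |b_i|/e_i:

```python
import numpy as np

def norm_e(b, e):
    # ||b||_e = inf{t > 0 : -t e <= b <= t e} = max_i |b_i| / e_i
    return np.max(np.abs(b) / e)

e = np.array([1.0, 2.0, 4.0])
b = np.array([0.5, -3.0, 2.0])
t = norm_e(b, e)
print(t)                                                     # 1.5, driven by |b_1| / e_1
print(((t * e - b) >= 0).all(), ((t * e + b) >= 0).all())    # b lies in t [-e, e]
```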
Beware: B_e is not complete in this norm. Note that [−e, e] is the unit ball in B_e. The key facts about ‖·‖_e are as follows.
(a) The norm ‖·‖_e is semi-monotonic on B_e with respect to the cone P_e.
(b) If ‖·‖ is semi-monotonic on B with respect to the cone P, then (B_e, ‖·‖_e) is complete and hence a real Banach space. Also there is a number γ such that γ‖b‖_e ≥ ‖b‖ for all b in B_e.
(c) If T : B → B is an operator such that
(i) T maps the cone P into P,
(ii) ‖·‖ is semi-monotonic on B with respect to the cone P,
(iii) for each b in B there is a number τ_b such that −τ_b e ≤ T(b) ≤ τ_b e,
(iv) for each b ∈ P there is a number M_b > 0 such that e ≤ M_b T(b),
then T maps P_e into its interior.
If in addition T is a compact operator on B_e, then Theorem 5 applies on B_e to give λ_1 ∈ R_+ and φ ∈ P_e.
Proof. Parts (a) and (b) are Proposition 19.9 of [2].
We prove (c). We claim that the interior of P_e is {b ∈ B_e : b ≥ te for some t > 0}. To prove this, first note that b ∈ B_e is in the interior of P_e if and only if

b + t[−e, e] ⊂ P_e for some t > 0,

which is true if and only if

b ± te ∈ P_e for some t > 0,

which is true if and only if

t′e ≥ b ± te ≥ 0 for some t, t′ > 0.

This gives four inequalities that must hold. All follow automatically from b ∈ B_e except the inequality b ≥ te. This proves the claim. By (iii), T : B → B_e, and so, by the claim and (iv), we are done.
We now turn our attention to powers of operators.