Добавил:
Upload Опубликованный материал нарушает ваши авторские права? Сообщите нам.
Вуз: Предмет: Файл:
algorithmic dimension.pdf
Скачиваний:
10
Добавлен:
24.03.2015
Размер:
1.2 Mб
Скачать

13

Algorithmic Dimension

Not all classes of measure 0 are created equal, and the classical theory of dimension provides a method for classifying them. Likewise, some nonrandom sets are more random than others. In this chapter, we look at e ectivizations of Hausdor dimension and other notions of dimension, and explore their relationships with calibrating randomness.

13.1 Classical Hausdor dimension

The study of measure as a way of specifying the size of sets began with work of Borel and others in the late 19th century. In his famous thesis [237], Lebesgue introduced the measure that is now called Lebesgue measure. In 1914, Carath´eodory [54] gave a more general construction that included Lebesgue measure as a special case. For R, Carath´eodory’s definition yields

the s-dimensional measure μs(A) = inf{ i |Ii|s : A

i Ii}, where each

I

is an interval. In 1919, Hausdor

[177] used s-dimensional measure to

i

 

 

generalize the notion of dimension to non-integer values.

This idea of changing the way we measure open sets by an additional factor in the exponent is realized in Cantor space as follows. For 0 s 1,

the s-measure of a clopen set σ

is μs( σ ) = 2−s|σ|. Notice that the

1-measure is the same as the

uniform measure.

 

 

 

 

 

 

Definition 13.1.1. (i)

A set of strings D is an n-cover

of R 2ω if

R D and D 2 n.

 

 

 

 

 

Hs

{

 

 

 

 

 

}

 

(ii) Let

n(R) = inf

σ

D

.

 

 

μs( σ ) : D is an n-cover of R

 

R.G. Downey and D. Hirschfeldt, Algorithmic Randomness and Complexity, Theory and

592

Applications of Computability, DOI 10.1007/978-0-387-68441-3_13, © Springer Science+Business Media, LLC 2010

13.1. Classical Hausdor dimension

593

(iii)The s-dimensional outer Hausdor measure of R is Hs(R) = limn Hns (R).

This notion has been widely studied. The fundamental result is the following.

Theorem 13.1.2. For each R 2ω there is an s [0, 1] such that

(i)Ht(R) = 0 for all t > s and

(ii)Hu(R) = ∞ for all 0 u < s.

Proof. For all s, r [0, 1], and n N,

Hns+r(R) = inf σ

D

2−s|σ|2−r|σ| : D is an n-cover of R 2−rnHns (R).

 

 

 

 

 

So if Hs(R) = 0 then Hns+r(R) = 0, while if Hns+r(R) = then Hs(R) =

.

 

t > 1

t

 

To complete the proof, notice that if

then limn Hn(R)

limn σ 2n 2−t|σ| = limn 2n2−tn = 0.

 

 

 

Theorem 13.1.2 means that the following definition makes sense.

 

Definition 13.1.3.s

For R 2ω, the

Hausdor

dimension of R is

dimH(R) = inf{s : H (R) = 0}.

 

 

 

Hausdor dimension has a number of important basic properties.

 

Proposition 13.1.4.

(i) If μ(X) > 0 then dimH(X) = 1.

 

(ii)(monotonicity) If X Y then dimH(X) dimH (Y ).

(iii)(countable stability) If P is a countable collection of subsets of 2ω,

then dimH( X P X) = supX P dimH(X). In particular, dimH(X Y ) is max{dimH(X), dimH(Y )}.

Proof sketch. This proposition is well known, but its proof is worth sketching. We begin with a lemma.

Lemma 13.1.5. If X is measurable then H1(X) = μ(X).

 

 

Proof. By definition, μ(X) is the infimum of

σ U 2−σ over all covers U

of X.

For any cover U of X, we can replace any σ

 

U such that

σ

< n

 

 

 

| |

 

by all its extensions of length n to obtain an n-cover V of X such that

2−σ = 2−σ, so H1(X) = μ(X) for all n.

σ V σ U n

Part (i) of Proposition 13.1.4 follows immediately from the lemma. To see that part (ii) holds, take any s > dimH(Y ). Then since any n-cover of Y is an n-cover of X, we have Hns (X) Hns (Y ) for all n, so Hs(X) = 0. Part (iii) is proved by a similar manipulation of covers.

594 13. Algorithmic Dimension

13.2 Hausdor dimension via gales

The first person to e ectivize Hausdor dimension explicitly was Lutz [252, 253, 254]. To do so, he used a generalization of the notion of (super)martingale. This idea was, however, in a sense implicit in the work of Schnorr [348, 349], who used orders to calibrate the rates of success of martingales, as we have seen in Chapter 7, in much the same way that the s factor calibrates the growth rate of the measure of covers. Recall that, for a (super)martingale d and an order h, the h-success set of d is

Sh[d] = {A : lim supn d(A n) = ∞}. As we will see, the theory of Hausdor

h(n)

dimension can be developed in terms of orders on martingales, but Lutz [252, 254] originally used the following notions.

Definition 13.2.1 (Lutz [252, 254]). An s-gale is a function d : 2R 0 such that d(σ) = 2−s(d(σ0) + d(σ1)) for all σ.

An s-supergale is a function d : 2R 0 such that d(σ) 2−s(d(σ0)+ d(σ1)) for all σ.

We define the success set S[d] of an s-(super)gale in the same way as for martingales.

A 1-(super)gale is the same as a (super)martingale. For s < 1, we can think of an s-gale as capturing the idea of betting in an inflationary environment, in which not betting costs us money. In the case of martingales, if we are not prepared to favor one side or the other in our bet following σ, we just make d(σi) = d(σ) for i = 0, 1, and are assured of retaining our current capital. In the case of an s-gale with s < 1, we are no longer able to do so. If we want to have d(σ0) = d(σ1), then we will have d(σi) < d(σ) for i = 0, 1, and hence will necessarily lose money.

It is quite easy to go between s-gales and rates of success of martingales.

Lemma 13.2.2. For each (super)martingale d there is an s-(super)gale d such that S[d] = S2(1−s)n [d]. Conversely, for each s-(super)gale d there is a (super)martingale d such that S[d] = S2(1−s)n [d].

Proof. For a (super)martingale d, let d be defined by d(σ) = 2(s−1)|σ|d(σ).

 

 

 

check that d is an s-(super)gale. For each k, we have

d(A n)

 

 

 

 

 

 

It is easy to

 

(s

1)n

d(

 

 

(1

s)n (s

1)n

 

2(1−s)n

> k

iff d(A

 

n) = 2

 

n) > 2

 

2

 

 

2

 

 

 

 

A

 

 

 

 

 

k = k, so S[d] = S (1

s)n [d].

The converse is symmetric.

 

 

 

 

 

 

 

 

 

 

Note that, in the above proof, if s is rational then we can pass e ectively from d to d or vice versa, so, for instance, d is c.e. iff d is c.e.

Lutz’ key insight was that Hausdor dimension can be quite naturally characterized using either s-gales or growth rates of martingales.

Theorem 13.2.3 (Lutz [252, 254]). For a class X 2ω the following are equivalent.

13.2. Hausdor dimension via gales

595

(i)dimH(X) = r.

(ii)r = inf{s Q : X S[d] for some s-(super)gale d}.

(iii)r = inf{s Q : X S2(1−s)n [d] for some (super)martingale d}.

Proof. By Lemma 13.2.2, (ii) and (iii) are equivalent.

We first prove that dimH(X) inf{s Q : X S2(1−s)n [d] for some

(super)martingale d}. We first consider the

supermartingale formulation.

 

 

 

 

 

 

 

 

 

s

-null. That is,

Let s > dimH(X) and let {Uk}k ω witness that X is H

 

for all k we have X

 

U

 

 

and

 

2s|σ|

 

2−k. We may assume that

the U

 

 

k

 

 

σ Uk

 

σ

=

 

τ

 

U

 

 

: σ

 

τ

 

. For

k are prefix-free. For each

σ and k, let U

k

{

 

k

}

 

 

 

 

 

 

 

 

 

 

 

each k, let

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

dk(σ) = 2|σ|

2−s|τ |,

 

 

 

 

 

 

 

 

 

 

 

 

τ Ukσ

and let

d(σ) = dk(σ).

k

If σ Uk, then Ukσi = for i

σ / Uk, then Ukσ = Ukσ0 Ukσ1

dk(σ0) + dk(σ1) = 2|σ|+1

=0, 1, so dk(σ0) + dk(σ1) = 0 2dk(σ). If

,so

2−s|τ | + 2−s|τ |

 

 

 

 

 

 

 

 

 

τ Ukσ0

 

τ Ukσ1

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

= 2 · 2|σ|

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

τ U2−s|τ | = 2dk(σ).

 

Thus each dk is a supermartingale. Furthermore, dk(λ) =

 

2s|σ|

 

2−k, so d is a supermartingale.

 

 

 

 

 

 

 

σ Uk

 

 

Now suppose that A

 

k Uk . Then for each k there is an nk such that

A

 

n

k Uk. Let t > s.

Then

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

d(A nk)

 

dk(A nk)

=

2(1s)nk

= 2

(t s)n

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

k ,

 

 

 

 

 

 

 

2(1−t)nk

 

2(1−t)nk

2(1−t)nk

 

 

 

 

 

 

 

 

 

 

 

 

 

 

so

 

lim supn

d(A n)

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

=

∞, and

hence

A S2(1−t)n [d]. Since t

> s

>

 

2(1−t)n

dimH(X) are arbitrary, dimH (X) inf{s Q : X S2(1−s)n [d] for some supermartingale d}.

596 13. Algorithmic Dimension

To get the same result for martingales,1 we slightly change the definition of dk to

 

 

2|σ|

σ 2−s|τ |

if U σ

=

| |

 

 

 

k

 

 

 

 

τ Uk

 

 

dk(σ) =

2(1−s)m

if σ m

Uk for m < σ

 

 

 

 

 

 

 

 

 

 

 

otherwise.

 

 

0

 

 

We now prove that dimH(X) inf{s Q : X S2(1−s)n [d] for some supermartingale d}. Suppose that d is a supermartingale and X S2(1−s)n [d]. We may assume that d(λ) 1. Let

Vk = σ :

d(σ)

2k .

2(1−s)|σ|

Let Uk be the set of σ Vk such that τ prefix-free and cover X. Furthermore,

 

 

d(σ)

 

2−s|σ|

2−s|σ|

 

2−k

σ Uk

σ Uk

2(1−s)|σ|

 

 

 

/ Vk for τ σ. Then the Uk are

= 2−k 2−|σ|d(σ) 2−k,

σ Uk

the last inequality following by Kolmogorov’s Inequality (Theorem 6.3.3). Thus {Uk}k ω witnesses the fact that X is Hs-null.

Lutz [255] made the following remarks about the above characterization: “Informally speaking, the above theorem says that the dimension of a set is the most hostile environment (i.e., the most unfavorable payo schedule, i.e., the infimum s) in which a single betting strategy can achieve infinite winnings on every element of the set.”

13.3 E ective Hausdor dimension

Using the martingale characterization of Hausdor dimension, we can e ectivize that notion as follows.

Definition 13.3.1 (Lutz [252, 253, 254]). For a complexity class C of real-valued functions, the C (Hausdor ) dimension of R 2ω is

inf{s Q : R S2(1−s)n [d] for some martingale d C}.

For A 2ω, the C dimension of A is the C dimension of {A}.

Note that this infimum is equivalent to

inf{s Q : R S[d] for some s-gale d C}.

1In the e ective setting, we will discuss the issue of passing from supergales to gales when we prove Theorem 13.3.2.

13.3. E ective Hausdor dimension

597

We will be particularly interested in the case C = Σ01. The Σ01 Hausdor dimension of R is sometimes called the constructive (Hausdor ) dimension of R, but we will refer to is as the e ective Hausdor dimension, or simply the e ective dimension, of R, and denote it by dim(R).

In Lutz’ original paper [252], he defined C dimension using gales, but in the journal version [254], he used supergales. The issue here is one of multiplicative optimality versus universality. Let us consider the Σ01 case. By Theorem 6.3.5, there is a universal c.e. martingale, but while there is an optimal c.e. supermartingale (Theorem 6.3.7), there is no optimal c.e. martingale (Corollary 6.3.12). As noted by Hitchcock, Lutz, and others, it is not known whether there is always a c.e. r-gale that is universal for the class of all c.e. r-supergales. However, we have the following result, which shows that if r < t then there is a t-gale that is optimal for the class of all r-supergales. Thus gales su ce for defining e ective dimension, and Lutz’ original formulation of e ective dimension in terms of gales is equivalent to the one in terms of supergales.

Theorem 13.3.2 (Hitchcock [179]). Let 0 r < t be rationals. There is a c.e. t-gale d such that for all c.e. r-supergales d, we have S[d] S[d ].

Proof. Let d be a multiplicatively optimal c.e. r-supergale with d(λ) < 1 (constructed as in the proof of Theorem 6.3.7). It is enough to build d so

that S[d] S[d ].

 

 

 

[n]

= A∩2

n

. By Kolmogorov’s Inequality

Let A = : d(σ) > 1} and let A

 

 

 

(Theorem 6.3.3), A[n]

|

2rn for all n. Let d be defined by

|

 

 

 

 

 

n

 

 

 

d (σ) =

2−t(n−|σ|)|{τ A[n] : σ τ }|

if |σ| n

n

2(t−1)(|σ|−n)dn(σ (n

1))

if

σ

> n.

 

 

 

 

 

 

 

 

|

|

 

Then it is easy to check that each dn is a t-gale and dn(σ) = 1 for all

σ A[n].

Now let s (r, t) be rational and let

 

 

 

 

 

d = 2(s−r)ndn.

 

 

n

Then d (λ)

=

n 2(s−r)n2−tn|A[n]| n 2(s−t)n < ∞, so d is well-

defined, and

hence is a t-gale. Furthermore, it is c.e., since A is c.e.

 

 

Let X S[d]. Then X (n − 1) A for infinitely many A. For such n, d (X (n − 1)) 2(s−r)ndn(X (n − 1)) = 2(s−r)n.

Hence X S[d ].

Of course, this result translates into the language of martingales. Say

d(A n)

that a (super)martingale d s-succeeds on A if lim supn 2(1−s)n = . We can build a c.e. martingale that (s + ε)-succeeds on every set A for which there is a c.e. supermartingale that s-succeeds on A. However, we do not know

598 13. Algorithmic Dimension

whether for each such A there necessarily is a martingale that s-succeeds on A.

In any case, dim(R) is equal to inf{s Q : R S2(1−s)n [d]}, where d is an optimal c.e. supermartingale. It follows immediately that dim(R) = sup{dim(A) : A R} and, more generally, the e ective dimension of an arbitrary union of classes is the supremum of the e ective dimensions of the individual classes. (Notice that for classical Hausdor dimension, the corresponding fact is true only for countable unions.)

We can also e ectivize Definition 13.1.1 directly. For Σ01 dimension, we have the following.

Definition 13.3.3. (i)

A set of strings D is an e ective n-cover if it is

an n-cover and c.e.

 

μs( σ ) : D is an e ective n-cover of R}.

(ii) Let Hns (R) = inf{ σ D

 

 

 

s

(iii)The e ective s-dimensional outer Hausdor measure of R is H (R) =

s limn Hn(R).

(iv) The e ective Hausdor dimension of R is the unique s [0, 1] such

Ht(R) = 0 for all t > s and Hu(R) =

for all 0

 

u < s.

that

 

 

 

It is straightforward to e ectivize our earlier work and show that the e ective analog of Theorem 13.2.3 holds. That is, the above definition agrees with our earlier definition of e ective Hausdor dimension. A similar comment holds for other suitable complexity classes.

There is another fundamental characterization of e ective Hausdor dimension, in terms of initial segment complexity.

Theorem 13.3.4 (Mayordomo [262]). For A 2ω, we have

dim(A) = lim inf

K(A n)

 

= lim inf

C(A n)

.2

n

n

n

n

 

Proof. The second equation is immediate, since C and K agree up to a log factor. We prove the first equation.

First suppose that dim(A) < s. Let d be a universal c.e. supermartingale.

Then lim supn

d(A n)

= . Now, as noted in Section 6.3.2, d(A n) =

2(1−s)n

2n−KM(A n)±O(1), so lim supn 2sn−KM(A n) = . But by Theorem 3.15.4,

K(A n) KM (A n) + O(log n), so lim supn 2sn−K(A n)+O(log n) = .

Thus there are infinitely many n such that K(A n) < sn + O(log n), and

hence lim infn

K(A n)

s.

n

 

2Staiger [378] showed that Theorem 13.3.4 can be obtained from results of Levin in [425]. There were also a number of earlier results indicating the deep relationship between e ective Hausdor dimension and Kolmogorov complexity, such as those of Cai and Hartmanis [45], Ryabko [338, 339, 340], and Staiger [376, 377]. These results are discussed in Lutz [254] (Section 6, in particular) and Staiger [378].

13.3. E ective Hausdor dimension

599

Now suppose that lim infn

 

K(A n)

< r < s for rationals r and s. Let

 

 

 

 

 

 

 

n

 

 

 

 

 

 

 

D = : K(σ) < r|σ|}. Then S is c.e., and by Theorem 3.7.6, |D ∩ 2n|

O(2rn−K(n)). Let

 

 

 

 

 

 

 

 

 

 

 

 

d(σ) = 2(s−r)|σ| στ

 

D 2−r|τ | + ν

 

D

 

ν

 

σ

2(r−1)(|σ|−|ν|) .

Then d(λ)

 

 

 

 

 

 

 

τ D 2−r|τ | O( n 2rn−K(n)2−rn) = O( n 2−K(n)) < ∞,

and a

straightforward calculation shows that d is an s-gale. Since D is c.e.,

 

 

 

 

 

 

 

 

 

 

 

 

so is d. There are infinitely many n such that A n D. For any such n, we have d(A n) 2(s−r)n, so d succeeds on A, and hence dim(A) s.3

One consequence of this characterization is that if A has positive e ective Hausdor dimension, then A is complex, since if dim(A) > r > 0, then C(A n) > rn for almost all n.

The Kolmogorov complexity characterization of e ective dimension also allows us to easily produce sets with fractional e ective dimension. Let A

be 1-random. Then B = A

 

0ω

has e ective dimension

1

 

B

 

n) =

 

 

 

 

2

, since K(n

 

 

 

K(A #

n

 

 

 

 

K(B n)

 

 

 

K(A 2

)

 

 

1

 

$) ± O(1), and hence lim infn

 

= lim infn

 

 

 

 

=

2 .

2

n

n

 

 

 

Recall the following definition. Given an infinite and coinfinite set X, let x0 < x1 < · · · be the elements of X, let y0 < y1 < · · · be the elements of

X, and define C X D to be {xn : n C} {yn : n D}. For a computable

real r [0, 1], we can find a computable X such that limn |X∩[0,n)| = r.

n

Then it is easy to check that A X 0ω has e ective dimension r. Thus we have the following result.

Theorem 13.3.5 (Lutz [252, 254]). For every computable real s, there is a set of e ective Hausdor dimension s.

Clearly, every 1-random set has e ective dimension 1, but if we take r = 1 in the above construction, then we have an example of a set of e ective dimension 1 that is not 1-random.

Schnorr [349] considered exponential orders on martingales, and hence was in a sense implicitly studying dimension. We thank Sebastiaan Terwijn for the following comments on Schnorr’s work. There is no explicit reference in Schnorr’s book to Hausdor dimension. After introducing orders of growth of martingales, he places special emphasis on exponential orders. It is interesting to ask why he did this, and whether he might have been inspired by the theory of dimension, but when asked at a recent meeting, he said he was not so motivated, so it is unclear why he gave such emphasis to exponential orders. Chapter 17 of [349] is titled “Die Zufallsgesetze von exponentieller Ordnung” (“The statistical laws of exponential

3Using the characterization in Definition 13.3.3, we can give a slightly cleaner proof.

n

: K(σ) < r|σ|}. Then there are

infinitely many n such that D

 

is

Let Dn = 2

 

2

r σ

σ Dn

2

 

K(σ)

 

n

 

an e ective n-cover of A, and for all n, we have

σ Dn

− | |

 

< 1.

Thus dim(A) r.

 

 

 

 

 

 

 

600 13. Algorithmic Dimension

order”) and it starts as follows: “Nach unserer Vorstellung entsprechen den wichtigsten Zufallsgesetzen Nullmengen mit schnell wachsenden Ordnungsfunktionen. Die exponentiell wachsenden Ordnungsfunktionen sind hierbei von besonderer Bedeutung.”4

Satz 17.1 then says that for any measure 0 set A the following are equivalent.

1. There are a computable martingale d and an a > 1 such that A is

contained in {X : lim supn

d(X n)

> 0}.

an

2. There are a Schnorr test { Un }n ω and a b > 0 such that A is contained in {X : lim supn mU (X n) − bn > 0}, where mU is a “Niveaufunktion” from strings to numbers defined by mU (σ) = min{n : σ Un}.

This result is a test characterization saying something about the speed at which a set is covered, and resembles characterizations of e ective Hausdor dimension.

In this chapter we will focus on the Σ01 notion of e ective dimension, but as with notions of algorithmic randomness, there are many possible variations. For instance, we can vary arithmetic complexity of the gales used in the definition of dimension. In Section 13.15, we will examine the e ect of replacing Σ01 gales by computable ones, to yield a notion of computable dimension. We can also take the test set approach and study variations on that, for instance Schnorr dimension, which turns out to be equivalent to computable dimension, as we will see in Section 13.15. Some known relationships between notions of randomness and dimension are summarized in

the following diagram from [118]. Here

20 randomness is the relativization

 

 

0

 

 

 

0

of computable randomness to

 

(so not the same as 2-randomness), and

similarly for Schnorr

2 randomness and 2 dimension.

0

random

 

 

 

 

2

 

 

 

 

 

 

Schnorr

0

random

 

 

=

0

2

 

 

2 dimension 1

 

 

 

 

 

=

 

1-random

 

 

e ective dimension 1

 

 

 

 

 

 

=

computably random

 

 

 

 

 

 

 

 

 

 

= computable dimension 1

Schnorr random

 

 

No other implications hold than the ones indicated.

4“In our opinion, the important statistical laws correspond to null sets with fast growing orders. Here the exponentially growing orders are of special significance.”

13.4. Shift complex sets

601

13.4 Shift complex sets

Not every set of high e ective Hausdor dimension is a “watered down” version of a 1-random set.

Definition 13.4.1. Let 0 < d < 1. A set A is d-shift complex if for every m < n, we have K(A [m, n]) d(n − m) − O(1).

If A is d-shift complex then K(A n) dn − O(1), so the e ective Hausdor dimension of A is at least d. On the other hand, a 1-random set cannot be d-shift complex for any d > 0, since it must contain arbitrarily long sequences of 0’s. Thus Theorem 13.4.3 below, which states that d- shift complex sets exist for every d (0, 1), shows that sets of high e ective dimension can be quite di erent from 1-random sets. (Note that any 1-shift complex set would be 1-random, and hence no such sets exist.) We give a proof due to Miller [274], though the original proof in Durand, Levin, and Shen [138] is also not complicated.

We say that a set A avoids a string σ if there are no m < n such that σ = A [m, n]. Theorem 13.4.3 also follows from a stronger result due to Rumyantsev and Ushakov [337], who showed that if c < 1 and for each n, we have a set Fn 2n with |Fn| 2cn, then there is an A such that for all su ciently large n, the set A avoids every element of Fn.

Lemma 13.4.2 (Miller [274]). Let S 2and c ( 21 , 1) be such that

τ S

|

| 2c − 1.

τ . Since p < 1, we have λ / S. Let

 

c τ

 

Then there is an A that avoids every element of S.

Proof. Let p = τ S c| |ρ

 

 

 

w(σ) = {c|

| : τ S (|ρ| < |τ | σρ ends in τ )}.

We think of w(σ) as measuring the threat that an extension of σ ends in an element of S. Note in particular that if σ itself ends in some τ S then w(σ) c|λ| = 1, so it is enough to build A so that w(A n) < 1 for all n. We have w(λ) = 0. Now suppose that we have defined A n = σ so that w(σ) < 1. It is easy to check that c(w(σ0) + w(σ1)) = w(σ) + p < 2c, so there is an i < 2 such that w(σi) < 1, and we can let A(n) = i.

Theorem 13.4.3 (Durand, Levin, and Shen [138]). Let 0 < d < 1. There is a d-shift complex set.

Proof. Let b = log(1 −d) + 1, let c = 2−d, and let S = 2: K(τ ) d|τ | − b}. We have

 

 

 

 

 

 

 

 

 

 

c|τ | =

2−d|τ |

2−K(τ )−b 2−b

2−K(τ )

 

 

 

 

τ S

τ S

τ S

 

τ S

 

 

 

 

 

 

 

 

2−b =

1 − d

< 21−d

1 = 2c

1,

 

 

2

 

 

 

 

 

 

602 13. Algorithmic Dimension

where the last inequality is easy to show using the fact that d (0, 1). Thus, by the lemma, there is an A that avoids every element of S. This A is clearly d-shift complex.

13.5 Partial randomness

Results such as Theorem 13.3.4 show that e ective dimension can be thought of as a notion of “partial randomness”. Indeed, as we saw above, a set such as A 0 for a 1-random A, which one would intuitively think of as “ 21 -random”, does indeed have e ective dimension 12 . In this section, we look at a di erent, though closely related, approach to defining a notion of s-randomness for s [0, 1].

We begin with the measure-theoretic approach. We will see that there are at least two reasonable ways to define the concept of s-Martin-L¨of randomness. The first is a straightforward generalization of the original

definition.

 

 

 

 

Definition 13.5.1 (Tadaki [385]). Let s [0, 1].

 

 

(i)

A test for weak s-Martin-L¨of randomness5 is a sequence of uniformly

 

c.e. sets of strings {Vk}k ω such that

 

σ Vk

2−s|σ| 2−k for all k.

(ii)

A is weakly s-Martin-L¨of random if A

/

V

for all tests for weak

 

 

s-Martin-L¨of randomness {Vk}k ω.

 

k

k

 

We can also define A to be weakly s-random if K(A n) sn−O(1). The analog of Schnorr’s Theorem that 1-randomness is the same as Martin-L¨of randomness is straightforward.

Theorem 13.5.2 (Tadaki [385]). Let s [0, 1] be a computable real. A set A is weakly s-Martin-L¨of random iff A is weakly s-random.

Proof. This proof directly mimics the one of Theorem 6.2.3. We give it here as an example. For other similar results whose proofs are straightforward modifications of the s = 1 case, we simply mention that fact.

Suppose that A is weakly s-Martin-L¨of random. Let Vk = : K(σ)

s|σ| − k}. Then

σ Vk 2−s|σ| σ Vk

2−K(σ)−k 2−k, so the Vk form a

test for weak s-Martin-L¨of randomness, and hence A /

k

Vk

. Thus there

 

 

 

is a k such that K(A n) sn − k.

 

 

 

 

5Below, we will define the notion of tests for strong s-Martin-L¨of randomness. Since they will determine a stronger notion of s-randomness, these tests will be weaker than the ones we define here. The terminology adopted here is meant to avoid the confusion of talking about weak tests in connection with strong randomness and strong tests in connection with weak randomness, or the equally confusing possibility of having objects called “weak tests” that are actually stronger than ones called “strong tests”.

−s|σ|

13.5. Partial randomness

603

Conversely, suppose that A is not weakly s-Martin-L¨of random. Let {Vk}k ω be a test for weak s-Martin-L¨of randomness such that A k Vk . Then it is easy to check that {(s|σ| − k, σ) : σ Vk2 , k > 2} is a KC set, so for each c there is an n such that K(A n) < sn − c.

Of course, weak s-randomness is closely related to e ective dimension.

Proposition 13.5.3. For all A, we have dim(A) = sup{s : A is weakly s- random}.

Proof. If A is weakly s-random then lim infn

K(A n)

s, so dim(A) s.

n

Conversely, if A is not weakly s-random and r < s, then lim infn

K(A n)

 

n

s < r, so dim(A) < r. Since r < s is arbitrary, dim(A) s.

 

Whether A is dim(A)-random depends on A, of course. We have seen that there are sets of e ective dimension 1 that are not 1-random, and it is easy to construct similar examples for s < 1. On the other hand, the examples we gave in connection with Theorem 13.3.5 have e ective dimension s and are weakly s-random, so we have the following extension of that result.

Theorem 13.5.4 (Lutz [252, 254]). For every computable real s [0, 1], there is a set of e ective Hausdor dimension s that is weakly s-random.

The discussion above tells us that the concept of s-randomness does give us some information beyond what we can get from e ective dimension alone.

Using Theorem 13.5.2, we can emulate the proof that Ω is 1-random to show the following.

Theorem 13.5.5 (Tadaki [385]). Let s (0, 1] and let Ωs = 2s| .

U(σ)

Then Ωs is weakly s-random.

Similarly, we can construct a universal test for weak s-Martin-L¨of randomness and develop other tools of the theory of 1-randomness. For example, it is useful to have an analog of Solovay tests for s-randomness.

Definition 13.5.6. Let s [0, 1]. A Solovay s-test is a c.e. set S 2

such that σ S 2 < ∞. A set A is covered by this test if A σ for infinitely many σ S.

The following result can be proved in much the same way as the s = 1 case. Although one direction is slightly weaker than in the s = 1 case, this result is enough to establish that for all A, we have dim(A) = inf{s : A is covered by some Solovay s-test}.

604 13. Algorithmic Dimension

Theorem 13.5.7 (Reimann [324]). If A is weakly s-random then A is not covered by any Solovay s-test. If A is covered by some Solovay s-test then A is not weakly t-random for any t > s.

The following variant will be useful in the proof of Theorem 13.8.1 below.

Definition 13.5.8. An interval Solovay s-test is a c.e. set T of intervals

in [0, 1] with dyadic rational endpoints such that I T |I|s < ∞ (where |I| is the length of I).

It is easy to transform an interval Solovay s-test T into a Solovay s- test S as follows. For each [q, q + r] T , first let n be largest such that

[q, q + r] [q, q + 2−n]. Note that 2−n < 2r. Let σ0 and σ1 be such that 00 = q and 01 = q + 2−n. Let τi = (σi0ω) n. Put τ0 and τ1 into S. It

is easy to check that if A I for

infinitely many I

 

T then A

 

τ for

s τ

| 2

 

 

s

 

infinitely many τ

 

S, and

τ S

2− |

 

I T (2|I|) < ∞.

 

 

 

 

 

 

 

 

 

Thus, we

can use interval Solovay s-

tests in place of Solovay s-tests in bounding the

 

 

 

 

 

 

 

 

 

 

dimension of a real.

Turning now to the martingale approach to randomness, we run into some trouble. The following definitions are natural.

Definition 13.5.9. Let s [0, 1].

(i)A is martingale s-random if for all c.e. martingales d, we have A /

S2(1−s)n [d].

(ii)A is supermartingale s-random if for all c.e. supermartingales d, we have A / S2(1−s)n [d].

It is currently unknown whether these notions are the same, for the kind of reasons we mentioned when discussing Theorem 13.3.2.

We would like to show that at least one of these notions of martingale s-randomness coincides with weak s-randomness, but one direction of the equivalence does not work. Consider Schnorr’s proof that if a set is Martin- L¨of random then no c.e. (super)martingale can succeed on it. We are given a c.e. (super)martingale d, and when we see d(σ) > 2k, we put σ into a test set Uk. Now imagine we follow the same proof for the s < 1 case. Here our test sets Vk are sets of strings. We might see that ds(σ0) > 2k and put σ0 into Vk. Since d is only c.e., at some later stage t > s we might see that dt(σ) > 2k. We would like to have σ Vk , but of course putting σ into Vk is a waste, since σ0 is already there, and indeed the measure calculation in Schnorr’s proof is unlikely to work if Vk is not prefix-free. If we were to follow Schnorr’s proof, we would simply put σ1 into Vk . Unfortunately, 2(2−s(|σ|+1)) > 2−s|σ|, so the simple measure calculation that allowed Schnorr’s proof to succeed is no longer available.

This problem was overcome by the following definition of Calude, Staiger, and Terwijn [52].

Definition 13.5.10 (Calude, Staiger, and Terwijn [52]). Let s [0, 1].

13.5. Partial randomness

605

(i)A test for strong s-Martin-L¨of randomness is a sequence of uniformly

c.e. sets of strings {Vk}k ω such that for every k and every prefix-free Vk Vk , we have 2−s|σ| 2−k

σVk .

(ii) A is strongly s-Martin-L¨of random if A /

k

V

for all tests for

strong s-Martin-L¨of randomness {Vk}k ω.

k

 

Now the following result can be proved by a straightforward modification of Schnorr’s proof mentioned above. The point is that when we notice that d(σ) > 2k, we can put σ into Vk even if, say, σ0 is already there, because any prefix-free subset of Vk will contain at most one of σ and σ0.

Theorem 13.5.11 (Calude, Staiger, and Terwijn [52]). Let s (0, 1] be a computable real. A set A is strongly s-Martin-L¨of random iff it is supermartingale s-random.

Reimann and Stephan [330] showed that weak and strong s-randomness are indeed distinct. The proof is technical and basically involves careful KC calculations, and we omit it due to space considerations.

Theorem 13.5.12 (Reimann and Stephan [330]). For all computable s

(0, 1), there is a set that is weakly s-random but not strongly Martin-L¨of s-random.

On the other hand, we do have the following.

Theorem 13.5.13 (Reimann [324]). If t < s and then A is strongly Martin-L¨of t-random.

Proof. Suppose that A is not strongly Martin-L¨of test for strong t-Martin-L¨of randomness {Un}n ω For each n,

2−s|σ| =

 

 

2−sk

 

σ Un

k

σ Un

2k

 

 

 

= 2(t−s)k

2

 

 

 

A is weakly s-random,

t-random, so there is a

such that A n Un.

−tk 2(t−s)k2−n,

 

 

k

σ Un 2

k

 

k

 

 

 

 

 

 

where the last inequality holds because each Un

2k

is prefix-free. Now,

 

}

, so if we let m be such that

 

 

 

k 2(t−s)k <

 

k 2(t−s)k−m 1, then

{Un n m is a test for weak s-Martin L¨of randomness witnessing the fact that A is not weakly s-random.

As pointed out by Calude, Staiger, and Terwijn [52], Theorem 13.5.11 implies that strong s-randomness can be characterized in terms of KM since, as noted in Section 6.3.2, KM (σ) = log d(σ) + |σ| ± O(1) for an optimal c.e. supermartingale d. But we would still want to have a machine-based characterization of strong s-randomness. The following definition provides one.

606 13. Algorithmic Dimension

Definition 13.5.14 (Downey and Reid, see [323]). Let f : 22be a partial computable function computed by a Turing machine M , and let Q rng f . The M -pullback of Q is the unique subset Q of dom f satisfying the following conditions.6

(i)f (Q ) = Q.

(ii)If σ Q and f (τ ) = f (σ), then |τ | |σ|.

(iii)If σ Q and f (τ ) = f (σ) with |τ | = |σ| then σ is enumerated into dom M before τ is. (We assume that elements enter dom M one at a time.)

A machine

M is s-measurable if for all Q rng M , we

have

σ Q

2−s|σ|

1.

 

 

Theorem 13.5.15 (Downey, Reid, and Terwijn, see [323]). Let s

 

(0, 1]

 

 

 

 

 

 

 

 

 

be a computable real. A set A is strongly s-Martin-L¨of random iff KM (A n) n − O(1) for all s-measurable machines M .

We know of no characterization of strong s-randomness in terms of initial segment prefix-free or plain complexity.

Before we prove Theorem 13.5.15, we need a lemma, which takes the role of the KC Theorem in this setting.

Lemma 13.5.16 (Downey, Reid, and Terwijn, see [323]). Let s (0, 1]. Let {Uk}k ω be a test for strong s-Martin-L¨of randomness. There is an s-measurable machine M such that for each k and σ U2k+2, there is a τ of length |σ| − (k + 1) with M (τ ) = σ.

Proof. If σ U2k+2 then 2−|σ| 2−s|σ| 2(2k+2), so |σ| 2k + 2, whence |σ|−(k + 1) > 0. So for a single such σ, requiring that there be a τ of length |σ| − (k + 1) with M (τ ) = σ does not cause a problem.

We define M in the most obvious way: Whenever we see σ enter U2k+2, we find the leftmost τ of length |σ| − (k + 1) and let M (τ ) = σ. We must show that there is always such a τ available, that is, that for each n there are at most 2n many strings of length n needed for the definition of M .

As argued above, if σ U2k+2 then |σ| 2k + 2, so if k > n − 1 then |σ| − (k + 1) > n. Thus we need to worry about only those U2k+2 with k n − 1. Note that strings σ U2k+2 that require a domain element τ of length n must have length n + k + 1.

For a set S of strings, let #(S, n) denote the number of strings of length n in S. As the set of all strings in U2k+2 of length n + k + 1 is prefix-free,

{2−s(n+k+1) : σ U2k+2 |σ| = n + k + 1} 2(2k+2).

6Roughly speaking, this definition is an approximation to the notion of σ for Kolmogorov complexity introduced in Chapter 3.

13.6. A correspondence principle for e ective dimension

607

Thus

#(U2k+2, n + k + 1)2−s(n+k+1) 2(2k+2),

so

#(U2k+2, n + k + 1) 2s(n+k+1)(2k+2) 2(n+k+1)(2k+2) 2−n−k−1.

Therefore,

 

 

 

 

2n−1

#(U2k+2, n + k + 1) =

2n−k−1

2−k < 2n.

k<n

k<n

 

k<n

Finally, we need to show that M is an rng M be prefix-free. Then Q k U2k+2

s-measurable machine. Let Q , so let Qk = U2k+2 ∩ Q, which is

aprefix free subset of U2k+2. Then

2−s|σ|2s(k+1) = 2s(k+1) 2−s|σ| 2s(k+1)2(2k+2) 2−k−1.

σ Qk σ Qk

Since Qk is prefix-free and s 1, it follows that

 

 

2−s|σ|

 

 

2−s|σ|

2−k−1 = 1,

σ Q

k σ Qk

 

k

where Q is as in Definition 13.5.14.

Proof of Theorem 13.5.15. First suppose we have an s-measurable machine

M such that for eachkc there is an n with KM (A n) < n − c. Let Uk =

: KM (σ) < |σ| − s }. The Uk are uniformly c.e., and A

Uk

for all

k. Now we need a calculation to show that the U

k form a

test for strong

 

 

 

 

 

s-Martin-L¨of randomness. Let Q Uk be prefix-free. Then

 

 

σ Q

2−s|σ| 2−s(KM (σ)+ ks )

 

 

 

 

 

σ Q

 

 

 

 

 

 

 

 

 

 

 

 

 

2−k2−sKM (σ) 2−k

τ Q

2−s|τ | 2−k,

 

σ Q

 

 

 

 

 

 

 

 

 

 

 

where Q is as in Definition 13.5.14.

Now suppose that A is not strongly s-random. Then there is a test for

strong s-Martin-L¨of randomness with A

k Uk . Let M be as in Lemma

that K

13.5.16. Then for each c there is an n such

M (A n) < n − c.

13.6A correspondence principle for e ective dimension

Hausdor dimension and its e ective counterpart are of course quite different in general. However, there are situations in which they coincide. We have seen that many important subsets of Cantor space are Π01 classes.

608 13. Algorithmic Dimension

Hitchcock [180] showed that for such classes, and in fact even for arbitrary countable unions of Π01 classes, the classical Hausdor dimension of a class is equal to its e ective Hausdor dimension, and hence to the supremum of the Hausdor dimensions of its elements. Thus, for such classes, we can think of classical Hausdor dimension, which at a first look seems clearly to be a global property of a class, as a pointwise phenomenon.

Theorem 13.6.1 (Hitchcock [180]). Let P be a countable union of Π01 classes. Then dimH P = dim P .

Proof. It is enough to prove the result for the case in which P is a single Π01 class, since the Hausdor dimension of a countable union of classes is the supremum of the Hausdor dimension of the individual classes, and similarly for e ective dimension.

 

It is enough to show that dim P dimH P , so let s > dimH P . Then

for each n there is a set of strings Un

such that

2−s|σ| 2−n and

P

 

 

U

 

. Because P is closed and 2

ω

 

σ Un

 

 

 

n

 

is compact, P is compact, so the U

n

 

 

 

 

 

 

 

can

be taken to be finite.

 

 

 

 

 

 

 

 

 

 

 

 

 

Given n, we can search for a finite set of strings Vn and a stage s such

that

 

 

σ Vn 2−s|σ| 2−n and Ps Vn , where Ps is as usual the stage

s approximation to P . We have seen that such a V

 

 

 

 

 

 

 

 

n must exist, and we

can recognize when we have found one. Then the Vn form a test for weak s-Martin-L¨of randomness, and P n Vn , so no element of P is weakly s-Martin-L¨of random, and hence dim(A) < s for all A P .

Thus dim P < s. Since s > dimH P is arbitrary, dim P dimH P .

Hitchcock [180] also showed that if P is a Σ02 class (i.e., a union of uniformly Π01 classes), then the Hausdor dimension of P is equal to its computable Hausdor dimension (that is, the dimension defined using computable, rather than c.e., martingales, which is equivalent to the concept of Schnorr Hausdor dimension defined below).

Note that, as observed by Hitchcock [180], Theorem 13.6.1 cannot be extended even to a single Π02 class, since {Ω} is a Π02 class.

13.7Hausdor dimension and complexity extraction

There have been several results about the dimensions of classes of sets defined in computability-theoretic terms. One important theme is the following: given a set of positive e ective dimension, to what extent is it possible to extract a greater level of complexity/randomness from that set? As we will see, there are several ways to formalize this question. (Note that we have already seen in Theorem 8.16.6 that there are sets of reasonably high initial segment complexity that do not compute 1-random sets.)

13.7. Hausdor dimension and complexity extraction

609

We begin by pointing out a fundamental di erence between the degrees of 1-random sets and those of sets of high e ective Hausdor dimension.

Lemma 13.7.1 (Folklore). Let A m B. There is a C ≡m B such that A and C have the same e ective Hausdor dimension. The same result holds for any weaker reducibility, such as Turing reducibility.

Proof. Let f be a fast-growing computable function, such as Ackermann’s function (though a much slower-growing function than that will do for this argument). For each n, replace A(f (n)) by B(n). Call the resulting set C. Then C ≡m B, and A and C have the same e ective Hausdor dimension because K(A m) and K(C m) are very close for all m. The same procedure works for any weaker reducibility.

As we saw in Corollary 8.8.6, Kuˇcera [215] proved that the Turing degrees

of 1-random sets are not closed upwards. This fact is true even in the 0

2

degrees, for instance by Theorem 8.8.4. Hence, we have the following.

Corollary 13.7.2 (Folklore). There are

0

degrees of e ective Hausdor

 

2

 

dimension 1 containing no 1-random sets.

Proof. Choose a < b < 0 such that a is 1-random but b is not. By Lemma 13.7.1, b contains a set of e ective Hausdor dimension 1.

Since Ω tt , the method above also gives the following result.

Corollary 13.7.3 (Reimann [324]). The tt-degree of has e ective Hausdor dimension 1.

The only way we have seen to make a set of e ective Hausdor dimension 1 that is not 1-random is to take a 1-random set and “mess it up” by a process such as inserting 0’s at a sparse set of locations. It is natural to ask whether this is in some sense the only way we can produce such a set. One way to formalize this question is by asking whether it is always possible to extract randomness from a source of e ective Hausdor dimension 1.

Question 13.7.4 (Reimann). Let A have e ective Hausdor dimension 1. Is there necessarily a 1-random B T A?

We will give a negative answer to this question due to Greenberg and Miller [170] in Section 13.9, where we will present their construction of a set of e ective Hausdor dimension 1 and minimal Turing degree. We saw in Corollary 6.9.5 that no minimal degree is 1-random.

Similarly, one of the only methods we have seen for making a set of fractional e ective Hausdor dimension is to take a set of higher e ective Hausdor dimension and “thin it out” (the other method being the construction of shift complex sets in Section 13.4). Again we are led to ask whether, given a set A of fractional e ective Hausdor dimension, it is always possible to find a set B T A of higher e ective Hausdor dimension, or even of e ective Hausdor dimension 1, or even a 1-random B T A. By

610 13. Algorithmic Dimension

Lemma 13.7.1, these questions remain the same if we replace T by T, so we can think of them as questions about the relationship between the e ective Hausdor dimension of A and that of its degree. Thus we can ask the following question.

Question 13.7.5 (Reimann, Terwijn). Is there a Turing degree a of effective Hausdor dimension strictly between 0 and 1? If so, can a be

02?

Question 13.7.5 is known as the broken dimension question. In Section 13.8, we will give a positive answer to it due to Miller [273]. Before doing so, we mention some earlier attempts to solve this question that are interesting and insightful in their own right.

We have seen in Theorem 8.8.1 that 1-random sets can always compute DNC functions. Terwijn [387] proved that this is also the case for sets of nonzero e ective Hausdor dimension. We can easily extract this result from our results on autocomplex sets and DNC functions: As mentioned following Theorem 13.3.4, if A has positive e ective Hausdor dimension, then A is complex, so by Theorem 8.16.4, A computes a DNC function. However, Terwijn’s original proof is instructive and short, so we give it below.

Theorem 13.7.6 (Terwijn [387]). If dim(A) > 0 then A can compute a DNC function.

Proof. By Theorem 2.22.1, it is enough to show that A can compute a fixed-point free function.

Let s be such that dim(A) > s > 0 and 1s N. For each e, let e0 and e1 be indices chosen in some canonical way so that We = We0 We1 . For a string σ, let σ be the string of length |σ| such that σ(n) = 1 − σ(n). Let

Be,n =

 

 

 

n

: t (We

 

 

n = σ

 

We ,t

 

n = σ) .

 

 

 

 

{

σ

 

2 s

,t

 

 

 

 

 

 

 

 

 

0

 

s

 

1

s

 

}

 

 

 

 

 

 

 

 

 

 

n

 

 

 

 

 

n

 

 

then σ

 

B

 

,

If there is a σ such that We0

s = σ and We1 ,t

s

= σ,

 

e,n

 

 

 

 

but no other string is in Be,n. Otherwise, Be,n is empty. In either case,

Be,n

|

1, so

σ

 

Be,n 2−s|σ|

2−s ns

= 2−n, and hence the Be,n form a

|

 

 

 

 

 

 

test for weak s-Martin-L¨of randomness for each e. We can e ectively find indices g(e) for each such test. (That is, WΦg(e) (n) enumerates Be,n for each e and n.)

The usual construction of a universal test applied to tests for weak s- Martin L¨of randomness produces a universal test for weak s-Martin-L¨of randomness {Un}n ω such that for each index i, for the ith test for weak s-Martin-L¨of randomness {Vn}n ω, we have Vn+i+1 Un for all n. Since

dim(A) > s, there is an n such that A / Un , so A / Be,n+g(e)+1 for all e.

To construct the fixed-point free function h T A, given e, let h(e) be an index so that Wh(e)0 f (n + g(e) + 1) = A f (n + g(e) + 1) and Wh(e)1 f (n + g(e) + 1) = A f (n + g(e) + 1). If Wh(e) = We, then

13.8. A Turing degree of nonintegral Hausdor dimension

611

A f (n + g(e) + 1) Be,n+g(e)+1, which is not the case, so Wh(e) =We for all e.

Corollary 13.7.7 (Terwijn [387]). If A <T is c.e. then {B : B T A} has e ective Hausdor dimension 0.

Proof. If not, then A computes a set of positive e ective Hausdor dimension, and hence computes a fixed-point free function. But then A is Turing complete, by Arslanov’s Completeness Criterion, Theorem 2.22.3.

Note that the proofs above give weak truth table reductions, so we also obtain the following corollary for free.

Corollary 13.7.8. If A <wtt is c.e. then {B : B wtt A} has e ective Hausdor dimension 0.

One consequence of Corollary 13.7.7 is that if A has positive e ective Hausdor dimension, then, by Theorem 8.10.2, A computes an infinite subset of a 1-random set. Thus, if such an A cannot compute a 1-random set then something fairly subtle must be going on.

The results above also suggest an approach to the basic questions mentioned above. If we could show that being strong enough to compute a DNC function implies being strong enough to compute a 1-random, then we would be able to answer all those questions. However, we have already seen in Theorem 8.10.3 that this is not the case.

Reimann and Terwijn (see [324]) showed that Question 13.7.5 has a positive answer for m-reducibility, even for 02 sets, by showing that there is

a02 set A such that {B : B m A} has fractional e ective Hausdor di-

mension. (An example is the set A defined by starting with a 02 1-random set X and letting A(n) = X(n) · X(n + 1), where · denotes multiplication.) Before Miller’s solution to the broken dimension problem, the best partial result was the following one by Nies and Reimann [307]. We will not prove it here, as it is a consequence of Miller’s result presented in the next section. It is worth noting, though, that the method of proof in [307] is rather di erent than that used for Miller’s result.

Theorem 13.7.9 (Nies and Reimann [307]). For any computable real α with 0 < α < 1 there is a 02 set A such that the e ective Hausdor dimension of {B : B wtt A} is α.

13.8A Turing degree of nonintegral Hausdor dimension

In this section, we present Miller’s proof of the following result, whose significance was discussed in the previous section.

( 12 + 2

612 13. Algorithmic Dimension

Theorem 13.8.1 (Miller [273]). Let α (0, 1) be a left-c.e. real. There is

a02 set A of e ective Hausdor dimension α such that if B T A then

the e ective Hausdor dimension of B is at most α.

Proof. To simplify the presentation, we take α = 12 . It should be clear how to modify the proof to work for any rational α, and not too di cult to see how to handle the general case.

We will build A as the intersection of a sequence of nested Π01 classes of positive measure. We will begin with a class all of whose members have e ective dimension at least 12 , thus ensuring that dim(A) 12 . At each stage we will satisfy a requirement of the form ΨAe total k > n (KAe k) −n)k). To do so, we will want to wait until a large set of oracles that appear to be in our current Π01 class P compute the same string via Ψe, and then compress that string. A key to showing that we can do this will be to use the dimension of the measure of P . Thus we will work with Π01

classes such that dim(μ(P )) 12 .

We begin by introducing some concepts that will be needed in the proof.

Definition 13.8.2. Let S

 

2

. The direct weight

 

of S is DW(S) =

σ

 

 

σ S 2|2| . The weight of S is W (S) = inf{DW(V ) :

 

S

 

 

V }.

Note that W (S) 1 because

 

 

 

 

S {λ} and DW({λ}) = 1. The

weight of S is essentially the minimum cost of compressing some initial

 

 

segment of each set in S by a factor of 2 (although programs cannot have

fractional length, so the cost of compressing a string of length 5 by a factor of 2 is actually 22, not 22.5). Of course, there is no reason to think that this compression can be realized e ectively. In other words, even if S is c.e., there may not be a prefix-free machine M compressing some initial segment of each set in S by a factor of 2 so that the measure of the domain of M is close to W (S).

Suppose that S is finite and let V be such that S V . It is suboptimal for V to contain any string incomparable with every σ S or to contain a proper extension of some σ S. In other words, it is always possible to

find a V with

S

V

 

and DW(V ) DW(V ), so that every string in

V has an

extension in S. Therefore, there are only finitely many V that

 

 

 

 

 

 

 

need to be considered

in computing the infimum in the definition of W (S).

 

 

 

 

Hence this infimum is achieved, justifying the following definition when S is finite.

Definition 13.8.3. An optimal cover of S 2is a set Soc 2such that S Soc and DW(Soc) = W (S). For the sake of uniqueness, we also require Soc to have the minimum measure among all possible contenders.

It is not hard to see that optimal covers, if they exist, are unique and prefix-free. The analysis above shows that when S is finite, we can compute both the optimal cover of S and W (S).

13.8. A Turing degree of nonintegral Hausdor dimension

613

Now consider an infinite S. Let {St}t ω be an enumeration of S, i.e.,oca

sequence of finite sets S0

 

 

· · ·oc

 

St. If σ

 

oc

,

 

t

 

S1

such that S =

 

 

St

then the only way for σ to not be in St+1 is for some τ σ to be in St+1. This fact has some nice consequences. First, it implies a nesting property:

 

Soc

 

 

oc

 

for all t. Second, it

proves that the Soc have a pointwise limit

 

t

 

St+1

 

 

 

oc

t

V . It is not hard to see that V = S

 

, demonstrating that the definition

 

 

 

 

 

 

 

 

 

.

 

 

 

above is valid for any S 2

 

 

 

 

 

 

If S is c.e., the above construction applied to an e ective enumeration

shows that the optimal cover Soc

is

0. More importantly, the nesting

property implies that

 

 

is a Σ0

 

 

2

Soc

class. There will not generally be a c.e.

set V 2such that

Soc

= V

1 and DW(V ) = W (S), or even such that

the direct weight of V is

finite. However, we can find such a V for which

 

 

 

 

 

 

 

the direct weight of any prefix-free subset is bounded by W (S). Here and below, we write τ 2for {τ σ : σ 2}.

Lemma 13.8.4. For any c.e. set S 2, we can e ectively find a c.e. V 2such that V = Soc and if P V is prefix-free, then DW(P )

W (S).

 

 

 

 

 

 

 

 

 

 

 

 

ective enumeration of S. Let V

 

=

 

 

 

Soc.

Proof. Let {St}t ω be an e oc

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

P

t

 

 

t

 

 

Then V is c.e. and

V

 

 

=

S

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

V

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

. If there is an infinite prefix-free

 

 

 

such that DW(P )

 

> W (S), then there is a finite P

P with the same

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

property. So assume that P

V is finite and prefix-free. We will prove that

if τ

 

V , then DW(P

τ 2

) DW({τ }), by induction on the

distance

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

k from τ to its longest extension in P (the claim being trivial if P ∩ τ 2

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

is immediate. Now take τ

 

 

 

V

\

P . There is

is empty). The case k = 0oc

 

 

 

oc

 

 

 

oc

 

 

DW( τ

 

),

a unique t such that τ

 

 

S

 

\ St . Then DW(St

∩ τ 2

 

 

 

 

)

 

 

}

as otherwise τ

 

 

 

oc

 

 

 

t+1

 

 

oc

 

 

 

 

 

{

 

 

 

 

S

t . The nesting

property implies that

 

S

t

 

 

τ

covers

 

 

 

 

 

 

P

 

 

 

 

 

 

τ 2

 

 

 

 

 

 

 

 

 

 

 

 

 

stage

τ , since every element of P

 

 

must have entered Vocby

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

t. Hence, applying the inductive hypothesis to the elements of S

 

 

∩ τ 2

 

 

 

,

 

 

 

τ 2

 

 

 

 

 

 

 

oc

 

τ 2

). Of course,

 

 

S

oc

 

 

t

 

 

 

 

we have DW(P ∩

 

 

) DW(St

 

 

 

 

 

 

covers

 

P ,

so DW(P ) DW(Soc) = W (S).

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

We

will build

 

a

set A by

 

approximations. As usual,

 

the

 

type

of

approximation—or in terminology borrowed from set theory, the type of forcing condition—determines the nature of the requirements that we can satisfy during the construction. Our conditions will be pairs (σ, S) with σ 2and S σ2c.e. and such that σ / Soc. A condition describes a restriction on the set A that is being constructed. Specifically,

the class

of

all

sets consistent with a

condition (σ, S)

is the Π0

class

is

 

 

 

 

 

1

 

P(σ,S) =

σ

\

Soc . Our definition of condition guarantees that P(σ,S)

 

nonempty.

We say that a condition

(τ, T ) extends

(σ, S) and

write

614 13. Algorithmic Dimension

(τ, T ) (σ, S) if P(τ,T ) P(σ,S).7 Extending a condition corresponds, of course, to further restricting the possibilities for A.

We now establish some basic lemmas about these forcing conditions. The first shows that there is an S such that every element of P(λ,S) has e ective dimension at least 12 .

Lemma 13.8.5. Let S = : K(σ) 2| }. Then (λ, S) is a forcing condition.

Proof. All that needs to be shown is that λ / Soc. If λ Soc, then DW(S) W (S) = 1. But DW(S) = σ S 22| σ S 2−K(σ) Ω < 1.

The next two lemmas show that each Π01 class corresponding to a condition has positive measure, and that the e ective Hausdor dimension of its measure is at most 12 .

Lemma 13.8.6. Let S σ2. If σ \ Soc is nonempty, then it has positive measure.

Proof. Let n = |σ|. The fact that S σ implies that W (S) 2n2 . Because σ \ Soc is nonempty, we know that σ / Soc. Hence τ Soc implies that |τ | > n. By these observations,

 

 

 

|τ |+n

n

 

 

n

μ( Soc ) =

2−|τ | <

2

2

= 2

2

2

2

 

τ Soc

τ Soc

 

 

 

τ Soc

 

= 22 DW(Soc) = 22 W (S) 2−n.

Therefore, μ( σ \ Soc ) > 0.

Lemma 13.8.7. Let (σ, S) be a condition. Then dim(μ(P(σ,S))) 12 .

Proof. We prove that dim(μ( Soc )) 12 , which is su cient because

μ(P(σ,S)) = 2−|σ| − μ( Soc ). We may assume, without loss of generality, that Soc is infinite, since otherwise μ( Soc ) is rational. Let w = W (S). Let

V 2be the c.e. set guaranteed by Lemma 13.8.4. That is, V = Soc and DW(P ) w whenever P V is prefix-free. Note that V must be infinite. Let {Vt}t ω be an e ective enumeration of V such that V0 = .

Let s > 1/2. We produce an interval Solovay s-test T covering μ( V ). (See Definition 13.5.8.) It consists of two parts, T0 and T1.

1. If τ Vt+1 \ Vt, then put [μ( Vt+1 ), μ( Vt+1 ) + 2−|τ |] into T0.

7It might seem that the symbol goes in the wrong direction, but this usage is consistent with what is often done in set theory, and derives from the identification of a condition with the class of sets consistent with it.

13.8. A Turing degree of nonintegral Hausdor dimension

615

2.If μ( Vt 2>n ) k2−n and μ( Vt+1 2>n ) > k2−n for some n and k, then put [μ( Vt+1 ), μ( Vt+1 ) + 2−n] into T1.

Let T = T_0 ∪ T_1. Notice that T does not actually depend on s. Using the fact that V ∩ 2^n is prefix-free, we have

Σ_{I∈T_0} |I|^s = Σ_{τ∈V} 2^{-s|τ|} = Σ_n 2^{-sn} |V ∩ 2^n| = Σ_n 2^{(1/2 − s)n} 2^{-n/2} |V ∩ 2^n|
= Σ_n 2^{(1/2 − s)n} DW(V ∩ 2^n) ≤ Σ_n 2^{(1/2 − s)n} w = w / (1 − 2^{1/2 − s}) < ∞.

Now fix n and let k be the number of intervals of length 2^{-n} added to T_1. By construction, 2^{-n}k < μ(⟦V ∩ 2^{>n}⟧). Let P ⊆ V ∩ 2^{>n} be a prefix-free set such that ⟦P⟧ = ⟦V ∩ 2^{>n}⟧. Then by the same argument as in the previous lemma, μ(⟦P⟧) < 2^{-n/2} DW(P). Putting these facts together we have 2^{-n}k < 2^{-n/2} DW(P) ≤ 2^{-n/2} w, so k < 2^{n/2} w. Thus

Σ_{I∈T_1} |I|^s < Σ_n 2^{n/2} w (2^{-n})^s = Σ_n 2^{(1/2 − s)n} w < ∞,

which proves that T is an interval Solovay s-test.

 

 

 

Next we prove that T covers μ(⟦V⟧). Call τ ∈ V_{t+1} \ V_t timely if V_{t+1} ∩ 2^{≤|τ|} = V ∩ 2^{≤|τ|}, in other words, if only strings longer than τ enter V after τ.

Because V is infinite, there are infinitely many timely τ ∈ V. Fix one. Let t + 1 be the stage at which τ enters V and let n = |τ|. We claim that there is an interval of length 2^{-n} in T that contains μ(⟦V⟧). Note that if u > t, then μ(⟦V⟧) − μ(⟦V_u⟧) ≤ μ(⟦V ∩ 2^{>n}⟧) − μ(⟦V_u ∩ 2^{>n}⟧). In response to τ entering V, we put the interval [μ(⟦V_{t+1}⟧), μ(⟦V_{t+1}⟧) + 2^{-n}] into T_0 at stage t + 1. Let I = [μ(⟦V_u⟧), μ(⟦V_u⟧) + 2^{-n}] be the last interval of length 2^{-n} added to T, say at stage u. If μ(⟦V⟧) ∉ I, then μ(⟦V⟧) > μ(⟦V_u⟧) + 2^{-n}. Since u > t, we have μ(⟦V ∩ 2^{>n}⟧) > μ(⟦V_u ∩ 2^{>n}⟧) + 2^{-n}, so another interval of length 2^{-n} is added to T after stage u, which is a contradiction. Thus μ(⟦V⟧) ∈ I.

We have proved that for any n that is the length of a timely element of V, there is an interval of length 2^{-n} in T that contains μ(⟦V⟧). Since there are infinitely many timely strings, μ(⟦V⟧) is covered by T.

Since T is an interval Solovay s-test for every s > 1/2, it follows that dim(μ(⟦S^oc⟧)) = dim(μ(⟦V⟧)) ≤ 1/2.

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Miller [273] noted that, by applying the method of the main part of this proof (presented after the next lemma) to the identity functional, we can show that if (σ, S) extends the condition from Lemma 13.8.5, then dim(μ(P_{(σ,S)})) ≥ 1/2. Hence, Lemma 13.8.7 is tight.

Our final lemma gives a simple hypothesis on a collection of conditions that guarantees that they have a common extension.


Lemma 13.8.8. Let (σ_0, S_0), . . . , (σ_n, S_n) be conditions such that P_{(σ_0,S_0)} ∩ · · · ∩ P_{(σ_n,S_n)} has positive measure. Then there is a condition (τ, T) such that (τ, T) ≤ (σ_i, S_i) for each i ≤ n.

Proof. The σ_i are comparable by hypothesis, so let σ = σ_0 ∪ · · · ∪ σ_n. Let P = P_{(σ_0,S_0)} ∩ · · · ∩ P_{(σ_n,S_n)} = ⟦σ⟧ \ ⟦S_0^oc ∪ · · · ∪ S_n^oc⟧. In particular, P ⊆ ⟦σ⟧. Let b be such that μ(P) ≥ 2^{-b}. For each m ≥ b, let

D_m = {τ ≽ σ : |τ| = m and no prefix of τ is in S_i^oc for any i ≤ n}.

Now μ(P) ≤ |D_m| 2^{-m}, because if τ ∈ 2^m is not in D_m, then ⟦τ⟧ is disjoint from P. Hence |D_m| ≥ 2^{m−b}.

Let τ ∈ D_m and let T_τ = τ2^{<ω} ∩ (S_0 ∪ · · · ∪ S_n). If τ ∉ T_τ^oc, then (τ, T_τ) is the condition required by the lemma. On the other hand, τ ∈ T_τ^oc implies that DW(T_τ^oc) ≥ W(T_τ) = 2^{-m/2}. So assuming that the lemma fails, we have

n + 1 ≥ Σ_{i≤n} W(S_i) = Σ_{i≤n} DW(S_i^oc) ≥ DW(S_0^oc ∪ · · · ∪ S_n^oc) ≥ Σ_{τ∈D_m} DW(T_τ^oc) ≥ Σ_{τ∈D_m} 2^{-m/2} ≥ 2^{m−b} 2^{-m/2} = 2^{m/2 − b}.

For large enough m, we have a contradiction.

Note that ∅′ can find the common extension guaranteed by the lemma.
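The counting step in this proof, namely μ(P) ≤ |D_m| 2^{-m} and hence |D_m| ≥ 2^{m−b}, can be checked on finite data. Below is a small sketch of our own, with toy antichains standing in for the covers S_i^oc; it illustrates only the arithmetic of the counting argument, not the machinery of optimal covers.

from fractions import Fraction
from itertools import product

def d_m(sigma, covers, m):
    # D_m: length-m binary extensions of sigma having no prefix (the string
    # itself included) in any of the given covers.
    out = []
    for bits in product("01", repeat=m - len(sigma)):
        tau = sigma + "".join(bits)
        if not any(tau[:j] in cover for cover in covers for j in range(len(tau) + 1)):
            out.append(tau)
    return out

sigma = "0"
covers = [{"010"}, {"00", "0110"}]    # toy stand-ins for the covers S_i^oc
m = 4
D = d_m(sigma, covers, m)

# mu(P) for P = [[sigma]] minus the union of the covers; the toy covers form an
# antichain of extensions of sigma, so the removed measure is a plain sum.
removed = {t for cover in covers for t in cover}
mu_P = Fraction(1, 2 ** len(sigma)) - sum(Fraction(1, 2 ** len(t)) for t in removed)
assert mu_P <= len(D) * Fraction(1, 2 ** m)   # mu(P) <= |D_m| * 2^(-m)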

We are now ready to build A.

We build a sequence of conditions

(σ_0, S_0) ≥ (σ_1, S_1) ≥ (σ_2, S_2) ≥ · · ·

and take A = ∪_t σ_t, which will be total. Equivalently, A will be the unique element of ∩_t P_{(σ_t,S_t)}. The construction will be carried out with a ∅′ oracle, so that A ≤_T ∅′. We begin by letting (σ_0, S_0) be the condition from Lemma 13.8.5, which ensures that dim(A) ≥ 1/2.

For each e and n, we will meet the requirement

R_{e,n} : Ψ_e^A total ⇒ ∃k > n (K(Ψ_e^A ↾ k) ≤ (1/2 + 2^{-n})k).

These requirements guarantee that if B ≤_T A, then dim(B) ≤ 1/2, and in particular, that dim(A) = 1/2.
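To spell out why meeting every R_{e,n} gives dim(B) ≤ 1/2 for each total B = Ψ_e^A (a one-line computation that the text leaves implicit, using the characterization of effective dimension via initial-segment complexity): for every n there are arbitrarily large k with K(B ↾ k) ≤ (1/2 + 2^{-n})k, so

\[
\dim(B) \;=\; \liminf_{k\to\infty} \frac{K(B \upharpoonright k)}{k}
\;\le\; \inf_{n} \Bigl(\tfrac{1}{2} + 2^{-n}\Bigr) \;=\; \tfrac{1}{2}.
\]

Taking Ψ_e to be the identity functional gives dim(A) ≤ 1/2 in particular, which combined with dim(A) ≥ 1/2 yields dim(A) = 1/2.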

Suppose we have defined (σ_t, S_t). We now show how to define (σ_{t+1}, S_{t+1}) to satisfy R_{e,n}, where t = ⟨e, n⟩. Let b be such that 2^{-b} < μ(P_{(σ_t,S_t)}). Note that b exists by Lemma 13.8.6, and can be found using ∅′. We define a prefix-free machine M. The idea is that M will wait for a large set of oracles that appear to be in P_{(σ_t,S_t)} to compute the same sufficiently long initial segment via Ψ_e, and then will compress that initial segment.

For each ρ, define M(ρ) as follows. First, wait until U(ρ)↓. If that ever happens, then let σ = U(ρ) and m = |σ|. If σ is not an initial segment of the binary expansion of μ(P_{(σ_t,S_t)}), then let M(ρ)↑. Otherwise, proceed as follows. For each τ, let T_τ = {ν ≽ σ_t : τ ≼ Ψ_e^ν}. Search for a τ ∈ 2^{m−b} such that μ(P_{(σ_t,S_t∪T_τ)}) < 0.σ, where 0.σ denotes the dyadic rational whose binary expansion is σ. This condition is Σ^0_1, so ∅′ can find such a τ if


there is one. If there is such a τ, then let M(ρ) = τ. Note that the domain of M is a subset of the domain of U, and hence is prefix-free.

We can effectively find a c such that ∀τ (K(τ) ≤ K_M(τ) + c). Using ∅′, we can search for a σ that is an initial segment of the binary expansion of μ(P_{(σ_t,S_t)}) of length m > n + b, and such that K(σ) + c ≤ (1/2 + 2^{-n})(m − b). Such a σ must exist by Lemma 13.8.7. Let ρ be a minimal length U-program for σ. The construction now breaks into two cases, depending on whether or not M(ρ)↓ (which ∅′ can determine, of course).

 

 

 

 

Case 1. M(ρ)↓ = τ. In this case, we know that μ(P_{(σ_t,S_t∪T_τ)}) < 0.σ and μ(P_{(σ_t,S_t)}) ≥ 0.σ. Thus P_{(σ_t,S_t)} \ P_{(σ_t,S_t∪T_τ)} = ⟦(S_t ∪ T_τ)^oc⟧ \ ⟦S_t^oc⟧ is nonempty. So there is a σ_{t+1} ∈ T_τ such that ⟦σ_{t+1}⟧ ⊈ ⟦S_t^oc⟧, since otherwise S_t^oc would be the optimal cover of S_t ∪ T_τ. Note that ∅′ can find such a σ_{t+1}. By definition, σ_{t+1} ≽ σ_t. Because T_τ is closed under extensions, we may additionally require that σ_{t+1} properly extend σ_t. Let S_{t+1} = σ_{t+1}2^{<ω} ∩ S_t. Since no prefix of σ_{t+1} is in S_t^oc, we have S_{t+1}^oc = σ_{t+1}2^{<ω} ∩ S_t^oc, which implies that P_{(σ_{t+1},S_{t+1})} = ⟦σ_{t+1}⟧ ∩ P_{(σ_t,S_t)} ≠ ∅. Thus (σ_{t+1}, S_{t+1}) is a valid condition and P_{(σ_{t+1},S_{t+1})} ⊆ P_{(σ_t,S_t)}, so (σ_{t+1}, S_{t+1}) ≤ (σ_t, S_t).

To verify that R_{e,n} has been satisfied, take A ∈ P_{(σ_{t+1},S_{t+1})}. Since σ_{t+1} ≺ A and σ_{t+1} ∈ T_τ, we see that τ ≺ Ψ_e^A. Let k = |τ| = m − b, which is larger than n by our choice of σ. Then

K(Ψ_e^A ↾ k) = K(τ) ≤ K_M(τ) + c ≤ |ρ| + c = K(σ) + c ≤ (1/2 + 2^{-n})(m − b) = (1/2 + 2^{-n})k.

Case 2. M(ρ)↑. In this case, μ(P_{(σ_t,S_t∪T_τ)}) ≥ 0.σ for each τ ∈ 2^{m−b}. Thus, for each such τ, we have that (σ_t, S_t ∪ T_τ) is a valid condition extending (σ_t, S_t). Furthermore, since P_{(σ_t,S_t∪T_τ)} ⊆ P_{(σ_t,S_t)} and μ(P_{(σ_t,S_t)}) ≤ 0.σ + 2^{-m}, we have μ(P_{(σ_t,S_t)} \ P_{(σ_t,S_t∪T_τ)}) ≤ 2^{-m}. So

μ(∩_{τ∈2^{m−b}} P_{(σ_t,S_t∪T_τ)}) = μ(P_{(σ_t,S_t)} \ ∪_{τ∈2^{m−b}} (P_{(σ_t,S_t)} \ P_{(σ_t,S_t∪T_τ)}))
≥ μ(P_{(σ_t,S_t)}) − Σ_{τ∈2^{m−b}} μ(P_{(σ_t,S_t)} \ P_{(σ_t,S_t∪T_τ)}) > 2^{-b} − 2^{m−b} 2^{-m} = 0.

Thus by Lemma 13.8.8, there is a condition (σ_{t+1}, S_{t+1}) that extends (σ_t, S_t ∪ T_τ) for every τ ∈ 2^{m−b}. Then (σ_{t+1}, S_{t+1}) ≤ (σ_t, S_t). Furthermore, ∅′ can find (σ_{t+1}, S_{t+1}), and we may assume without loss of generality that σ_{t+1} properly extends σ_t.

To verify that R_{e,n} is satisfied in this case as well, assume that Ψ_e^A is total and let τ = Ψ_e^A ↾ (m − b). Since σ_t ≺ A, some ρ ≺ A is in T_τ. Therefore, A ∈ ⟦(S_t ∪ T_τ)^oc⟧, and hence A ∉ P_{(σ_t,S_t∪T_τ)} ⊇ P_{(σ_{t+1},S_{t+1})}. Since the set A we are building lies in P_{(σ_{t+1},S_{t+1})}, this means that Ψ_e^A cannot be total, so R_{e,n} is satisfied vacuously.

We have completed the construction. Let A = ∪_t σ_t, which is total because we ensured that σ_{t+1} properly extends σ_t for every t. The construction was done relative to ∅′, so A is Δ^0_2. The remainder of the verification was given above.

13.9 DNC functions and effective Hausdorff dimension

In this section we present work of Greenberg and Miller [170] connecting the growth rates of DNC functions with effective Hausdorff dimension. In particular, we describe their construction of a set of effective Hausdorff dimension 1 and minimal Turing degree. (As mentioned above, by Corollary 6.9.5, such a set cannot compute a 1-random set, and hence this result answers Question 13.7.4.) This theorem extends earlier work of Downey and Greenberg [106], who showed that there is a set of minimal Turing degree and effective packing dimension 1 (a concept defined below in Section 13.11.3), as described in Section 13.13.

 

 

Greenberg and Miller [170] generalized effective Hausdorff dimension to a class of spaces similar to Cantor space. Let h : N → N \ {0, 1} be computable. Let h^ω be the product space Π_n {0, 1, . . . , h(n) − 1}. That is, the elements of h^ω are infinite sequences of natural numbers such that the nth element of the sequence is less than h(n). Let h^n = Π_{m<n} {0, 1, . . . , h(m) − 1} and h^{<ω} = ∪_n h^n. We give h^ω a topology similar to that of Cantor space, with the basic open sets being ⟦σ⟧ = {α ∈ h^ω : σ ≺ α} for σ ∈ h^{<ω}. As for Cantor space, for A ⊆ h^{<ω}, we write ⟦A⟧ for ∪_{σ∈A} ⟦σ⟧.

The space h^ω is a compact Polish space. An analogue of the uniform measure for h^ω is obtained by dividing mass equitably: for σ ∈ h^{<ω}, let

μ_h(⟦σ⟧) = 1/|h^{|σ|}| = 1/(h(0) · · · h(|σ| − 1)).

Then μ_h can be extended to a measure on all of h^ω. When h is fixed, we write simply μ for μ_h.
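As a quick illustration of this measure (our own toy computation, not from the text): for a concrete computable h, the measure of a basic open set is just the reciprocal of a finite product.

from fractions import Fraction

def mu_h(h, sigma):
    # mu_h([[sigma]]) = 1 / (h(0) * h(1) * ... * h(|sigma| - 1)),
    # for sigma a string in h^{<omega}, i.e. sigma[i] < h(i) for each i.
    assert all(0 <= sigma[i] < h(i) for i in range(len(sigma)))
    denom = 1
    for i in range(len(sigma)):
        denom *= h(i)
    return Fraction(1, denom)

h = lambda n: n + 2            # a sample computable h with h(n) >= 2
print(mu_h(h, (0, 1, 3)))      # 1/(2*3*4) = 1/24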

For a computable function h : N → N \ {0, 1}, let DNC_h be the set of all DNC functions in h^ω. Let DNC_n be the set of all DNC functions in n^ω. Greenberg and Miller's proof uses work of Kumabe and Lewis [225], who built a DNC function of minimal degree. We will not include the fairly long proof of this result here. An analysis of the construction in [225], which can be found in [170], shows that the following holds.

Theorem 13.9.1 (Kumabe and Lewis [225]). For any computable order h : N → N \ {0, 1}, there is an f ∈ DNC_h whose Turing degree is minimal.

Thus, to show that there is a set of effective Hausdorff dimension 1 and minimal degree, it will be enough to show that there is a computable order h : N → N \ {0, 1} such that every f ∈ DNC_h computes a set of effective Hausdorff dimension 1. We do so in the rest of this section.
