COMP4630: λ-Calculus
4. Standardisation
Michael Norrish
Michael.Norrish@nicta.com.au
Canberra Research Lab., NICTA
Semester 2, 2015
Last Time
Confluence
- The property that divergent evaluations can rejoin one another
Proof
- Diamond properties
- Uses parallel reduction (=⇒β); and
- Many inductions
Consequences
- Soundness of λ
- (With an analogous proof) Soundness of λη
- Incompleteness of λ
- (See end of lecture 2)
Today
Introduction
Head Reduction
Weak Head Reduction
The Proof
Failing Approaches
The Right Approach
Consequences
Conclusion
Objective
From last time, we know that a term M has at most one normal
form.
Unfortunately, we also know that not all evaluation strategies will
lead to that normal form.
- This is not inconsistent with confluence.
- Why?
Evaluation Strategies
An evaluation strategy is basically a way of answering the question:
Where (i.e., in which sub-term) should I do my next
reduction?
Or:
Where should I do the next bit of work?
Some languages (e.g., Java) do not allow for choices to be made at all.
They specify a precise evaluation order.
Why would they do that?

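To make the strategies below concrete, the later fragments reuse a small Haskell sketch of λ-terms. None of this is from the lecture itself; the datatype, the names, and the simplifying assumption about bound variables are mine.

    -- A minimal λ-term representation (hypothetical helper code).
    data Term = Var String        -- v
              | Lam String Term   -- λv. M
              | App Term Term     -- M N
      deriving (Show, Eq)

    -- Naive substitution M[v := N].  To stay short, it assumes bound
    -- variables are named apart from the free variables of N, so no
    -- capture-avoiding renaming is attempted.
    subst :: String -> Term -> Term -> Term
    subst v n (Var w)   | v == w    = n
                        | otherwise = Var w
    subst v n (Lam w m) | v == w    = Lam w m              -- v is shadowed
                        | otherwise = Lam w (subst v n m)
    subst v n (App f a) = App (subst v n f) (subst v n a)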
Evaluation Strategy #1: Applicative Order
Evaluate everything from the bottom up.
- I.e., in (λv.M) N, work will start with M, passing to N, and
performing the top-level β-reduction last
A function’s arguments (and the function itself) will be evaluated
before the arguments are passed to the function.
Also known as strict evaluation.
(Used in Fortran, C, Pascal, Ada, SML, Java. . . )
Causes (λx. y) Ω to go into an infinite loop.
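As a sketch only (reusing Term and subst from above, and reading “bottom up” as leftmost-innermost reduction): one applicative-order step, plus the term Ω that makes (λx. y) Ω loop under this strategy.

    -- One applicative-order (leftmost-innermost) step; Nothing = no redex left.
    appStep :: Term -> Maybe Term
    appStep (Var _)   = Nothing
    appStep (Lam v m) = Lam v <$> appStep m
    appStep (App f a) =
      case appStep f of                 -- work inside the function first,
        Just f' -> Just (App f' a)
        Nothing -> case appStep a of    -- then inside the argument,
          Just a' -> Just (App f a')
          Nothing -> case f of          -- and only then the top-level redex
            Lam v m -> Just (subst v a m)
            _       -> Nothing

    -- Ω = (λw. w w) (λw. w w): it reduces to itself, so iterating appStep
    -- on (λx. y) Ω keeps rewriting the argument and never terminates.
    omega :: Term
    omega = App (Lam "w" (App (Var "w") (Var "w")))
                (Lam "w" (App (Var "w") (Var "w")))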
Evaluation Strategy #2: Normal Order
Evaluate top-down, left-to-right.
- With (λv.M) N, start by performing the β-reduction,
producing M[v := N]
- Find the top-most, left-most β-redex and reduce it.
- Keep going
This strategy is behind the lazy evaluation of languages like
Haskell.
Normal order evaluation will always terminate with a normal form
if a term has one. (Proof to come. . . )
Evaluation Strategy Trade-offs
An evaluation strategy might
1. be guaranteed to find normal forms; or
2. aim to perform the fewest β-reductions
Naïvely,
- normal order reduction does 1;
- applicative order “sort of” achieves 2, but gives up on 1
(In fact, optimal reduction is very difficult to get right.)
Proving Normal Order Evaluation
Our focus is on showing that normal order evaluation is guaranteed
to find normal forms.
- (That’s why it’s called normal order. . . )
Here are the rules:
(λv.M) N −→n M[v := N]

(λv.M) −→n (λv.M′)   if   M −→n M′

M N −→n M′ N   if   M −→n M′ and M is not an abstraction

M N −→n M N′   if   N −→n N′, M is not an abstraction, and M is in β-nf
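The four rules above transcribe almost directly into a step function. This is my sketch, reusing Term and subst from earlier; the “M not an abstraction” side conditions are implicit in the order of the equations.

    -- One normal order step; Nothing means the term is in β-nf.
    normStep :: Term -> Maybe Term
    normStep (App (Lam v m) n) = Just (subst v n m)      -- (λv.M) N −→n M[v := N]
    normStep (Lam v m)         = Lam v <$> normStep m    -- reduce under the λ
    normStep (App m n)                                   -- here m is not an abstraction
      | Just m' <- normStep m  = Just (App m' n)         -- left of the application first
      | otherwise              = App m <$> normStep n    -- right only once m is in β-nf
    normStep (Var _)           = Nothing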
Head Reduction
We can divide normal order reduction into two different sorts of
reduction.
Head reduction, written h−→, keeps the first three of the normal order
rules and drops the last one:

(λv.M) N h−→ M[v := N]

(λv.M) h−→ (λv.M′)   if   M h−→ M′

M N h−→ M′ N   if   M h−→ M′ and M is not an abstraction

When head reducing, you never reduce to the right of an
application.
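Dropping the right-of-application rule from the normStep sketch gives the corresponding sketch of h−→ (again my code, not the lecture’s).

    -- One head reduction step; Nothing means the term is in head normal form.
    headStep :: Term -> Maybe Term
    headStep (App (Lam v m) n) = Just (subst v n m)        -- contract the head redex
    headStep (Lam v m)         = Lam v <$> headStep m      -- go under the λ
    headStep (App m n)         = (`App` n) <$> headStep m  -- left of an application only
    headStep (Var _)           = Nothing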
Hence, Head Normal Forms
Head Normal Form is a reasonable stopping place.
Rules again:
(λv.M) N h−→ M[v := N]

(λv.M) h−→ (λv.M′)   if   M h−→ M′

M N h−→ M′ N   if   M h−→ M′ and M is not an abstraction

Examples:
- v is in hnf
- (λv. v) is in hnf
- (λvw. v (λu.M) N) is in hnf
- (λuwz. v ((λu.M) N)) is in hnf
Head Normal Forms, Generally
If term M is in hnf, then it will look like:
(λ~v. u M1 · · · Mn)
The vector ~v may be empty, u may be free or bound, and
the number of extra arguments, n, may be 0.
Once a term is in hnf, its top-level structure can’t change.
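The shape (λ~v. u M1 · · · Mn) can be checked structurally: peel the leading λs, then walk down the left spine of applications. A sketch (mine) on the earlier Term type:

    -- A term is in hnf iff, after the leading λs, the head of the
    -- application spine is a variable (the arguments M1 ... Mn are unconstrained).
    isHNF :: Term -> Bool
    isHNF (Lam _ m) = isHNF m           -- strip λ~v
    isHNF t         = spineHead t
      where
        spineHead (Var _)   = True      -- head variable u: hnf
        spineHead (App f _) = spineHead f
        spineHead (Lam _ _) = False     -- head redex (λu.M) N1 ...: not hnf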
After Head Reductions
Once a term is in hnf, its top-level structure can’t change.
Any further reductions (of any sort) inside
(λ~v. u M1 · · · Mn)
will be reductions within an Mi.
Each argument will evolve independently, and the number of
arguments can’t change.
These are internal reductions i−→.
- If the term is (λ~v. (λu.M) N1 N2 . . . ) (not in hnf) and M
reduces, then that is an internal reduction too
- All reductions are either head or internal
Normal Order Reduction Splits in Two
The last rule of normal order reduction
(which we deleted to get head reduction):
M N −→n M N′   if   N −→n N′, M is not an abstraction, and M is in β-nf
If M −→∗n N, and the reductions aren’t all head, then there must
be a first head normal form P, such that
M h−→∗ P i−→∗ N
We want to show the same sort of split for arbitrary
β-reduction (−→∗β)
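The same split can be read as a normalisation recipe: head-reduce to a hnf, then do only internal work inside the arguments. A sketch under the earlier assumptions; it diverges when the term has no normal form (e.g. on Ω).

    -- Run head reductions until a hnf is reached (if one exists).
    headNormalise :: Term -> Term
    headNormalise t = maybe t headNormalise (headStep t)

    -- Head phase first, then internal reductions inside the result.
    splitNormalise :: Term -> Term
    splitNormalise t = case headNormalise t of
      Lam v m -> Lam v (splitNormalise m)
      App f a -> App (splitNormalise f) (splitNormalise a)
      Var v   -> Var v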
Interlude: Weak Head Reduction
Thanks to:
(λv.M) h−→ (λv.M′)   if   M h−→ M′
head reduction proceeds inside function bodies.
If you take this rule out, you get weak head reduction.
Weak head normal forms are
- head normal forms, or
- abstractions.
Weak head reduction is used in implementations of functional
programming languages.
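Dropping the under-λ rule from the head reduction sketch gives a weak head evaluator, the basic shape used by lazy implementations (my sketch; real implementations add environments, sharing, and so on).

    -- One weak head step: never reduce under a λ.
    whnfStep :: Term -> Maybe Term
    whnfStep (App (Lam v m) n) = Just (subst v n m)
    whnfStep (App m n)         = (`App` n) <$> whnfStep m
    whnfStep _                 = Nothing    -- variables and abstractions are whnfs

    -- Evaluate to weak head normal form.
    whnf :: Term -> Term
    whnf t = maybe t whnf (whnfStep t)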
Basic Strategy
We want to know that, if by some path:
M −→∗β N
with N a normal form, then normal order reduction will
take M to N too.
We will do this by showing a more general result:
that for any N, if M −→∗β N, then there exists a P such that
M h−→∗ P i−→∗ N
We Want to Commute Steps
If we had
M i−→ N h−→ P
we’d like to know that there was an N′ that we could get to via
head reduction, and from which we could make internal reductions
to get to P (maybe with multiple steps?):
M h−→∗ N′ i−→∗ P
Maybe this would allow head and internal steps to be
“bubble-sorted” so that all head steps came first.
But Direct Commuting is Hard
Commuting does require multiple steps. Doing the internal step first:
(λx. f x x) ((λy. y z) u)  i−→  (λx. f x x) (u z)  h−→  f (u z) (u z)
Putting the head step first instead:
(λx. f x x) ((λy. y z) u)  h−→  f ((λy. y z) u) ((λy. y z) u)
  i−→  f (u z) ((λy. y z) u)  i−→  f (u z) (u z)
This example requires multiple (2) internal reductions.
But Direct Commuting is Hard
Commuting can also require multiple head steps. Doing the internal step first:
(λu. (λv. v u z) f) N  i−→  (λu. f u z) N  h−→  f N z
Putting the head steps first instead:
(λu. (λv. v u z) f) N  h−→  (λv. v N z) f  h−→  f N z
This example requires multiple head reductions (and no internals).
Commuting with Multiple Steps Isn’t Good Enough
This is a theorem:
M i−→ N h−→ P   =⇒   ∃N′. M h−→∗ N′ i−→∗ P
But it’s not good enough.
Our examples show us that we can have
ih → hi²
ih → h²
The Bubble-Sort That Never Ends
Our examples show us that we can have
ih → hi²
ih → h²
Start with a reduction sequence ihihi:
ihihi → hi³hi → hi²h²i → hihi²hi → · · ·
This is not progress: we still have two head reductions that haven’t
been “sorted” to the start of the sequence.
A Better Lemma
Though commuting internal and head reductions can result in
multiple internal reductions, the latter are all parallel (write i=⇒ ).
So, prove instead:
M i=⇒ P h−→ N   =⇒   ∃P′. M h−→∗ P′ i=⇒ N
If an internal parallel reduction is followed by a head reduction,
there is an alternative route where the head reductions come first, and
there is one internal parallel reduction afterwards.
Proof in More Detail
Have
M i=⇒ P h−→ N
As P head-reduces, it is of the form (λ~v. (λu. P0) P1 P2 · · · Pn), with n ≥ 1.
And M is of the form (λ~v. (λu.M0) M1 M2 · · · Mn), with Mi =⇒β Pi.
So
M h−→ (λ~v. M0[u := M1] M2 · · · Mn) =⇒β N
The last transition is =⇒β, not i=⇒, because M0’s reduction may be
at top level, making it head.
A little more work is still required
(decomposing =⇒β into head and internal parts).
The Last Big Lemma
If M =⇒β N, then there are Mi such that
M h−→ M1 h−→ M2 · · · h−→ Mn i=⇒ N
and each
Mi =⇒β N
This gives:
M =⇒β N   implies   ∃P. M h−→∗ P i=⇒ N
Putting the Lemmas Together
Proving: M −→∗β N =⇒ ∃P. M h−→∗ P i−→∗ N
Have M −→∗β N, and so also M =⇒∗β N.
(The base case of zero steps is trivial.)
So, assume M =⇒β M′ =⇒∗β N.
By the Last Big Lemma, we have P1 s.t. M h−→∗ P1 i=⇒ M′.
By the inductive hypothesis, we have P2 s.t. M′ h−→∗ P2 i−→∗ N.
I.e.,
M h−→∗ P1 i=⇒ M′ h−→∗ P2 i−→∗ N
Now, we can “bubble” the head reductions after M′ up over the i=⇒ ,
using the Better Lemma (noting that an internal parallel reduction can
itself be serialised into single internal steps).
Consequences: Standardisation
Have shown: M −→∗β N =⇒ ∃P. M h−→∗ P i−→∗ N
It’s obviously possible to order the internal reductions so that they
occur left-to-right.
- By induction.
The internal terms within N are all smaller than N itself,
so the internal reductions within each Ni can themselves be
sorted appropriately.
Gives Standardisation:
If M −→∗β N is possible, then N can be reached from M
in a “standard” way (doing reductions in left to right
order)
Consequences: Normal Order Evaluation Works
Recall that normal order evaluation is a “standard” evaluation
strategy that does all possible reductions.
If M can reduce to N, a β-normal form, then there is a standard
reduction that does the same.
If a standard reduction terminates in a β-normal form, it has done
all possible reductions.
And so that standard reduction was a normal order reduction.
So, normal order evaluation finds normal forms if they exist.
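In terms of the earlier sketches, normal order evaluation is just normStep iterated to a fixed point. On (λx. y) Ω it returns y at once, where the applicative-order sketch looped; it still diverges on terms with no normal form.

    -- Normal order evaluation: iterate normStep until no redex remains.
    normalise :: Term -> Term
    normalise t = maybe t normalise (normStep t)

    -- e.g. normalise (App (Lam "x" (Var "y")) omega)  ==  Var "y"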
Summary
An involved proof.
Lesson #1: The λ-calculus is a plausible programming language
- there is an algorithm for turning λ-terms into values
- (when those terms have values at all)
Lesson #2: Evaluation Orders are a Design Question
- Do you want to guarantee as much termination as possible?
  - Use normal order
- Or, do you want more speed, at the risk of unnecessary
non-termination?
  - Use applicative order (like C, Java etc.)
Next time: adding plausibility.
Numbers, Pairs and Lists for the λ-calculus.