
Appendix B

Linear Vector Spaces

B.1 INTRODUCTION

In a large variety of applications, Fourier-related series and discrete transforms are used for the representation of signals by a set of basis functions. More generally, this is the subject of linear vector spaces, in which the basis functions are called vectors. Representation of signals by Fourier-related series and by discrete Fourier-related transforms can be considered to be the same subject as representing signals by basis vectors in infinite-dimensional and finite-dimensional Hilbert spaces, respectively.

The most obvious example of a vector space is the familiar 3-D space R^3. In general, the concept of a vector space is much more encompassing. Associated with every vector space is a set of scalars which belong to a field F. The elements of F can be added and multiplied to generate new elements in F. Some examples of fields are the field R of real numbers, the field C of complex numbers, the field of binary numbers {0, 1}, and the field of rational polynomials.

A vector space S over a field F is a set of elements called vectors, with which two operations called addition (+) and multiplication (·) are carried out. Let u, v, w be elements (vectors) of S, and a, b be scalars, which are the elements of the field F. S satisfies the following axioms:

1. u + v = v + u ∈ S

2. a·(u + v) = a·u + a·v ∈ S

3. (a + b)·u = a·u + b·u

4. (u + v) + w = u + (v + w)

5. (a·b)·u = a·(b·u)

6. There exists a null vector denoted as h such that h + u = u. h is often written simply as 0.

7. For scalars 0 and 1, 0·u = 0 and 1·u = u.

8. There exists an additive inverse element for each u, denoted by −u, such that

   u + (−u) = h

   −u also equals (−1)·u.
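As an illustration that is not part of the original text, the axioms can be spot-checked numerically for the familiar space R^4 over the field R, with componentwise addition and scalar multiplication; the random test vectors below are an assumed stand-in for abstract elements of S.

    import numpy as np

    # A minimal sketch: spot-check the vector-space axioms for R^4 over R.
    rng = np.random.default_rng(0)
    u, v, w = rng.standard_normal((3, 4))   # three vectors in R^4
    a, b = 2.0, -3.5                        # two scalars from the field R

    assert np.allclose(u + v, v + u)                        # axiom 1: commutativity
    assert np.allclose(a * (u + v), a * u + a * v)          # axiom 2: distributivity over vector addition
    assert np.allclose((a + b) * u, a * u + b * u)          # axiom 3: distributivity over scalar addition
    assert np.allclose((u + v) + w, u + (v + w))            # axiom 4: associativity of addition
    assert np.allclose((a * b) * u, a * (b * u))            # axiom 5: compatibility of multiplication
    h = np.zeros_like(u)                                    # axiom 6: null vector
    assert np.allclose(h + u, u)
    assert np.allclose(0 * u, h) and np.allclose(1 * u, u)  # axiom 7
    assert np.allclose(u + (-1) * u, h)                     # axiom 8: additive inverse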


Some important vector spaces are the following:

F^n is the space of column vectors with n components from a field F. Two special cases are R^3, whose components are from R, the vector space of real numbers, and C^n, whose components are from C, the vector space of complex numbers.

F^{m×n} is the space of all m × n matrices whose components belong to the field F. Some other vector spaces are discussed in the examples below.

EXAMPLE B.1 Let V and W be vector spaces over the same field F. The Cartesian product V × W consists of the set of ordered pairs {v, w} with v ∈ V and w ∈ W. V × W is a vector space. Vector addition and scalar multiplication on V × W are defined as

{v, w} + {p, q} = {v + p, w + q}

a{v, w} = {av, aw}

h = {h_v, h_w}

where a ∈ F, v and p ∈ V, w and q ∈ W, and h_v and h_w are the null elements of V and W, respectively.

The Cartesian product vector space can be extended to include any number of vector spaces.
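The following short sketch, which is not from the text, mirrors Example B.1 in code: a hypothetical PairVector class holds an ordered pair {v, w} of numpy arrays and implements the componentwise addition and scalar multiplication defined above.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class PairVector:
        """An element {v, w} of the Cartesian product space V x W."""
        v: np.ndarray   # component from V
        w: np.ndarray   # component from W

        def __add__(self, other):
            # {v, w} + {p, q} = {v + p, w + q}
            return PairVector(self.v + other.v, self.w + other.w)

        def __rmul__(self, a):
            # a{v, w} = {av, aw}
            return PairVector(a * self.v, a * self.w)

    # Example with V = R^2 and W = R^3 (spaces chosen arbitrarily for illustration)
    x = PairVector(np.array([1.0, 2.0]), np.array([0.0, 1.0, -1.0]))
    y = PairVector(np.array([3.0, -1.0]), np.array([2.0, 2.0, 2.0]))
    s = 2.0 * (x + y)                              # scalar multiple of a vector sum
    null = PairVector(np.zeros(2), np.zeros(3))    # h = {h_v, h_w}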

EXAMPLE B.2 The set of all complex-valued continuous functions of the variable t over the interval [a, b] of the real line forms a vector space, denoted by C[a, b]. Let u and v be vectors in this space, and a ∈ F. The vector addition and the scalar multiplication are given by

(u + v)(t) = u(t) + v(t)

(au)(t) = au(t)

The null vector h is the function identically equal to 0 over [a,b].
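As another illustrative sketch outside the text, vectors in C[a, b] can be modeled as Python callables; addition and scalar multiplication then return new functions, exactly as in the pointwise definitions above.

    import numpy as np

    def add(u, v):
        # (u + v)(t) = u(t) + v(t)
        return lambda t: u(t) + v(t)

    def scale(a, u):
        # (a u)(t) = a u(t)
        return lambda t: a * u(t)

    null = lambda t: 0.0 * t                       # the null vector h: identically zero on [a, b]

    # Example vectors in C[a, b], with a = 0 and b = 1 chosen for illustration
    u = lambda t: np.exp(1j * 2 * np.pi * t)       # a complex-valued continuous function
    v = lambda t: t**2
    t = np.linspace(0.0, 1.0, 5)
    print(add(u, v)(t))                            # pointwise sum evaluated on sample points
    print(scale(3.0, v)(t))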

B.2 PROPERTIES OF VECTOR SPACES

In this section, we discuss properties of vector spaces which are valid in general, without being specific to a particular vector space.

Subspace

A nonempty vector space L is a subspace of a vector space S if the elements of L are also elements of S, with S possibly containing more elements.



Let M and N be subspaces of S. They satisfy the following two properties:

1. The intersection M ∩ N is a subspace of S.

2. The direct sum M ⊕ N is a subspace of S. The direct sum is described below.

Direct Sum

A set S is the direct sum of the subsets S_1 and S_2 if, for each s ∈ S, there exist unique s_1 ∈ S_1 and s_2 ∈ S_2 such that s = s_1 + s_2. This is written as

S = S_1 ⊕ S_2    (B.2-1)

EXAMPLE B.3 Let (a, b) = (−1, 1) in Example B.2. Consider the even and odd functions given by

u_e(t) = u_e(−t)

u_o(t) = −u_o(−t)

The even and odd functions form subspaces S_e and S_o, respectively. Any function u(t) in the total space S can be decomposed into even and odd functions as

u_e(t) = [u(t) + u(−t)] / 2    (B.2-2)

u_o(t) = [u(t) − u(−t)] / 2    (B.2-3)

Then,

u(t) = u_e(t) + u_o(t)    (B.2-4)

Consequently, the direct sum of S_o and S_e equals S.
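A brief numerical sketch of the decomposition in Eqs. (B.2-2)-(B.2-4) follows; it is not from the text, and it assumes the function is sampled on a symmetric grid so that u(−t) is available at every sample point.

    import numpy as np

    def even_odd_parts(u, t):
        """Split u(t) into its even and odd parts, Eqs. (B.2-2) and (B.2-3)."""
        u_e = 0.5 * (u(t) + u(-t))    # even part: u_e(t) = u_e(-t)
        u_o = 0.5 * (u(t) - u(-t))    # odd part:  u_o(t) = -u_o(-t)
        return u_e, u_o

    t = np.linspace(-1.0, 1.0, 201)            # symmetric grid on (-1, 1)
    u = lambda t: np.exp(t) * np.cos(3 * t)    # an arbitrary test function
    u_e, u_o = even_odd_parts(u, t)

    assert np.allclose(u_e, u_e[::-1])         # even symmetry about t = 0
    assert np.allclose(u_o, -u_o[::-1])        # odd symmetry about t = 0
    assert np.allclose(u(t), u_e + u_o)        # direct-sum reconstruction, Eq. (B.2-4)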

Convexity

A subset S_c of a vector space S is convex if, for each pair of vectors s_0 and s_1 in S_c, the vector s_2 given by

s_2 = λ s_0 + (1 − λ) s_1,   0 ≤ λ ≤ 1    (B.2-5)

also belongs to S_c.

In a convex subset, the line segment between any two points (vectors) in the subset also belongs to the same subset.
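As a small sketch not taken from the text, the closed unit disk in R^2 is a convex set: every convex combination (B.2-5) of two vectors inside it remains inside it, which can be checked numerically.

    import numpy as np

    def convex_combination(s0, s1, lam):
        """s2 = lam*s0 + (1 - lam)*s1 for 0 <= lam <= 1, Eq. (B.2-5)."""
        return lam * s0 + (1.0 - lam) * s1

    s0 = np.array([0.6, 0.0])                   # two points inside the unit disk
    s1 = np.array([-0.3, 0.7])
    for lam in np.linspace(0.0, 1.0, 11):
        s2 = convex_combination(s0, s1, lam)
        assert np.linalg.norm(s2) <= 1.0        # the whole line segment stays in the disk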



Linear Independence

A vector u is said to be linearly dependent upon a set S of vectors v_k if u can be expressed as a linear combination of the vectors in S:

u = Σ_k c_k v_k    (B.2-6)

where c_k ∈ F.

u is linearly independent of the vectors in S if Eq. (B.2-6) does not hold. A set of vectors is called a linearly independent set if each vector in the set is linearly independent of the other vectors in the set.

The following two properties of a set of linearly independent vectors are stated without proof:

1. A set of vectors u_0, u_1, . . . is linearly independent iff Σ_k c_k u_k = 0 means c_k = 0 for all k.

2. If u_0, u_1, . . . are linearly independent, then Σ_k c_k u_k = Σ_k b_k u_k means c_k = b_k for all k.
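A practical way to test property 1 numerically, shown here as an illustrative sketch that is not part of the text, is to stack the candidate vectors as columns of a matrix and compare its rank with the number of vectors; full column rank means the only solution of Σ_k c_k u_k = 0 is c_k = 0 for all k.

    import numpy as np

    def is_linearly_independent(vectors):
        """Return True if the given list of equal-length vectors is linearly independent."""
        A = np.column_stack(vectors)               # columns are the candidate vectors u_k
        return np.linalg.matrix_rank(A) == len(vectors)

    u0 = np.array([1.0, 0.0, 2.0])
    u1 = np.array([0.0, 1.0, -1.0])
    u2 = 2.0 * u0 - 3.0 * u1                       # deliberately dependent on u0 and u1

    print(is_linearly_independent([u0, u1]))       # True
    print(is_linearly_independent([u0, u1, u2]))   # False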

Span

A vector u belongs to the subspace spanned by a subset S if u is a linear combination of vectors in S, as in Eq. (B.2-6). The subspace spanned by the vectors in S is denoted by span (S).

Bases and Dimension

A basis (or a coordinate system) in a vector space S is a set B of linearly independent vectors such that every vector in S is a linear combination of the elements of B.

The dimension M of the vector space S equals the number of elements of B. If M is finite, S is a finite-dimensional vector space. Otherwise, it is infinite-dimensional.

If the elements of B are b_0, b_1, b_2, . . . , b_{M−1}, then any vector x in S can be written as

x = Σ_{k=0}^{M−1} w_k b_k    (B.2-7)

where the w_k's are scalars from the field F.

The number of elements in a basis of a finite-dimensional vector space S is the same as in any other basis of the space S.



In an orthogonal basis, the vectors b_m are orthogonal to each other. If B is an orthogonal basis, taking the inner product of x with b_m in Eq. (B.2-7) yields

(x, b_m) = Σ_{k=0}^{M−1} w_k (b_k, b_m) = w_m (b_m, b_m)

so that

w_m = (x, b_m) / (b_m, b_m)    (B.2-8)

EXAMPLE B.4 If the space is F^n, the columns of the n × n identity matrix I are linearly independent and span F^n. Hence, they form a basis of F^n. This is called the standard basis.
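The coefficient formula (B.2-8) translates directly into code. The sketch below, which is not from the text, uses the ordinary dot product on R^3 as the inner product, expands a vector x in an assumed orthogonal (but not normalized) basis, and reconstructs it via Eq. (B.2-7).

    import numpy as np

    # An orthogonal, non-normalized basis of R^3, chosen for illustration.
    b = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, -1.0, 0.0]),
         np.array([0.0, 0.0, 2.0])]

    x = np.array([3.0, -1.0, 4.0])

    # w_m = (x, b_m) / (b_m, b_m), Eq. (B.2-8)
    w = [np.dot(x, bm) / np.dot(bm, bm) for bm in b]

    # Reconstruction from the basis expansion, Eq. (B.2-7)
    x_rec = sum(wk * bk for wk, bk in zip(w, b))
    assert np.allclose(x, x_rec)
    print(w)                                       # expansion coefficients [1.0, 2.0, 2.0]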

B.3 INNER-PRODUCT VECTOR SPACES

The vector spaces of interest in practice are usually structured such that there is a norm indicating the length or size of a vector, a measure of orientation between two vectors called the inner product, and a distance measure (metric) between any two vectors. Such spaces are called inner-product vector spaces. The rest of this appendix is restricted to such spaces. Their properties are discussed below.

An inner product of two vectors u and v in an inner-product vector space S is written as (u, v) and is a mapping from S × S to the field of scalars, satisfying the following:

1. (u, v) = (v, u)^*, where ^* denotes complex conjugation

2. (au, v) = a(u, v), a being a scalar

3. (u + v, w) = (u, w) + (v, w)

4. (u, u) > 0 when u ≠ 0, and (u, u) = 0 if u = 0

When u and v are N-tuples,

u = [u_0 u_1 . . . u_{N−1}]^t

v = [v_0 v_1 . . . v_{N−1}]^t

(u, v) can be defined as

(u, v) = Σ_{k=0}^{N−1} u_k v_k^*    (B.3-1)
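For complex N-tuples, Eq. (B.3-1) can be sketched as below; this is an illustration rather than part of the text, and it assumes the convention written above, with conjugation applied to the second argument. That conjugation is what makes (u, u) real and nonnegative, as required by property 4.

    import numpy as np

    def inner(u, v):
        """(u, v) = sum over k of u_k * conj(v_k), Eq. (B.3-1)."""
        return np.sum(u * np.conj(v))

    u = np.array([1.0 + 1.0j, 2.0, -1.0j])
    v = np.array([0.5, 1.0j, 3.0])

    print(inner(u, v))                                      # a complex scalar
    print(inner(u, u).real >= 0)                            # property 4: (u, u) > 0 for u != 0
    assert np.isclose(inner(u, v), np.conj(inner(v, u)))    # property 1: (u, v) = (v, u)^*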