In digital communication, the input is a stream of binary bits that a digital modulator converts into symbols and waveforms. These waveforms should be distinct from one another so that we can easily identify which symbol/bit was transmitted. To make them distinguishable, we apply the Gram-Schmidt orthogonalization procedure.

Now consider that we have a waveform [latex]s_1(t)[/latex] whose energy is [latex]\varepsilon_1[/latex]. Then we can construct our first basis function as:

[latex]\begin{equation} \psi_1(t) = \frac{s_1(t)}{\sqrt{\varepsilon_1}} \label{energy_1} \end{equation}[/latex]

So now we have our first basis function, which has unit energy. Next, suppose we have a second waveform [latex]s_2(t)[/latex]. It may or may not be orthogonal to [latex]\psi_1(t)[/latex], so we need to make it orthogonal, and that is where Gram-Schmidt comes in handy. The procedure tells us that to make [latex]s_2(t)[/latex] orthogonal to [latex]\psi_1(t)[/latex], we compute its projection onto the space spanned by [latex]\psi_1(t)[/latex]. To compute this projection, we first find the scaling factor which, when multiplied by [latex]\psi_1(t)[/latex], gives the projection. The scaling factor is found as:

[latex]c_{21} = \int_{-\infty}^{\infty} s_2(t) \psi_1(t) dt[/latex]

Then we multiply the scaling factor [latex]c_{21}[/latex] by [latex]\psi_1(t)[/latex] and subtract the product from [latex]s_2(t)[/latex]:

[latex]d_2(t) = s_2(t) - c_{21}\psi_1(t)[/latex]

At this stage you might ask: is [latex]d_2(t)[/latex] orthogonal to [latex]\psi_1(t)[/latex]? The answer is yes, it is orthogonal to [latex]\psi_1(t)[/latex], but its energy is not equal to 1, so it is not orthonormal. That is why we did not call it [latex]\psi_2(t)[/latex]. To make it orthonormal, we simply divide [latex]d_2(t)[/latex] by the square root of its energy [latex]\varepsilon_2[/latex]:

[latex]\psi_2(t) = \frac{d_2(t)}{\sqrt{\varepsilon_2}}[/latex]

where,

[latex]\varepsilon_2 = \int_{-\infty}^{\infty} d_2(t)^2 dt[/latex]

So far we have orthogonalized two waveforms, and we can continue in the same way using the above procedure. In general, for the [latex]k^{th}[/latex] waveform we can write:

[latex]\psi_k(t) = \frac{d_k(t)}{\sqrt{\varepsilon_k}}[/latex]

where,

[latex]d_k(t) = s_k(t) - \sum_{i=1}^{k-1} c_{ki} \psi_i(t) \; ,[/latex]

[latex]\varepsilon_k = \int_{-\infty}^{\infty} d_k(t)^2 dt \; ,[/latex]

and

[latex]c_{ki} = \int_{-\infty}^{\infty} s_k(t)\psi_i(t) dt \; , \; \text{for}\;\; i=1,2,\ldots,k-1[/latex]
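
The recursion above translates directly into code. Below is a minimal NumPy sketch (the function name gram_schmidt, the sampled-waveform representation, and the time step dt are illustrative assumptions, not part of the derivation): the inner products [latex]\int s_k(t)\psi_i(t) dt[/latex] are approximated by discrete sums over the samples.

```python
import numpy as np

def gram_schmidt(signals, dt=1.0):
    """Orthonormalize a set of sampled waveforms via Gram-Schmidt.

    signals : array-like of shape (M, N) -- M waveforms, each sampled at N points
    dt      : sample spacing, so integrals are approximated as dt * sum(...)
    Returns the orthonormal basis functions psi as an array of shape (K, N), K <= M.
    """
    signals = np.asarray(signals, dtype=float)
    psi = []
    for s in signals:
        # d_k(t) = s_k(t) - sum_i c_ki * psi_i(t), with c_ki = integral of s_k(t) psi_i(t) dt
        d = s.copy()
        for p in psi:
            c = dt * np.sum(s * p)
            d = d - c * p
        # eps_k = integral of d_k(t)^2 dt ; skip waveforms that are (nearly) linearly dependent
        energy = dt * np.sum(d * d)
        if energy > 1e-12:
            psi.append(d / np.sqrt(energy))
    return np.array(psi)
```

Setting dt = 1 makes the same routine work for ordinary vectors, where the inner product reduces to [latex]v^Tv[/latex].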

Hence we can orthogonalize all of the waveforms, normally M of them, using this procedure. The benefit of orthogonalizing these waveforms is that their inner products with one another are zero, which makes them easy to distinguish at the demodulator. Next we give an example so you have a clear idea of how to apply the procedure:

Fun fact: the columns of the identity matrix are already orthonormal, so if you apply Gram-Schmidt to vectors that are already orthogonal, the scaling factors turn out to be zero.
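
As a quick illustration of this fact, applying the gram_schmidt sketch above to the columns of the identity matrix (with dt = 1, so the integral reduces to a dot product) returns them unchanged, since every scaling factor is zero:

```python
# Columns of the 3x3 identity matrix are already orthonormal, so every
# scaling factor c_ki is zero and the basis comes back unchanged.
I = np.eye(3)
print(gram_schmidt(I))   # prints the 3x3 identity matrix again
```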

Example-1

Consider the matrix [latex]A = \left[ \begin{matrix} 1 & 1 & 1\\2 & 1 & 0\\5 & 1 & 3\end{matrix}\right][/latex]. Apply Gram-Schmidt to orthogonalize the columns of this matrix:

The method described above is for continuous signals, but it can be applied to vectors as well. The only difference is the definition of energy: the energy of a waveform is the inner product of the signal with itself, which in the vector case means taking a vector [latex]v[/latex] and computing its inner product with itself, [latex]v^Tv[/latex].

Hence, taking the first column vector [latex] v = \left[ \begin{matrix} 1 \\ 2 \\ 5 \end{matrix}\right][/latex] and finding its energy (also known as the squared Euclidean norm) using [latex]v^Tv[/latex]:

[latex]v^Tv = \left[ \begin{matrix} 1 & 2 & 5 \end{matrix}\right] \left[ \begin{matrix} 1 \\ 2 \\ 5 \end{matrix}\right] = 30[/latex]

Then [latex]\psi_1 = \frac{v}{\sqrt{v^Tv}} = \frac{1}{\sqrt{30}} \left[ \begin{matrix} 1 \\ 2 \\ 5 \end{matrix}\right][/latex]

For [latex]\psi_2[/latex] we subtract the projection of [latex]v_2 = \left[ \begin{matrix} 1 \\ 1 \\ 1 \end{matrix}\right][/latex] onto [latex]\psi_1[/latex], i.e. the scaling factor times [latex]\psi_1[/latex], from [latex]v_2[/latex] itself.

The scaling factor is found to be (the denominator [latex]\psi_1^T\psi_1[/latex] equals 1 since [latex]\psi_1[/latex] is already normalized):

[latex]c_{21} = \frac{v_2^T \psi_1}{\psi_1^T\psi_1} = \frac{4\sqrt{30}}{15}[/latex]

Then [latex]d_2 = \left[ \begin{matrix} \frac{11}{15} \\ \frac{7}{15} \\ \frac{-1}{3} \end{matrix}\right][/latex]

The energy (or squared Euclidean norm) of [latex]d_2[/latex] is [latex]d_2^Td_2 = \frac{13}{15}[/latex]. Hence we have:

[latex]\psi_2 = \frac{1}{\sqrt{195}}  \left[ \begin{matrix} 11 \\ 7 \\ -5 \end{matrix}\right][/latex]

Now for the last vector, [latex]v_3 = \left[ \begin{matrix} 1 \\ 0 \\ 3 \end{matrix}\right][/latex]:

[latex]d_3 = v_3 - c_{31}\psi_1 - c_{32}\psi_2[/latex]

where [latex]c_{31} = \frac{v_3^T \psi_1}{\psi_1^T\psi_1} = \frac{8\sqrt{30}}{15}[/latex]

and [latex]c_{32} = \frac{v_3^T \psi_2}{\psi_2^T\psi_2} = \frac{-4}{\sqrt{195}}[/latex]

Then we have [latex]d_3 = \left[ \begin{matrix} \frac{9}{13} \\ \frac{-12}{13} \\ \frac{3}{13} \end{matrix}\right][/latex]

The energy of [latex]d_3[/latex] comes out to be [latex]\frac{18}{13}[/latex], so we have:

[latex]\psi_3= \frac{1}{\sqrt{26}} \left[ \begin{matrix} 3 \\ -4 \\ 1 \end{matrix}\right][/latex]

Now we have the final matrix given by Gram-Schmidt orthogonalization:

[latex]Q = \left[ \begin{matrix} \frac{1}{\sqrt{30}} & \frac{11}{\sqrt{195}} & \frac{3}{\sqrt{26}} \\ \frac{2}{\sqrt{30}} & \frac{7}{\sqrt{195}} & \frac{-4}{\sqrt{26}} \\ \frac{5}{\sqrt{30}} & \frac{-5}{\sqrt{195}} & \frac{1}{\sqrt{26}}\end{matrix}\right][/latex]

The columns of the above matrix are orthogonal to each other, and each column has unit norm (energy = 1).
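
As a sanity check on the hand computation, the gram_schmidt sketch from the first code block (with dt = 1) can be applied to the columns of A; the matrix it produces should match Q, and [latex]Q^TQ[/latex] should come out as the identity:

```python
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 0.0],
              [5.0, 1.0, 3.0]])

# gram_schmidt works row by row, so pass the columns of A as rows and transpose back.
Q = gram_schmidt(A.T).T
print(np.round(Q, 4))         # matches the hand-derived matrix above
print(np.round(Q.T @ Q, 4))   # ~ identity matrix: the columns are orthonormal
```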