The spatial division multiple access (SDMA) concept for mobile radio cellular systems has recently attracted much attention. SDMA is a bandwidth-saving multiple access technique that provides increased cellular capacity through effective exploitation of the spatial dimension of the radio resource. In SDMA-based wireless networks, several users within the same cell share the same time-frequency channel, as opposed to the other popular multiple access methodologies, e.g., time division multiple access (TDMA) or frequency division multiple access (FDMA), where each channel is occupied by at most one user at a time. This efficient per-cell spectral allocation strategy makes it possible to expand the overall capacity of current cellular infrastructures without consuming additional radio frequency (RF) bandwidth. From the receiver viewpoint, the SDMA technique raises a new signal processing problem: in addition to suppressing the intersymbol interference (ISI) induced by multipath propagation, the SDMA receiver has to separate the linearly superimposed users. Current research on SDMA architectures focuses on developing algorithms capable of resolving linear convolutive mixtures of digital sources. The main goal of this proposal is the optimal design of SDMA receivers based on differential-geometric tools. Here, optimality results from the full exploitation of the data model, with possible incorporation of prior knowledge (Bayesian processing).
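For concreteness, the separation problem can be sketched with the standard baseband model (the symbols below are illustrative rather than fixed by the proposal): with K co-channel users impinging on an M-element antenna array, the received vector signal is

x(t) = \sum_{k=1}^{K} \sum_{l=0}^{L_k} h_k(l) \, s_k(t - l) + n(t),

where s_k(.) denotes the symbol sequence of user k, h_k(.) its unknown M-dimensional channel impulse response of order L_k, and n(t) additive noise; the receiver must jointly undo the convolution over l (the ISI) and the superposition over k (the co-channel interference).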
Indeed, spatial and/or temporal oversampling is the preferred data acquisition scheme in SDMA receivers, and it leads to highly structured baseband data matrices. In general, these can be written as the product of a block Hankel channel matrix and a block Toeplitz signal matrix, embedded in (usually Gaussian) additive noise. Moreover, the entries of the signal matrix are restricted to a finite alphabet, dictated by the chosen linear digital modulation format. The majority of current approaches exploit this information only partially and are therefore suboptimal in that respect. Furthermore, by exploiting second-order statistics, additional structure can be incorporated into the problem, as the channel matrix can be rendered unitary (e.g., after prewhitening the data).

In this proposal, we aim at designing maximum-likelihood (ML) estimators of the mixing channel matrix and/or of the emitted data sequences that respect all the known algebraic restrictions. By fully matching the estimators to the data model constraints, a significant improvement in their performance can be expected. The constrained ML estimators are to be derived in a differential-geometric framework. This viewpoint has recently proven successful in solving other relevant signal processing problems, e.g., direction-of-arrival (DOA) estimation, denoising of corrupted Hankel matrices, and adaptive subspace tracking. For the structured ML estimation problem at hand, manifold theory seems to be the most natural setting, as the algebraic restrictions on the parameters can be efficiently expressed as Cartesian products of certain differentiable manifolds (Lie groups of orthogonal matrices, linear varieties of Hankel matrices, etc.). Optimization of the constrained likelihood function is to be achieved by developing techniques for optimization over differentiable manifolds. This entails a detailed characterization of the constraint surfaces (tangent spaces, curvature, etc.), which also provides the appropriate tools for studying the convergence properties of the class of algorithms to be derived.
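To fix notation, the structured data model referred to above can be sketched as follows (dimensions and symbols are illustrative): stacking the oversampled observations into a data matrix X gives

X = H S + N,

where H is block Hankel (built from the channel coefficients), S is block Toeplitz (built from the finite-alphabet symbols), and N is the additive Gaussian noise; constrained ML estimation then maximizes the likelihood of X jointly over H and S subject to these structural and finite-alphabet restrictions.

As an illustration of the kind of manifold technique envisaged, the following minimal sketch (not the proposal's algorithm; all names are hypothetical) performs one steepest-descent step on the unitary group U(m) for a generic smooth cost: the Euclidean gradient is projected onto the tangent space at the current iterate, and the step is retracted onto the manifold through the matrix exponential, so the iterate remains exactly unitary.

```python
import numpy as np
from scipy.linalg import expm

def unitary_gradient_step(W, euclid_grad, step=0.1):
    """One Riemannian steepest-descent step on the unitary group U(m).

    W           : current iterate, an m x m unitary matrix
    euclid_grad : Euclidean gradient of the cost at W (same shape as W)
    step        : step size

    Tangent vectors at W have the form W @ A with A skew-Hermitian, so the
    Euclidean gradient is projected by taking the skew-Hermitian part of
    W^H G; the matrix exponential then retracts the step onto U(m).
    """
    G = W.conj().T @ euclid_grad
    A = 0.5 * (G - G.conj().T)   # skew-Hermitian part: Riemannian gradient direction
    return W @ expm(-step * A)   # geodesic step; the result stays exactly unitary

# Toy usage: descend f(W) = ||W - T||_F^2 toward a target unitary T.
rng = np.random.default_rng(0)
m = 4
T, _ = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
W, _ = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
for _ in range(200):
    W = unitary_gradient_step(W, 2.0 * (W - T))  # Euclidean gradient of f at W
print(np.linalg.norm(W - T))  # distance to the target; should approach 0
```

In an actual SDMA receiver the cost would be the constrained (negative log-)likelihood and the manifold a Cartesian product of the sets named above, but the two ingredients, a tangent-space projection and a retraction, are the same.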