
Recent years have witnessed exponential growth in big data exchange across numerous multipurpose applications, and many efforts have been devoted to studying how to significantly reduce data processing time and complexity. Compressed sensing (CS)~\cite{}, a revolutionary paradigm for data acquisition and reconstruction, exploits signal compressibility to achieve high-quality signal/image reconstruction from less data. This research direction has recently attracted a great deal of attention~\cite{}.

The basic idea behind CS is to search for the sparsest solution to an underdetermined linear system, recovering signals from far fewer samples than the Nyquist sampling theorem traditionally requires. Such limited-sample problems arise in different scenarios. For instance, in medical imaging modalities such as magnetic resonance imaging (MRI)~\cite{}, the time required to acquire an MR data set is proportional to its dimensionality and the number of spatial frequencies measured. Hence, applying CS to MRI can reduce the amount of data needed to form an image, consequently shortening scan time.
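To make the sparse-recovery idea concrete, the following is a minimal, illustrative sketch (not the method of this work): a length-100 signal with only 4 nonzero entries is measured through a random underdetermined system and recovered by orthogonal matching pursuit, one standard greedy CS algorithm. All dimensions and the choice of algorithm are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 100, 40, 4                          # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)           # k-sparse ground-truth signal
y = A @ x                                     # m < n: underdetermined linear system

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then least-squares refit on the
    selected columns, for k iterations."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)   # exact recovery from only 40 of 100 "Nyquist" samples
```

With a Gaussian sensing matrix and such mild sparsity, the greedy search identifies the true support and the least-squares refit recovers the nonzero values essentially exactly.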


For x-ray computed tomography (CT), CS is of interest because it enables low-dose scanning and motion-artifact mitigation~\cite{}. In a typical diagnostic CT scan, a patient receives an x-ray dose roughly one hundred times that of a single projection. Minimizing the sampling requirements therefore not only reduces the dose burden in CT imaging but can also facilitate motion x-ray applications, e.g. guidance in radiation therapy and surgery~\cite{}.

While CS plays a significant role in signal processing and medical imaging, it is not always perfect in practice. CS measurements are unavoidably contaminated by a variety of noise sources, e.g. thermal noise from hardware imperfections, quantization errors, and/or interference. Most traditional approaches treat the interference plus noise as independent and identically distributed (i.i.d.) Gaussian noise that affects all observations evenly~\cite{}. Although this assumption holds in many cases, in some scenarios faulty sensors, faulty meters, and system malfunctions inject impulsive-like interference into the CS operation. Consequently, part of the measurements can contain a few bad data points (known as “gross errors” or outliers) in addition to the common observation noise. Outliers arise in different ways: for example, they can be caused by sensor failures and calibration errors during data acquisition~\cite{}, or result from signal clipping~\cite{}. Measurements corrupted by outliers differ significantly from their nominal values, and the damaged portions are unknown, which makes recovering the original signals very challenging.
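A toy numerical sketch of this corruption model (all values here are assumed for illustration, not taken from the cited works): most measurements carry small i.i.d. Gaussian noise, while a few unknown positions are hit by gross errors whose magnitudes dwarf the nominal values, so the outliers naturally form a sparse vector.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 50
clean = rng.standard_normal(m)               # nominal CS measurements
noise = 0.01 * rng.standard_normal(m)        # common i.i.d. Gaussian observation noise
e = np.zeros(m)                              # gross errors: only a few nonzeros
bad = rng.choice(m, size=3, replace=False)   # corrupted positions (unknown in practice)
e[bad] = np.array([15.0, -12.0, 9.0])        # magnitudes far from nominal values
y = clean + noise + e                        # observed, outlier-contaminated data
```

Because only 3 of the 50 entries are corrupted, `e` is sparse, which is exactly the structure exploited by the sparse-error model introduced next.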

To address this issue, outliers can be modeled as a sparse error vector, and the overall signal observed from the CS operation is expressed as