A novel method of analog-to-information conversion, the random interval integration, is proposed and studied in this paper. The main idea of the method is based on integration of the input signal by a randomly resettable integrator before the A/D conversion. The integrator's reset is controlled by a random sequence generator. The signal reconstruction employs a commonly used algorithm based on the minimization of a distance norm between the original measurement vector and the vector calculated from the reconstructed signal. The method is intended primarily for compressed sensing of aperiodic or quasiperiodic signals acquired by commonly used sensors such as ECG, environmental, and other sensors whose output can be modeled by multi-harmonic signals.

The scaling of analog-to-digital converter (ADC) power consumption with communication bandwidth imposes severe limits on its precision, which significantly impacts receiver performance. In this paper, we consider a "space-time" generalization of the flash architecture by allowing a fixed number of slicers to be dispersed in time (i.e., sampling offset) as well as space (i.e., amplitude), with the goal of investigating its capabilities for analog-to-information conversion (i.e., enabling reliable recovery of digital information rather than faithful reproduction of the input signal) in the context of channel equalization for binary signaling over a dispersive channel. We first study standard symbol-spaced ADC with severe quantization constraints, estimating the minimum number of slicers needed to avoid error floors. We observe that the performance is sensitive to channel realization and sampling phase, which motivates a more flexible space-time architecture. Using ideas similar to those underlying compressive sensing, we prove that such architectures have no fundamental limitations in theory: randomly dispersing enough one-bit slicers over space and time does provide information sufficient for reliable equalization. We then focus on practical designs for symbol-spaced and fractionally spaced sampling subject to a constraint on the number of slicers, and propose an algorithm for optimizing slicer thresholds which significantly improves performance over a standard design.

While both answers are correct, the bit masking here is completely redundant: it happens implicitly when converting to uint8_t. But when you really need these exactly sized types, the best code is IMHO:

```c
uint16_t val = 0xabcd;
uint8_t val8 = (uint8_t)val;
```

The explicit cast is here to document the intention! Without it, a good compiler will warn you about the implicit conversion possibly losing information (and you should always enable compiler warnings, e.g. for gcc -Wall -Wextra -pedantic, to catch cases where you do such a conversion by accident). Without exactly sized integer types (and, speaking of performance, you should consider that, because performance is in general best when using the native word size of the machine), this would be different:

```c
unsigned int val = 0xabcd;
```

The performance of all variants using the exactly sized types should be the same, because a decent compiler will emit the same code for all of them. The version using just unsigned int might perform a bit better.

As you're asking about memory performance, too: it is unlikely that you gain anything by using uint8_t, because some architectures require values smaller than the native word size to be aligned to word boundaries. That just introduces unused padding bytes. If they don't require it, it might still be faster to have the values aligned, so the compiler might decide to do so anyway. With gcc, you can use the option -Os to optimize for size; as the x86 architecture is byte-addressable, this may result in uint8_t being used without padding on a PC, but consequently with lower speed. Most of the time, speed vs. memory is a tradeoff: you can have either one or the other.