Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

Abstract

Recent sophistication in the areas of mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving has become one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded-system design and battery technology, but very few studies exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain, adopting a nonconventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. The principle is to intelligently exploit the signal local characteristics, which are usually never considered, in order to filter only the relevant signal parts with filters of the relevant order. This idea leads to a drastic gain in computational efficiency, and hence in processing power, compared to the classical techniques.

1. Introduction

This work is part of a larger project aimed at enhancing the signal processing chain implemented in mobile systems. The motivation is to reduce their size, cost, processing noise, electromagnetic emission, and especially power consumption, as they are most often powered by batteries. This can be achieved by intelligently reorganizing their associated signal processing theory and architecture. The idea is to combine event driven signal processing with asynchronous circuit design, in order to reduce the system processing activity and energy cost.

Almost all natural signals, like speech, seismic, and biomedical signals, are time varying in nature. Moreover, man made signals like Doppler, Amplitude Shift Keying (ASK), and Frequency Shift Keying (FSK) also lie in the same category. The spectral contents of these signals vary with time, which is a direct consequence of the signal generation process [1].

The classical systems are based on the Nyquist signal processing architectures. These systems do not exploit the signal variations: they sample the signal at a fixed rate without taking into account its intrinsic nature. Moreover, they are highly constrained by the Shannon theory, especially in the case of low activity sporadic signals like electrocardiogram, phonocardiogram, seismic signals, and so forth. This causes the capture and processing of a large number of samples carrying no relevant information, a useless increase of the system activity, and hence of its power consumption.

The power efficiency can be enhanced by intelligently adapting the system processing load according to the signal local variations. To this end, a signal driven sampling scheme, based on "level-crossing", is employed. The Level Crossing Sampling Scheme (LCSS) [2] adapts the sampling rate by following the local characteristics of the input signal [3, 4]. Hence, it drastically reduces the activity of the post-processing chain, because it only captures the relevant information [5, 6]. In this context, LCSS based Analog to Digital Converters (LCADCs) have been developed [7–9]. Algorithms for processing [6, 10–12] and analysis [3, 5, 13, 14] of the sampled data, nonuniformly spaced in time, obtained with the LCSS have also been developed.

Filtering is a basic operation, required in almost every signal processing chain. Therefore, this paper focuses on the development of efficient Finite Impulse Response (FIR) filtering techniques. The idea is to pilot the system processing activity by the input signal variations. Following this idea, an efficient solution is proposed by intelligently combining the features of both nonuniform and uniform signal processing tools, which promises a drastic computational gain over the classical techniques.

Section 2 briefly reviews the nonuniform signal processing tools employed in the proposed approach. The complete functionality of the proposed filtering techniques is described in Section 3. Section 4 demonstrates their appealing features with the help of an illustrative example. The computational complexities of both proposed techniques are deduced and compared, with each other and with the classical case, in Section 5. Section 6 discusses the processing error. In Section 7, the proposed techniques' performance is evaluated for a speech signal. Section 8 finally concludes the article.

2. Nonuniform Signal Processing Tools

2.1. LCSS (Level Crossing Sampling Scheme)

The LCSS belongs to the signal-dependent sampling schemes, like zero-crossing sampling [15], Lebesgue sampling [16], and reference signal crossing sampling [17]. The concept of LCSS is not new and has been known at least since the 1950s [18]. It is also known as event-based sampling [19, 20]. In recent years, there has been considerable interest in the LCSS across a broad spectrum of technologies and applications. In [21–24], authors have employed it for monitoring and control systems. It has also been suggested in the literature for compression [2], random processes [25], and band-limited Gaussian random processes [26].

The LCSS is a natural choice for sampling time-varying signals. It lets the signal dictate the sampling process [4]. The nonuniformity of the sampling process represents the signal local variations [3]. In the case of LCSS, a sample is captured only when the input analog signal crosses one of the predefined thresholds. The samples are not uniformly spaced in time because they depend on the signal variations, as is clear from Figure 1.

Figure 1: Level-crossing sampling scheme.

Let a set of levels spanning the analog signal amplitude range $\Delta V_{in}$ be defined. These levels are equally spaced by a quantum $q$. When $x(t)$ crosses one of these predefined levels, a sample is taken [2]. This sample is the couple of an amplitude $x_n$ and a time $t_n$. However, $x_n$ is clearly equal to one of the levels, and the couple can be computed by employing

$x_n = x_{n-1} \pm q, \qquad dt_n = t_n - t_{n-1} \qquad (1)$

In (1), $t_n$ is the current sampling instant, $t_{n-1}$ is the previous one, and $dt_n$ is the time elapsed between the current and the previous sampling instants.
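
A minimal simulation of this sampling rule can make it concrete. The Python sketch below detects level crossings in a densely sampled waveform standing in for the analog input; the function name and the test values are illustrative, not part of the original design.

```python
import numpy as np

def level_crossing_sample(x, t, q):
    """Record a sample (x_n, t_n) each time the waveform crosses one of
    the levels spaced by the quantum q, as expressed by (1)."""
    xs, ts = [], []
    level = q * np.round(x[0] / q)        # nearest level at start-up
    for i in range(1, len(x)):
        while x[i] >= level + q:          # upward crossing(s)
            level += q
            xs.append(level); ts.append(t[i])
        while x[i] <= level - q:          # downward crossing(s)
            level -= q
            xs.append(level); ts.append(t[i])
    return np.array(xs), np.array(ts)

# Illustration: 10 Hz sinusoid quantized with 3 bits over a 2 V range
t = np.linspace(0.0, 0.2, 20000)
x = np.sin(2 * np.pi * 10 * t)
q = 2.0 / (2**3 - 1)
xn, tn = level_crossing_sample(x, t, q)
dtn = np.diff(tn)                         # the nonuniform intervals dt_n
```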

2.2. LCADC (LCSS-Based Analog to Digital Converter)

Classically, during an ideal A/D conversion process, the sampling instants are exactly known, whereas the sample amplitudes are quantized at the ADC resolution [27], which is defined by the ADC number of bits. The resulting quantization error is characterized by the Signal to Noise Ratio (SNR) [27], which can be expressed by

$\mathrm{SNR_{dB}} = 6.02\,M + 1.76 \qquad (2)$

Here, M is the ADC number of bits. It follows that the SNR of an ideal ADC depends only on M and it can be improved by 6.02 dB for each increment in M.

The A/D conversion process which occurs in the LCADCs [7–9] is dual in nature. Ideally, in this case, the sample amplitudes are exactly known, since they are exactly equal to one of the predefined levels, while the sampling instants are quantized at the timer resolution $T_{timer}$. According to [7, 8], the SNR in this case is given by

$\mathrm{SNR_{dB}} = 10\log_{10}\!\left(\frac{3\,P_x}{P_{\dot{x}}\,T_{timer}^{2}}\right) \qquad (3)$

Here, $P_x$ and $P_{\dot{x}}$ are the powers of $x(t)$ and of its derivative, respectively. It shows that in this case the SNR no longer depends on M, but on the signal characteristics and on $T_{timer}$. An improvement of about 6 dB in the SNR can be achieved by simply halving $T_{timer}$.
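
As a numerical check of the two conversion models, the sketch below evaluates (2) and the reconstructed form of (3) for a full-scale sinusoid; for $x(t) = A\sin(2\pi f t)$ one has $P_x = A^2/2$ and $P_{\dot{x}} = (2\pi f A)^2/2$. The function names are ours, and the exact form of (3) should be checked against [7, 8].

```python
import numpy as np

def snr_classical_db(M):
    """Ideal uniform ADC, (2): depends only on the number of bits M."""
    return 6.02 * M + 1.76

def snr_lcadc_db(Px, Pdx, T_timer):
    """Ideal LCADC, (3) as reconstructed from [7, 8]: depends on the
    timer resolution and the signal/derivative powers, not on M."""
    return 10 * np.log10(3 * Px / (Pdx * T_timer**2))

A, f = 1.0, 1e3                      # full-scale 1 kHz sinusoid
Px = A**2 / 2                        # power of x(t)
Pdx = (2 * np.pi * f * A)**2 / 2     # power of dx/dt
print(snr_classical_db(8))           # about 49.9 dB for M = 8
print(snr_lcadc_db(Px, Pdx, 1e-6))   # 1 microsecond timer
print(snr_lcadc_db(Px, Pdx, 0.5e-6)) # halved timer: about 6 dB better
```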

The choice of M is however crucial. It should be taken large enough to ensure a proper reconstruction of the signal. This problem has been addressed in [28–31]. In particular, in [31], it is shown that a band-limited signal can be ideally reconstructed from nonuniformly spaced samples if the average sampling frequency satisfies the Nyquist criterion. In the case of LCADCs, the average sampling frequency depends on M and on the signal characteristics [7–9]. Thus, for a given application, an appropriate M should be chosen in order to respect the reconstruction criterion [31].

In [7–9], authors have shown advantages of the LCADCs over the classical ones. The major advantages are the reduced activity, the power saving, the reduced electromagnetic emission, and the processing noise reduction. Inspired by these interesting features, the Asynchronous Analog to Digital Converter (AADC) [7] is employed to digitize $x(t)$ in the studied case. The characteristics of the filtering techniques described in the sequel are highly determined by the characteristics of the nonuniformly sampled signal produced by the AADC. We have already introduced the AADC amplitude range $\Delta V_{in}$, the number of bits M, and the quantum q. They are linked by the following relation:

$q = \frac{\Delta V_{in}}{2^{M}-1} \qquad (4)$

This quantum, together with the AADC processing delay $\delta$ for one sample, yields the upper limit on the input signal slope which can be captured properly:

$\left|\frac{dx(t)}{dt}\right|_{max} \leq \frac{q}{\delta} \qquad (5)$

In order to respect the reconstruction criterion [31] and the tracking condition [7], a band pass filter with pass-band $[f_{min};\, f_{max}]$ is employed at the AADC input. This, together with a given M, induces the AADC maximum and minimum sampling frequencies [6, 11], defined by

$Fs_{max} = 2\,f_{max}\,(2^{M}-1) \qquad (6)$
$Fs_{min} = 2\,f_{min}\,(2^{M}-1) \qquad (7)$

Here, $f_{max}$ and $f_{min}$ are the bandwidth and the fundamental frequencies, and $Fs_{max}$ and $Fs_{min}$ are the AADC maximum and minimum sampling frequencies, respectively.
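
The design relations (4)–(7) can be bundled into one helper, sketched below under the assumption that the reconstructed forms above are the intended ones; the delay and band values are illustrative placeholders, not the paper's.

```python
def aadc_parameters(dV_in, M, f_min, f_max, delay):
    """AADC design relations (4)-(7); forms reconstructed from the text
    and the cited works, to be checked against [6, 7, 11]."""
    q = dV_in / (2**M - 1)            # quantum, (4)
    slope_max = q / delay             # trackable slope bound, (5)
    Fs_max = 2 * f_max * (2**M - 1)   # maximum sampling frequency, (6)
    Fs_min = 2 * f_min * (2**M - 1)   # minimum sampling frequency, (7)
    return q, slope_max, Fs_max, Fs_min

# A 3-bit AADC on an illustrative [50 Hz; 2 kHz] band
print(aadc_parameters(dV_in=2.0, M=3, f_min=50.0, f_max=2e3, delay=1e-6))
```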

2.3. ASA (Activity Selection Algorithm)

The nonuniformly sampled signal obtained with the AADC can be used for further nonuniform digital processing [3, 10, 13]. However, in the studied case, the nonuniformity of the sampling process, which yields information on the signal local features, is employed to select only the relevant signal parts. Furthermore, the characteristics of each selected signal part are analyzed and are employed later on to adapt the proposed system parameters accordingly. This selection and local-features extraction process is named the ASA.

For activity selection, the ASA exploits the information lying in the nonuniformity of the level-crossing sampled signal [5]. This selection process corresponds to an adaptive length rectangular windowing. It defines a series of selected windows within the whole signal length. The ability of activity selection is extremely important to reduce the proposed system processing activity and consequently its power consumption. Indeed, in the proposed case, no processing is performed during idle signal parts, which is one of the reasons for the achieved computational gain compared to the classical case. The ASA is defined as follows:

$L_i = \sum_{n=1}^{N_i} dt_n, \quad \text{while } dt_n \leq \frac{T_0}{2} \ \text{and} \ L_i \leq L_{ref} \qquad (8)$

Here, $dt_n$ is clear from (1). $T_0$ is the fundamental period of the band-limited signal $x(t)$, and the condition $dt_n \leq T_0/2$ detects the parts of the nonuniformly sampled signal with activity. If the measured time delay $dt_n$ is greater than $T_0/2$, $x(t)$ is considered to be idle. This condition is chosen to ensure the Nyquist sampling criterion for $x(t)$.

$L_{ref}$ is the reference window length. Its choice depends on the input signal characteristics and the system resources. The upper bound on $L_{ref}$ is posed by the maximum number of samples that the system can treat at once, whereas the lower bound on $L_{ref}$ is posed by the condition $L_{ref} \geq T_0$, which should be respected in order to achieve a proper spectral representation [5].

$L_i$ represents the length in seconds of the selected window $W_i$; $L_{ref}$ poses the upper bound on $L_i$. $N_i$ represents the number of nonuniform samples lying in $W_i$, which lies on the active part of the nonuniformly sampled signal. $N_i$ and $i$ both belong to the set of natural numbers. The signal activity can be longer than $L_{ref}$; in this case, it is split into more than one selected window.

The above-described loop repeats for each selected window occurring during the observation length of $x(t)$. Every time, before starting the next loop, $i$ is incremented and $L_i$ and $N_i$ are initialized to zero.

The maximum number of samples $N_{max}$ which can take place within a chosen $L_{ref}$ can be calculated by employing

$N_{max} = Fs_{max} \cdot L_{ref} \qquad (9)$

The ASA displays interesting features which are not available in the classical case. It only selects the active parts of the nonuniformly sampled signal. Moreover, it correlates the length of the selected window with the input signal activity lying in it. In addition, it also provides an efficient reduction of the spectral leakage phenomenon in the case of transient signals. The leakage reduction is achieved by avoiding the signal truncation problem with a simple and efficient algorithm, instead of employing a smoothing (cosine) window function as is done in the classical schemes [5]. These abilities make the ASA extremely effective in reducing the overall system processing activity, especially in the case of low activity sporadic signals [5, 6, 11, 12, 14].
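
A sketch of the selection loop formalized in (8) is given below in Python. It walks through the nonuniform sampling instants, opens a window when activity is detected ($dt_n \leq T_0/2$), closes it on idleness or when $L_{ref}$ is reached, and returns index pairs; the exact bookkeeping of the original ASA may differ.

```python
import numpy as np

def asa(tn, T0, L_ref):
    """Activity Selection Algorithm sketch, following (8). Returns
    (start, end) index pairs delimiting the selected windows W_i."""
    windows, start, L_i = [], None, 0.0
    for n in range(1, len(tn)):
        dt_n = tn[n] - tn[n - 1]
        if dt_n <= T0 / 2:            # activity detected
            if start is None:
                start, L_i = n - 1, 0.0
            L_i += dt_n
            if L_i >= L_ref:          # long activity: split the window
                windows.append((start, n))
                start, L_i = None, 0.0
        elif start is not None:       # idle part: close current window
            windows.append((start, n - 1))
            start, L_i = None, 0.0
    if start is not None:
        windows.append((start, len(tn) - 1))
    return windows

# Local features of W_i used later on: L_i = tn[b] - tn[a] seconds,
# N_i = b - a + 1 samples, and Fs_i = N_i / L_i as in (10) below.
```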

3. Proposed Adaptive Rate Filtering

3.1. General Principle

Two techniques are described to filter the selected signal obtained at the ASA output. The signal processing chain common to both filtering techniques is shown in Figure 2.

Figure 2: Signal processing chain common to both filtering techniques.

The activity selection and the local features extraction are the bases of the proposed techniques. They make it possible to achieve adaptive rate sampling (only relevant samples to process) along with adaptive rate filtering (only relevant operations to deliver a filtered sample). This achievement assures a drastic computational gain of the proposed filtering techniques compared to the classical one. The steps realizing these ideas are detailed in the following subsections.

3.1.1. Adaptive Rate Sampling

The AADC sampling frequency is correlated to the local variations of $x(t)$ [6, 11, 12, 14]. It follows that the local sampling frequency $Fs_i$ can be specific for each $W_i$. According to [5], $Fs_i$ can be calculated by employing

$Fs_i = \frac{N_i}{L_i} \qquad (10)$

The upper and the lower bounds on $Fs_i$ are posed by $Fs_{max}$ and $Fs_{min}$, respectively. In order to perform a classical filtering algorithm, the selected signal lying in $W_i$ is uniformly resampled before proceeding to the filtering stage (cf. Figure 2). The characteristics of the selected signal part lying in $W_i$ are employed to choose its resampling frequency $Frs_i$. Once the resampling is done, there are $Nr_i$ samples in $W_i$. The choice of $Frs_i$ is crucial, and this procedure is detailed in the following subsection.

3.1.2. Adaptive Rate Filtering

It is known that, for fixed design parameters (cut-off frequency, transition-band width, pass-band, and stop-band ripples), the FIR filter order varies as a function of the operational sampling frequency: for a high sampling frequency, the order is high, and vice versa. In the classical case, the sampling frequency and the filter order both remain fixed regardless of the input signal variations, so they have to be chosen for the worst case. This time invariant nature of the classical filtering causes a useless increase of the computational load. This drawback has been resolved, up to a certain extent, by employing multirate filtering techniques [32–34].

The proposed filtering techniques of this paper are the intelligent alternatives to the multirate filtering techniques. They achieve computational efficiency by adapting the sampling frequency and the filter order according to the input signal local variations. Both techniques have some common features, which are described in the following.

In both cases, a reference FIR filter is offline designed for a reference sampling frequency $F_{ref}$. Its impulse response is $h(k)$, where $k$ indexes the reference filter coefficients. $F_{ref}$ is chosen in order to satisfy the Nyquist sampling criterion for $x(t)$, namely $F_{ref} \geq 2 f_{max}$.

During online computation, $F_{ref}$ and the local sampling frequency $Fs_i$ of window $W_i$ are used to define the local resampling frequency $Frs_i$ and a decimation factor $d_i$. $Frs_i$ is employed to uniformly resample the selected signal lying in $W_i$, whereas $d_i$ is employed to decimate $h(k)$ for filtering $W_i$.

$Fs_i$ can be specific for each window, depending upon the signal activity lying in it [11, 12]. For proper online filtering, $Frs_i$ and the sampling frequency for which the employed filter is designed should match. The approaches for keeping them coherent are explained below.

In the case when $Fs_i \geq F_{ref}$, $Frs_i = F_{ref}$ is chosen and $h(k)$ remains unchanged. This case is treated similarly by both proposed techniques. This choice of $Frs_i$ resamples $x(t)$ closer to the Nyquist rate, avoiding unnecessary interpolations during the data resampling process. It thus further improves the proposed techniques' computational efficiency. This case is included in the description (see flowcharts in Figures 3 and 4) of the following two filtering techniques.

Figure 3: Flowchart of the ARD.

Figure 4: Flowchart of the ARR.

In the opposite case, that is, $Fs_i < F_{ref}$, $Frs_i < F_{ref}$ is chosen and $h(k)$ is online decimated in order to reduce its operational frequency to $Frs_i$. In this case, the reference filter order is reduced for $W_i$, which reduces the number of operations to deliver a filtered sample [6, 11]. Hence, it improves the proposed techniques' computational efficiency. In this case, it appears that $Frs_i$ may be lower than the Nyquist frequency of $x(t)$, and so it can cause aliasing. According to [6, 11], if the local signal amplitude is of the order of the maximal range, then for a suitable choice of M (application-dependent) the signal crosses enough consecutive thresholds. Thus, it is locally oversampled with respect to its local bandwidth, and so there is no aliasing problem. This statement is further illustrated with the results summarized in Table 3.

In order to decimate $h(k)$, the decimation factor $d_i$ for $W_i$ is online calculated by employing

$d_i = \frac{F_{ref}}{Fs_i} \qquad (11)$

$d_i$ can be specific for each selected window, depending upon $Fs_i$. For an integral $d_i$, both techniques decimate $h(k)$ in a similar way. Thus, a test on $d_i$ is made by computing $D_i = \lfloor d_i \rfloor$ and verifying whether $d_i = D_i$. Here, the floor operation delivers only the integral part of $d_i$. If the answer is yes, then $h(k)$ is decimated with $d_i$; the process is clear from

$h_i(j) = h(j \cdot d_i), \quad j = 0, 1, \ldots, P_i \qquad (12)$

Equation (12) shows that the decimated filter impulse response $h_i(j)$ for the selected window $W_i$ is obtained by picking every $d_i$th coefficient from $h(k)$. Here, $j$ indexes the decimated filter coefficients. If the order of $h(k)$ is $P$, then the order of $h_i(j)$ is given as $P_i = P/d_i$.

A simple decimation causes a reduction of the decimated filter energy compared to the reference one, which would lead to an attenuated version of the filtered signal. $d_i$ is a good approximation of the ratio between the energy of the reference filter and that of the decimated one. Thus, this effect of decimation is compensated by scaling $h_i(j)$ with $d_i$. The process is clear from

$h_i(j) = d_i \cdot h(j \cdot d_i) \qquad (13)$

The two techniques mainly differ in the way of decimating $h(k)$ for a fractional $d_i$. The process is explained in the following sections.

3.2. ARD (Activity Reduction by Filter Decimation)

In the ARD technique, $h(k)$ is decimated by employing $D_i$. It calls for an adjustment of $Frs_i$, which is achieved as $Frs_i = F_{ref}/D_i$. As in this case $D_i \leq d_i$, it makes $Frs_i \geq Fs_i$. For the ARD, scaling is performed with $D_i$. The complete procedure of obtaining $Frs_i$ and $h_i(j)$ for the ARD is described in Figure 3.

3.3. ARR (Activity Reduction by Filter Resampling)

In the ARR technique, $d_i$ is employed to decimate $h(k)$. In this case, $Frs_i$ is given as $Frs_i = F_{ref}/d_i$, so it remains equal to $Fs_i$. The process of matching $h(k)$ with $Frs_i$ requires a fractional decimation of $h(k)$, which is achieved by resampling $h(k)$ with the step $d_i$. Again, the NNRI (cf. Section 5) is employed for the purpose of resampling. For the ARR, scaling is performed with $d_i$. The complete procedure of obtaining $Frs_i$ and $h_i(j)$ for the ARR is described in Figure 4.
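
The two flowcharts can be condensed into one sketch, given below under the assumption that nearest-neighbour picking of coefficients stands in for the NNRI of $h(k)$ (the NNRI itself is detailed in Section 5). The $F_{ref} = 2500$ Hz in the usage line echoes the example of Section 4, while the toy filter is ours.

```python
import numpy as np

def derive_local_filter(h, F_ref, Fs_i, technique="ARR"):
    """Online derivation of h_i(j) and Frs_i for one window W_i,
    following Figures 3 (ARD) and 4 (ARR)."""
    if Fs_i >= F_ref:                  # no decimation: Frs_i = F_ref
        return h.copy(), F_ref
    d = F_ref / Fs_i                   # decimation factor, (11)
    D = int(np.floor(d))
    if d == D:                         # integral d_i: (12) scaled by (13)
        return D * h[::D], Fs_i
    if technique == "ARD":             # integral decimation by D_i
        return D * h[::D], F_ref / D   # Frs_i readjusted to F_ref / D_i
    # ARR: fractional decimation, nearest-neighbour resampling of h(k)
    j = np.arange(int(np.floor(len(h) / d)))
    idx = np.minimum(np.round(j * d).astype(int), len(h) - 1)
    return d * h[idx], Fs_i

h = np.ones(128) / 128                 # toy reference filter h(k)
h_i, Frs_i = derive_local_filter(h, F_ref=2500.0, Fs_i=800.0)
```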

4. Illustrative Example

In order to illustrate the ARD and the ARR filtering techniques, the input signal shown on the left part of Figure 5 is employed. Its total duration is 20 seconds, and it consists of three active parts. A summary of the activities is given in Table 1.

Table 1 Summary of the input signal active parts.
Figure 5: The input signal (left) and the selected signal obtained with the ASA (right).

Table 1 shows the band limits of $x(t)$. In this case, $x(t)$ is digitized by employing a 3-bit resolution AADC. Thus, for the given ENOB, the corresponding minimum and maximum sampling frequencies $Fs_{min}$ and $Fs_{max}$ follow from (6) and (7). The chosen AADC amplitude range $\Delta V_{in}$ results in the quantum $q$ given by (4).

Each activity contains a low- and a high-frequency component (cf. Table 1). In order to filter out the high-frequency parts from each activity, a low pass reference FIR filter is implemented by employing the standard Parks-McClellan algorithm. The reference filter parameters are summarized in Table 2.

Table 2 Summary of the reference filter parameters.
Table 3 Summary of the selected windows parameters.

For this example, the reference window length $L_{ref} = 1$ second is chosen. It satisfies the boundary conditions discussed in Section 2.3. The given $L_{ref}$ delivers $N_{max}$ samples in this case (cf. Equation (9)). The ASA delivers three selected windows for the whole span of 20 seconds, which are shown on the right part of Figure 5. The selected windows' parameters are displayed in Table 3.

Table 3 shows that the first window is an example of the case $Fs_i \geq F_{ref}$, so it is tackled similarly by both techniques. In the other windows, $Fs_i < F_{ref}$ is valid, so the online decimation is employed. As $d_2$ and $d_3$, calculated by employing Equation (11), are fractional, this case is tackled in a different way by the ARD and the ARR.

Values of $Frs_i$, $Nr_i$, $D_i$ (or $d_i$), and $P_i$ are calculated for the ARD and the ARR by employing the methods shown in Figures 3 and 4, respectively. The obtained results are summarized in Tables 4 and 5.

Table 4 Values of $Frs_i$, $Nr_i$, $D_i$, and $P_i$ for each selected window in the ARD.
Table 5 Values of $Frs_i$, $Nr_i$, $d_i$, and $P_i$ for each selected window in the ARR.

Tables 3, 4, and 5 jointly exhibit the interesting features of the proposed filtering techniques, which are achieved by an intelligent combination of the nonuniform and the uniform signal processing tools (cf. Figure 2). $Fs_i$ represents the sampling frequency adaptation following the local variations of $x(t)$; it shows that the relevant signal parts are locally over-sampled in time with respect to their local bandwidths [6, 11]. $Frs_i$ shows the adaptation of the resampling frequency for each selected window; it further adds to the computational gain of the proposed techniques by avoiding unnecessary interpolations during the resampling process. $Nr_i$ shows how the adjustment of $Frs_i$ avoids the processing of unnecessary samples during the post filtering process. $P_i$ represents how the adaptation of the filter order for $W_i$ avoids unnecessary operations to deliver the filtered signal. $L_i$ exhibits the dynamic feature of the ASA, which is to correlate the window length with the signal activity lying in it [5].

These results have to be compared with what is done in the corresponding classical case. If $F_{ref}$ is chosen as the sampling frequency, then the total span is sampled at 2500 Hz. It makes 50,000 samples to process with the 127th-order FIR filter. On the other hand, in both proposed techniques, the total number of resampled data points is much lower: 3000 and 2794 for the ARD and the ARR, respectively. Moreover, the local filter orders in $W_2$ and $W_3$ are also lower than 127. It promises the computational efficiency of the proposed techniques compared to the classical one. A detailed complexity comparison is made in the following section.

5. Computational Complexity

In the classical case, with a $P$th-order filter, it is well known that $P+1$ multiplications and $P$ additions are required to compute each filtered sample. If $N$ is the number of samples, then the total computational complexity can be calculated by employing

$C_{classical} = N \cdot P \ \text{additions} + N\,(P+1) \ \text{multiplications} \qquad (14)$
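
A trivial helper makes the classical baseline explicit; it assumes, as stated above, $P$ additions and $P+1$ multiplications per output sample.

```python
def classical_cost(N, P):
    """Operation count of (14) for N samples and a P-th order filter."""
    return N * P, N * (P + 1)          # (additions, multiplications)

# With the Section 4 figures: 20 s sampled at 2500 Hz, 127th-order filter
adds, mults = classical_cost(N=50000, P=127)
```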

In the adaptive techniques presented here, the adaptation process requires extra operations for each selected window. The computational complexities of both techniques, $C_{ARD}$ and $C_{ARR}$, are deduced as follows.

The following steps are common to both the ARD and the ARR techniques. The choice of $Frs_i$ is a common operation for both proposed techniques; it requires one comparison, between $Fs_i$ and $F_{ref}$. The data resampling operation is also required in both techniques before filtering. In the studied case, the resampling process is performed by employing the Nearest Neighbour Resampling Interpolation (NNRI). The NNRI is chosen because of its simplicity, as it employs only one nonuniform observation for each resampled one. Moreover, it provides an unbiased estimate of the original signal variance; for this reason, it is also known as a robust interpolation method [35, 36]. The detailed reasons for the inclination toward the NNRI are discussed in [5, 35, 36]. The NNRI is performed as follows.

For each interpolation instant $t$, the interval of nonuniform samples $[t_n;\, t_{n+1}]$ within which $t$ lies is determined. Then the distance of $t$ to each of $t_n$ and $t_{n+1}$ is computed, and a comparison among the computed distances is performed to decide the smaller of them. For $Nr_i$ interpolation instants, the complexity of the first step is $Nr_i$ comparisons, and the complexity of the second step is $2Nr_i$ additions and $Nr_i$ comparisons. Hence, the NNRI total complexity for $W_i$ becomes $2Nr_i$ comparisons and $2Nr_i$ additions.
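
The sketch below implements this NNRI rule in Python; `searchsorted` locates the bracketing interval, after which the nearer of the two neighbours is kept. The function name is ours.

```python
import numpy as np

def nnri(tn, xn, t_interp):
    """Nearest Neighbour Resampling Interpolation: each resampling
    instant takes the amplitude of its closest nonuniform observation."""
    out = np.empty(len(t_interp))
    for m, t in enumerate(t_interp):
        n = int(np.clip(np.searchsorted(tn, t), 1, len(tn) - 1))
        # keep the nearer of the two bracketing observations
        out[m] = xn[n] if (tn[n] - t) <= (t - tn[n - 1]) else xn[n - 1]
    return out

# e.g. uniform resampling of a window at Frs_i:
# x_u = nnri(tn, xn, np.arange(tn[0], tn[-1], 1.0 / Frs_i))
```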

In the case when $Fs_i < F_{ref}$, the decimation of $h(k)$ is performed in both techniques. In order to do so, $d_i$ is computed by performing a division between $F_{ref}$ and $Fs_i$; $D_i$ is calculated by employing a floor operation on $d_i$; and a comparison is made between $d_i$ and $D_i$. In the case when $d_i = D_i$, the process of obtaining $h_i(j)$ is similar for both techniques (cf. Figures 3 and 4): the decimator simply picks every $D_i$th coefficient from $h(k)$. This has a negligible complexity compared to operations like addition and multiplication, which is the reason why it is not taken into account during the complexity evaluation process. In both techniques, the decimated filter impulse response is scaled, which requires $P_i + 1$ multiplications. The fractional $d_i$ is tackled in a different way by each filtering technique, as detailed in the following subsections.

5.1. Complexity of the ARD Technique

Even if $d_i$ is fractional, in the case of the ARD technique, decimation is performed by employing $D_i$. $Frs_i$ is modified in order to keep it coherent with $D_i$, which requires one division (cf. Figure 3). Finally, a $P_i$th-order filter performs $Nr_i(P_i+1)$ multiplications and $Nr_i P_i$ additions for $W_i$. The combined computational complexity for the ARD technique is given by

(15)

5.2. Complexity of the ARR Technique

In the case of the ARR technique, $d_i$ is employed as the decimation factor. The fractional decimation is achieved by resampling $h(k)$ at $Frs_i$. The resampling is performed by employing the NNRI, which requires additional comparisons and additions to deliver $h_i(j)$. The remaining operation cost is common between the ARD and the ARR. The combined computational complexity for the ARR technique is given by

(16)

In Equations (15) and (16), $i$ represents the selected window index, and $\alpha$ and $\beta$ are multiplying factors: $\alpha$ is 1 in the case when $Fs_i < F_{ref}$ and 0 otherwise, and $\beta$ is 1 in the case when $d_i$ is fractional and 0 otherwise.

5.3. Complexity Comparison of the ARD and the ARR with the Classical Filtering

From (14), (15), and (16), it is clear that there are uncommon operations between the classical and the proposed adaptive rate filtering techniques. In order to make them approximately comparable, it is assumed that a comparison has the same processing cost as an addition, and that a division or a floor has the same processing cost as a multiplication. Following these assumptions, comparisons are merged into the additions count, and divisions plus floors are merged into the multiplications count, during the complexity evaluation process. Now Equations (15) and (16) can be written as follows:

(17)
(18)

By employing results of the example studied in the previous section, computational comparisons of the ARD and the ARR with the classical one are made in terms of additions and multiplications. The results are computed for different time spans and are summarized in Tables 6 and 7.

Table 6 Computational gain of the ARD over the classical one for different x(t) time spans.
Table 7 Computational gain of the ARR over the classical one for different time spans.

Gains in additions and multiplications of the proposed techniques over the classical one are clear from the above results. In the case of $W_1$, where the resampling frequency and the filter order are the same as in the classical case (cf. Tables 4 and 5), a gain is still achieved by using the proposed adaptive techniques. This is only due to the fact that the ASA correlates the window length to the activity (0.5 second), while the classical case computes during the whole reference duration of 1 second. Gains are of course much larger in the other windows, since the proposed techniques take benefit of processing fewer samples along with lower filter orders. When treating the whole span of 20 seconds, the proposed techniques also take advantage of the idle parts, which induces additional gains compared to the classical case.

The above results confirm that the proposed filtering techniques lead toward a drastic reduction in the number of operations compared to the classical one. This reduction in operations is achieved due to the joint benefits of the AADC, the ASA, and the resampling, as they enable the adaptation of the sampling frequency and the filter order following the input signal local variations.

5.4. Complexity Comparison between the ARD and the ARR

The main difference between the two proposed techniques occurs in the case when $Fs_i < F_{ref}$ and $d_i$ is fractional (cf. Section 3).

The ARD makes an increment in $Frs_i$ in order to keep it coherent with $D_i$. The increase in $Frs_i$ causes $Nr_i$ to increase, and also $P_i$ to increase. Thus, in comparison to the ARR, this technique increases the computational load of the post-filtering operation, while keeping the decimation process of $h(k)$ simple.

The ARR performs resampling of $h(k)$ at $Frs_i$. Thus, in comparison to the ARD, this technique increases the complexity of the decimation process of $h(k)$, while keeping the computational load of the post-filtering process lower.

In continuation of Section 5.3, a complexity comparison between the ARD and the ARR is made in terms of additions and multiplications, by employing Equations (17) and (18), respectively. It concludes that the ARR remains computationally efficient compared to the ARD, in terms of additions and multiplications, as long as the conditions given by expressions (19) and (20) remain true. Note that $Nr_i$ and $P_i$ can be different for the ARD and the ARR (cf. Tables 4 and 5):

(19)
(20)

For the studied example, $d_2$ and $d_3$ are fractional, thus the ARD and the ARR proceed differently. Conditions (19) and (20) remain true for both $W_2$ and $W_3$ (cf. Tables 4 and 5). Hence, the gains in additions and multiplications of the ARR are higher than those of the ARD for $W_2$ and $W_3$ (cf. Tables 6 and 7). It shows that, except for very specific situations, the ARR technique will remain less expensive than the ARD. The ARR achieves this computational performance by employing the fractional decimation of $h(k)$, which may lead to a quality compromise of the ARR compared to the ARD. This issue is addressed in the following section.

6. Processing Error

6.1. Approximation Error

In the proposed techniques, the approximation error occurs due to two effects: the time quantization error, caused by the AADC finite timer precision, and the interpolation error, which occurs in the course of the uniform resampling process. After these two operations, the mean approximation error for $W_i$ can be computed by employing the following:

$\bar{\varepsilon}_i = \frac{1}{Nr_i} \sum_{n=1}^{Nr_i} \left| x(t_n) - x_r(t_n) \right| \qquad (21)$

Here, $x_r(t_n)$ is the resampled observation, interpolated with respect to the time instant $t_n$, and $x(t_n)$ is the original sample value which would be obtained by sampling $x(t)$ at $t_n$. In the studied example discussed in Section 4, $x(t)$ is analytically known; thus it is possible to compute its original sample value at any given time instant. This allows us to compute the approximation error introduced by the proposed adaptive rate techniques by employing Equation (21).
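
Assuming the analytic form of $x(t)$ is available as a callable, (21) reduces to one line; the argument names are illustrative.

```python
import numpy as np

def mean_approximation_error(x_analytic, t_r, x_r):
    """Mean approximation error (21) for one selected window: t_r, x_r
    are the uniformly resampled observations, x_analytic the exact
    expression of the input signal."""
    return np.mean(np.abs(x_analytic(np.asarray(t_r)) - np.asarray(x_r)))

# e.g. mean_approximation_error(lambda t: np.sin(2*np.pi*10*t), t_r, x_r)
```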

The results obtained for each selected window for both the ARD and the ARR are summarized in Table 8.

Table 8 Mean approximation error of each selected window for the ARD and the ARR.

Table 8 shows the approximation error introduced by the proposed techniques. This process is accurate enough for a 3-bit AADC. For the higher precision applications, the approximation accuracy can be improved by increasing the AADC resolution M and the interpolation order [6, 8, 37, 38]. Thus, an increased accuracy can be achieved at the cost of an increased computational load. Therefore, by making a suitable compromise between the accuracy level and the computational load, an appropriate solution can be devised for a specific application.

For a given M and interpolation order, the approximation accuracy can be further improved by employing symmetry during the interpolation process, which results in a reduced resampling error [38, 39]. The pros and cons of this approach are under investigation, and a description of it is given in [40].

6.2. Filtering Error

In the proposed filtering techniques, a reference filter is employed and then online decimated for $W_i$, depending on the chosen technique. This online decimation can cause a degradation of the filtering precision. In order to evaluate this phenomenon on our test signal, the following procedure is adopted.

A reference filtered signal is generated. In this case, instead of decimating $h(k)$ to obtain $h_i(j)$, a specific filter is directly designed for $Frs_i$ by using the Parks-McClellan algorithm, employing the same design parameters as summarized in Table 2. The signal activity corresponding to $W_i$ is sampled at $Frs_i$ with a high precision classical ADC. This sampled signal is filtered by employing the directly designed filter. The filtered signal obtained in this way is used as the reference for $W_i$, and it is compared with the results obtained by the proposed techniques.

Let $y_{ref}(n)$ be the $n$th reference-filtered sample and $y(n)$ be the $n$th filtered sample obtained with one of the proposed filtering techniques. Then, the mean filtering error for $W_i$ can be calculated by employing

$\bar{\varepsilon}_{f,i} = \frac{1}{Nr_i} \sum_{n=1}^{Nr_i} \left| y_{ref}(n) - y(n) \right| \qquad (22)$

The mean filtering error of both proposed techniques is calculated for each activity by employing (22). The results are summarized in Table 9.
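
The procedure above translates into a few lines with SciPy's Parks-McClellan routine (`remez`); this is a sketch in which `x_w` stands for the window's activity sampled at $Frs_i$, and `f_pass`, `f_stop`, `numtaps` stand for the Table 2 design parameters (assumed names).

```python
import numpy as np
from scipy.signal import remez, lfilter

def mean_filtering_error(x_w, Frs_i, h_i, f_pass, f_stop, numtaps):
    """Section 6.2 check: a dedicated filter designed at Frs_i gives the
    reference output, compared with the decimated filter h_i via (22)."""
    h_ref = remez(numtaps, [0, f_pass, f_stop, Frs_i / 2], [1, 0], fs=Frs_i)
    y_ref = lfilter(h_ref, 1.0, x_w)   # reference filtered signal
    y = lfilter(h_i, 1.0, x_w)         # proposed-technique output
    return np.mean(np.abs(y_ref - y))  # mean filtering error, (22)
```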

Table 9 Mean filtering error of each selected window for the ARD and the ARR.

Table 9 shows that the online decimation of $h(k)$ in the proposed techniques causes a loss of the desired filtering quality. Indeed, the filtering error increases with the increase in $d_i$. The measure of this error can be used to decide an upper bound on $d_i$ (by performing an offline calculation), for which the decimated and scaled filters provide results with an acceptable level of accuracy. The required level of accuracy is application-dependent. Moreover, for high precision applications, an appropriate filter can be calculated online for each selected window, at the cost of an increased computational load; the process is the one used to generate the reference filtered signal, discussed above.

Table 9 also shows that the errors for $W_2$ and $W_3$ with the ARR are higher than those of the ARD. This is due to the resampling of $h(k)$ in the ARR: interpolated coefficients of $h(k)$ are employed for filtering the resampled data lying in $W_2$ and $W_3$, which results in an increased filtering error of the ARR compared to the ARD. Similar to Section 6.1, this resampling error can also be reduced, to a certain extent, by employing a higher order interpolator [37, 38]. In conclusion, a certain increase in accuracy can be achieved at a certain loss of processing efficiency.

7. Speech Signal as a Case Study

In order to evaluate the performance of the ARD and the ARR for real life signals, the speech signal x(t) shown in Figure 6(a) is employed. It is a 1.6-second, [50 Hz; 5000 Hz] band-limited signal corresponding to a three-word sentence. The goal is to determine the pitch (fundamental frequency) of $x(t)$ in order to determine the speaker's gender. For a male speaker, the pitch lies within the frequency range [100 Hz; 150 Hz], whereas for a female speaker, the pitch lies within the frequency range [200 Hz; 300 Hz] [41].
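
For orientation, a toy pitch detector in the spirit of this experiment is sketched below: it picks the strongest spectral peak of the filtered vowel segment in a plausible pitch band and applies the gender ranges quoted from [41]. The search band and the windowing are our assumptions, not the paper's method.

```python
import numpy as np

def pitch_and_gender(y, fs, f_lo=80.0, f_hi=400.0):
    """Estimate the pitch of segment y (sampled at fs) from its
    spectrum, then apply the [41] gender ranges quoted above."""
    spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    pitch = freqs[band][np.argmax(spectrum[band])]
    if 100 <= pitch <= 150:
        return pitch, "male"
    if 200 <= pitch <= 300:
        return pitch, "female"
    return pitch, "undecided"
```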

Figure 6: On the top, the input speech signal (a), the selected signal with the ASA (b), and a zoom of the second window W2 (c). On the bottom, a spectrum zoom of the filtered signal lying in W2 obtained with the reference filtering (d), with the ARD (e), and with the ARR (f), respectively.

The reference frequency $F_{ref}$ is chosen as a common sampling frequency for speech. A 4-bit resolution AADC is used for digitizing $x(t)$; the corresponding $Fs_{min}$ and $Fs_{max}$ follow from (6) and (7). The amplitude range $\Delta V_{in}$ is set as before, which leads to the quantum $q$ of (4). The amplitude of $x(t)$ is normalized in order to avoid the AADC saturation.

The studied signal is part of a conversation, and during a dialog the speech activity occupies only a fraction of the total dialog time [42]. A classical filtering system would remain active during the total dialog duration. The proposed LCSS-based filtering techniques remain active only during this fraction of the dialog time span, which reduces the system power consumption.

A speech signal mainly consists of vowels and consonants. Consonants are of lower amplitude compared to vowels [41, 43]. In order to determine the speaker's pitch, vowels are the relevant parts of $x(t)$. For the chosen $\Delta V_{in}$, consonants are ignored during the signal acquisition process and are considered as low amplitude noise. In contrast, vowels are locally over-sampled, like any harmonic signal [6, 10, 11]. This intelligent signal acquisition further avoids the processing of useless samples within the activity, and so further improves the proposed techniques' computational efficiency.

In order to apply the ASA, a reference window length $L_{ref}$ suited to speech is chosen; the corresponding $N_{max}$ follows from Equation (9). The ASA delivers three selected windows, which are shown in Figure 6(b). The parameters of each selected window are summarized in Table 10.

Table 10 Summary of the selected windows parameters.

Although the consonants are partially filtered out during the data acquisition process, for proper pitch estimation it is still required to filter out the remaining effect of high frequencies present in $x(t)$. To this aim, a reference low pass filter is designed with the standard Parks-McClellan algorithm. Its characteristics are summarized in Table 11.

Table 11 Summary of the reference filter parameters.

To find the pitch, we now focus on $W_2$, which corresponds to the vowel "a". A zoom on this signal part is plotted in Figure 6(c). The condition $Fs_2 < F_{ref}$ is valid, and $d_2$ is fractional (cf. Equation (11)). Thus, the filtering process for each proposed technique differs, which makes it possible to compare their performances. The values of $Frs_2$, $Nr_2$, $D_2$ (or $d_2$), and $P_2$ for both techniques are given in Table 12.

Table 12 Values of $Frs_2$, $Nr_2$, $d_2$, and $P_2$ for the ARD and the ARR.

Computational gains of the proposed filtering techniques compared to the classical one are computed by employing Equations (14), (17), and (18). The results show 8.62 and 13.17 times gains in additions, and 8.71 and 13.26 times gains in multiplications, for the ARD and the ARR, respectively, for $W_2$. This confirms the computational efficiency of the proposed techniques compared to the classical one. It is gained firstly by achieving an intelligent signal acquisition and secondly by adapting the sampling frequency and the filter order following the local variations of $x(t)$.

Once more, the conditions (19) and (20) remain true for $W_2$, so the ARR technique remains computationally more efficient than the ARD one.

Spectra of the filtered signal lying in $W_2$, obtained with the reference filtering (cf. Section 6.2), with the ARD, and with the ARR techniques, are plotted, respectively, in Figures 6(d), 6(e), and 6(f).

The spectra in Figure 6 show that the fundamental frequency is about 215 Hz. Thus, one can easily conclude that the analyzed sentence is pronounced by a female speaker. Although it is required to decimate the reference filter 3 times and 3.7 times, respectively, for the ARD and the ARR, the spectra of the filtered signal obtained with the proposed techniques are quite comparable to the spectrum of the reference-filtered signal. It shows that even after such a level of decimation, the results delivered by the proposed techniques are of acceptable quality for the studied speech application.

The above discussion shows the suitability of the proposed techniques for low activity time-varying signals like electrocardiogram, phonocardiogram, seismic, and speech signals. Speech is a common and easily accessible signal; therefore, the proposed techniques' performance is studied for a speech application, though they can be applied to other appropriate real signals like electrocardiogram, phonocardiogram, and seismic ones. The devised approach's versatility lies in the appropriate choice of system parameters like the AADC resolution M, the distribution of the level crossing thresholds, and the interpolation order. These parameters should be tactfully chosen for a targeted application, so that they ensure an attractive tradeoff between the system computational complexity and the delivered output quality.

8. Conclusion

Two novel adaptive rate filtering techniques have been devised. These are well suited for low activity sporadic signals like electrocardiogram, phonocardiogram and seismic signals. For both filtering techniques, a reference filter is offline designed by taking into account the input signal statistical characteristics and the application requirements.

The complete procedure of obtaining the resampling frequency and the decimated filter coefficients for each selected window has been described for both proposed techniques. The computational complexities of the ARD and the ARR are deduced and compared with the classical case. It is shown that the proposed techniques result in a more than one order of magnitude gain in terms of additions and multiplications over the classical one. This is achieved due to the joint benefits of the AADC, the ASA, and the resampling, as they allow the online adaptation of the system parameters by exploiting the input signal local variations. It drastically reduces the total number of operations and, therefore, the energy consumption compared to the classical case.

A complexity comparison between the ARD and the ARR is also made. It is shown that the ARR outperforms the ARD in most of the cases. The performances of the ARD and the ARR are also demonstrated for a speech application. The results obtained in this case are coherent with those obtained for the illustrative example.

Methods to compute the approximation and the filtering errors of the proposed techniques are also devised. It is shown that the errors made by the proposed techniques are minor in the studied case. A higher precision can be achieved by increasing the AADC resolution and the interpolation order. Thus, a suitable solution can be proposed for a given application by making an appropriate tradeoff between the accuracy level and the computational load.

A detailed study of the proposed filtering techniques computational complexities by taking into account the real processing cost at circuit level is in progress. Future works focus on the optimization of these filtering techniques and their further employment in real life applications.

References

  1. Sekhar SC, Sreenivas TV: Adaptive window zero-crossing-based instantaneous frequency estimation. EURASIP Journal on Applied Signal Processing 2004,2004(12):1791-1806. 10.1155/S111086570440417X

  2. Mark JW, Todd TD: A nonuniform sampling approach to data compression. IEEE Transactions on Communications 1981, 29: 24-32. 10.1109/TCOM.1981.1094872

  3. Greitans M: Time-frequency representation based chirp-like signal analysis using multiple level crossings. Proceedings of 15th European Signal Processing Conference (EUSIPCO '07), September 2007, Poznan, Poland 2154-2158.

  4. Guan KM, Singer AC: Opportunistic sampling by level-crossing. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), April 2007, Honolulu, Hawaii, USA 3: 1513-1516.

  5. Qaisar SM, Fesquet L, Renaudin M: Spectral analysis of a signal driven sampling scheme. Proceedings of the 14th European Signal Processing Conference (EUSIPCO '06), September 2006, Florence, Italy

  6. Qaisar SM, Fesquet L, Renaudin M: Computationally efficient adaptive rate sampling and filtering. Proceedings of 15th European Signal Processing Conference (EUSIPCO '07), September 2007, Poznan, Poland 2139-2143.

  7. Allier E, Sicard G, Fesquet L, Renaudin M: A new class of asynchronous A/D converters based on time quantization. Proceedings of the 9th International Symposium on Asynchronous Circuits and Systems (ASYNC '03), May 2003, Vancouver, Canada 197-205.

  8. Sayiner N, Sorensen HV, Viswanathan TR: A level-crossing sampling scheme for A/D conversion. IEEE Transactions on Circuits and Systems II 1996,43(4):335-339. 10.1109/82.488288

  9. Akopyan F, Manohar R, Apsel AB: A level-crossing flash asynchronous analog-to-digital converter. Proceedings of the International Symposium on Asynchronous Circuits and Systems (ASYNC '06), March 2006, Grenoble, France 12-22.

  10. Aeschlimann F, Allier E, Fesquet L, Renaudin M: Asynchronous FIR filters: towards a new digital processing chain. Proceedings of the International Symposium on Asynchronous Circuits and Systems (ASYNC '04), April 2004, Crete, Greece 10: 198-206.

  11. Qaisar SM, Fesquet L, Renaudin M: Adaptive rate filtering for a signal driven sampling scheme. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), April 2007, Honolulu, Hawaii, USA 3: 1465-1468.

  12. Qaisar SM, Fesquet L, Renaudin M: Computationally efficient adaptive rate sampling and filtering for low power embedded systems. Proceedings of the International Conference on Sampling Theory and Applications (SampTA '07), June 2007, Thessaloniki, Greece

  13. Aeschlimann F, Allier E, Fesquet L, Renaudin M: Spectral analysis of level crossing sampling scheme. Proceedings of the International Conference on Sampling Theory and Applications (SampTA '05), July 2005, Samsun, Turkey

  14. Qaisar SM, Fesquet L, Renaudin M: An adaptive resolution computationally efficient short-time Fourier transform. Research Letters in Signal Processing 2008, 2008: 5 pages.

  15. Bond FE, Cahn CR: On sampling the zeros of bandwidth limited signals. IRE Transactions on Information Theory 1958, 4: 110-113. 10.1109/TIT.1958.1057457

  16. Astrom KJ, Bernhardsson B: Comparison of Riemann and Lebesgue sampling for first order stochastic systems. Proceedings of the 41st IEEE Conference on Decision and Control (CDC '02), December 2002, Las Vegas, Nev, USA 2: 2011-2016.

  17. Bilinskis I: Digital Alias Free Signal Processing. John Wiley & Sons, New York, NY, USA; 2007.

  18. Ellis PH: Extension of phase plane analysis to quantized systems. IRE Transactions on Automatic Control 1959, 4: 43-59. 10.1109/TAC.1959.1104845

  19. Lim M, Saloma C: Direct signal recovery from threshold crossings. Physical Review E 1998,58(5B):6759-6765.

  20. Miskowicz M: Asymptotic effectiveness of the event-based sampling according to the integral criterion. Sensors 2007,7(1):16-37. 10.3390/s7010016

  21. Astrom KJ, Bernhardsson B: Comparison of periodic and event based sampling for first-order stochastic systems. Proceedings of IFAC World Congress, 1999 301-306.

  22. Miskowicz M: Send-on-delta concept: an event-based data reporting strategy. Sensors 2006,6(1):49-63. 10.3390/s6010049

  23. Otanez PG, Moyne JR, Tilbury DM: Using deadbands to reduce communication in networked control systems. Proceedings of the American Control Conference (ACC '02), May 2002, Anchorage, Alaska, USA 4: 3015-3020.

  24. Gupta SC: Increasing the sampling efficiency for a control system. IEEE Transactions on Automatic Control 1963, 263-264.

  25. Blake IF, Lindsey WC: Level-crossing problems for random processes. IEEE Transactions on Information Theory 1973, 295-315.

  26. Miskowicz M: Efficiency of level-crossing sampling for bandlimited Gaussian random processes. Proceedings of IEEE International Workshop on Factory Communication Systems (WFCS '06), June 2006, Torino, Italy 137-142.

  27. Walden RH: Analog-to-digital converter survey and analysis. IEEE Journal on Selected Areas in Communications 1999,17(4):539-550. 10.1109/49.761034

  28. Nazario MA, Saloma C: Signal recovery in sinusoid-crossing sampling by use of the minimum-negative constraint. Applied Optics 1998, 37: 2953-2963.

  29. Lim M, Saloma C: Direct signal recovery from threshold crossings. Physical Review E 1998,58(5B):6759-6765.

  30. Beutler FJ: Error free recovery from irregularly spaced samples. SIAM Review 1966, 8: 328-335.

  31. Marvasti F: Nonuniform Sampling Theory and Practice. Kluwer Academic/Plenum Publishers, New York, NY, USA; 2001.

  32. Vetterli M: A theory of multirate filter banks. IEEE Transactions on Acoustics, Speech, and Signal Processing 1987,35(3):356-372. 10.1109/TASSP.1987.1165137

  33. Chu S, Burrus CS: Multirate filter designs using comb filters. IEEE Transactions on Circuits and Systems 1984,31(11):913-924. 10.1109/TCS.1984.1085447

  34. Crochiere RE, Rabiner LR: Multirate Digital Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, USA; 1983.

  35. de Waele S, Broersen PMT: Time domain error measure for resampled irregular data. Proceedings of the 16th IEEE Instrumentation and Measurement Technology Conference (IMTC '99), May 1999, Venice, Italy 2: 1172-1177.

  36. de Waele S, Broersen PMT: Error measures for resampled irregular data. IEEE Transactions on Instrumentation and Measurement 2000,49(2):216-222. 10.1109/19.843052

  37. Harris F: Multirate signal processing in communication systems. Proceedings of 15th European Signal Processing Conference (EUSIPCO '07), September 2007, Poznan, Poland

  38. Klamer DM, Masry E: Polynomial interpolation of randomly sampled bandlimited functions and processes. SIAM Journal on Applied Mathematics 1982,42(5):1004-1019. 10.1137/0142071

  39. Hildebrand FB: Introduction to Numerical Analysis. McGraw-Hill, Boston, Mass, USA; 1956.

  40. Qaisar SM, Fesquet L, Renaudin M: An improved quality adaptive rate filtering technique based on the level crossing sampling. Proceedings of the World Academy of Science, Engineering and Technology, July 2008 31: 79-84.

  41. Rabiner LR, Schafer RW: Digital Processing of Speech Signals. Prentice-Hall, Englewood Cliffs, NJ, USA; 1978.

  42. Fontolliet PG: Systèmes de Télécommunications. Dunod, Paris, France; 1983.

  43. Quatieri TF: Discrete-Time Speech Signal Processing: Principles and Practice. Prentice-Hall, Englewood Cliffs, NJ, USA; 2001.

Author information

Correspondence to Saeed Mian Qaisar.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Mian Qaisar, S., Fesquet, L. & Renaudin, M. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling. EURASIP J. Adv. Signal Process. 2009, 971656 (2009). https://doi.org/10.1155/2009/971656
