The paper presents results of numerical investigations of two algorithms for parametric approximation of responses of electric circuits that can be accurately approximated by rational functions of frequency, time, etc., for fixed parameter values. A piecewise multilinear algorithm and a two-stage algorithm with frequency and magnitude scaling of root rational models are compared. Both techniques use the Vector Fitting algorithm for preliminary rational approximations of the responses at parameter values from a fixed grid. The numerical investigation uses three circuits which differ in the dependence of the rational model order on parameter values (fixed or variable) and in the parameterization of circuit elements (linear or non-linear). The dependence of the RMS approximation error on grid density is found to be approximately quadratic with respect to each variable, admitting fairly large grid cells. It is shown that the two-stage parametric approximation is not only (typically much) more accurate, but also free of spurious oscillations of the predicted responses. The price for this accuracy is the exponential complexity of the more accurate of the two algorithms.
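The piecewise multilinear stage can be illustrated with a minimal sketch (not the paper's implementation): responses are precomputed on a parameter grid, e.g. by Vector Fitting, and an off-grid response is predicted by bilinear weighting of the four surrounding grid models. The RC circuit, grids, and parameter values below are hypothetical.

```python
import numpy as np

def bilinear_response(p1, p2, grid1, grid2, responses):
    """Piecewise bilinear interpolation of responses tabulated on a 2-D
    parameter grid; responses[i][j] is the response vector precomputed
    (e.g. by Vector Fitting) at (grid1[i], grid2[j])."""
    i = int(np.clip(np.searchsorted(grid1, p1) - 1, 0, len(grid1) - 2))
    j = int(np.clip(np.searchsorted(grid2, p2) - 1, 0, len(grid2) - 2))
    t = (p1 - grid1[i]) / (grid1[i + 1] - grid1[i])
    u = (p2 - grid2[j]) / (grid2[j + 1] - grid2[j])
    return ((1 - t) * (1 - u) * responses[i][j]
            + t * (1 - u) * responses[i + 1][j]
            + (1 - t) * u * responses[i][j + 1]
            + t * u * responses[i + 1][j + 1])

# toy circuit: RC low-pass H(jw) = 1/(1 + jwRC), tabulated on a 5x5 (R, C)
# grid and interpolated at an off-grid parameter point
w = 2 * np.pi * np.logspace(3, 6, 50)
Rs = np.linspace(50.0, 150.0, 5)          # ohms
Cs = np.linspace(1e-9, 5e-9, 5)           # farads
H = [[1.0 / (1.0 + 1j * w * R * C) for C in Cs] for R in Rs]

H_interp = bilinear_response(110.0, 2.4e-9, Rs, Cs, H)
H_exact = 1.0 / (1.0 + 1j * w * 110.0 * 2.4e-9)
rms_err = float(np.sqrt(np.mean(np.abs(H_interp - H_exact) ** 2)))
```

Halving both grid spacings should cut the RMS error by roughly a factor of four, consistent with the approximately quadratic error dependence reported above.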
The paper presents an overview of the design methodology and results of experiments with a Prototype of a highly efficient optimal adaptive feedback communication system (AFCS), transmitting low-frequency analog signals without coding. The paper emphasizes the role of forward transmitter saturation as the factor that blocked implementation of the theoretical results of pioneering (1960s-1970s) and later research on FCS. A deepened analysis of the role of the statistical fitting condition in adequate formulation and solution of the AFCS optimization task is given. The solution of the task, i.e. optimal transmission/reception algorithms, is presented in a form useful for elaboration of the hardware/software Prototype. A notable particularity of the Prototype is the absence of encoding/decoding units, whose functions are realized by the adaptive pulse amplitude modulator (PAM) of the forward transmitter (FT) and the estimating/controlling algorithm in the receiver of the base station (BS). Experiments confirm that the Prototype transmits signals from FT to BS "perfectly": with a bit rate equal to the capacity of the system, and with limit energy [J/bit] and spectral [bps/Hz] efficiency. Another, no less important and experimentally confirmed, particularity of the AFCS is its capability to adjust the parameters of FT and BS to the characteristics of the application scenario and maintain the ideal regime of transmission, including spectral-energy efficiency. AFCS adjustment can be made using BS estimates of the mean square error (MSE). The concluding part of the paper discusses the presented results, stressing the capability of AFCS to solve problems appearing in the development of dense wireless networks.
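The mechanism that lets coding-free feedback transmission reach capacity can be sketched with a classical Schalkwijk-Kailath-type iteration (a textbook caricature, not the Prototype's algorithm; all powers and counts are hypothetical): the transmitter repeatedly sends the scaled estimation error, and the receiver's MSE shrinks geometrically by N/(P+N) = 2^(-2C) per channel use.

```python
import numpy as np

rng = np.random.default_rng(0)
P, N, K, trials = 1.0, 1.0, 10, 4000     # signal power, noise power, uses
sigma0 = 1.0                              # prior std of the analog message

theta = rng.normal(0.0, sigma0, trials)   # analog messages to transmit
est = np.zeros(trials)                    # receiver (BS) estimates
var = sigma0 ** 2                         # tracked estimation-error variance

for _ in range(K):
    g = np.sqrt(P / var)                              # scale error to power P
    y = g * (theta - est) + rng.normal(0.0, np.sqrt(N), trials)
    est += (g * var / (g * g * var + N)) * y          # MMSE correction
    var *= N / (P + N)                                # geometric MSE decay

mse_emp = float(np.mean((theta - est) ** 2))
mse_theory = sigma0 ** 2 * (N / (P + N)) ** K
```

Each channel use reduces the error variance by exactly the AWGN capacity factor, which is the sense in which such schemes are "perfect" without any encoding/decoding units.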
KEYWORDS: Berkelium, Signal processing, Calibration, Transmitters, Atomic force microscopy, Telecommunications, Analog electronics, Interference (communication), Monte Carlo methods, Algorithm development
The brilliant idea of Adaptive Feedback Control Systems (AFCS) makes possible the creation of highly efficient adaptive systems for estimation, identification and filtering of signals and physical processes. The research problem considered in this paper is: how does the performance of an AFCS change if some of the assumptions used to formulate the iterative estimation algorithm are not fulfilled exactly? To limit the scope of research, a particular implementation of the AFCS concept was considered, i.e. an adaptive feedback measurement system (AFMS). The iterative measurement algorithm used was derived under some idealized conditions, notably perfect knowledge of the system model and Gaussian communication channels. The selected non-idealities of interest are a non-zero mean value of the noise processes and non-ideal calibration of the transmission gain in the forward channel, because they are related to intrinsic non-idealities of the analog building blocks used for the AFMS implementation. The presented original analysis of the iterative measurement algorithm provides quantitative information on the speed of convergence and limit behavior. The analysis should be useful for AFCS implementors in the measurement area, since the results are presented in terms of accuracy and precision of the iterative measurement process.
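A stochastic-approximation caricature (hypothetical gains and noise levels, not the paper's AFMS algorithm) illustrates the kind of limit behavior analyzed: in this toy model, a non-zero noise mean mu shifts the limit of the estimate by mu/gamma (an accuracy loss), while a miscalibrated forward gain mainly changes the convergence speed (a precision effect).

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = 2.5            # measurand
gamma = 4.0             # true forward-channel gain
gamma_hat = 5.0         # gain assumed by the receiver (25% miscalibration)
mu, sigma = 0.2, 1.0    # non-zero noise mean and noise std
K, trials = 5000, 200

est = np.zeros(trials)
for k in range(1, K + 1):
    # forward channel carries the amplified estimation error plus noise
    y = gamma * (x_true - est) + rng.normal(mu, sigma, trials)
    est += y / (gamma_hat * k)          # Robbins-Monro correction, step 1/k

bias = float(est.mean()) - x_true       # accuracy: tends to mu/gamma
spread = float(est.std())               # precision after K iterations
```

The fixed point of the drift satisfies gamma*(x_true - est) + mu = 0, so the asymptotic bias is mu/gamma regardless of the assumed gain, which only rescales the effective step size.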
The research problem of interest to this paper is: how to determine, efficiently and objectively, the most and the least influential parameters of a multimodule electronic system, given the system model f and the module parameter variation ranges. The author investigates whether existing generic global sensitivity methods are applicable to electronic circuit design, even though they were developed (and successfully applied) in quite distant engineering areas. The response time of a photodiode detector analog front-end system is used to reveal the capabilities of the selected global sensitivity approaches under study.
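A representative generic global sensitivity method is the estimation of first-order Sobol indices by a pick-freeze (Saltelli-type) Monte Carlo scheme; the sketch below uses a hypothetical response-time surrogate, not the photodiode front-end model of the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def response_time(p):
    # hypothetical surrogate of a front-end response time, dominated
    # by the product of the first two module parameters
    return p[:, 0] * p[:, 1] + 0.1 * p[:, 2]

n, d = 20000, 3
A = rng.uniform(0.5, 1.5, (n, d))       # samples from parameter ranges
B = rng.uniform(0.5, 1.5, (n, d))
fA, fB = response_time(A), response_time(B)
var = float(np.var(np.concatenate([fA, fB])))

S = []                                   # first-order Sobol indices
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "pick-freeze": swap coordinate i
    S.append(float(np.mean(fB * (response_time(ABi) - fA)) / var))
```

Here the two multiplied parameters come out roughly equally influential and the third nearly negligible, which is the kind of objective ranking the paper seeks for circuit parameters.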
The paper presents results of stochastic analysis of the deep metastability behavior of some bistable circuit models, using the frameworks of Stochastic Ordinary Differential Equations (SODE) and Randomized Ordinary Differential Equations (RODE). Three models of bistable circuits are investigated: the standard linear model and two non-linear dynamics models. Random uncertainty is modeled with (additive) Gaussian noise. The paper demonstrates that the stochastic distribution of the response time of the bistable circuits depends on both the dynamics model and the stochastic modeling framework used.
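The SODE setting can be sketched with an Euler-Maruyama simulation of the canonical cubic bistable dynamics dx = (x - x^3)dt + sigma dW started at the unstable equilibrium (the paper's concrete circuit models and noise levels may differ); the recorded resolution times form the response-time distribution under study.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, dt, trials = 0.05, 1e-3, 500
steps = 30000                      # simulate up to t = 30 (normalized time)

# canonical bistable dynamics, launched at the unstable equilibrium x = 0
# (deep metastability); a trial "resolves" on reaching |x| > 0.9
x = np.zeros(trials)
t_resolve = np.full(trials, np.inf)
for k in range(steps):
    x += (x - x ** 3) * dt + sigma * np.sqrt(dt) * rng.normal(size=trials)
    fresh = (np.abs(x) > 0.9) & np.isinf(t_resolve)
    t_resolve[fresh] = (k + 1) * dt

mean_t = float(t_resolve.mean())   # average metastability resolution time
```

Replacing the white-noise increment by a sampled smooth random process would give the corresponding RODE variant, and the resulting resolution-time histograms generally differ, as the paper reports.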
The paper summarizes the author's investigation of the parallel computation capability of the MATLAB environment in solving large systems of ordinary differential equations (ODEs). Two MATLAB versions and two parallelization techniques were tested: one used multiple processor cores, the other CUDA-compatible Graphics Processing Units (GPUs). A set of parameterized test problems was specially designed to expose the different capabilities/limitations of the variants of the parallel computation environment tested. The presented results clearly illustrate the superiority of the newer MATLAB version and the elapsed-time advantage of GPU-parallelized computations over multiple processor cores for large-dimensionality problems (with a speed-up factor strongly dependent on the problem structure).
This paper presents a novel, three-stage approach to the optimal selection of calibration standard lengths for broadband Vector Network Analyzers (VNA). First, the initial D-optimal calibration standard selection problem is reformulated so as to eliminate redundant locally optimal solutions. Second, a good-quality basic solution to the selection problem is found as a result of analytic investigation of the problem's properties. Finally, a multistep numerical bi-criterion optimization procedure with a variable frequency range is proposed to generate a set of candidate solutions with different trade-offs between the bandwidth and the ripple of the normalized determinant of the Fisher matrix. Example results demonstrate the high quality of the solutions found and the high efficiency of the proposed optimization-based approach.
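A common proxy for broadband calibration conditioning (an illustrative stand-in, not the paper's normalized Fisher-matrix determinant) tracks how close the phase differences of the line standards come to multiples of a half wavelength across the band; the lengths and band below are hypothetical.

```python
import numpy as np

C0 = 299792458.0  # m/s; lossless air lines assumed

def conditioning_metric(lengths_mm, f_lo, f_hi, nf=400):
    """Worst-case |sin| of the line-pair phase difference over [f_lo, f_hi]
    in Hz: collapses to zero whenever every available length difference is
    a multiple of a half wavelength at some in-band frequency."""
    f = np.linspace(f_lo, f_hi, nf)
    beta = 2 * np.pi * f / C0
    L = np.asarray(lengths_mm) * 1e-3
    diffs = [abs(a - b) for i, a in enumerate(L) for b in L[i + 1:]]
    best_per_f = np.max([np.abs(np.sin(beta * d)) for d in diffs], axis=0)
    return float(best_per_f.min())

# a single 10 mm line has a half-wavelength null near 15 GHz; adding a
# staggered 3 mm line keeps the set well-conditioned over 1-18 GHz
m_single = conditioning_metric([0.0, 10.0], 1e9, 18e9)
m_staggered = conditioning_metric([0.0, 3.0, 10.0], 1e9, 18e9)
```

Maximizing the in-band minimum of such a criterion over the candidate lengths is the flavor of bi-criterion (bandwidth vs ripple) trade-off the paper's procedure explores, with the true Fisher-determinant objective in place of this proxy.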
The paper presents a model-based derivative-free minimax optimizer and a benchmarking methodology. The optimizer is dedicated to simulation-based design scenarios in which a single simulation is time-consuming and not very accurate, so that fast improvement of the design is expected rather than high-accuracy optimization. A benchmarking methodology is formulated to compare the efficiency of optimizers in such scenarios. A set of electromagnetic (EM) designs, for which a design simulation involves the solution of very large systems of partial differential equations, is used to exemplify the methodology. The presented results demonstrate how accuracy requirements and computational budget change the ranking of optimizers. The proposed optimizer is shown to provide competitively rapid and reliable initial design improvement for a modest 5-10% tolerance, but for higher accuracy demands and larger computational budgets other solvers become more competitive.
The paper presents the concept and construction guidelines of a measurement system for the estimation of ion concentrations in water. The system has been fully designed and manufactured at the Institute of Electronic Systems, Warsaw University of Technology. The measurement system works with inexpensive ion-selective potentiometric sensors and allows for potentiometric, transient-response and voltamperometric measurements. A data fusion method has been implemented in the system to increase the estimation accuracy. The presented solution comprises many modern electronic elements, such as a 32-bit ARM microcontroller and precision operational amplifiers, as well as the hydraulic subsystems essential for chemical measurements.
The paper starts with a brief introduction to the EU FP6 WARMER (Water Risk Management in EuRope) project, but the main body presents current WARMER R&D in data processing for in-situ probes. The paper focuses on software development for local fusion of multi-sensor measurement data collected from WARMER-developed potentiometric sensors.
The paper presents selected results of an empirical study of the transient responses of an ion-selective potentiometric sensor to two classes of stimuli. First, the time-domain voltage response of ion-selective structures was observed under changes of ion concentration in the sample, flow speed and electric load of the sensor. The presented results explain essential accuracy/precision limitations of the ion concentration estimates that can be obtained in practical applications for clean water. Second, the current-versus-time response of the sensor to a small voltage stimulus was observed for different ionic contents of the sample. A behavioral model was built that can be used (alongside the traditional steady-state model) for Data Fusion-based enhancement of multiple-sensor measurements.
This paper presents the problem of designing on-line water monitoring systems that include electro-chemical sensors. In such systems several basic problems have to be solved, e.g. sufficient quality of sensor fabrication, appropriate sensor conditioning and water sample treatment, sensor signal processing for good measurement accuracy/precision, environmental data transmission/storage, and Web connectivity of the system. We focus on multiple-sensor signal processing (Data Fusion) for the improvement of measurement accuracy/precision. This in turn calls for sensor response modelling, data processing algorithm design, and implementation in measurement system firmware (a hardware-software co-design problem). A demonstrator measurement system has been designed and built to verify the applicability of the proposed technology to European water monitoring.
The authors have been involved in uncertainty analyses of multiple-sensor measurement procedures and Data Fusion (DF) algorithms under development in the EU 6th FP WARMER (Water Risk Management in EuRope) project [1]. The main goal of this uncertainty study was the evaluation of measurement procedure-dependent factors that determine the basic uncertainty characteristics, i.e. accuracy and precision. Several uncertainty sources were taken into account, the most important being read-out/sensor modeling inaccuracy and dosing imprecision. The results of the uncertainty analyses will be used to optimize measurement procedures for potentiometric sensor-based measurement heads and to perform a rational cost/accuracy trade-off of in-situ measurement probe components.
The paper presents the most important results of the uncertainty study. The accuracy and precision of different variants of the in-situ measurement system were estimated with the Monte Carlo method, using realistic estimates of the uncertainty sources. The first type of results concerns the sensitivity of measurement uncertainty to different sources of imprecision. Next, the dependence of measurement uncertainty on the measurement scenario is shown. Single- and double-sensor measurements with different selectivity coefficients are compared. Advantages and limitations of multiple-sensor Data Fusion are discussed and recommendations formulated.
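The Monte Carlo propagation of such uncertainty sources can be sketched for a single Nernstian sensor (all values hypothetical, not the study's realistic estimates): a fixed calibration offset produces a systematic (accuracy) error, while read-out noise and dosing imprecision set the random spread (precision).

```python
import numpy as np

rng = np.random.default_rng(5)
trials = 20000

# nominal single-sensor Nernstian model E = E0 + S*log10(a), E in mV
E0, S, a_true = 210.0, 59.2, 1e-4

E_meas = (E0 + S * np.log10(a_true)
          + rng.normal(0.0, 0.3, trials)                  # read-out noise [mV]
          + 0.5                                           # calibration bias [mV]
          + S * np.log10(rng.normal(1.0, 0.02, trials)))  # 2% dosing error

a_est = 10.0 ** ((E_meas - E0) / S)
accuracy = float(a_est.mean() / a_true - 1.0)     # relative systematic error
precision = float(a_est.std() / a_true)           # relative random spread
```

Repeating the experiment with each source switched off in turn yields the per-source sensitivity of the total uncertainty, the first type of result mentioned above.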
The paper sums up investigations of the electrical properties of potentiometric sensors under development in the EU FP6 WARMER project. The empirical study has two main goals: to determine the requirements on the input resistance and bias current of the data acquisition board, and to determine the quality of the analytical signal in the DC and transient responses of the potentiometric sensors.
This paper presents a statistical algorithm and a measurement system for precise evaluation of flip-flop dynamical parameters in asynchronous operation. The analyzed flip-flop parameters are failure probability, MTBF and propagation delay. It is shown how these parameters depend on the metastable operation of flip-flops. The numerical and hardware solutions shown in the article allow for precise and reliable comparison of flip-flops. The presented statistical method also makes it possible to analyze the influence of flip-flop electrical parameters on their metastable operation. Statistical estimation of the parameters of flip-flops in which metastability occurs seems to be more reliable than standard empirical methods of flip-flop analysis. The presented method also allows inaccuracies in the theoretical model of metastability to be revealed.
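The theoretical model referred to is the standard two-parameter metastability model: the probability that a flip-flop is still metastable after resolution time t_res decays as exp(-t_res/tau), giving MTBF = exp(t_res/tau) / (T0 * f_clk * f_data). A short sketch with hypothetical parameter values:

```python
import numpy as np

def metastability_mtbf(t_res, tau, T0, f_clk, f_data):
    """Classic two-parameter metastability model: metastable states
    surviving the resolution time t_res occur with rate
    T0 * f_clk * f_data * exp(-t_res / tau), so MTBF is its inverse."""
    return float(np.exp(t_res / tau) / (T0 * f_clk * f_data))

# hypothetical figures: tau = 50 ps, T0 = 20 ps, a 100 MHz clock sampling
# 10 MHz asynchronous data; each extra nanosecond of settling time
# multiplies the MTBF by exp(1 ns / 50 ps)
mtbfs = [metastability_mtbf(t, 50e-12, 20e-12, 100e6, 10e6)
         for t in (0.5e-9, 1.0e-9, 2.0e-9)]
```

Statistical estimation of tau and T0 from measured failure counts, rather than from datasheet values, is what makes the comparison of flip-flops reliable and exposes where this exponential model breaks down.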
The paper contains a brief introduction to the EU FP6 WARMER project. First, EU activities in the field of environmental monitoring are overviewed, to place the research activities of the project in a proper context. Then the fundamental goals of the project are defined and a set of research and engineering problems to be solved is formulated. The main part of the paper is focused on the problem of software development for data acquisition and local fusion of measurement data from many, possibly multiple-mode, sensors.
KEYWORDS: Sensors, Control systems, Data processing, Telecommunications, Data centers, Network architectures, Data transmission, Standards development, Process control, Data fusion
This paper presents the results of analyses of possible architectures for the EU 6th FP WARMER project system, with emphasis on hardware and software modularization. A short survey of different system architectures and evaluation criteria is presented, so as to rationalize the selection of one of the architectures as a basis for further work and development. The following factors are taken into account during evaluation: flexibility of the architecture, standardization (necessary interfaces), existing solutions, development and build costs, hardware resource sharing, and openness to future changes and extensions. The architecture judged best by the author is presented, and some implementation issues are discussed.