Bibliography

The bioeconomy is considered a central field of future development and innovation that can reconcile ecological and economic development. Its expansion is essential for the transition from a fossil-based economy to a largely bio-based, sustainable economic system oriented towards natural material cycles. The opportunities for defossilisation associated with the circular bioeconomy are a central key to the transformation towards climate neutrality by 2045 while at the same time conserving limited natural resources. The growing importance of a bio-based economy oriented towards natural material cycles requires that entire value chains, from the extraction of biogenic resources to their recycling, be reduced and optimised in their material throughput (Thrän and Moesenfechtel 2020). For a sustainable use of biomass resources, the continuous monitoring of biogenic wastes and residues and the assessment of their material and energy use are essential. The DBFZ therefore developed a biomass monitoring in 2016 (project AG BioRestMon) that captures the national biomass potentials for the year 2015 and makes these data publicly available in a freely accessible web application. The resulting database was published in 2019 (https://datalab.dbfz.de/resdb/potentials) and is currently being updated (extension of the biomass data up to 2020 as well as methodological adjustments, which are described in Chapter 2.5). With this background paper, we aim to give an overview of the approach used to determine domestic biomass potentials from wastes and residues as well as of the structure of the data platform. Imports, which are not part of the data analyses, are put into context as a complement. An overview of the drivers and expected competing uses that influence the further development of the potentials is also provided. Furthermore, an outlook is given on how these aspects are to be incorporated into the further development of the database.

DOI 10.5281/zenodo.10404436

The German "National Bioeconomy Strategy" has set the course for sustainable economies based on renewable resources. Circular economy and thus the utilization of biogenic wastes and residues are integral elements of this strategy. To sustainably harness these resources, comprehensive and up-to-date information about biogenic wastes and residues and their suitability for material and energy utilization are crucial. Scientists of the German Biomass Research Centre (DBFZ) have thus established a national biomass monitoring, capturing German biomass potentials from 2010 onwards. The data is made available to the public through the freely accessible web application "DBFZ Resource Database", supporting the implementation of the "National Bioeconomy Strategy" by providing transparent resource information. The dataset at hand presents an excerpt from the "DBFZ Resource Database" ("DE Biomass Monitor") as of 08.12.2023 (v.5.3.5). It includes results on theoretical, technical, and mobilizable biomass potentials, as well as the utilization of 77 wastes and residues in Germany. Comprehensive data is available for the year 2015. Existing data gaps for the years 2010 to 2014 and 2016 to 2020 will be gradually filled.

DOI 10.5281/zenodo.10370136

Upcoming large-scale spectroscopic surveys with e.g. WEAVE (William Herschel Telescope Enhanced Area Velocity Explorer) and 4MOST (4-metre Multi-Object Spectroscopic Telescope) will provide thousands of spectra of massive stars, which need to be analysed in an efficient and homogeneous way. Usually, studies of massive stars are limited to samples of a few hundred objects, which pushes current spectroscopic analysis tools to their limits because visual inspection is necessary to verify the spectroscopic fit. Often uncertainties are only estimated rather than derived, and prior information cannot be incorporated without a Bayesian approach. In addition, the uncertainties of stellar atmosphere and radiative transfer codes that arise from simplified, inaccurate, or incomplete/missing physics (in short, from idealized physical models) are usually not considered. Here, we address the question of 'How to compare an idealized model of complex objects to real data?' with an empirical Bayesian approach and maximum a posteriori approximations. We focus on application to large-scale optical spectroscopic studies of complex astrophysical objects like stars. More specifically, we test and verify our methodology on samples of OB stars in the 30 Doradus region of the Large Magellanic Cloud using a grid of fastwind model atmospheres. Our spectroscopic model de-idealization analysis pipeline takes advantage of the statistics that large samples provide by determining the model error that accounts for the idealized stellar atmosphere models and including it in the error budget. The pipeline performs well over a wide parameter space and derives robust stellar parameters with representative uncertainties.

2024MNRAS.528.6735B arXiv:2309.06474
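
The following is a minimal, hypothetical sketch (plain numpy/scipy, not the paper's pipeline) of the core idea of folding an estimated model error into the error budget of a maximum a posteriori fit: a one-parameter line fit is repeated with an extra variance term, here estimated from the residual scatter of a first fit. All names and numbers are illustrative.

```python
# Hypothetical sketch: add an estimated "model error" to the error budget of a
# maximum-a-posteriori fit, in the spirit of the de-idealization idea above.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
wave = np.linspace(4000.0, 4500.0, 200)            # wavelength grid (Angstrom)
true_flux = 1.0 - 0.3 * np.exp(-0.5 * ((wave - 4200.0) / 5.0) ** 2)
sigma_obs = 0.02
data = true_flux + rng.normal(0.0, sigma_obs, wave.size)

def model(depth):
    """Idealized one-parameter line model (line depth only)."""
    return 1.0 - depth * np.exp(-0.5 * ((wave - 4200.0) / 5.0) ** 2)

def neg_log_posterior(depth, sigma_model):
    # Model error enters the error budget in quadrature with the data error.
    var = sigma_obs ** 2 + sigma_model ** 2
    resid = data - model(depth)
    return 0.5 * np.sum(resid ** 2 / var + np.log(2.0 * np.pi * var))

# Empirical-Bayes-like step: pick sigma_model from the residual scatter of a
# first fit, then redo the MAP fit with the inflated error budget.
first = minimize_scalar(neg_log_posterior, bounds=(0.0, 1.0),
                        args=(0.0,), method="bounded")
resid = data - model(first.x)
sigma_model = np.sqrt(max(resid.var() - sigma_obs ** 2, 0.0))
final = minimize_scalar(neg_log_posterior, bounds=(0.0, 1.0),
                        args=(sigma_model,), method="bounded")
print(f"MAP line depth: {final.x:.3f}, adopted model error: {sigma_model:.4f}")
```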

This dataset contains 24 factsheets on bio-based carbon dioxide removal (CDR) options in Germany in the areas of peatlands and paludiculture, forest management, agriculture and soils, long-lived building materials, and bioenergy with CCS. An overview of the factsheet topics is given in the figure accompanying the dataset. The factsheets are structured into the following 8 categories containing 41 parameters: basic concept characteristics, systemic, input, output, environmental, institutional, economic, and social parameters. The factsheets were elaborated in the BioNET project, which is part of CDRterra.

DOI 10.48480/x293-8050 DOI 10.48480/tga8-t109 (German version)

By 2045, Germany aims to achieve climate neutrality across all parts of society. This is the mandate stipulated by the "inter-generational contract for climate" of the Federal Government and the ensuing tightened climate goals. These goals can only be reached if, on the one hand, energy supply becomes fully renewable and, on the other hand, our current economic system is converted into a true circular economy. However, the share of renewable energies in total primary energy consumption was only about 17% in 2022. The industrial shift from petro- to bio-based raw materials, i.e. to a bio-economy, is also still in its infancy. The key challenge regarding biomass usage is to optimise its deployment with regard to efficiency as well as environmental and systemic benefit. Since its foundation in 2008, the Leipzig-based Deutsches Biomasseforschungszentrum (DBFZ, German Biomass Research Centre) has developed into the central federal research institution for the energetic and integrated material use of biomass. The DBFZ’s key mandate is to provide the scientific basis for Germany's development towards sustainable biomass use. Against this background, this article, based on the contribution to the 2022 Bioenergy Forum Rostock (Nelles et al.), describes the status of biomass use for energy in Germany and outlines the DBFZ's expectations regarding future developments.

DOI 10.18453/rosdok_id00004269 (German only)
10th status conference "Energetische Biomassenutzung" (German only)

A computer-implemented method, system, and computer program product for classifying a sequence of log entries of a computing system may be provided. The method may include pre-processing the log entries. The method may also include predicting, as a first output of a first trained machine-learning system, a likelihood of a particular next log entry following a window of the log entries. The method may also include predicting, as a second output of a second trained machine-learning system, whether the next log entry is unprecedented. The method may also include combining the first output and the second output for determining a classification of the sequence of log entries.

US 2023-0188549 A1 / 11263025 B2
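
Below is an illustrative sketch, not the patented implementation, of the final combination step: the likelihood output of a hypothetical first model and the novelty flag of a hypothetical second model are merged into a single classification. Thresholds and names are assumptions.

```python
# Hypothetical sketch (not the patented implementation): combine the outputs of
# two models -- one scoring the likelihood of the observed next log entry,
# one flagging unprecedented entries -- into a single classification.
def classify_window(next_entry_likelihood: float,
                    unprecedented_prob: float,
                    likelihood_floor: float = 0.05,
                    novelty_ceiling: float = 0.5) -> str:
    """Label a sequence of log entries based on the two model outputs.

    next_entry_likelihood: probability the first model assigned to the entry
        that actually followed the window.
    unprecedented_prob: probability from the second model that the entry is
        unlike anything seen during training.
    """
    if unprecedented_prob > novelty_ceiling:
        return "anomalous (unprecedented entry)"
    if next_entry_likelihood < likelihood_floor:
        return "anomalous (unexpected but known entry)"
    return "normal"

print(classify_window(next_entry_likelihood=0.01, unprecedented_prob=0.1))
```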

Proactively performing tasks based on estimating hardware reconfiguration times. Prior to performing one or more reconfiguration actions to reconfigure a configuration of the computing environment, a determination is made of at least one estimated reconfiguration time to perform the one or more reconfiguration actions. At least one reconfiguration action of the one or more reconfiguration actions is performed, and one or more tasks are initiated prior to completing the one or more reconfiguration actions. The initiating is based on the at least one estimated reconfiguration time.

US 2021-0373913 A1 / 11263025 B2
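
A toy sketch of the idea, with invented names and thresholds: estimate the reconfiguration time from past observations and derive how early each preparatory task should be started so that it finishes together with the reconfiguration.

```python
# Illustrative sketch only: estimate a reconfiguration time from past runs and
# start preparatory tasks early enough that they finish roughly when the
# reconfiguration does. Names and values are assumptions, not the patent's.
import statistics

def estimate_reconfig_time(past_durations_s: list[float]) -> float:
    """Simple estimator: median of previously observed reconfiguration times."""
    return statistics.median(past_durations_s)

def schedule_proactive_tasks(past_durations_s, task_durations_s):
    """Return the delay (s) after which each task should be started so that it
    completes approximately when the reconfiguration action completes."""
    eta = estimate_reconfig_time(past_durations_s)
    return {name: max(0.0, eta - dur) for name, dur in task_durations_s.items()}

print(schedule_proactive_tasks(
    past_durations_s=[120.0, 150.0, 135.0],
    task_durations_s={"warm_cache": 60.0, "rebalance_io": 30.0},
))
```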

Aspects of the invention include determining, by a machine learning model, a predicted workload for a system and a current system state of the system, determining an action to be enacted for the system based at least in part on the predicted workload and the current system state, enacting the action for the system, evaluating a state of the system after the action has been enacted, determining a reward for the machine learning model based at least in part on the state of the system after the action has been enacted, and updating the machine learning model based on the reward.

US 2021-0311786 A1
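
The following simplified sketch reduces the described loop to a bandit-style update with a hand-written reward; the environment, action set, and "model" (a small value table) are placeholders rather than the patented system.

```python
# Minimal sketch of the reinforcement-style loop described above; the
# workload predictor, reward, and value table are invented for illustration.
import random

ACTIONS = ["scale_up", "scale_down", "no_op"]
q_table = {a: 0.0 for a in ACTIONS}          # stand-in for the ML model
alpha = 0.1                                   # learning rate

def predicted_workload():                     # placeholder workload predictor
    return random.uniform(0.0, 1.0)

def enact_and_observe(action, workload):
    """Apply the action and return a reward based on the resulting state."""
    capacity = {"scale_up": 1.0, "no_op": 0.6, "scale_down": 0.3}[action]
    overload = max(0.0, workload - capacity)
    overprovision = max(0.0, capacity - workload)
    return -(2.0 * overload + 0.5 * overprovision)   # penalize both failure modes

for step in range(100):
    workload = predicted_workload()
    # epsilon-greedy action selection based on the current value estimates
    action = random.choice(ACTIONS) if random.random() < 0.2 else max(q_table, key=q_table.get)
    reward = enact_and_observe(action, workload)
    q_table[action] += alpha * (reward - q_table[action])   # update the model

print(q_table)
```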

A method for selectively generating suggested default values for I/O configurations is provided. The method identifies a first selection including a first input value for an I/O configuration. The method determines a set of remaining input options based on the first selection. The method accesses a set of decision trees based on the set of remaining input options and selects a decision tree of the set of decision trees based on the first input value. The method generates a suggested value for a subsequent selection for the I/O configuration and causes presentation of the suggested value and a user interface element representing the subsequent selection.

US 2020-0401909 A1
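
A hypothetical sketch of the selection step: a decision tree (here reduced to a nested mapping) is chosen based on the first input value and queried for a suggested default of the next field. The configuration values shown are invented for illustration.

```python
# Hypothetical sketch: pick a decision structure keyed on the first input value
# and look up a suggested default for the next I/O configuration field.
DECISION_TREES = {
    # first input value -> mapping of remaining options to suggested defaults
    "FICON": {"port_speed": "16G", "channel_count": "4"},
    "iSCSI": {"port_speed": "10G", "channel_count": "2"},
}

def suggest_default(first_value: str, next_field: str) -> str | None:
    tree = DECISION_TREES.get(first_value)
    if tree is None:
        return None                      # no tree available for this selection
    return tree.get(next_field)          # suggested value for the next field

print(suggest_default("FICON", "port_speed"))   # -> "16G"
```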

A computer-implemented method for metadata-based retention of personal data may be provided. The method comprises recording data by a recording system. The data comprise payload data and metadata comprising information about the payload data and an event type; a rule is associated with the event type, wherein the rule is indicative of whether the data shall be stored persistently or temporarily. The method further comprises segmenting the recorded data into a plurality of non-overlapping data segments, encrypting each data segment of the plurality of non-overlapping data segments with a unique key each, transmitting the encrypted data segments wirelessly, and storing, using a secure service container, selected ones of the plurality of non-overlapping data segments as a function of the rule.

US 2020-0285767 A1 / 11176269 B2
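
An illustrative sketch of the segment-and-encrypt step using the Python cryptography package (Fernet); key management, wireless transmission, and the secure service container are out of scope here, and the segment size is an arbitrary assumption.

```python
# Illustrative sketch of the segment-and-encrypt step using the "cryptography"
# package; each non-overlapping segment gets its own freshly generated key.
from cryptography.fernet import Fernet

SEGMENT_SIZE = 1024  # bytes, assumed for illustration

def segment_and_encrypt(payload: bytes):
    """Split data into non-overlapping segments and encrypt each with a unique key."""
    segments = [payload[i:i + SEGMENT_SIZE]
                for i in range(0, len(payload), SEGMENT_SIZE)]
    encrypted = []
    for segment in segments:
        key = Fernet.generate_key()          # unique key per segment
        encrypted.append((key, Fernet(key).encrypt(segment)))
    return encrypted

blobs = segment_and_encrypt(b"sensor payload " * 200)
print(len(blobs), "encrypted segments")
```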

Provided is a method for determining a target host from a plurality of candidate hosts for migrating a software container. A management software component may instantiate a source agent software component on a source host and a target agent software component on each of a plurality of candidate target hosts. Resource requirements of at least one software container may be determined by the source agent software component. Resource capabilities of each of a plurality of target hosts may be determined by the target agent software components. The source agent software component may compare the resource requirements to the resource capabilities of each of the plurality of candidate target hosts. If the resource requirements are satisfied by a particular candidate target host, the particular candidate target host is assigned to be a target host. The at least one software container is migrated from the source host to the target host.

US 2019-0250946 A1
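
A simplified sketch of the matching step: compare a container's resource requirements with each candidate host's capabilities and pick a host that satisfies them. The data structures and the headroom-based tie-breaking policy are assumptions, not the patented method.

```python
# Simplified sketch of the selection step: a candidate host qualifies if it
# satisfies every resource requirement; among qualifying hosts, one policy
# (of many) is to prefer the host with the most headroom after placement.
def find_target_host(requirements: dict, candidates: dict) -> str | None:
    """requirements: e.g. {"cpu": 2, "mem_gb": 4}
    candidates: host name -> available capabilities in the same units."""
    suitable = [
        host for host, caps in candidates.items()
        if all(caps.get(key, 0) >= need for key, need in requirements.items())
    ]
    if not suitable:
        return None
    return max(suitable, key=lambda h: sum(candidates[h][k] - requirements[k]
                                           for k in requirements))

hosts = {"hostA": {"cpu": 4, "mem_gb": 8}, "hostB": {"cpu": 2, "mem_gb": 2}}
print(find_target_host({"cpu": 2, "mem_gb": 4}, hosts))   # -> "hostA"
```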

We analyze the 6.5 year all-sky data from the Fermi Large Area Telescope in the energy range 0.6–307.2 GeV. Raw count maps show a superposition of diffuse and point-like emission structures and are subject to shot noise and instrumental artifacts. Using the Denoising, Deconvolving, and Decomposing Photon Observations (D3PO) algorithm, we modeled the observed photon counts as the sum of a diffuse and a point-like photon flux, convolved with the instrumental beam and subject to Poissonian shot noise, without the use of spatial or spectral templates. The D3PO algorithm performs a Bayesian inference and yields separate estimates for the two flux components. We show that the diffuse gamma-ray flux can be described phenomenologically by only two distinct components: a soft component, presumably dominated by hadronic processes, tracing the dense, cold interstellar medium, and a hard component, presumably dominated by leptonic interactions, following the hot and dilute medium and outflows such as the Fermi bubbles.

PoS(ICRC2015)768

Previous searches for the gamma-ray signatures of annihilating galactic dark matter used predefined spatial templates to describe the background of gamma-ray emission from astrophysical processes, such as cosmic ray interactions. In this proceeding, we investigate the GeV excess in the inner Galaxy using an alternative approach, in which the astrophysical components are identified solely by their spectral and morphological properties. We confirm the reported GeV excess and derive related parameters for a dark matter interpretation, which are consistent with previous results. We investigate the morphology of this spectral excess as preferred by the data alone. This emission component exhibits a central Galaxy cusp, as expected for a dark matter annihilation signal. However, Galactic disk regions with a morphology resembling that of the hot interstellar medium also host such a spectral component. This points to a possible astrophysical origin of the excess and calls for a more detailed understanding of astrophysical gamma-ray emitting processes in the galactic center region before definite claims about a dark matter annihilation signal can be made.

JPCS4(718)042029 arXiv:1511.09015

Previous searches for the gamma-ray signatures of annihilating galactic dark matter used predefined spatial templates to describe the background of gamma-ray emission from astrophysical processes like cosmic ray interactions. In this work, we aim to establish an alternative approach, in which the astrophysical components are identified solely by their spectral and morphological properties. To this end, we adopt the recent reconstruction of the diffuse gamma-ray sky from Fermi data by the D3PO algorithm and the fact that more than 90% of its flux can be represented by only two spectral components, resulting from the dense and dilute interstellar medium. Under these presumptions, we confirm the reported DM annihilation-like signal in the inner Galaxy and derive upper limits for dark matter annihilation cross sections. We investigate whether the DM signal could be a residual of the simplified modeling of astrophysical emission by inspecting the morphology of the regions which favor a dark matter component. The central Galactic region shows the strongest preference for such a component, with the expected spherically symmetric and radially declining profile. However, astrophysical structures, in particular sky regions which seem to host most of the dilute interstellar medium, would obviously benefit from a DM annihilation-like component as well. Although these regions do not drive the fit, they warn that a more detailed understanding of astrophysical gamma-ray emitting processes in the galactic center region is necessary before definite claims about a DM annihilation signal can be made. The regions off the Galactic plane actually disfavor the best-fit DM annihilation cross section from the inner Galactic region, unless the radial decline of the Galactic DM density profile in the outer regions is significantly steeper than usually assumed.

JCAP04(2016)030 arXiv:1511.02621

To better understand the origin and properties of cosmological magnetic fields, a detailed knowledge of magnetic fields in the large-scale structure of the Universe (galaxy clusters, filaments) is crucial. We propose a new statistical approach to study magnetic fields on large scales with the rotation measure grid data that will be obtained with the new generation of radio interferometers.

A&A 591 A13 arXiv:1509.00747

To better understand the origin and properties of cosmological magnetic fields, a detailed knowledge of magnetic fields in the large-scale structure of the Universe (galaxy clusters, filaments) is crucial. We propose a new statistical approach to study magnetic fields on large scales with the rotation measure grid data that will be obtained with the new generation of radio interferometers.

PoS(AASKA14)114 arXiv:1501.00415

This thesis is focused on the development of imaging techniques for high energy photon observations and their applications in gamma-ray astronomy and medical X-ray computed tomography. By analyzing data from the Fermi Gamma-ray Space Telescope, we advance the knowledge on the origin and composition of the celestial gamma-ray sky. The numerical method required for this purpose is derived using probabilistic reasoning and implemented in an abstract way so that it can be used in a wide range of applications. [...]

LMU Library PDF (26.5 MB)

The calibration of a measurement device is crucial for every scientific experiment, where a signal has to be inferred from data. We present CURE, the calibration uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but its covariance structure. The idea of CURE, developed in the framework of information field theory, is to start with an assumed calibration and then to successively include more and more portions of calibration uncertainty into the signal inference equations, absorbing the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify CURE by applying it to a simplistic toy example and compare it against existing self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method keeps up in accuracy with the best self-calibration methods and serves as a non-iterative alternative to them.

PhysRevE.91.013311 arXiv:1410.6289

We analyze the 6.5 yr all-sky data from the Fermi LAT restricted to gamma-ray photons with energies between 0.6 and 307.2 GeV. Raw count maps show a superposition of diffuse and point-like emission structures and are subject to shot noise and instrumental artifacts. Using the D3PO inference algorithm, we model the observed photon counts as the sum of a diffuse and a point-like photon flux, convolved with the instrumental beam and subject to Poissonian shot noise. D3PO performs a Bayesian inference in this setting without the use of spatial or spectral templates; i.e., it removes the shot noise, deconvolves the instrumental response, and yields estimates for the two flux components separately. The non-parametric reconstruction uncovers the morphology of the diffuse photon flux up to several hundred GeV. We present an all-sky spectral index map for the diffuse component. We show that the diffuse gamma-ray flux can be described phenomenologically by only two distinct components: a soft component, presumably dominated by hadronic processes, tracing the dense, cold interstellar medium and a hard component, presumably dominated by leptonic interactions, following the hot and dilute medium and outflows such as the Fermi bubbles. A comparison of the soft component with the Galactic dust emission indicates that the dust-to-soft-gamma ratio in the interstellar medium decreases with latitude. The spectrally hard component exists in a thick Galactic disk and tends to flow out of the Galaxy at some locations. Furthermore, we find the angular power spectrum of the diffuse flux to roughly follow a power law with an index of 2.47 on large scales, independent of energy. Our first catalog of source candidates includes 3106 candidates, of which we associate 1381(1897) with known sources from the 2nd(3rd) Fermi catalog. We observe gamma-ray emission in the direction of a few galaxy clusters hosting radio halos.

A&A 581 A126 arXiv:1410.4562 PDF (14.8 MB)

We introduce NIFTY, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms prototyped in 1D can be applied to real world problems in higher-dimensional settings. As a versatile library, NIFTY is applicable to, and has already been applied in, 1D, 2D, 3D, and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.

AIP Conf. Proc. 1636 68 arXiv:1412.7160

Response calibration is the process of inferring how much the measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate self-calibration methods for linear signal measurements and linear dependence of the response on the calibration parameters. The common practice is to augment an external calibration solution using a known reference signal with an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration by exploiting redundancies in the measurements. This can be understood in terms of maximizing the joint probability of signal and calibration. However, the full uncertainty structure of this joint probability around its maximum is not taken into account by these schemes. Therefore, better schemes (in the sense of minimal square error) can be designed by accounting for asymmetries in the uncertainty of signal and calibration. We argue that at least a systematic correction of the common self-calibration scheme should be applied in many measurement situations in order to properly treat uncertainties of the signal on which one calibrates. Otherwise the calibration solutions suffer from a systematic bias, which consequently distorts the signal reconstruction. Furthermore, we argue that non-parametric, signal-to-noise filtered calibration should provide more accurate reconstructions than the common bin averages, and we provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example.

PhysRevE.90.043301 arXiv:1312.1349
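
For illustration, here is the classical alternating self-calibration scheme that the paper analyzes (and argues is biased), in a toy setting with a single unknown gain, a trivial response, and Gaussian priors; all variances and the data model are assumptions of this sketch.

```python
# Toy numerical example (not the paper's improved scheme) of classical
# self-calibration: alternate between a Wiener-filter estimate of the signal at
# fixed gain and a MAP estimate of the gain at fixed signal, for d = (1+gamma)*s + n.
import numpy as np

rng = np.random.default_rng(1)
n_pix, sigma_s, sigma_n, sigma_g = 256, 1.0, 0.5, 0.1
s_true = rng.normal(0.0, sigma_s, n_pix)
gamma_true = rng.normal(0.0, sigma_g)
d = (1.0 + gamma_true) * s_true + rng.normal(0.0, sigma_n, n_pix)

gamma = 0.0                                   # start from the assumed calibration
for _ in range(20):
    g = 1.0 + gamma
    # Wiener filter for the signal at fixed gain
    s_hat = (g / sigma_n**2) * d / (g**2 / sigma_n**2 + 1.0 / sigma_s**2)
    # MAP estimate of the gain deviation at fixed signal
    gamma = (s_hat @ (d - s_hat) / sigma_n**2) / (s_hat @ s_hat / sigma_n**2 + 1.0 / sigma_g**2)

print(f"true gamma = {gamma_true:+.3f}, self-calibrated gamma = {gamma:+.3f}")
```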

We present RESOLVE, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. RESOLVE not only estimates the measured sky brightness in total intensity, but also its spatial correlation structure, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. For a radio interferometer, it succeeds in deconvolving the effects of the instrumental point spread function during this process. Additionally, RESOLVE provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with RESOLVE we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.

A&A 586 A76 arXiv:1311.5282

The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the NIFTY package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 x 32 arcmin2 observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components.

A&A 574 A74 arXiv:1311.1888 PDF (2.8 MB)

We present an approximate calculation of the full Bayesian posterior probability distribution for the local non-Gaussianity parameter fnl from observations of cosmic microwave background anisotropies within the framework of information field theory. The approximation that we introduce allows us to dispense with numerically expensive sampling techniques. We use a novel posterior validation method (DIP test) in cosmology to test the precision of our method. It transfers inaccuracies of the calculated posterior into deviations from a uniform distribution for a specially constructed test quantity. For this procedure we study toy cases that use one- and two-dimensional flat skies, as well as the full spherical sky. We find that we are able to calculate the posterior precisely under a flat-sky approximation, albeit not in the spherical case. We argue that this is most likely due to an insufficient precision of the used numerical implementation of the spherical harmonic transform, which might affect other non-Gaussianity estimators as well. Furthermore, we present how a nonlinear reconstruction of the primordial gravitational potential on the full spherical sky can be obtained in principle. Using the flat-sky approximation, we find deviations for the posterior of fnl from a Gaussian shape that become more significant for larger values of the underlying true fnl. We also perform a comparison to the well-known estimator of Komatsu et al. [Astrophys. J. 634, 14 (2005)] and finally derive the posterior for the local non-Gaussianity parameter gnl as an example of how to extend the introduced formalism to higher orders of non-Gaussianity.

PhysRevD.88.103516 arXiv:1307.3884

NIFTY, "Numerical Information Field Theory", is a software package designed to enable the development of signal inference algorithms that operate regardless of the underlying spatial grid and its resolution. Its object-oriented framework is written in Python, although it accesses libraries written in Cython, C++, and C for efficiency. NIFTY offers a toolkit that abstracts discretized representations of continuous spaces, fields in these spaces, and operators acting on fields into classes. Thereby, the correct normalization of operations on fields is taken care of automatically without concerning the user. This allows for an abstract formulation and programming of inference algorithms, including those derived within information field theory. Thus, NIFTY permits its user to rapidly prototype algorithms in 1D, and then apply the developed code in higher-dimensional settings of real world problems. The set of spaces on which NIFTY operates comprises point sets, n-dimensional regular grids, spherical spaces, their harmonic counterparts, and product spaces constructed as combinations of those. The functionality and diversity of the package is demonstrated by a Wiener filter code example that successfully runs without modification regardless of the space on which the inference problem is defined.

A&A 554 A26 arXiv:1301.4499 PDF (2.5 MB)

We develop a method to infer log-normal random fields from measurement data affected by Gaussian noise. The log-normal model is well suited to describe strictly positive signals with fluctuations whose amplitude varies over several orders of magnitude. We use the formalism of minimum Gibbs free energy to derive an algorithm that uses the signal's correlation structure to regularize the reconstruction. The correlation structure, described by the signal's power spectrum, is thereby reconstructed from the same data set. We show that the minimization of the Gibbs free energy, corresponding to a Gaussian approximation to the posterior marginalized over the power spectrum, is equivalent to the empirical Bayes ansatz, in which the power spectrum is fixed to its maximum a posteriori value. We further introduce a prior for the power spectrum that enforces spectral smoothness. The appropriateness of this prior in different scenarios is discussed and its effects on the reconstruction's results are demonstrated. We validate the performance of our reconstruction algorithm in a series of one- and two-dimensional test cases with varying degrees of non-linearity and different noise levels.

PhysRevE.87.032136 arXiv:1210.6866

The simulation of complex stochastic network dynamics arising, for instance, from models of coupled biomolecular processes remains computationally challenging. Often, the necessity to scan a model's dynamics over a large parameter space renders full-fledged stochastic simulations impractical, motivating approximation schemes. Here we propose an approximation scheme which improves upon the standard linear noise approximation while retaining similar computational complexity. The underlying idea is to minimize, at each time step, the Kullback-Leibler divergence between the true time-evolved probability distribution and a Gaussian approximation (entropic matching). This condition leads to ordinary differential equations for the mean and the covariance matrix of the Gaussian. For cases of weak nonlinearity, the method is more accurate than the linear method when both are compared to stochastic simulations.

PhysRevE.87.022719 arXiv:1209.3700
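
As a baseline illustration (not entropic matching itself), the sketch below integrates the Gaussian moment ODEs of a simple birth-death process; for such a linear process the mean/variance closure is exact, whereas entropic matching addresses the nonlinear cases where it is not. The rates are arbitrary choices.

```python
# Baseline illustration only: Gaussian moment ODEs for a birth-death process,
# dm/dt = k_p - k_d*m and dV/dt = k_p + k_d*m - 2*k_d*V (exact for this linear case).
import numpy as np
from scipy.integrate import solve_ivp

k_p, k_d = 10.0, 0.1          # production and degradation rates (assumed values)

def moment_odes(t, y):
    m, v = y
    return [k_p - k_d * m, k_p + k_d * m - 2.0 * k_d * v]

sol = solve_ivp(moment_odes, (0.0, 100.0), y0=[0.0, 0.0])
m_end, v_end = sol.y[:, -1]
print(f"mean ~ {m_end:.1f} (stationary value: {k_p / k_d:.1f}), "
      f"variance ~ {v_end:.1f} (stationary value: {k_p / k_d:.1f})")
```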

Estimating the diagonal entries of a matrix that is not directly accessible but only available as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or the computational costs of matrix probing methods to estimate matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method.

PhysRevE.85.021134 arXiv:1108.0600 PDF (520 kB)
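
The plain probing baseline that the paper improves upon can be sketched as follows: the diagonal of an operator that is only accessible through matrix-vector products is estimated from random ±1 probes. The test matrix and probe count are arbitrary choices for this illustration.

```python
# Sketch of the plain probing baseline (not the information-field-theory-
# improved estimator): diag(A) ~ mean_k( z_k * (A @ z_k) ) for random +-1 probes z_k.
import numpy as np

rng = np.random.default_rng(4)
n = 300
A = np.diag(np.linspace(1.0, 3.0, n)) + 0.01 * rng.normal(size=(n, n))   # stand-in operator

def probe_diagonal(matvec, n, n_probes=50):
    acc = np.zeros(n)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)      # Rademacher probe vector
        acc += z * matvec(z)                     # elementwise product picks out the diagonal
    return acc / n_probes

estimate = probe_diagonal(lambda v: A @ v, n)
print("max abs error vs true diagonal:", np.max(np.abs(estimate - np.diag(A))))
```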