Book contents
- Frontmatter
- Contents
- Foreword
- Preface
- Part I Density-Ratio Approach to Machine Learning
- Part II Methods of Density-Ratio Estimation
- Part III Applications of Density Ratios in Machine Learning
- Part IV Theoretical Analysis of Density-Ratio Estimation
- Part V Conclusions
- List of Symbols and Abbreviations
- Bibliography
- Index
Part IV - Theoretical Analysis of Density-Ratio Estimation
Published online by Cambridge University Press: 05 March 2012
Summary
In this part we address theoretical aspects of density-ratio estimation.
In Chapter 13, we analyze the asymptotic properties of density-ratio estimation. We first establish the consistency and asymptotic normality of the KLIEP method (see Chapter 5) in Section 13.1, and we elucidate the asymptotic learning curve of the LSIF method (see Chapter 6) in Section 13.2. Then, in Section 13.3, we explain that the logistic regression method (see Chapter 4) achieves the minimum asymptotic variance when the parametric model is correctly specified. Finally, in Section 13.4, we theoretically compare the performance of density-ratio estimation methods, showing that separate density estimation (see Chapter 2) is preferable when correct density models are available, and that direct density-ratio estimation is preferable otherwise.
In Chapter 14, the convergence rates of KLIEP (see Chapter 5) and uLSIF (see Chapter 6) are investigated theoretically under the non-parametric setup.
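To make the object of these analyses concrete, here is a minimal toy sketch of direct density-ratio estimation in the spirit of uLSIF (Chapter 6). It is not the book's implementation; the Gaussian kernel width `sigma`, regularization `lam`, and number of centers `n_centers` are illustrative choices, and the analytic solution follows from the regularized least-squares criterion.

```python
import numpy as np

def ulsif(x_nu, x_de, sigma=0.5, lam=0.1, n_centers=100, seed=0):
    """Toy uLSIF-style estimator of the density ratio r(x) = p_nu(x)/p_de(x).

    Fits a linear-in-parameters model r(x) = sum_l alpha_l K(x, c_l)
    with Gaussian kernels; the least-squares solution is analytic.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(x_nu), min(n_centers, len(x_nu)), replace=False)
    centers = x_nu[idx]

    def phi(x):
        # Gaussian kernel basis evaluated at samples x; shape (n, n_centers)
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * sigma ** 2))

    Phi_de, Phi_nu = phi(x_de), phi(x_nu)
    H = Phi_de.T @ Phi_de / len(x_de)   # second-order term (denominator sample)
    h = Phi_nu.mean(axis=0)             # first-order term (numerator sample)
    alpha = np.linalg.solve(H + lam * np.eye(len(h)), h)
    return lambda x: phi(x) @ alpha     # estimated ratio function

# Illustrative data: p_nu = N(0, 1), p_de = N(0.5, 1), so the true
# ratio at x = 0 is exp(0.125), roughly 1.13.
rng = np.random.default_rng(1)
x_nu = rng.normal(0.0, 1.0, size=(2000, 1))
x_de = rng.normal(0.5, 1.0, size=(2000, 1))
r_hat = ulsif(x_nu, x_de)
print(float(r_hat(np.array([[0.0]]))[0]))
```

Note that neither density is estimated separately; the ratio is fitted directly from the two samples, which is the setting whose convergence rates Chapter 14 studies.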
In Chapter 15, a parametric two-sample test method is described and its properties are analyzed. We derive an estimator of the divergence that is optimal in the sense of the asymptotic variance, based on parametric density-ratio estimation. We then provide a test statistic for two-sample tests based on this optimal divergence estimator, which is proved to dominate the existing empirical likelihood-score test.
Finally, in Chapter 16, the numerical stability of kernelized density-ratio estimators is analyzed. As shown in Section 7.2.2, the ratio-fitting and moment-matching methods share the same solution in theory, although their optimization criteria are different.
Density Ratio Estimation in Machine Learning, pp. 213-214. Publisher: Cambridge University Press. Print publication year: 2012.