References

Published online by Cambridge University Press: 30 May 2024

Darius M. Dziuda
Affiliation: Central Connecticut State University

Type: Chapter
Information: Multivariate Biomarker Discovery: Data Science Methods for Efficient Analysis of High-Dimensional Biomedical Data, pp. 267–272
Chapter DOI: https://doi.org/10.1017/9781009006767.024
Publisher: Cambridge University Press
Print publication year: 2024

References

Aggarwal, R., Sounderajah, V., Martin, G. et al. (2021). Diagnostic accuracy of deep learning in medical imaging: A systematic review and meta-analysis. NPJ Digital Medicine, 4(65). https://doi.org/10.1038/s41746-021-00438-z.
Ahmad, F. B., Cisewski, J. A., and Anderson, R. N. (2022). Provisional mortality data: United States, 2021. Morbidity and Mortality Weekly Report, April 29, 2022.
Ambroise, C. and McLachlan, G. J. (2002). Selection bias in gene extraction on the basis of microarray gene-expression data. Proceedings of the National Academy of Sciences, 99(10), 6562–6.
American Cancer Society. (2023). Cancer Facts & Figures 2023. Atlanta: American Cancer Society.
Azuaje, F. (2010). Bioinformatics and Biomarker Discovery: "Omic" Data Analysis for Personalized Medicine. Hoboken: Wiley-Blackwell.
Belkin, M., Hsu, D., Ma, S., and Mandal, S. (2019). Reconciling modern machine-learning practice and the classical bias-variance trade-off. Proceedings of the National Academy of Sciences, 116(32), 15849–54.
Bellman, R. E. (1961). Adaptive Control Processes: A Guided Tour. Princeton: Princeton University Press.
Biomarkers Definitions Working Group. (2001). Biomarkers and surrogate endpoints: Preferred definitions and conceptual framework. Clinical Pharmacology & Therapeutics, 69(3), 89–95.
Bishop, C. M. (1995). Neural Networks for Pattern Recognition. New York: Oxford University Press.
Bishop, C. M. (2006). Pattern Recognition and Machine Learning. New York: Springer.
Boser, B. E., Guyon, I., and Vapnik, V. N. (1992). A training algorithm for optimal margin classifiers. In Fifth Annual Workshop on Computational Learning Theory. Pittsburgh: ACM, pp. 144–52.
Breiman, L. (1996a). Bagging predictors. Machine Learning, 24, 123–40.
Breiman, L. (1996b). Out-of-Bag Estimation: Technical Report. Berkeley: Department of Statistics, University of California.
Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32.
Breiman, L., Friedman, J., Olshen, R., and Stone, C. (1984). Classification and Regression Trees. New York: Chapman & Hall.
Cichosz, P. (2015). Data Mining Algorithms: Explained Using R. Hoboken: Wiley-Blackwell.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.
Cortes, C. and Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273–97.
De Jong, K. (2005). Genetic algorithms: A 30-year perspective. In Booker, L., Forrest, S., Mitchell, M., and Riolo, R. (eds.), Perspectives on Adaptation in Natural and Artificial Systems. New York: Oxford University Press, pp. 11–31.
De Jong, S. (1993). SIMPLS: An alternative approach to partial least squares regression. Chemometrics and Intelligent Laboratory Systems, 18, 251–63.
Di Ruscio, D. (2000). A weighted view on the partial least-squares algorithm. Automatica, 36, 831–50.
Domingos, P. (1999). MetaCost: A general method for making classifiers cost-sensitive. In Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. San Diego: Association for Computing Machinery, pp. 155–64.
Drori, I. (2023). The Science of Deep Learning. Cambridge: Cambridge University Press.
Duda, R. O., Hart, P. E., and Stork, D. G. (2001). Pattern Classification, 2nd ed. New York: Wiley.
Dziuda, D. (2010). Data Mining for Genomics and Proteomics: Analysis of Gene and Protein Expression Data. Hoboken: Wiley-Interscience.
Efron, B. (1979). Bootstrap methods: Another look at the jackknife. Annals of Statistics, 7(1), 1–26.
Efron, B. and Hastie, T. (2016). Computer Age Statistical Inference: Algorithms, Evidence, and Data Science. Cambridge: Cambridge University Press.
Efron, B., Hastie, T., Johnstone, I., and Tibshirani, R. (2004). Least angle regression. Annals of Statistics, 32(2), 407–51.
Efron, B. and Tibshirani, R. (1993). An Introduction to the Bootstrap. New York: Chapman & Hall.
Engelbrecht, A. P. (2007). Computational Intelligence: An Introduction, 2nd ed. Hoboken: John Wiley & Sons.
Etzioni, R., Gulati, R., and Weiss, N. S. (2022). Multicancer early detection: Learning from the past to meet the future. Journal of the National Cancer Institute, 114(3), 349–52.
Fahrmeir, L., Kneib, T., Lang, S., and Marx, B. (2013). Regression: Models, Methods and Applications. New York: Springer.
Fan, J., Li, R., Zhang, C.-H., and Zou, H. (2020). Statistical Foundations of Data Science. London: CRC Press.
FDA-NIH Biomarker Working Group. (2021). BEST (Biomarkers, EndpointS, and other Tools) Resource. Silver Spring: Food and Drug Administration; Bethesda: National Institutes of Health.
Fernandez, A., Garcia, S., Galar, M., Prati, R. C., Krawczyk, B., and Herrera, F. (2018). Learning from Imbalanced Data Sets. Cham: Springer Nature.
Fisher, R. A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7, 179–88.
Fisher, R. A. (1938). The statistical utilization of multiple measurements. Annals of Eugenics, 8, 376–86.
Frank, I. E. and Friedman, J. H. (1993). A statistical view of some chemometrics regression tools. Technometrics, 35(2), 109–35.
Friedman, J. H. (1989). Regularized discriminant analysis. Journal of the American Statistical Association, 84(405), 165–75.
Gómez-Verdejo, V., Parrado-Hernández, E., and Tohka, J. (2019). Sign-consistency based variable importance for machine learning in brain imaging. Neuroinformatics, 17, 593–609.
Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. Cambridge, MA: MIT Press.
Guyon, I., Weston, J., Barnhill, S., and Vapnik, V. N. (2002). Gene selection for cancer classification using support vector machines. Machine Learning, 46(1–3), 389–422.
Hair, J. F., Black, W. C., Babin, B. J., and Anderson, R. E. (2014). Multivariate Data Analysis. New York: Pearson.
Hanahan, D. and Weinberg, R. A. (2011). Hallmarks of cancer: The next generation. Cell, 144(5), 646–74.
Hartigan, J. A. (1972). Direct clustering of a data matrix. Journal of the American Statistical Association, 67(337), 123–9.
Hastie, T., Montanari, A., Rosset, S., and Tibshirani, R. J. (2022). Surprises in high-dimensional ridgeless least squares interpolation. Annals of Statistics, 50(2), 949–86.
Hastie, T., Tibshirani, R., and Friedman, J. H. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. New York: Springer.
Heaton, J. (2015). Artificial Intelligence for Humans, Volume 3: Deep Learning and Neural Networks. St. Louis: Heaton Research.
Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. New York: Wiley.
Helland, I. S. (1988). On the structure of partial least squares regression. Communications in Statistics: Simulation and Computation, 17(2), 581–607.
Henze, N. and Zirkler, B. (1990). A class of invariant consistent tests for multivariate normality. Communications in Statistics: Theory and Methods, 19(10), 3595–617.
Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2012). Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580.
Hoerl, A. E. and Kennard, R. W. (1970). Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1), 55–67.
Holland, J. H. (1992). Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, 1st ed. Cambridge, MA: MIT Press.
Hotelling, H. (1951). A generalized T test and measure of multivariate dispersion. In Neyman, J. (ed.), Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (July 31–August 12, 1950) (vol. 2). Berkeley: University of California Press, pp. 23–41.
Hoyert, D. L. and Xu, J. (2012). Deaths: Preliminary data for 2011. National Vital Statistics Reports, 61(6), 1–51.
Huber, P. J. (1964). Robust estimation of a location parameter. Annals of Mathematical Statistics, 35(1), 73–101.
Huberty, C. J. and Olejnik, S. (2006). Applied MANOVA and Discriminant Analysis. Hoboken: Wiley.
Hunger, S. P., Lu, X., Devidas, M. et al. (2012). Improved survival for children and adolescents with acute lymphoblastic leukemia between 1990 and 2005: A report from the Children’s Oncology Group. Journal of Clinical Oncology, 30(14), 1663–9.
Izenman, A. J. (2008). Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning. New York: Springer.
James, G., Witten, D., Hastie, T., and Tibshirani, R. (2014). An Introduction to Statistical Learning with Applications in R. New York: Springer.
Johnson, R. A. and Wichern, D. W. (2007). Applied Multivariate Statistical Analysis, 6th ed. Upper Saddle River: Prentice Hall.
Karatzoglou, A., Smola, A., Hornik, K., and Zeileis, A. (2004). kernlab: An S4 package for kernel methods in R. Journal of Statistical Software, 11(9), 1–20.
Kennedy, J. and Eberhart, R. (1995). Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks: Volume 4. Piscataway: IEEE, pp. 1942–8.
Kennedy, J., Eberhart, R. C., and Shi, Y. (2001). Swarm Intelligence. San Diego: Morgan Kaufmann.
Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–80.
Kohonen, T. (1982). Self-organized formation of topologically correct feature maps. Biological Cybernetics, 43(1), 59–69.
Kohonen, T. (1995). Self-Organizing Maps. Berlin: Springer.
Korot, E., Guan, Z., Ferraz, D. et al. (2021). Code-free deep learning for multi-modality medical image classification. Nature Machine Intelligence, 3, 288–98.
Kosinski, M. (2022). RTCGA: The Cancer Genome Atlas data integration. R package version 1.29.0. https://bioconductor.org/packages/RTCGA.
Kuhn, M. (2022). caret: Classification and regression training. R package version 6.0-93. https://CRAN.R-project.org/package=caret.
Kuhn, M. and Johnson, K. (2013). Applied Predictive Modeling. New York: Springer.
Kuhn, M. and Johnson, K. (2020). Feature Engineering and Selection: A Practical Approach for Predictive Models. Boca Raton: CRC Press.
Kumar, C. and Van Gool, A. J. (2013). Biomarkers in translational and personalized medicine. In Horvatovich, P. and Bischoff, R. (eds.), Comprehensive Biomarker Discovery and Validation for Clinical Application. Cambridge: The Royal Society of Chemistry, pp. 3–39.
Lal, T. N., Chapelle, O., Weston, J., and Elisseeff, A. (2006). Embedded methods. In Guyon, I., Gunn, S., Nikravesh, M., and Zadeh, L. A. (eds.), Feature Extraction: Foundations and Applications. Berlin: Springer-Verlag, pp. 137–65.
Lawley, D. N. (1938). A generalization of Fisher’s z test. Biometrika, 30(1–2), 180–7, correction 467–9.
Lazzeroni, L. and Owen, A. (2002). Plaid models for gene expression data. Statistica Sinica, 12, 61–86.
Lennon, A. M., Buchanan, A. H., Kinde, I. et al. (2020). Feasibility of blood testing combined with PET-CT to screen for cancer and guide intervention. Science, 369(6499), eabb9601.
Lindgren, F., Geladi, P., and Wold, S. (1993). The kernel algorithm for PLS. Journal of Chemometrics, 7, 45–59.
Ling, C. X. and Sheng, V. S. (2017). Cost-sensitive learning. In Sammut, C. and Webb, G. I. (eds.), Encyclopedia of Machine Learning and Data Mining. New York: Springer, pp. 285–9.
Liu, X., Faes, L., Kale, A. U. et al. (2019). A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. Lancet Digital Health, 1(6), e271–97.
Lu, H., Plataniotis, K. N., and Venetsanopoulos, A. (2014). Multilinear Subspace Learning: Dimensionality Reduction of Multidimensional Data. New York: Chapman & Hall.
Mardia, K. V. (1970). Measures of multivariate skewness and kurtosis with applications. Biometrika, 57, 519–30.
McCulloch, W. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–33.
Microsoft Corporation and Weston, S. (2022). doParallel: Foreach parallel adaptor for the “parallel” package. R package version 1.0.17. https://CRAN.R-project.org/package=doParallel.
Mitchell, M. (1996). An Introduction to Genetic Algorithms. Cambridge, MA: MIT Press.
Nuffield Council on Bioethics. (2010). Medical Profiling and Online Medicine: The Ethics of “Personalised Healthcare” in a Consumer Age. London: Nuffield Council on Bioethics.
Orestes Cerdeira, J., Duarte Silva, P., Cadima, J., and Minhoto, M. (2023). subselect: Selecting variable subsets. R package version 0.15.4. https://CRAN.R-project.org/package=subselect.
Pepe, M. S., Etzioni, R., Feng, Z. et al. (2002). Elements of study design for biomarker development. In Diamandis, E., Fritsche, H. A., Lilja, H. et al. (eds.), Tumor Markers: Physiology, Pathobiology, Technology, and Clinical Applications. Washington, DC: AACC Press, pp. 141–50.
Prainsack, B. (2017). Personalized Medicine: Empowered Patients in the 21st Century? New York: New York University Press.
R Core Team. (2022). R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing. www.R-project.org.
Rakotomamonjy, A. (2003). Variable selection using SVM-based criteria. Journal of Machine Learning Research, 3, 1357–70.
Rana, J. S., Khan, S. S., Lloyd-Jones, D. M., and Sidney, S. (2021). Changes in mortality in top 10 causes of death from 2011 to 2018. Journal of General Internal Medicine, 36, 2517–18.
Rännar, S., Lindgren, F., Geladi, P., and Wold, S. (1994). A PLS kernel algorithm for data sets with many variables and fewer objects. Part 1: Theory and algorithm. Journal of Chemometrics, 8, 111–25.
Rencher, A. C. (2002). Methods of Multivariate Analysis, 2nd ed. New York: Wiley.
Ripley, B. D. (1996). Pattern Recognition and Neural Networks. Cambridge: Cambridge University Press.
Rish, I. and Grabarnik, G. (2015). Sparse Modeling: Theory, Algorithms, and Applications. Boca Raton: CRC Press.
Rocks, J. W. and Mehta, P. (2022). Memorizing without overfitting: Bias, variance, and interpolation in overparameterized models. Physical Review Research, 4(1), 013201.
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408.
Rosipal, R. and Krämer, N. (2006). Overview and recent advances in partial least squares. In Saunders, C., Grobelnik, M., Gunn, S., and Shawe-Taylor, J. (eds.), Subspace, Latent Structure and Feature Selection. SLSFS 2005. Lecture Notes in Computer Science (vol. 3940). Berlin: Springer.
Royston, J. P. (1983). Some techniques for assessing multivariate normality based on the Shapiro–Wilk W. Applied Statistics, 32, 121–33.
Rumelhart, D., Hinton, G., and Williams, R. (1986). Learning representations by back-propagating errors. Nature, 323, 533–6.
Sanger, F., Nicklen, S., and Coulson, A. R. (1977). DNA sequencing with chain-terminating inhibitors. Proceedings of the National Academy of Sciences, 74(12), 5463–7.
Schölkopf, B. and Smola, A. J. (2002). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, MA: MIT Press.
Schölkopf, B., Smola, A. J., Williamson, R. C., and Bartlett, P. L. (2000). New support vector algorithms. Neural Computation, 12(5), 1207–45.
Siegel, R. L., Miller, K. D., Fuchs, H. E., and Jemal, A. (2022). Cancer statistics, 2022. CA Cancer Journal for Clinicians, 72(1), 7–33.
Sievert, C. (2020). Interactive Web-Based Data Visualization with R, plotly, and shiny. New York: Chapman & Hall.
Smola, A. J. and Schölkopf, B. (2004). A tutorial on support vector regression. Statistics and Computing, 14, 199–222.
Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15, 1929–58.
Srivastava, S., Koay, E. J., Borowsky, A. D. et al. (2019). Cancer overdiagnosis: A biological challenge and clinical dilemma. Nature Reviews: Cancer, 19(6), 349–58.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1), 267–88.
Tibshirani, R., Hastie, T., Eisen, M. et al. (1999). Clustering Methods for the Analysis of DNA Microarray Data: Technical Report. Stanford: Department of Statistics, Stanford University.
Uhlén, M., Fagerberg, L., Hallström, B. M. et al. (2015). Proteomics: Tissue-based map of the human proteome. Science, 347(6220), 1260419.
Vapnik, V. N. (1998). Statistical Learning Theory. New York: Wiley.
Vapnik, V. N. (2000). The Nature of Statistical Learning Theory, 2nd ed. New York: Springer.
Viscio, J. A. (2017). Diagnosis of Alzheimer’s Disease Based on a Parsimonious Serum Autoantibody Biomarker Derived from Multivariate Feature Selection. New Britain: Central Connecticut State University.
Walsh, J. E. (1962). Handbook of Nonparametric Statistics. Princeton: Van Nostrand.
Welch, B. L. (1939). Note on discriminant functions. Biometrika, 31(1–2), 218–20.
Werbos, P. J. (1974). Beyond Regression: New Tools for Prediction and Analysis in the Behavioural Sciences. Ph.D. thesis, Harvard University.
Wickham, H., François, R., Henry, L., and Müller, K. (2022). dplyr: A grammar of data manipulation. R package version 1.0.10. https://CRAN.R-project.org/package=dplyr.
Wold, H. (1966). Estimation of principal components and related models by iterative least squares. In Krishnaiah, P. R. (ed.), Multivariate Analysis. New York: Academic Press, pp. 391–420.
Wold, H. (1975). Path models with latent variables: The NIPALS approach. In Blalock, H. M. (ed.), Quantitative Sociology. New York: Academic Press, pp. 307–57.
Wolters, M. A. (2015). A genetic algorithm for selection of fixed-size subsets, with application to design problems. Journal of Statistical Software, 68(1), 1–18.
Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 301–20.
