Machine Learning and Predicting Clinical Outcomes
Chandrani Kumari
Advisor:
Rahul Siddharthan
Degree:
Ph.D.
Main Subject:
Computational Biology
Institution:
HBNI
Year:
2025-02-28
Pages:
133p.
Date:
2025
Is based on:
[1] C. Kumari, G. I. Menon, L. Narlikar, U. Ram, and R. Siddharthan,
“Accurate birth weight prediction from fetal biometry using the Gompertz
model,” European Journal of Obstetrics & Gynecology and Reproductive
Biology: X (2024) 100344.
[2] C. Kumari and R. Siddharthan, “MMM and MMMSynth: Clustering of
heterogeneous tabular data, and synthetic data generation,” PLoS ONE 19
no. 4, (2024) e0302271.
[3] S. Nandi, U. R. Potunuru, C. Kumari, A. A. Nathan, J. Gopal, G. I. Menon,
R. Siddharthan, M. Dixit, and P. R. Thangaraj, “Altered kinetics of
circulating progenitor cells in cardiopulmonary bypass (CPB) associated
vasoplegic patients: A pilot study,” PLoS ONE 15 no. 11, (2020) e0242375.
[4] A. L. Tarca, B. A. Pataki, R. Romero, M. Sirota, Y. Guan, R. Kutum,
N. Gomez-Lopez, B. Done, G. Bhatti, T. Yu, et al., “Crowdsourcing
assessment of maternal blood multi-omics for predicting gestational age and
preterm birth,” Cell Reports Medicine 2 no. 6, (2021) .
[5] A. M. Rahmani, E. Yousefpoor, M. S. Yousefpoor, Z. Mehmood, A. Haider,
M. Hosseinzadeh, and R. Ali Naqvi, “Machine learning (ML) in medicine:
Review, applications, and challenges,” Mathematics 9 no. 22, (2021) 2970.
[6] P. Rajpurkar, E. Chen, O. Banerjee, and E. J. Topol, “AI in health and
medicine,” Nature medicine 28 no. 1, (2022) 31–38.
[7] P. Pattnayak and A. R. Panda, “Innovation on machine learning in
healthcare services—an introduction,” Technical Advancements of Machine
Learning in Healthcare (2021) 1–30.
[8] T. Mitchell, Machine Learning. McGraw Hill, 1997.
[9] E. Elsebakhi, F. Lee, E. Schendel, A. Haque, N. Kathireason, T. Pathare,
N. Syed, and R. Al-Ali, “Large-scale machine learning based on functional
networks for biomedical big data with high performance computing
platforms,” Journal of Computational Science 11 (2015) 69–81.
[10] A. M. Rahmani, S. Ali, M. S. Yousefpoor, E. Yousefpoor, R. A. Naqvi,
K. Siddique, and M. Hosseinzadeh, “An area coverage scheme based on fuzzy
logic and shuffled frog-leaping algorithm (SFLA) in heterogeneous wireless
sensor networks,” Mathematics 9 no. 18, (2021) 2251.
[11] D. V. Dimitrov, “Medical internet of things and big data in healthcare,”
Healthcare informatics research 22 no. 3, (2016) 156–163.
[12] M. Rana and M. Bhushan, “Machine learning and deep learning approach
for medical image analysis: diagnosis to detection,” Multimedia Tools and
Applications (2022) 1–39.
[13] S. Hussain, I. Mubeen, N. Ullah, S. S. U. D. Shah, B. A. Khan, M. Zahoor,
R. Ullah, F. A. Khan, and M. A. Sultan, “Modern diagnostic imaging
technique applications and risk factors in the medical field: A review,”
BioMed Research International 2022 (2022) .
[14] K. B. Johnson, W.-Q. Wei, D. Weeraratne, M. E. Frisse, K. Misulis,
K. Rhee, J. Zhao, and J. L. Snowdon, “Precision medicine, AI, and the future
of personalized health care,” Clinical and translational science 14 no. 1,
(2021) 86–93.
[15] H. Fröhlich, R. Balling, N. Beerenwinkel, O. Kohlbacher, S. Kumar,
T. Lengauer, M. H. Maathuis, Y. Moreau, S. A. Murphy, T. M. Przytycka,
et al., “From hype to reality: data science enabling personalized medicine,”
BMC medicine 16 no. 1, (2018) 1–15.
[16] K. Kreimeyer, M. Foster, A. Pandey, N. Arya, G. Halford, S. F. Jones,
R. Forshee, M. Walderhaug, and T. Botsis, “Natural language processing
systems for capturing and standardizing unstructured clinical information: a
systematic review,” Journal of biomedical informatics 73 (2017) 14–29.
[17] R. Gupta, D. Srivastava, M. Sahu, S. Tiwari, R. K. Ambasta, and P. Kumar,
“Artificial intelligence to deep learning: machine intelligence approach for
drug discovery,” Molecular diversity 25 (2021) 1315–1360.
[18] I. Mandal, “Machine learning algorithms for the creation of clinical
healthcare enterprise systems,” Enterprise Information Systems 11 no. 9,
(2017) 1374–1400.
[19] U. Schmidt-Erfurth, A. Sadeghipour, B. S. Gerendas, S. M. Waldstein, and
H. Bogunović, “Artificial intelligence in retina,” Progress in retinal and eye
research 67 (2018) 1–29.
[20] S. J. Adams, R. D. Henderson, X. Yi, and P. Babyn, “Artificial intelligence
solutions for analysis of x-ray images,” Canadian Association of Radiologists
Journal 72 no. 1, (2021) 60–72.
[21] F. M. De La Vega, S. Chowdhury, B. Moore, E. Frise, J. McCarthy, E. J.
Hernandez, T. Wong, K. James, L. Guidugli, P. B. Agrawal, et al., “Artificial
intelligence enables comprehensive genome interpretation and nomination of
candidate diagnoses for rare genetic diseases,” Genome Medicine 13 (2021)
1–19.
[22] M. Moor, O. Banerjee, Z. S. H. Abad, H. M. Krumholz, J. Leskovec, E. J.
Topol, and P. Rajpurkar, “Foundation models for generalist medical artificial
intelligence,” Nature 616 no. 7956, (2023) 259–265.
[23] M. M. Li, K. Huang, and M. Zitnik, “Graph representation learning in
biomedicine and healthcare,” Nature Biomedical Engineering 6 no. 12,
(2022) 1353–1369.
[24] M. W. Berry, A. Mohamed, and B. W. Yap, Supervised and unsupervised
learning for data science. Springer, 2019.
[25] A. Mucherino, P. J. Papajorgji, and P. M. Pardalos, “K-nearest neighbor
classification,” Data mining in agriculture (2009) 83–106.
[26] M. P. LaValley, “Logistic regression,” Circulation 117 no. 18, (2008)
2395–2399.
[27] D. Berrar, “Bayes’ theorem and naive Bayes classifier,” Encyclopedia of
bioinformatics and computational biology: ABC of bioinformatics 403 (2018)
412.
[28] A. J. Myles, R. N. Feudale, Y. Liu, N. A. Woody, and S. D. Brown, “An
introduction to decision tree modeling,” Journal of Chemometrics: A
Journal of the Chemometrics Society 18 no. 6, (2004) 275–285.
[29] L. Breiman, “Random forests,” Machine learning 45 (2001) 5–32.
[30] J. H. Friedman, “Stochastic gradient boosting,” Computational statistics &
data analysis 38 no. 4, (2002) 367–378.
[31] S. Suthaharan, “Support vector machine,” Machine learning models and
algorithms for big data classification: thinking with examples for effective
learning (2016) 207–235.
[32] W. S. Noble, “What is a support vector machine?” Nature biotechnology 24
no. 12, (2006) 1565–1567.
[33] R. Bro and A. K. Smilde, “Principal component analysis,” Analytical
methods 6 no. 9, (2014) 2812–2831.
[34] M. Mohammed, M. B. Khan, and E. B. M. Bashier, Machine learning:
algorithms and applications. Crc Press, 2016.
[35] L. Van der Maaten and G. Hinton, “Visualizing data using t-SNE,” Journal of
machine learning research 9 no. 11, (2008) .
[36] L. McInnes, J. Healy, and J. Melville, “UMAP: Uniform manifold
approximation and projection for dimension reduction,” arXiv preprint
arXiv:1802.03426 (2018) .
[37] M. E. Celebi and K. Aydin, Unsupervised learning algorithms, vol. 9.
Springer, 2016.
[38] F. Murtagh and P. Contreras, “Algorithms for hierarchical clustering: an
overview,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge
Discovery 2 no. 1, (2012) 86–97.
[39] E. Schubert, J. Sander, M. Ester, H. P. Kriegel, and X. Xu, “DBSCAN
revisited, revisited: why and how you should (still) use DBSCAN,” ACM
Transactions on Database Systems (TODS) 42 no. 3, (2017) 1–21.
[40] M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al., “A density-based algorithm
for discovering clusters in large spatial databases with noise,” in KDD, vol. 96,
pp. 226–231. 1996.
[41] D. A. Reynolds et al., “Gaussian mixture models,” Encyclopedia of
biometrics 741 (2009) 659–663.
[42] T. Chari and L. Pachter, “The specious art of single-cell genomics,” PLOS
Computational Biology 19 no. 8, (2023) e1011288.
[43] X. Dong, Z. Yu, W. Cao, Y. Shi, and Q. Ma, “A survey on ensemble
learning,” Frontiers of Computer Science 14 (2020) 241–258.
[44] Y. Freund, R. E. Schapire, et al., “Experiments with a new boosting
algorithm,” in ICML, vol. 96, pp. 148–156, Citeseer. 1996.
[45] R. Maclin and D. Opitz, “An empirical evaluation of bagging and boosting,”
AAAI/IAAI 1997 (1997) 546–551.
[46] A. Natekin and A. Knoll, “Gradient boosting machines, a tutorial,”
Frontiers in neurorobotics 7 (2013) 21.
[47] T. Chen and C. Guestrin, “XGBoost: A scalable tree boosting system,” in
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, pp. 785–794. 2016.
[48] G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y.
Liu, “LightGBM: A highly efficient gradient boosting decision tree,” Advances
in neural information processing systems 30 (2017) .
[49] A. V. Dorogush, V. Ershov, and A. Gulin, “CatBoost: gradient boosting
with categorical features support,” arXiv preprint arXiv:1810.11363 (2018) .
[50] L. Breiman, “Bagging predictors,” Machine learning 24 (1996) 123–140.
[51] B. Jenkins and A. Tanguay, “Handbook of neural computing and neural
networks,” 1995.
[52] J. Li, J.-h. Cheng, J.-y. Shi, and F. Huang, “Brief introduction of back
propagation (BP) neural network algorithm and its improvement,” in
Advances in Computer Science and Information Engineering: Volume 2,
pp. 553–558, Springer. 2012.
[53] X. Xu, L. Zuo, and Z. Huang, “Reinforcement learning algorithms with
function approximation: Recent advances and applications,” Information
Sciences 261 (2014) 1–31.
[54] A. G. Chidambaram and M. Josephson, “Clinical research study designs:
The essentials,” Pediatric investigation 3 no. 04, (2019) 245–252.
[55] L. M. McCowan, F. Figueras, and N. H. Anderson, “Evidence-based national
guidelines for the management of suspected fetal growth restriction:
comparison, consensus, and controversy,” American journal of obstetrics and
gynecology 218 no. 2, (2018) S855–S868.
[56] S. Chauhan, M. Rice, W. Grobman, J. Bailit, U. Reddy, R. Wapner,
M. Varner, J. Thorp Jr, K. Leveno, S. Caritis, et al., for the Eunice
Kennedy Shriver National Institute of Child Health and Human Development
(NICHD) Maternal-Fetal Medicine Units (MFMU) Network, “Neonatal morbidity
of small- and large-for-gestational-age neonates born at term in uncomplicated
pregnancies,” Obstet Gynecol 130 no. 3, (2017) 511–519.
[57] A. Weissmann-Brenner, M. J. Simchen, E. Zilberberg, A. Kalter,
B. Weisz, R. Achiron, and M. Dulitzky, “Maternal and neonatal outcomes of
large for gestational age pregnancies,” Acta obstetricia et gynecologica
Scandinavica 91 no. 7, (2012) 844–849.
[58] T. Henriksen, “The macrosomic fetus: a challenge in current obstetrics,”
Acta obstetricia et gynecologica Scandinavica 87 no. 2, (2008) 134–145.
[59] T. Kiserud, G. Piaggio, G. Carroli, M. Widmer, J. Carvalho,
L. Neerup Jensen, D. Giordano, J. G. Cecatti, H. Abdel Aleem, S. A.
Talegawkar, et al., “The World Health Organization fetal growth charts: a
multinational longitudinal study of ultrasound biometric measurements and
estimated fetal weight,” PLoS medicine 14 no. 1, (2017) e1002220.
[60] T. Kiserud, A. Benachi, K. Hecher, R. G. Perez, J. Carvalho, G. Piaggio,
and L. D. Platt, “The World Health Organization fetal growth charts:
concept, findings, interpretation, and application,” American journal of
obstetrics and gynecology 218 no. 2, (2018) S619–S629.
[61] G. M. B. Louis, J. Grewal, P. S. Albert, A. Sciscione, D. A. Wing, W. A.
Grobman, R. B. Newman, R. Wapner, M. E. D’Alton, D. Skupski, et al.,
“Racial/ethnic standards for fetal growth: the NICHD fetal growth studies,”
American journal of obstetrics and gynecology 213 no. 4, (2015) 449–e1.
[62] K. L. Grantz, M. L. Hediger, D. Liu, and G. M. B. Louis, “Fetal growth
standards: the NICHD fetal growth study approach in context with
INTERGROWTH-21st and the World Health Organization Multicentre Growth
Reference Study,” American journal of obstetrics and gynecology 218 no. 2,
(2018) S641–S655.
[63] T. Todros, E. Ferrazzi, U. Nicolini, C. Groli, S. Zucca, L. Parodi, M. Pavoni,
and A. Zorzoli, “Fitting growth curves to head and abdomen measurements
of the fetus: a multicentric study,” Journal of clinical ultrasound 15 no. 2,
(1987) 95–105.
[64] A. T. Papageorghiou, E. O. Ohuma, D. G. Altman, T. Todros, L. C. Ismail,
A. Lambert, Y. A. Jaffer, E. Bertino, M. G. Gravett, M. Purwar, et al.,
“International standards for fetal growth based on serial ultrasound
measurements: the Fetal Growth Longitudinal Study of the INTERGROWTH-21st
Project,” The Lancet 384 no. 9946, (2014) 869–879.
[65] W. Wosilait, R. Luecke, and J. Young, “A mathematical analysis of human
embryonic and fetal growth data.” Growth, development, and aging: GDA
56 no. 4, (1992) 249–257.
[66] B. Gompertz, “XXIV. On the nature of the function expressive of the law of
human mortality, and on a new mode of determining the value of life
contingencies. In a letter to Francis Baily, Esq. F.R.S. &c,” Philosophical
transactions of the Royal Society of London no. 115, (1825) 513–583.
[67] A. K. Laird, “Dynamics of tumour growth,” British journal of cancer 18
no. 3, (1964) 490.
[68] C. Frenzen and J. Murray, “A cell kinetics justification for
Gompertz’ equation,” SIAM Journal on Applied Mathematics 46 no. 4, (1986)
614–629.
[69] K. M. Tjørve and E. Tjørve, “The use of Gompertz models in growth
analyses, and new Gompertz-model approach: An addition to the
unified-Richards family,” PLoS ONE 12 no. 6, (2017) e0178691.
[70] R. H. Luecke, W. D. Wosilait, and J. F. Young, “Mathematical modeling of
human embryonic and fetal growth rates.” Growth, development, and aging:
GDA 63 no. 1-2, (1999) 49–59.
[71] M. J. Shepard, V. A. Richards, R. L. Berkowitz, S. L. Warsof, and J. C.
Hobbins, “An evaluation of two equations for predicting fetal weight by
ultrasound,” American journal of obstetrics and gynecology 142 no. 1, (1982)
47–54.
[72] F. P. Hadlock, R. Harrist, R. S. Sharman, R. L. Deter, and S. K. Park,
“Estimation of fetal weight with the use of head, body, and femur
measurements—a prospective study,” American journal of obstetrics and
gynecology 151 no. 3, (1985) 333–337.
[73] J. Villar, L. Cheikh Ismail, C. G. Victora, E. O. Ohuma, E. Bertino, D. G.
Altman, A. Lambert, A. T. Papageorghiou, M. Carvalho, Y. A. Jaffer, et al.,
“International Fetal and Newborn Growth Consortium for the 21st Century
(INTERGROWTH-21st). International standards for newborn weight, length, and
head circumference by gestational age and sex: the Newborn Cross-Sectional
Study of the INTERGROWTH-21st Project,” Lancet 384 no. 9946, (2014) 857–68.
[74] J. Milner and J. Arezina, “The accuracy of ultrasound estimation of fetal
weight in comparison to birth weight: A systematic review,” Ultrasound 26
no. 1, (2018) 32–41.
[75] C. W. Kong and W. W. K. To, “Comparison of the accuracy of
INTERGROWTH-21 formula with other ultrasound formulae in fetal weight
estimation,” Taiwanese Journal of Obstetrics and Gynecology 58 no. 2,
(2019) 273–277.
[76] S. Hiwale, H. Misra, and S. Ulman, “Fetal weight estimation by ultrasound:
development of Indian population-based models,” Ultrasonography 38 no. 1,
(2019) 50.
[77] J. Tao, Z. Yuan, L. Sun, K. Yu, and Z. Zhang, “Fetal birth weight prediction
with measured data by a temporal machine learning method,” BMC Medical
Informatics and Decision Making 21 no. 1, (2021) 1–10.
[78] R. L. Deter, W. Lee, J. Kingdom, and R. Romero, “Second trimester growth
velocities: assessment of fetal growth potential in SGA singletons,” The
Journal of Maternal-Fetal & Neonatal Medicine 32 no. 6, (2019) 939–946.
[79] I. K. Rossavik and R. L. Deter, “Mathematical modeling of fetal growth: I.
Basic principles,” Journal of Clinical Ultrasound 12 no. 9, (1984) 529–533.
[80] L. Xu, M. Skoularidou, A. Cuesta-Infante, and K. Veeramachaneni,
“Modeling tabular data using conditional GAN,” Advances in neural
information processing systems 32 (2019) .
[81] Z. Li, Y. Zhao, and J. Fu, “SYNC: A copula based framework for generating
synthetic data from aggregated sources,” in 2020 International Conference
on Data Mining Workshops (ICDMW), pp. 571–578, IEEE. 2020.
[82] L. Xu et al., Synthesizing tabular data using conditional GAN. PhD thesis,
Massachusetts Institute of Technology, 2020.
[83] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel,
M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al., “Scikit-learn:
Machine learning in Python,” Journal of Machine Learning Research 12
(2011) 2825–2830.
[84] L. Mouselimis, ClusterR: Gaussian Mixture Models, K-Means,
Mini-Batch-Kmeans, K-Medoids and Affinity Propagation Clustering, 2023.
https://CRAN.R-project.org/package=ClusterR. R package version 1.3.1.
[85] A. Stukalov and D. Lin, “Clustering.jl,” Julia Statistics. Available online
at: https://github.com/JuliaStats/Clustering.jl (accessed September 30,
2021).
[86] D. J. MacKay, “Bayesian interpolation,” Neural computation 4 no. 3, (1992)
415–447.
[87] A. A. Neath and J. E. Cavanaugh, “The Bayesian information criterion:
background, derivation, and applications,” Wiley Interdisciplinary Reviews:
Computational Statistics 4 no. 2, (2012) 199–203.
[88] A. Gelman and X.-L. Meng, “Simulating normalizing constants: From
importance sampling to bridge sampling to path sampling,” Statistical
science (1998) 163–185.
[89] N. Lartillot and H. Philippe, “Computing Bayes factors using
thermodynamic integration,” Systematic biology 55 no. 2, (2006) 195–207.
[90] M. A. Newton and A. E. Raftery, “Approximate Bayesian inference with the
weighted likelihood bootstrap,” Journal of the Royal Statistical Society
Series B: Statistical Methodology 56 no. 1, (1994) 3–26.
[91] W. Xie, P. O. Lewis, Y. Fan, L. Kuo, and M.-H. Chen, “Improving marginal
likelihood estimation for Bayesian phylogenetic model selection,” Systematic
biology 60 no. 2, (2011) 150–160.
[92] S. Van Buuren and C. G. Oudshoorn, “Multivariate imputation by chained
equations,” 2000.
[93] K. P. Murphy, “Conjugate Bayesian analysis of the Gaussian distribution.”
https://www.cs.ubc.ca/~murphyk/Papers/bayesGauss.pdf. Accessed:
2023-10-05.
[94] L. Hubert and P. Arabie, “Comparing partitions,” Journal of classification 2
(1985) 193–218.
[95] M. Gagolewski, “genieclust: Fast and robust hierarchical clustering,”
SoftwareX 15 (2021) 100722.
[96] N. Patki, R. Wedge, and K. Veeramachaneni, “The synthetic data vault,” in
2016 IEEE international conference on data science and advanced analytics
(DSAA), pp. 399–410, IEEE. 2016.
[97] D. Dua, C. Graff, et al., “UCI machine learning repository, 2017,” URL
http://archive.ics.uci.edu/ml 7 no. 1, (2017) 62.
[98] M. Gagolewski, “A framework for benchmarking clustering algorithms,”
SoftwareX 20 (2022) 101270.
[99] M. A. Levin, H.-M. Lin, J. G. Castillo, D. H. Adams, D. L. Reich, and G. W.
Fischer, “Early on–cardiopulmonary bypass hypotension and other factors
associated with vasoplegic syndrome,” Circulation 120 no. 17, (2009)
1664–1671.
[100] S. Omar, A. Zedan, and K. Nugent, “Cardiac vasoplegia syndrome:
pathophysiology, risk factors and treatment,” The American journal of the
medical sciences 349 no. 1, (2015) 80–88.
[101] L. Buitinck, G. Louppe, M. Blondel, F. Pedregosa, A. Mueller, O. Grisel,
V. Niculae, P. Prettenhofer, A. Gramfort, J. Grobler, et al., “API design for
machine learning software: experiences from the scikit-learn project,” arXiv
preprint arXiv:1309.0238 (2013) .
[102] J. H. Friedman, “Greedy function approximation: a gradient boosting
machine,” Annals of statistics (2001) 1189–1232.
[103] P. Meyer and J. Saez-Rodriguez, “Advances in systems biology modeling: 10
years of crowdsourcing DREAM challenges,” Cell Systems 12 no. 6, (2021)
636–653.