My Research Interests
I am interested in applying machine learning across the sciences. I have published papers applying machine learning in various fields: medicine (cardiology and COVID-19 pneumonia detection), physics, sensor science, and food technology. Additionally, I have published papers on statistics and the theory of machine learning. I find every machine learning problem with some kind of impact interesting. Below is a brief overview of my current research interests.
Machine Learning in Science
Due to my background in physics, I am naturally drawn to applying AI to the sciences, such as physics, biology, or chemistry. These fields pose quite different challenges from classical AI problems (such as image recognition) and often require very specific algorithms and data-processing methods. One of my research foci is to develop methods and algorithms that give AI in science a much more solid theoretical foundation; these methods encompass communication, variability analysis, robustness, and so on. For example, how can we give better estimates of the metrics used with scientific data (from physics measurements, for instance) that take measurement errors into account? What is the effect of errors in the labels (or target variables) on performance metrics? I am also particularly interested in uncertainty estimation in neural networks and in Bayesian networks.
Interests
Developing methods based on statistics, in particular extreme value theory, to estimate the distribution of extreme model errors and thus the uncertainty of model outputs.
Developing model-agnostic feature-selection approaches for scientific datasets.
Developing feature-selection approaches for spectral data, where the features are spectral intensities at various wavelengths. Neighbouring wavelengths are strongly correlated, which makes standard feature-selection algorithms ineffective.
Developing methods to assess how errors in labels and inputs affect model behaviour and outputs.
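As an illustration of the last point, the effect of label measurement errors on a performance metric can be probed with a simple Monte Carlo sketch. All numbers below are invented for illustration; the idea is just to resample plausible "true" labels consistent with their stated measurement uncertainties and look at the spread this induces in the RMSE.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: model predictions vs. labels that carry
# known per-sample measurement uncertainties sigma_y.
y_pred = np.array([2.1, 3.9, 6.2, 7.8])
y_label = np.array([2.0, 4.0, 6.0, 8.0])
sigma_y = np.array([0.1, 0.2, 0.1, 0.3])  # measurement std. dev. of each label

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

# Monte Carlo: perturb the labels according to their measurement errors
# and record the resulting distribution of the metric.
n_draws = 10_000
samples = y_label + sigma_y * rng.standard_normal((n_draws, len(y_label)))
rmse_draws = np.array([rmse(y_pred, s) for s in samples])

print(f"RMSE (point labels):   {rmse(y_pred, y_label):.3f}")
print(f"RMSE under label noise: {rmse_draws.mean():.3f} +/- {rmse_draws.std():.3f}")
```

The spread of `rmse_draws` shows how much of the reported metric is attributable to label uncertainty rather than to the model itself; the point-estimate RMSE alone hides this.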
Examples of publications
Michelucci, U., Venturini, F. Deep learning domain adaptation to understand physico-chemical processes from fluorescence spectroscopy small datasets and application to the oxidation of olive oil. Scientific Reports, 14, 22291 (2024). https://doi.org/10.1038/s41598-024-73054-y
Michelucci, U., Fundamental Mathematical Concepts for Machine Learning in Science (2024), Springer Nature, https://link.springer.com/book/10.1007/978-3-031-56431-4
Michelucci, U., & Venturini, F. (2023). New metric formulas that include measurement errors in machine learning for natural sciences. Expert Systems with Applications, 224, 120013. https://www.sciencedirect.com/science/article/pii/S0957417423005158
Michelucci, U., Fluri, S., Baumgartner, M., & Venturini, F. (2023, March). Deep learning super resolution for high-speed excitation emission matrix measurements. In AI and Optical Data Sciences IV (Vol. 12438, pp. 127-137). SPIE.
Venturini, F., Sperti, M., Michelucci, U., Gucciardi, A., Martos, V. M., & Deriu, M. A. (2023). Extraction of physicochemical properties from the fluorescence spectrum with 1D convolutional neural networks: Application to olive oil. Journal of Food Engineering, 336, 111198.
Venturini, F., Michelucci, U., Sperti, M., Gucciardi, A., & Deriu, M. A. (2023, March). Understanding the learning mechanism of convolutional neural networks applied to fluorescence spectra. In AI and Optical Data Sciences IV (Vol. 12438, pp. 178-184). SPIE.
Venturini, F., Michelucci, U., Sperti, M., Gucciardi, A., & Deriu, M. A. (2022, May). One-dimensional convolutional neural networks design for fluorescence spectroscopy with prior knowledge: explainability techniques applied to olive oil fluorescence spectra. In Optical Sensing and Detection VII (Vol. 12139, pp. 326-333). SPIE.
Explainable AI (XAI) in Science
Explainable artificial intelligence (XAI) is a discipline that studies how to understand, and therefore trust, the results produced by machine learning algorithms. In particular, I am interested in the possibilities for understanding machine learning models applied to science. How can we use the knowledge that a neural network has learned to better understand physical phenomena? How can we, for example, use machine learning properly as a scientific tool? Can we link how an ML model works to how a phenomenon works?
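A minimal, model-agnostic way to probe what a model has learned from a spectrum is occlusion sensitivity: perturb one wavelength channel at a time and measure how much the output changes. The "model" below is a hypothetical stand-in (a fixed linear map that only uses channel 3), not any of the trained networks discussed here; the point is only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in model: a fixed linear map on a 10-channel "spectrum".
weights = np.zeros(10)
weights[3] = 2.0  # by construction, the model only uses channel 3

def model(x):
    return float(x @ weights)

spectrum = rng.uniform(0.5, 1.5, 10)

# Occlusion sensitivity: zero out one wavelength channel at a time
# and record how much the model output changes.
baseline = model(spectrum)
importance = np.empty(10)
for i in range(10):
    occluded = spectrum.copy()
    occluded[i] = 0.0
    importance[i] = abs(model(occluded) - baseline)

print(np.argmax(importance))  # prints 3: the channel the model relies on
```

Because the procedure treats the model as a black box, the same loop works for a 1D CNN on fluorescence spectra; for strongly correlated neighbouring wavelengths, occluding contiguous bands instead of single channels is usually more informative.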
Examples of publications
Michelucci, U., Venturini, F. Deep learning domain adaptation to understand physico-chemical processes from fluorescence spectroscopy small datasets and application to the oxidation of olive oil. Scientific Reports, 14, 22291 (2024). https://doi.org/10.1038/s41598-024-73054-y
Michelucci, U., & Venturini, F. (2024). Interpretative Deep Learning using Domain Adaptation for Fluorescence Spectroscopy. arXiv preprint arXiv:2406.10031.
Venturini, F., Michelucci, U., Sperti, M., Gucciardi, A., & Deriu, M. A. (2023, March). Understanding the learning mechanism of convolutional neural networks applied to fluorescence spectra. In AI and Optical Data Sciences IV (Vol. 12438, pp. 178-184). SPIE.
Venturini, F., Michelucci, U., Sperti, M., Gucciardi, A., & Deriu, M. A. (2022, May). One-dimensional convolutional neural networks design for fluorescence spectroscopy with prior knowledge: explainability techniques applied to olive oil fluorescence spectra. In Optical Sensing and Detection VII (Vol. 12139, pp. 326-333). SPIE.
Statistics and Deep Learning Theory
Deep learning algorithms have been shown to be incredibly efficient at solving very complex problems that previously only humans could solve. Unfortunately, it is not yet clear from a theoretical point of view why large neural networks work so well and are so efficient. Many strange effects have been observed but are not understood; double descent is one such effect, and it indicates how much remains to be explained. Statistics is at the basis of deep learning and is a fascinating field to study in order to better understand how machine learning algorithms work.
Research in this field is extremely exciting and, due to its fundamental nature, both complex and open-ended.
Examples of publications
Michelucci, U., & Venturini, F. (2023). New metric formulas that include measurement errors in machine learning for natural sciences. Expert Systems with Applications, 224, 120013.
Michelucci, U. (2022). On the High Symmetry of Neural Network Functions. arXiv preprint arXiv:2211.06603.
Michelucci, U. (2022). An introduction to autoencoders. arXiv preprint arXiv:2201.03898.
Michelucci, U., Sperti, M., Piga, D., Venturini, F., & Deriu, M. A. (2021). A model-agnostic algorithm for bayes error determination in binary classification. Algorithms, 14(11), 301.
Philosophy of Science
I dedicate a special section to the philosophy of science, a special passion of mine. I have an insatiable need to understand how AI fits into the scientific paradigm. Is AI only a tool, or is it something more? How can we do science with AI? How does it fit into the philosophy of science? Is it really an example of inductivism, as described by Bacon? Can we use it as a falsification tool, in the sense of Popper? None of these questions has been answered, yet they are fundamental to understanding the role AI will play in our society in the future.
One example of why this is important is the question of whether facial recognition algorithms can be free of bias, or whether bias is a fundamental part of such algorithms. How do we decide which algorithm is scientific and which is not, given the strongly stochastic nature of machine learning? What role does reproducibility play in the use of AI, since the results have a strong statistical nature?