Francesco Pinto is a Postdoctoral Fellow at the Data Science Institute, University of Chicago, with expertise in AI security and trustworthiness. His research focuses on enhancing the safety and capabilities of multimodal agentic Large Language Models. As a member of the Secure Learning Lab, he works with Professor Bo Li on the development of safer multi-agent multimodal systems. Additionally, he serves as an advisor to the Lapis Lab at UIUC and the Torr Vision Group at the University of Oxford, where he mentors students on independent research projects aligned with his areas of expertise.

Since humans develop their intelligence by processing multimodal environmental stimuli and interacting with other humans, he believes machine intelligence can be achieved by processing multimodal data and interacting with other agents and environments. To foster the safe development of such systems, his research investigates how training and inference algorithms, data synthesis techniques, and multi-agent interaction flows can be used to: (1) reduce hallucinations and biases in predictions, especially under previously unseen conditions (distribution shift); (2) prevent the leakage of memorized data for privacy and copyright protection; and (3) mitigate societal harm by blocking harmful requests and steering model behavior.

Francesco earned his PhD from the University of Oxford, where he was advised by Prof. Philip Torr, Atılım Güneş Baydin, and Victor Prisacariu, and informally advised by Dr. Puneet Dokania. Many of his projects were developed in collaboration with industry and have covered topics such as the evaluation, auditing, and development of multimodal Large Language Models (Meta FAIR/GenAI, Google, BBC), the development of reliable models for autonomous driving systems (FiveAI/Bosch), and satellite collision management procedures (Trilium Tech/European Space Agency). During his PhD he also visited the Statistical Machine Learning group at ETH Zurich, where he collaborated with Prof. Fanny Yang and Amartya Sanyal on improved privacy-preserving training algorithms.
