AI has achieved impressive success in a wide variety of domains, ranging from medical diagnosis to creative image generation. This success provides rich opportunities for AI to address important societal challenges, but there are also growing concerns about the bias and harm that AI systems may cause. This conference brings together diverse perspectives to think about the best way for AI to fit into society and how to develop the best AI for humans.
View agenda and speaker information below.
The organizing committee for the Human + AI Conference is Chenhao Tan, Sendhil Mullainathan, and James Evans. This event is made possible by the generous support of the Stevanovich Center for Financial Mathematics.
Friday, October 28, 2022
Aligning Algorithms with Consumers’ Prediction Preferences
What do you wish to see in Human+AI (in five years)?
Decision Science in the Age of Augmented Cognition
Toward a Unifying Framework for Combining Complementary Strengths of Humans and ML toward Better Predictive Decision-Making
What are the next steps to realize the wishes?
Marc Berman is an Associate Professor in the Department of Psychology and is involved in the Cognition, Social and Integrative Neuroscience programs. Understanding the relationship between individual psychological and neural processing and environmental factors lies at the heart of his research. His lab utilizes brain imaging, behavioral experimentation, computational neuroscience and statistical models to quantify the person, the environment and their interactions. Marc received his B.S.E. in Industrial and Operations Engineering (IOE) from the University of Michigan and his Ph.D. in Psychology and IOE from the University of Michigan. He received post-doctoral training at the University of Toronto's Rotman Research Institute at Baycrest. Before arriving at Chicago he was an Assistant Professor of Psychology at the University of South Carolina.
Dr. Alexandra Chouldechova is the Estella Loomis McCandless Assistant Professor of Statistics and Public Policy at Carnegie Mellon University's Heinz College of Information Systems and Public Policy. Her research investigates questions of algorithmic fairness and accountability in data-driven decision-making systems, with a domain focus on criminal justice and human services. Her work has been supported through funding from organizations including the Hillman Foundation, the MacArthur Foundation, and the NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon. She is a member of the executive committee for the ACM Conference on Fairness, Accountability and Transparency (FAccT), and previously served as a Program Committee co-Chair for the conference.
Dr. Chouldechova is a 2020 Research Fellow with the Partnership on AI, where she is working on understanding factors that drive racial bias in algorithmic risk assessment tools being developed for use in pre-trial, parole and sentencing contexts. She is also a member of the Pittsburgh Task Force on Public Algorithms.
Dr. Chouldechova received her PhD in Statistics from Stanford University and an H.B.Sc. in Mathematical Statistics from the University of Toronto.
Berkeley J. Dietvorst
Berkeley Dietvorst’s research focuses on understanding how consumers and managers make judgments and decisions, and how to improve them. His main focus thus far has been understanding when and why forecasters fail to use algorithms that outperform human forecasters, and exploring prescriptions that increase consumers’ and managers’ willingness to use algorithms.
Dietvorst’s other research looks at such topics as people’s ability to ignore information, consequences of performance expectations, and consumers’ reactions to corporate experiments. His research has been published in the Journal of Experimental Psychology: General, Psychological Science, Marketing Science, and Management Science as well as other journals. His work has been referenced in such media outlets as the Financial Times, Harvard Business Review, The New York Times, and The Boston Globe.
Dietvorst earned both a BS in economics and a PhD in decision processes from The Wharton School, University of Pennsylvania.
I am a Gordon McKay Professor of Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. I lead the Intelligent Interactive Systems Group at Harvard. Currently, my group has the following focus areas:
- Principles and applications of intelligent interactive systems. How do we build AI-powered systems such that they are well suited for the strengths and limitations of human cognition, perception and behavior?
- Behavioral research at scale. We are building platforms (e.g., Lab in the Wild and Hevelius) that allow us to conduct behavioral research with tens of thousands of participants. We are interested both in developing new tools and methods, and in doing novel science that such tools enable.
- Design for equity and social justice. Design impacts human behavior (seriously). Some of our design practices may inadvertently exacerbate inequalities. Thoughtful design can also support more equitable and more socially just outcomes. This interest originates from our work on accessible computing, but has expanded to include implicit bias, equitable access to healthcare, and more.
In the past, I made contributions in the following areas (which are still of some interest, just not the main focus of my current work): accessible computing, adaptive user interfaces, creativity support, crowd computing.
I am an Assistant Professor at Carnegie Mellon University with joint appointments in the Machine Learning Department and the Institute for Software, Systems, and Society. I am also affiliated with CyLab and the Block Center at CMU, and I co-lead the university-wide Responsible AI Initiative.
I am broadly interested in the Societal Aspects of Artificial Intelligence and Machine Learning. For more information, please take a look at my bio, CV (last updated Aug 2022), Google Scholar profile, and the Research section of this page.
I currently advise the following doctoral students: Michael Feffer (co-advised with Zack Lipton), Keegan Harris (co-advised with Steven Wu).
My work has been generously supported by the NSF Program on Fairness in AI in Collaboration with Amazon, PwC, CyLab, Meta, and J. P. Morgan.
There is an awful lot of information in the world. Some of it is useful; most of it is not. How do people determine what information to use when making decisions, what is worth learning, how to search for the information they need, and what to do when different pieces of information conflict and suggest different conclusions? Professor Oppenheimer’s research investigates these basic questions as well as how the answers impact real-world outcomes in policy, business, and education.
He has also done research on psychometric assessment, charitable giving, people’s understanding of randomness/stochastic systems, the psychological underpinnings of democracy, helicopter parenting, metacognition, and the best local ice cream stores.
Jenn Wortman Vaughan
Jenn Wortman Vaughan is a Senior Principal Researcher at Microsoft Research, New York City. She currently focuses on Responsible AI—including transparency, interpretability, and fairness—as part of MSR's FATE group and as co-chair of Microsoft’s Aether Working Group on Transparency. Jenn's research background is in machine learning and algorithmic economics. She is especially interested in the interaction between people and AI, and has often studied this interaction in the context of prediction markets and other crowdsourcing systems. Jenn came to MSR in 2012 from UCLA, where she was an assistant professor in the computer science department. She completed her Ph.D. at the University of Pennsylvania in 2009, and subsequently spent a year as a Computing Innovation Fellow at Harvard. She is the recipient of Penn's 2009 Rubinoff dissertation award for innovative applications of computer technology, a National Science Foundation CAREER award, a Presidential Early Career Award for Scientists and Engineers (PECASE), and a variety of best paper awards. Jenn co-founded the Annual Workshop for Women in Machine Learning (WiML), which has been held each year since 2006, and recently served as Program Co-chair of NeurIPS 2021.