Keynote Speakers


Ed H. Chi

   Google DeepMind

Monday, October 7th, 9:30 - 10:30

Title: The Future of Discovery Assistance

Bio: Ed H. Chi is a Distinguished Scientist at Google DeepMind, leading machine learning research teams working on large language models (from LaMDA to the launch of Bard/Gemini) and neural recommendation agents. With 39 patents and ~200 research articles, he is also known for research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard/Gemini, a conversational AI experiment, and delivered significant improvements for YouTube, News, Ads, and the Google Play Store at Google, with >950 product landings and ~$9B in annual revenue since 2013.

Prior to Google, he was Area Manager and Principal Scientist at Xerox Palo Alto Research Center's Augmented Social Cognition Group, researching how social computing systems help groups of people remember, think, and reason. Ed earned his three degrees (B.S., M.S., and Ph.D.) in 6.5 years from the University of Minnesota. Inducted as an ACM Fellow and into the CHI Academy, he also received a 20-year Test of Time award for research in information visualization. He has been featured and quoted in the press, including the Economist, Time Magazine, the LA Times, and the Associated Press. An avid golfer, swimmer, photographer, and snowboarder in his spare time, he also holds a black belt in Taekwondo.

Abstract: Our field has shifted from traditional machine learning techniques, mostly based on pattern recognition, to sequence-to-sequence models. The future of universal personal assistance for discovery and learning is upon us. How will the multimodal image, video, and audio understanding and the reasoning abilities of large foundation models change how we build these systems? I will shed some initial light on this topic by discussing three trends: first, the move to a single multimodal large model with reasoning abilities; second, the fundamental research on personalization and user alignment; third, the combination of System 1 and System 2 cognitive abilities into a single universal assistant.


Leman Akoglu

   Carnegie Mellon University

Tuesday, October 8th, 9:30 - 10:30

Title: Toward Foundation Models for Unsupervised Learning

Bio: Leman Akoglu is the Heinz College Dean's Associate Professor of Information Systems at Carnegie Mellon University. She holds courtesy appointments in the Computer Science Department (CSD) and the Machine Learning Department (MLD) of the School of Computer Science (SCS). She received her Ph.D. from CSD/SCS at Carnegie Mellon University in 2012.

Dr. Akoglu's research interests broadly span machine learning and data mining, and specifically graph mining, pattern discovery and anomaly detection, with applications to fraud and event detection in diverse real-world domains. At Heinz, Dr. Akoglu directs the Data Analytics Techniques Algorithms (DATA) Lab.

Dr. Akoglu is a recipient of the SDM/IBM Early Career Data Mining Research Award (2020), the National Science Foundation CAREER Award (2015), and the US Army Research Office Young Investigator Award (2013). Her early work on graph anomalies received the Most Influential Paper Award (PAKDD 2020), having previously won the Best Paper Award (PAKDD 2010). Dr. Akoglu's other research awards include Best Research Paper (SIAM SDM 2019), Best Student Machine Learning Paper Runner-up (ECML PKDD 2018), Best Paper Runner-up (SIAM SDM 2016), Best Research Paper (SIAM SDM 2015), Best Paper (ADC 2014), and Best Knowledge Discovery Paper (ECML PKDD 2009). She holds 3 U.S. patents filed by IBM T. J. Watson Research Labs. Her research has been supported by the NSF, US ARO, DARPA, Adobe, Capital One Bank, Facebook, Northrop Grumman, PNC Bank, PwC, and Snap Inc.

Abstract: Learning machines are equipped with hyperparameters (HPs) that control their bias-variance trade-off and, consequently, their generalization performance. Thus, careful model selection is of utmost importance to learn "good" models. The supervised ML community has focused on AutoML toward effective algorithm selection and hyperparameter optimization. Yet, automating unsupervised learning remains significantly under-studied.

In the first part of my talk, I will present vignettes of our research toward unsupervised model selection, specifically in the context of anomaly detection (AD). Given the current popularity of deep learning based models with a long list of HPs, and the attractiveness of self-supervised learning objectives for unsupervised AD, I will demonstrate that effective model selection becomes ever more critical, presenting both challenges and opportunities. In the latter part, I will describe our most recent efforts toward foundation models for AD, unlocking zero-shot inference and hence bypassing model selection altogether. Foundation models offer a fast, labor-free solution for practitioners and a paradigm shift in the AD literature, opening up a series of novel research questions.


Yan Liu

   University of Southern California

Wednesday, October 9th, 9:30 - 10:30

Title: Frontiers of Foundation Models for Time Series

Bio: Yan Liu is a Professor in the Computer Science Department and the Director of the Machine Learning Center at the University of Southern California. She received her Ph.D. from Carnegie Mellon University. Her research interest is machine learning for time series and its applications to climate science, health care, and sustainability. She has received several awards, including the NSF CAREER Award, the Okawa Foundation Research Award, the New Voices of the Academies of Science, Engineering, and Medicine, and the Best Paper Award at the SIAM Data Mining Conference. She served as general chair for KDD 2020 and ICLR 2023, and as program chair for WSDM 2018, SDM 2020, KDD 2022, and ICLR 2022.

Abstract: Recent developments in deep learning have spurred research advances in time series modeling and analysis. Practical applications of time series in the sciences raise a series of new challenges, such as multi-resolution and multimodal data, missing values, distributedness, and interpretability. In this talk, I will discuss possible paths to foundation models for time series data and future directions for time series research.