Organizers: Jerry Savage, Abhishek Gupta, Maya Cakmak, Josh Smith
Abstract: Foundation models (e.g., large language models) create exciting new opportunities in our longstanding quest to produce open-ended and AI-generating algorithms, wherein agents can truly keep innovating and learning forever. In this talk I will share some of our recent work harnessing the power of foundation models to make progress in these areas. I will cover three of our recent papers: (1) OMNI: Open-endedness via Models of human Notions of Interestingness, (2) Video Pre-Training (VPT), and (3) Thought Cloning: Learning to Think while Acting by Imitating Human Thinking.
Biography: Jeff Clune is an Associate Professor of computer science at the University of British Columbia, a Canada CIFAR AI Chair at the Vector Institute, and a Senior Research Advisor at DeepMind. Jeff focuses on deep learning, including deep reinforcement learning. Previously he was a research manager at OpenAI, a Senior Research Manager and founding member of Uber AI Labs (formed after Uber acquired a startup he helped lead), the Harris Associate Professor in Computer Science at the University of Wyoming, and a Research Scientist at Cornell University. He received degrees from Michigan State University (PhD, master's) and the University of Michigan (bachelor's). Since 2015, he has won the Presidential Early Career Award for Scientists and Engineers from the White House, published two papers in Nature and one in PNAS, won an NSF CAREER award, received Outstanding Paper of the Decade and Distinguished Young Investigator awards, received two test-of-time awards, and earned best-paper awards, oral presentations, and invited talks at the top machine learning conferences (NeurIPS, CVPR, ICLR, and ICML). His research is regularly covered in the press, including the New York Times, NPR, the New Yorker, CNN, NBC, Wired, the BBC, the Economist, Science, Nature, National Geographic, the Atlantic, and New Scientist. More on Jeff's research can be found at JeffClune.com or on Twitter (@jeffclune).
Abstract: The past few years have seen exponential progress in AI, particularly generative AI (Gen AI). Of particular note are text generation, image generation, and, most recently, video generation. While most research in these fields happens in isolation, it is also interesting to explore the intersection between Gen AI and other demanding disciplines, such as robotics and medicine. This talk will focus on three main topics: a brief background on Gen AI (specifically large language models, or LLMs), noteworthy and promising applications of Gen AI in other fields, and the ethical implications of generative AI.
Biography: I'm co-founder and CTO at Carbon, a small Seattle-based startup. We provide a managed vector database solution that simplifies the processing, storage, and lookup of information from disparate sources for our customers. Previously, I was a senior engineer at the e-commerce startup Italic. I hold a BS in Chemical Engineering from the University of Illinois Urbana-Champaign and an MS in Computational Mathematics from the University of California, San Diego. As is the case with most Seattleites, I enjoy spending time outdoors in my free time, whether it's running, hiking, or skiing.