From Neural Networks to Reinforcement Learning to Game Theory
The New York Academy of Sciences (the Academy) hosted the 15th Annual Machine Learning Symposium. This year’s event, sponsored by Google Research and Cubist Systematic Strategies, included keynote addresses from leading experts, spotlight talks from graduate students and tech entrepreneurs, and opportunities for networking.
Exploring and Mitigating Safety Risks in Large Language Models and Generative AI
Pin-Yu Chen, PhD, a principal research scientist at IBM Research, opened the symposium with a keynote lecture on his work in adversarial machine learning, which probes the robustness and safety of neural networks.
Dr. Chen presented the limitations and safety challenges facing researchers working with foundation models and generative AI. Foundation models “mark a new era of machine learning,” according to Dr. Chen. They are trained on broad data sources, such as text, images, and speech, and are then adapted to perform tasks ranging from answering questions to object recognition. ChatGPT, for example, is built on a foundation model.
“The good thing about foundation models is now you don’t have to worry about what task you want to solve,” said Dr. Chen. “You can spend more effort and resources to train a universal foundation model and fine-tune the variety of the downstream tasks that you want to solve.”
While a foundation model can be viewed as a “one for all” solution, according to Dr. Chen, generative AI sits at the other end of the spectrum and takes an “all for more” approach. Once a generative AI model is trained on a diverse and representative dataset, it can be expected to generate reliable outputs; text-to-image and text-to-video platforms are two examples.
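The “one for all” pattern Dr. Chen describes, training one shared model and then adapting it to many downstream tasks, can be sketched in miniature. The toy data, function names, and single-weight “model” below are all hypothetical illustrations, not anything from the talk: a shared weight is pretrained once, then frozen, and each downstream task fits only a small task-specific head.

```python
# Hypothetical toy sketch of the pretrain-then-fine-tune pattern:
# one shared parameter is learned once, then reused (frozen) across
# downstream tasks that each fit only a lightweight per-task bias.

def pretrain(xs, ys, lr=0.01, steps=2000):
    """Fit a shared scalar weight w by gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def finetune_head(w, xs, ys):
    """With w frozen, fit a per-task bias b as the mean residual."""
    return sum(y - w * x for x, y in zip(xs, ys)) / len(xs)

# "Pretraining" data follows y ≈ 2x, so w converges to about 2.
base_x, base_y = [1, 2, 3, 4], [2.0, 4.0, 6.0, 8.0]
w = pretrain(base_x, base_y)

# Two downstream tasks reuse the same frozen w; only the head differs.
task_a_bias = finetune_head(w, [1, 2], [3.0, 5.0])  # data near y = 2x + 1
task_b_bias = finetune_head(w, [1, 2], [1.0, 3.0])  # data near y = 2x - 1
```

The point of the sketch is the division of cost: the expensive step (pretraining) runs once, while each new task reuses the result and trains only a cheap adapter, which is the economy Dr. Chen highlights.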
Dr. Chen’s talk also included examples of government action in the United States and the European Union to regulate AI. He discussed “hallucinations” and other failure modes of current AI systems, and how these issues can be studied further.
“Lots of people talk about AGI as artificial general intelligence. My view is hopefully one day AGI will mean artificial good intelligence,” Dr. Chen said in closing.
Towards Generative AI Security – An Interplay of Stress-Testing and Alignment
The event concluded with a keynote talk from Furong Huang, PhD, an associate professor of computer science at the University of Maryland. She recalled attending the Academy’s Machine Learning Symposium in 2017 as a postdoctoral researcher at Microsoft Research, when she gave a spotlight talk and presented a poster. Even then, she said, she dreamed of one day giving a keynote presentation at the conference.
“It took me eight years, but now I can say I’m back on the stage as a keynote speaker. Just a little tip for my students,” said Prof. Huang, a remark met with applause from the audience.
Her talk touched on large language models (LLMs) like ChatGPT. While popular platforms like Spotify and Instagram took 150 days and 75 days, respectively, to reach one million users, ChatGPT hit that benchmark in just five days. Prof. Huang also pointed to the ubiquity of AI in society, citing World Economic Forum data suggesting that 34% of business products are produced using AI or augmented by AI algorithms.