Beyond LLMs and Chatbots – the Journey to Generative AI at the Edge: Catch up on YouTube and Discord

October 17-18, 2024
Online

In March 2024, the tinyML Foundation organized one of the most successful and impactful events in its history. The subject was the rise of Generative AI on the Edge.

Since then, new innovations have appeared and new models have been released publicly, paving the way for new use cases. As those use cases are refined, the scope of large generative models becomes more concrete, offering new support to businesses and individuals in their everyday operations. Now more than ever, questions about how to deploy foundation models to the edge, and which use cases become possible, call on the machine learning community to act and share its latest progress.

This two-day “second edition” forum addressed the state of the art and showcased the transformative impact that foundation models can bring to the edge, pushing hardware, software, tooling, applications, and AI design methodologies to the next level.

Join us on YouTube and Discord for an incredible collection of talks from industry experts – with live Q&A!

Day One – October 17, starting at 8am PST:

  • Danilo Pau of STMicroelectronics, our Conference Chair
  • Dave McCarthy of IDC, on how LLMs and Transformer models will accelerate Edge Computing
  • Robert Morabito of EURECOM, on ways to consolidate the TinyML lifecycle with Large Language Models
  • Chen Lai of Meta, on deploying GenAI on Edge with ExecuTorch
  • Alok Ranjan of BOSCH, on Learnings and Challenges from the Automotive Domain with Small Language Models
  • Seonyeong Heo of Kyung Hee University, on new ways of optimizing memory for GenAI models
  • Anirban Bhattacharjee of Wipro, on using AI PCs to generate GenAI-based custom code
  • Alberto Ancilotto of Fondazione Bruno Kessler, on Neural Style Transfer on STM32 MCUs

Day Two – October 18, starting at 8am PST:

  • Hamza Yous of the Technology Innovation Institute, on LLM Compression: A Quick Review and Use Case featuring Falcon Mamba
  • Victor Jung of ETH Zurich, on Energy-Efficient Generative AI on Open-Source RISC-V Heterogeneous SoCs
  • Davis Sawyer of NXP, on secure and private LLM fine-tuning for deployment on MPUs
  • Ronan Naughton of Arm, on GenAI at the edge with Arm processors
  • Marek Poliks of Particle.io, on key factors for successfully deploying GenAI on 5G edge platforms
  • Wassim Kezai of Innovation Academy, on TinyRAG: A Case Study on tinyML Foundation
  • Jose Cano of the University of Glasgow, on SECDA-LLM: Designing Efficient LLM Accelerators for Edge Devices

Day One and Day Two recap and analysis by Danilo Pau of STMicroelectronics, Davis Sawyer of NXP, and Pete Bernard of the EDGE AI Foundation.