Abstract: As machine learning applications expand into real-world environments, the demand for efficient edge computing solutions has grown significantly. Edge devices—ranging from smartphones and IoT sensors to autonomous systems—must process data locally with limited computational resources while meeting strict latency and energy constraints. Designing algorithms and architectures that meet these requirements is critical for enabling scalable, real-time intelligence at the edge.
In this talk, Dr. Ranjitha Prasad and Dr. Saurav Prakash from IIT Madras will present strategies for building efficient machine learning solutions tailored to edge computing. The session will highlight innovations in lightweight models, optimization techniques, and hardware-aware designs that make advanced ML feasible outside centralized cloud infrastructures.
Key themes will include:
Challenges of deploying ML models on resource-constrained edge devices.
Techniques for model compression, quantization, and pruning to reduce computational overhead (see the brief sketch after this list).
Hardware-software co-design for maximizing efficiency and performance.
Applications in healthcare monitoring, smart cities, autonomous systems, and industrial IoT.
Future directions for federated learning and privacy-preserving edge intelligence.
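To make the compression theme concrete, the following is a minimal illustrative sketch (not taken from the speakers' material) of two of the listed techniques—magnitude pruning and post-training dynamic quantization—applied to a hypothetical toy model using standard PyTorch utilities; the model, layer sizes, and 50% sparsity level are arbitrary choices for illustration.

```python
# Illustrative sketch only: pruning + dynamic quantization of a toy model.
# The network and hyperparameters are assumptions, not the talk's method.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small fully connected network standing in for an edge-deployed model.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Magnitude (L1) pruning: zero out 50% of the smallest weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # fold the pruning mask into the weights

# Post-training dynamic quantization: store Linear weights in int8,
# reducing model size and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model takes the same inputs as the original.
x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```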
The presentation will emphasize both theoretical approaches and practical case studies, showing how efficient ML solutions at the edge can unlock new opportunities for real-time, scalable, and secure AI applications. By bridging algorithmic innovation with hardware constraints, the speakers will illustrate how edge computing is becoming a cornerstone of next-generation intelligent systems.