Tiny functions at large scale: new systems for interactive computing
- Interactive computing has redefined the way that humans approach and solve complex problems. However, the slowdown of Moore's law, coupled with the massive increase in data volume and sophistication, has prevented some applications from running interactively on microcomputers. Tasks such as video processing, software compilation and testing, 3D rendering, simulations, and data analytics have turned into batch jobs running on large clusters, limiting our ability to tinker, iterate, and collaborate. Meanwhile, by offering an ocean of heterogeneous computing resources, cloud computing has provided us with a unique opportunity to bring interactivity to such applications.

This dissertation presents my experiences in creating new systems and abstractions for large-scale interactive computing, where users can execute a wide range of resource-intensive tasks with low latency. My thesis is that commodity cloud platforms can be utilized as an accessible supercomputer-by-the-second for interactive execution of large jobs. By leveraging granular cloud services, users can burst to tens of thousands of parallel computations on demand for short periods of time. I will discuss my experience building such applications for massively burst-parallel video processing, 3D path tracing, software builds, and other tasks.

First, I describe ExCamera, a system for low-latency video processing using thousands of tiny threads. ExCamera's core contribution is a video encoder intended for fine-grained parallelism, which allows the computation to be split into thousands of tiny tasks without harming compression efficiency.

Next, I discuss R2E2, a highly scalable path tracer for 3D scenes with high complexity. R2E2's main contribution is an architecture for performing low-latency path tracing of terabyte-scale scenes using serverless computing nodes in the cloud. This design allows R2E2 to leverage the unique strengths of hyper-elastic cloud platforms (e.g., the availability of many CPUs and much memory in aggregate) and to mitigate their limitations (e.g., low per-node memory capacity and high-latency inter-node communication).

Finally, drawing on the experience of building burst-parallel applications like ExCamera, I describe gg, a framework designed to facilitate the implementation of burst-parallel algorithms on serverless platforms. gg specifies an intermediate representation (IR) that allows a diverse class of applications to be abstracted from the computing and storage platform, and to leverage common services for dependency management, straggler mitigation, and scheduling. Using the gg IR, developers express their applications in terms of the relationships between code and data, while the framework carries the burden of executing them efficiently.
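To make the gg IR idea concrete, here is a minimal, purely illustrative sketch (not gg's actual format or API): each computation is described as a "thunk" that names a function and the inputs it depends on, so a framework can resolve dependencies, memoize results by name, and run independent thunks in parallel. All names below (`Thunk`, `execute`, the toy "compile"/"link" functions) are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Thunk:
    """One unit of work in a gg-style IR: code plus data dependencies."""
    func: str       # identifier of the function to execute
    inputs: tuple   # names of plain inputs or other thunks' outputs

def execute(graph: dict, funcs: dict, values: dict) -> dict:
    """Force every thunk in dependency order, memoizing results by name."""
    done = dict(values)  # plain inputs are already "computed"

    def force(name):
        if name in done:
            return done[name]
        thunk = graph[name]
        args = [force(dep) for dep in thunk.inputs]  # resolve dependencies first
        done[name] = funcs[thunk.func](*args)
        return done[name]

    for name in graph:
        force(name)
    return done

# Toy "software build" expressed as relationships between code and data:
# compile two sources independently (parallelizable), then link them.
graph = {
    "a.o": Thunk("compile", ("a.c",)),
    "b.o": Thunk("compile", ("b.c",)),
    "prog": Thunk("link", ("a.o", "b.o")),
}
funcs = {
    "compile": lambda src: f"obj({src})",
    "link": lambda *objs: "+".join(objs),
}
out = execute(graph, funcs, {"a.c": "a.c", "b.c": "b.c"})
print(out["prog"])  # obj(a.c)+obj(b.c)
```

Because each thunk names only its function and inputs, the same graph can be shipped unchanged to any execution backend (local threads, a cluster, or serverless functions), which is the platform-independence the abstract describes.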
|Type of resource: electronic resource; remote; computer; online resource
|1 online resource.
|Stanford University, Computer Science Department
|Statement of responsibility: Submitted to the Computer Science Department.
|Thesis (Ph.D.)--Stanford University, 2021.
- © 2021 by Sadjad Fouladighaleh
- This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).