Building marimo.io
PhD in Electrical Engineering,
Stanford University
MS, BS in Computer Science,
Stanford University
akshayka@cs.stanford.edu
GitHub /
Google Scholar /
Twitter /
LinkedIn /
Blog
I'm currently building marimo, a new kind of reactive notebook for Python that's reproducible, git-friendly (stored as Python files), executable as a script, and deployable as an app.
I'm both a researcher, focused on machine learning and optimization, and an engineer who has contributed to several open-source projects (including TensorFlow, while at Google). I have a PhD from Stanford University, where I was advised by Stephen Boyd, as well as a BS and MS in computer science from Stanford.
I enjoy speaking with people working on real problems. If you'd like to chat, don't hesitate to reach out over email.
I have industry experience in designing and building software for machine learning (TensorFlow 2.0), optimizing the scheduling of containers in shared datacenters, motion planning and control for autonomous vehicles, and performance analysis of Google-scale software systems.
From 2017 to 2018, I worked on TensorFlow as an engineer on the Google Brain team. Specifically, I developed a multi-stage programming model that lets users enjoy eager (imperative) execution while giving them the option to optimize blocks of TensorFlow operations via just-in-time compilation.
I honed my technical infrastructure skills over the course of four summer internships at Google, where I:
conducted fleet-wide performance analyses of programs running on shared servers and datacenters;
analyzed Dapper traces for the distributed storage stack and uncovered major performance bugs;
built a simulator for solid-state drives and investigated garbage reduction policies;
wrote test suites and tools for the Linux production kernel.
I spent seven quarters as a teaching assistant for the following Stanford courses:
EE 364a: Convex Optimization I. Professor Stephen Boyd. Spring 2016-17, Summer 2018-19.
CS 221: Artificial Intelligence, Principles and Techniques. Professor Percy Liang. Autumn 2016-17.
CS 109: Probability for Computer Scientists. Professor Mehran Sahami and Lecturer Chris Piech. Winter 2015-16, Spring 2015-16, Winter 2016-17.
CS 106A: Programming Methodology. Section Leader. Lecturer Keith Schwarz. Winter 2013-14.
Paths to the Future: A Year at Google Brain. January 2020.
A Primer on TensorFlow 2.0. April 2019.
Learning about Learning: Machine Learning and MOOCs. June 2015.
Machines that Learn: Making Distributed Storage Smarter. Sept. 2014.
Separation Theorems. Lecture notes on separation theorems in convex analysis. A. Agrawal. 2019.
A Cutting-Plane, Alternating Projections Algorithm for Conic Optimization Problems. A. Agrawal. 2017. [code]
Cosine Siamese Models for Stance Detection. A. Agrawal, D. Chin, K. Chen. 2017. [code]
Xavier: A Reinforcement-Learning Approach to TCP Congestion Control. A. Agrawal. 2016. [code]
B-CRAM: A Byzantine-Fault-Tolerant Challenge-Response Authentication Mechanism. A. Agrawal, R. Gasparyan, J. Shin. 2015. [code]