Post0525

05/23

In the past couple of days, I listened to two podcasts by Lex Fridman: one with Richard Dawkins concerning evolution and the other with Roger Penrose concerning artificial intelligence. I also listened to some comedy clips by Ricky Gervais. Today, I was recommended this video: https://www.youtube.com/watch?v=lFupAV1_xWU, which combines all three topics: artificial intelligence, evolution, and Ricky Gervais. How good has YouTube's recommender system gotten?

05/25

I have been thinking a lot lately about what I would like my summer to look like. Above all, I would really like two things: 1) to contribute to TinyML and learn as much as I can about it, and 2) to set myself up for a solid grad school application. I have been contemplating graduating a year early for a while now, and I think I should do it. One thing that intimidates me about applying to grad school is the sheer competence of the applicant pool: among the contenders for the limited number of positions are people who are much older and more mature, people with master's degrees, people who have worked full-time jobs, and people who already have many published papers.

I am confident that I can put in the effort required to prove my high academic standing through classes and standardized testing, but I am worried about how I can demonstrate my capacity to come up with new ideas and follow through on them. Some ideas I have to address these shortcomings: reading many, many papers and books on TinyML, formulating my own arguments and ideas about future directions in TinyML, improving my communication skills by writing articles and giving talks, finishing my current research project and writing a paper, and building connections with professors working on TinyML at prominent universities.

If I choose to apply to graduate school this fall, which is my current plan, application season will be almost over six months from now. Six months is not long; six months ago seems recent in my memory, and I suspect the next six will fly by. I have not made any demonstrable novel contribution in my research in the past six months, so I must be careful to document and evaluate my progress more closely. I was looking forward to relaxing this summer: reading novels, practicing kickboxing, and playing the piano. But if it means I have a good chance of getting into the MIT PhD program, I am willing to sacrifice many of these comforts and leisure activities for the next six months. I plan to take next summer off and enjoy these activities instead.

I don't know how favorable my chances are of getting into a top-tier graduate program, but I think they are high enough to justify a few months of discomfort, especially given the positive lifelong impact that getting into such a program would have.

From the articles I've read and the videos I've watched, the following statements are frequently repeated:

  • TinyML is a field that applies ML algorithms to constrained devices (constrained in power consumption, available memory, network connectivity/bandwidth, and hardware acceleration, and often lacking an OS).
  • Processing the data generated by microcontrollers on the edge is useful because it does not depend on a network connection, is privacy-preserving by nature, and does not incur network latency.
To add to the second point, I believe another advantage is decentralization: unlike centralized models, which send data to the cloud for processing, edge ML scales naturally with the number of available edge devices.
I hope to find some quantitative measures that illustrate the disadvantages of centralized ML models (energy consumption, latency); this would help me understand the case for TinyML better.
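
To make the latency side of this concrete for myself, here is a toy back-of-envelope comparison in Python. Every number in it is a made-up placeholder, not a measurement:

```python
# Toy comparison of cloud vs. on-device inference latency.
# All numbers are hypothetical placeholders, not measurements.

# Cloud path: transmit the payload over a low-power radio, wait out the
# network round trip, and add server-side inference time.
payload_bytes = 2_000       # assumed sensor payload per inference
uplink_bps = 250_000        # assumed radio throughput (bits per second)
network_rtt_s = 0.100       # assumed round-trip time to the cloud
server_infer_s = 0.005      # assumed server-side inference time

cloud_latency_s = (payload_bytes * 8) / uplink_bps + network_rtt_s + server_infer_s

# Edge path: just run the model locally on the microcontroller.
mcu_infer_s = 0.050         # assumed on-device inference time

print(f"cloud: {cloud_latency_s * 1000:.0f} ms, edge: {mcu_infer_s * 1000:.0f} ms")
# With these placeholders: cloud = 169 ms, edge = 50 ms. The radio transmit
# time alone (64 ms here) can rival the entire on-device inference.
```
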
The core challenge seems to be that microcontrollers and edge devices demand accurate models that run in a reasonable amount of time, fit within the available memory, and do not drain the power supply too quickly. Memory is especially tight because the small size and low cost of microcontrollers limit how much RAM and flash they can carry (see https://electronics.stackexchange.com/questions/134496/why-do-microcontrollers-have-so-little-ram).
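
As a sanity check on what "fits" means here, this is a minimal sketch of the memory-budget arithmetic involved; the device specs and model size are hypothetical placeholders:

```python
# Toy memory-budget check: could a quantized model plausibly fit on an MCU?
# All device specs and model sizes below are hypothetical placeholders.

flash_bytes = 1 * 1024 * 1024     # assumed 1 MB flash (holds the weights)
ram_bytes = 256 * 1024            # assumed 256 KB RAM (holds activations)

n_params = 500_000                # assumed parameter count
bytes_per_weight = 1              # int8 quantization: one byte per weight
peak_activation_bytes = 120_000   # assumed largest intermediate tensor

print("weights fit in flash:", n_params * bytes_per_weight <= flash_bytes)
print("activations fit in RAM:", peak_activation_bytes <= ram_bytes)
# With these placeholders, both checks pass (~488 KB of weights, ~117 KB of
# peak activation memory). A float32 version of the same model (~1.9 MB of
# weights) would not fit, which is one reason quantization matters so much.
```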

Partly from prior knowledge and partly out of my own imagination, I believe the following are areas of research in TinyML:
  • Designing specialized hardware and ISAs for machine learning workloads
  • Designing neural network architectures that fit within tiny memory and compute budgets
  • Investigating the tradeoff between ultra-low-precision models and accuracy (a small numerical sketch follows below)
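
On that last point, here is a small numerical sketch of why precision trades against accuracy: symmetric uniform quantization of random "weights," measuring how reconstruction error grows as the bit width shrinks. This is only an illustration on synthetic numbers, not a claim about any real model:

```python
import numpy as np

# Quantize synthetic "weights" at different bit widths and measure the error.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=10_000).astype(np.float32)

def quantize_dequantize(w, n_bits):
    # Symmetric uniform quantization: map [-max|w|, +max|w|] onto signed
    # integers of the given width, then map back to floats.
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

for bits in (8, 4, 2):
    mse = np.mean((w - quantize_dequantize(w, bits)) ** 2)
    print(f"{bits}-bit weights, reconstruction MSE: {mse:.2e}")
# The error grows sharply as the bit width shrinks; in a real network this
# reconstruction error translates (non-linearly) into task accuracy loss.
```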
