Post0605

One of my professors is a theoretical computer scientist. Looking at his past publications, all of which are very mathematical, I started to doubt whether I should really go into TinyML, a field that is unusually high-level and applied for computer science. I also started to think about the different skills required to contribute to these fields; I suspect theoretical computer science demands a mathematically oriented mind, while the majority of ML demands an engineering-oriented mind.

I was also thinking about the nature of software engineering. It is exciting to come up with a new idea or improvement that can do good in the world--improving the performance or quality of a tool, or introducing a new digital product. For example, it is exciting to propose new OS philosophies, research ways to make networks and distributed systems more efficient, and develop new hardware accelerators. But it seems to me that at the heart of most of today's software is the repetitive implementation of bland ideas that have been thought of before. I wouldn't want to build a web app or the backend infrastructure behind a social media app, because the software I would write wouldn't be remarkable. My ideas wouldn't matter; I would merely be implementing and debugging a tool already known to be possible. Research pushes the frontiers of knowledge; the prospect of understanding new ideas makes it inherently exciting.

I listened to the beginning of Yann LeCun's second podcast interview with Lex Fridman. It made me curious about topics such as the nature of intelligence, how intelligence is measured, non-neural-network methods of machine learning, the cognitive-science basis of neural networks, and fundamental ways in which neural network training could be transformed. This curiosity motivated me to read more, listen more, and absorb more ideas related to these topics, because there is so much I don't know. If I knew more, I could propose higher-quality ideas. There is one aspect of dedicating myself to this passion that I wouldn't like: the opportunity cost of time I could otherwise spend on activities that excite me more, such as kickboxing and playing the piano. But I know that pursuing those activities is not as valuable as pursuing knowledge and ideas in TinyML.

I believe that NAS and self-supervised learning are both challenges that need to be overcome on the path to generalizable machine learning that can learn faster and better. Ian Goodfellow mentioned in a podcast appearance with Lex Fridman that he looks forward to a future where artificial intelligence has advanced to the point where, given only a dataset, it can create its own model that performs well on human tasks (I may be distorting his words a little; I don't remember exactly what he said). NAS, especially if fully automated, and AutoML more generally, are useful because they remove humans from the loop of ML model creation, which could spark an explosion in both the quality of machine learning models and their ability to adapt to different hardware constraints. If this happens, automated search will outperform all human-crafted approaches to neural network design, supplanting an entire domain of ML research.
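To make the "search over architectures" idea concrete, here is a minimal sketch of the simplest NAS baseline, random search. Everything here is a toy assumption of mine, not any real NAS system: the search space, the architecture fields, and especially `evaluate`, which in practice would train each candidate on the dataset (or use a cheap proxy) rather than compute a stand-in score.

```python
import random

# Toy search space: each candidate architecture is a choice of depth,
# width, and kernel size. Real NAS search spaces are far richer.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [16, 32, 64],
    "kernel": [3, 5, 7],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for validation accuracy after training.

    This toy scorer just rewards a moderate parameter count; a real
    evaluator would train the candidate model and measure accuracy,
    possibly penalizing models that exceed a hardware budget.
    """
    params = arch["depth"] * arch["width"] * arch["kernel"] ** 2
    return 1.0 / (1.0 + abs(params - 4096) / 4096)

def random_search(n_trials=50, seed=0):
    """Sample n_trials architectures and keep the best-scoring one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search()
print(best, round(score, 3))
```

More sophisticated NAS methods replace the blind sampling loop with reinforcement learning, evolutionary search, or differentiable relaxations, but they all share this shape: a search space, an evaluator, and a strategy for proposing the next candidate.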
