2020

My latest post on this blog was on December 30th, 2019. It seems like a lifetime ago. The rate at which paradigm-shifting events have been happening in 2020 is staggering. And it might very well be that the worst of 2020 is still ahead of us, especially for those of us currently in the USA.

When I started communicating broadly online (blog, Twitter), I promised myself to keep it strictly about science (or very closely neighboring topics), so the few lines above are all I will say about the current worldwide situation.

In other news, as is evident from the ten-month hiatus in blogging, I have (at least temporarily) taken my need for rapid communication about theorems that currently excite me elsewhere, namely to YouTube. Since the beginning of the pandemic I have been recording home videos of what would typically have been blog posts, with five such videos so far:

  1. A law of robustness for neural networks: I explain the conjecture we recently made that, for random data, any interpolating two-layer neural network must have Lipschitz constant larger than the square root of the ratio between the size of the data set and the number of neurons in the network (a formal statement is given after this list). This would prove that overparametrization is *necessary* for robustness.
  2. Provable limitations of kernel methods: I give the proof by Zeyuan Allen-Zhu and Yuanzhi Li that there are simple noisy learning tasks where *no kernel* can perform well while simple two-step procedures can learn.
  3. Memorization with small neural networks: I explain old (classical combinatorial) and new (NTK-style) constructions of optimally sized interpolating two-layer neural networks.
  4. Coordination without communication: This video is the only one in the current series where I don't talk about neural networks at all. Specifically, it is about the cooperative multiplayer multi-armed bandit problem. I explain the strategy we devised with Thomas Budzinski to solve this problem (in its stochastic version) without *any* collision at all between the players.
  5. Randomized smoothing for certified robustness: Finally, in the chronologically first video, I explain the only known technique for provable robustness guarantees in neural networks that scales up to large models (a minimal code sketch of the smoothing step is given below).
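
For concreteness, here is one way to write the conjecture from the first video as a formula; the notation (n for the number of data points, k for the number of neurons, c for an absolute constant) is mine, and the statement is informal:

```latex
% Informal statement of the law of robustness conjecture.
% Setting: f is a two-layer neural network with k neurons that
% interpolates n random data points. Then, conjecturally,
\[
  \mathrm{Lip}(f) \;\geq\; c \, \sqrt{\frac{n}{k}}
\]
% for some absolute constant c > 0. In particular, Lip(f) = O(1)
% would force k = \Omega(n): overparametrization is necessary for
% (Lipschitz) robustness.
```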
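
And since the prediction step of randomized smoothing is easy to write down, here is a minimal Python sketch of it; `base_classifier`, `sigma`, and the sample sizes are illustrative placeholders, not taken from the video, and the certification step (which turns the vote margin into a certified radius) is omitted:

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, num_classes=10):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[ base_classifier(x + N(0, sigma^2 I)) = c ].
    `base_classifier` maps a single input array to a class label."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        noisy_x = x + sigma * np.random.randn(*x.shape)  # Gaussian perturbation
        counts[base_classifier(noisy_x)] += 1
    return int(np.argmax(counts))  # majority vote over the noisy copies
```

The point of the Gaussian averaging is that the majority-vote classifier is provably more stable than the base classifier; how much more stable depends on the vote margin, which is exactly what the certified-radius computation quantifies.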

The next video will be about basic properties of tensors, and how they can be used for smooth interpolation (in particular in the context of our law of robustness conjecture). After that, we will see: maybe more neural networks, maybe more bandits, maybe some non-convex optimization…

Stay safe out there!

