7 Comments
Mahesh

Thanks for covering my work :)

Gaurav Chakravorty

Thanks for the amazing work. Apologies if we misinterpreted it!

If you are interested I’d love to collaborate on a future post with you :)

Aldo Charles

That's a vid I'd like to see

Aldo Charles

Oh, Hey Mahesh :)

Blondel

When you flatten the IDs, the level-3 code of item 1 is now immediately followed by the level-1 code of item 2. Is that what we want the transformer to learn? For pure next-item prediction, shouldn't we keep each item's IDs grouped as an array instead of flattening them?
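
For anyone trying to picture the question, here is a minimal Python sketch of the flattening being described. The item names, the three-level codes, and the `flatten_history` helper are illustrative assumptions, not taken from the post:

```python
# Illustrative only: each item maps to a 3-level semantic ID, and the
# interaction history is flattened into one token sequence.
item_to_semantic_id = {
    "item1": (12, 7, 3),   # (level-1, level-2, level-3) codebook indices
    "item2": (12, 9, 5),
    "item3": (4, 1, 8),
}

def flatten_history(history):
    """Concatenate per-item semantic-ID tuples into a single token list.

    After flattening, the level-3 token of one item is immediately
    followed by the level-1 token of the next item, which is exactly
    the boundary the question is about.
    """
    tokens = []
    for item in history:
        tokens.extend(item_to_semantic_id[item])
    return tokens

print(flatten_history(["item1", "item2", "item3"]))
# [12, 7, 3, 12, 9, 5, 4, 1, 8]
```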

Andrew Dodd

Great talk! Thanks for sharing. This was in my videos to watch backlog :)

Andrew Dodd

This reminds me... In the past I worked with Multiplying Matrices Without Multiplying (MMWM, https://arxiv.org/abs/2106.10860), where quantization was used to efficiently multiply large matrices with a lookup table. It's fascinating to see this in Generative Recommenders too.
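
For readers unfamiliar with that line of work, here is a rough, product-quantization-style sketch of the lookup-table idea. It is not the exact algorithm from the linked paper, and every name in it is illustrative; it only shows how, once rows are encoded as codebook indices, the matrix product reduces to table lookups and additions:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_codebooks(A, n_subspaces=4, n_codes=16):
    """Pick per-subspace centroids for the rows of A (crude stand-in for k-means)."""
    sub = A.shape[1] // n_subspaces
    codebooks = []
    for s in range(n_subspaces):
        block = A[:, s * sub:(s + 1) * sub]
        centroids = block[rng.choice(len(block), n_codes, replace=False)]
        codebooks.append(centroids)
    return codebooks

def encode(A, codebooks):
    """Replace each row block with the index of its nearest centroid."""
    sub = A.shape[1] // len(codebooks)
    codes = np.empty((A.shape[0], len(codebooks)), dtype=np.int64)
    for s, centroids in enumerate(codebooks):
        block = A[:, s * sub:(s + 1) * sub]
        dists = ((block[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        codes[:, s] = dists.argmin(1)
    return codes

def lut_matmul(codes, codebooks, B):
    """Approximate A @ B with table lookups and additions at query time."""
    sub = B.shape[0] // len(codebooks)
    # Precompute centroid-column dot products once per B.
    tables = [codebooks[s] @ B[s * sub:(s + 1) * sub] for s in range(len(codebooks))]
    out = np.zeros((codes.shape[0], B.shape[1]))
    for s in range(len(codebooks)):
        out += tables[s][codes[:, s]]   # lookup into precomputed table, no multiplies over A
    return out

A = rng.normal(size=(256, 64))
B = rng.normal(size=(64, 8))
codebooks = train_codebooks(A)
approx = lut_matmul(encode(A, codebooks), codebooks, B)
print(np.abs(approx - A @ B).mean())  # rough approximation error
```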
