[HN Gopher] Music recommendation system using transformer models
___________________________________________________________________
Music recommendation system using transformer models
Author : panarky
Score : 34 points
Date : 2024-08-19 19:28 UTC (3 hours ago)
(HTM) web link (research.google)
(TXT) w3m dump (research.google)
| naltroc wrote:
| when did google get a TLD
| Zambyte wrote:
| A decade ago https://en.wikipedia.org/wiki/.google
| incognito124 wrote:
| dns.google has been with us for a long time
| warkdarrior wrote:
| Shouldn't that be dns.squarespace now?
| janalsncm wrote:
| Other than stating there was one, they didn't show a benefit of
| this over something like a Wide & Deep model, DCNv2, or even a
| vanilla NN. Transformers make sense if you need to use something
| N items ago as context (as in text), where N is large. But in
| their example, any model that takes the last ~5 or so
| interactions should be able to quickly pick up contextual user
| preferences.
|
| A transformer may also simply be larger than their baseline, in
| which case you still need to justify how those extra parameters
| are allocated.
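|
| As a rough illustration (entirely my own sketch, not from the
| article; names and dimensions are made up), here's the kind of
| pooled-MLP baseline a transformer would need to beat at a
| matched parameter count:
|
|   # Minimal PyTorch sketch: a mean-pooled MLP baseline vs. a
|   # small transformer encoder over the last N interactions.
|   import torch
|   import torch.nn as nn
|
|   class PooledMLPBaseline(nn.Module):
|       """Mean-pool the last N interaction embeddings."""
|       def __init__(self, dim=128):
|           super().__init__()
|           self.mlp = nn.Sequential(
|               nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
|
|       def forward(self, history):  # history: (batch, N, dim)
|           return self.mlp(history.mean(dim=1))  # user vector
|
|   class TransformerRec(nn.Module):
|       """Self-attention over the same history; pays off mainly
|       when N is large."""
|       def __init__(self, dim=128, heads=4, layers=2):
|           super().__init__()
|           layer = nn.TransformerEncoderLayer(
|               d_model=dim, nhead=heads, batch_first=True)
|           self.encoder = nn.TransformerEncoder(
|               layer, num_layers=layers)
|
|       def forward(self, history):  # history: (batch, N, dim)
|           return self.encoder(history)[:, -1]  # user vector
|
| Both produce a user vector to dot against candidate track
| embeddings; with N around 5 the baseline is far cheaper and
| often competitive.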
| disposition2 wrote:
| It's interesting given the amount of research listed in the
| article, because IMHO the recommendation engine/algorithm used
| by Rdio in the late aughts and early 2010s eclipses anything
| I've encountered to date.
|
| Seems like folks are reinventing the wheel, trying to deduce
| what users want to engage with from data and "AI", rather than
| providing sufficient tools to let the user drive the narrative.
| tulsidas wrote:
| It's all very nice, but if they end up "altering" the results
| heavily to play you the music they want you to listen to for X
| or Y reason, then it's pointless.
|
| I would like to be able to run this model myself and get a
| pristine, unbiased output of suggestions.
| vagabund wrote:
| It may just be my perception, but this steering seems to have
| become a lot more heavy-handed on Spotify.
|
| If I try to play any music from a historical genre, it takes
| only about 3 or 4 autoplays before the queue is exclusively
| contemporary artists, usually performing a cheap pastiche of the
| original style. It's honestly made the algorithm unusable, to
| the point that I built a CLI tool that gets recommendations from
| Claude conversationally and adds them to my queue via the API.
| It's limited by Claude's relatively shallow ability to retrieve
| from the vast libraries on these streaming services, but it's
| still better than the alternative.
|
| Hoping someone makes a model specifically for conversational
| music DJing; it's really pretty magical when it's working well.
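|
| A minimal sketch of the idea (not the actual tool; it assumes
| the anthropic and spotipy Python libraries, and the prompt and
| model name are illustrative):
|
|   import anthropic
|   import spotipy
|   from spotipy.oauth2 import SpotifyOAuth
|
|   claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
|   sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
|       scope="user-modify-playback-state"))
|
|   def queue_recommendations(request, n=5):
|       """Ask Claude for tracks, queue whatever Spotify finds."""
|       msg = claude.messages.create(
|           model="claude-3-5-sonnet-20240620",
|           max_tokens=512,
|           messages=[{"role": "user", "content":
|                      f"Suggest {n} songs, one 'Artist - Title' "
|                      f"per line, for: {request}"}])
|       for line in msg.content[0].text.splitlines():
|           if " - " not in line:
|               continue  # skip chatty lines around the list
|           found = sp.search(q=line.strip(), type="track", limit=1)
|           items = found["tracks"]["items"]
|           if items:
|               sp.add_to_queue(items[0]["uri"])  # needs a device
|
|   queue_recommendations("1970s krautrock, no modern imitators")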
| atum47 wrote:
| All this research to create an apparently awesome recommendation
| system, only for the sales department to force the
| recommendation of whatever they want to promote.
| drdaeman wrote:
| It doesn't seem that this approach "knows" the actual music. The
| article doesn't explain how the track embedding vectors are
| produced, but it mentions that the user-action signals are of
| the same length, which makes me doubt the track embeddings carry
| any content-derived (rather than metadata-derived) information.
| Maybe I'm wrong, of course.
|
| I doubt that any recommendation system can provide meaningful
| results in the absence of "awareness" of the actual content (be
| it music, books, movies, or anything else) it's meant to
| recommend.
|
| It's like a deaf DJ who uses chart data to decide what to play,
| guessing at and incorporating listeners' profiles and wishes.
| That's better than a deaf DJ who just picks whatever's popular
| without any context (or goes by genre only), but it's not
| exactly what one hopes for when asking for a recommendation.
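|
| As a sketch of what a content-aware alternative could look like
| (my assumption, not the paper's method): feed an audio encoder's
| output alongside metadata embeddings, so the track vector
| carries information about the sound itself. All names and
| dimensions below are illustrative.
|
|   import torch
|   import torch.nn as nn
|
|   class TrackEmbedder(nn.Module):
|       def __init__(self, audio_dim=512, n_genres=200,
|                    n_artists=100_000, out_dim=128):
|           super().__init__()
|           self.genre = nn.Embedding(n_genres, 32)    # metadata
|           self.artist = nn.Embedding(n_artists, 64)  # metadata
|           # project audio + metadata into the shared space the
|           # user-action signals live in
|           self.project = nn.Linear(audio_dim + 32 + 64, out_dim)
|
|       def forward(self, audio_feat, genre_id, artist_id):
|           # audio_feat: (batch, audio_dim), e.g. from a
|           # pretrained audio model -- the part that actually
|           # "hears" the track
|           meta = torch.cat([self.genre(genre_id),
|                             self.artist(artist_id)], dim=-1)
|           return self.project(
|               torch.cat([audio_feat, meta], dim=-1))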
| dr0p wrote:
| Why? Why waste resources and energy on something that no one
| needs? The "because we can" mentality is what will break this
| bubble.
___________________________________________________________________
(page generated 2024-08-19 23:00 UTC)