Post AyNltGsnDtU3MLE3XM by kate@federatedfandom.net
(DIR) Post #AyNkL9CDCDgnieTPLk by foone@digipres.club
2025-09-20T01:25:57Z
0 likes, 0 repeats
I'm in the slightly weird position of needing to make a program more or less automated. Either will do.
(DIR) Post #AyNkPltLGi7qtEqtHc by foone@digipres.club
2025-09-20T01:26:50Z
0 likes, 0 repeats
so I wrote a tool to get all the episodes of a podcast I subscribe to (Kill James Bond), and crucially: rename the files and set the ID3 tags.
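A minimal sketch of what a tool like that could look like, assuming Python with feedparser and mutagen; the feed URL, filename rule, and tag fields below are placeholders, not foone's actual code:

# Hypothetical version of the tool: fetch every episode from the podcast RSS
# feed, save it under a cleaned-up name, and write ID3 title/album tags.
import re
import urllib.request

import feedparser                      # pip install feedparser
from mutagen.easyid3 import EasyID3    # pip install mutagen
from mutagen.id3 import ID3NoHeaderError

FEED_URL = "https://example.com/kill-james-bond/feed.xml"  # placeholder URL

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    audio_url = entry.enclosures[0].href          # the enclosure is the MP3
    safe = re.sub(r"[^\w\s-]", "", entry.title).strip().replace(" ", "_")
    filename = f"{safe}.mp3"
    urllib.request.urlretrieve(audio_url, filename)

    try:
        tags = EasyID3(filename)                  # reuse an existing ID3 tag
    except ID3NoHeaderError:
        tags = EasyID3()                          # file shipped with no tag
    tags["title"] = entry.title
    tags["album"] = feed.feed.title
    tags.save(filename)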
(DIR) Post #AyNl4soWR7qEERCXA0 by foone@digipres.club
2025-09-20T01:34:09Z
0 likes, 0 repeats
I wanted to have all the episodes on a device, but the official metadata was bleh. so I wrote a thing that got the episodes and applied some rules to the titles (and hardcoded a bunch of weird edge cases like there being two episode 11s or something), so now I have a program that does what I want
(DIR) Post #AyNl8rz1T00IoX0uJs by foone@digipres.club
2025-09-20T01:34:59Z
0 likes, 0 repeats
the problem is that I wrote it to
1. download all the files of every episode ever
2. process that slurry into precisely named files
which is fine, the first time I run it. two weeks later, another episode comes out, and it wants to download them all again
(DIR) Post #AyNlIG4pFySa94ClAO by foone@digipres.club
2025-09-20T01:36:35Z
0 likes, 0 repeats
so either I make it less automatic, ie, "just get the one episode pointed at by a command line URL", or more automatic, and make it know to just grab the latest episode... and maybe put it on a cronjob, or make it watch my emails...
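The "more automatic" direction might look something like this sketch, which only considers the newest feed entry so a scheduled job can run it on repeat without touching the back catalog (the schedule and script name are made up):

# Hypothetical "more automatic" variant: grab only the newest feed entry.
import feedparser

FEED_URL = "https://example.com/kill-james-bond/feed.xml"  # placeholder URL

feed = feedparser.parse(FEED_URL)
latest = max(feed.entries, key=lambda e: e.published_parsed)
print(latest.title, latest.enclosures[0].href)
# download/retag just that one entry, then schedule it, e.g. a crontab line like
# 0 8 * * * /usr/bin/python3 fetch_latest_episode.py   (script name is made up)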
(DIR) Post #AyNlLJppjxMWfaeTB2 by foone@digipres.club
2025-09-20T01:37:08Z
0 likes, 0 repeats
(alternatively just make it act smarter when run over an existing collection of files: don't redownload stuff you already had. this is tricky, for Reasons)
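One possible shape for that "don't redownload stuff you already had" mode: a small JSON manifest of episode GUIDs kept next to the files, where anything already listed gets skipped. The manifest name and the GUID-or-URL fallback are assumptions:

# Sketch: skip episodes recorded in a manifest from previous runs.
import json
from pathlib import Path

import feedparser

FEED_URL = "https://example.com/kill-james-bond/feed.xml"  # placeholder URL
MANIFEST = Path("downloaded.json")

seen = set(json.loads(MANIFEST.read_text())) if MANIFEST.exists() else set()

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    guid = entry.get("id") or entry.enclosures[0].href   # fall back to the MP3 URL
    if guid in seen:
        continue          # already downloaded on a previous run
    # ... download and retag this episode as in the earlier sketch ...
    seen.add(guid)

MANIFEST.write_text(json.dumps(sorted(seen)))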
(DIR) Post #AyNlQINqZ1ZtcGvyxE by foone@digipres.club
2025-09-20T01:38:04Z
0 likes, 0 repeats
but anyway I was overly amused by how I need to nudge this program either up or down on the automatedness scale. I found a local minimum in the usefulness vs automation graph!
(DIR) Post #AyNlTVof4wGm940hsW by Kiloku@burnthis.town
2025-09-20T01:38:27Z
0 likes, 0 repeats
@foone can you store a "history" of downloaded/processed episodes to skip?
(DIR) Post #AyNlkjGYKFZYmYGXzM by sluttymayo@jorts.horse
2025-09-20T01:41:46Z
0 likes, 0 repeats
@foone put the url in the bond wiggler and see if you can shake the metadata out of it
(DIR) Post #AyNltGsnDtU3MLE3XM by kate@federatedfandom.net
2025-09-20T01:43:18Z
0 likes, 0 repeats
@foone IM SO TEMPTED TO DO THIS FOR ALL OF MY DOWNLOADED PODCASTS it bugs me that the tags are so irregular
(DIR) Post #AyNoNCuCm5UVcVHwmG by mcgrew@dice.camp
2025-09-20T02:11:03Z
0 likes, 0 repeats
@foone what I've generally done for cases like this is store some piece of identifying information for each file/URL/whatever in sqlite and store that DB with the files. Then you can reference that to see what's new.
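A sketch of that approach, assuming the feed GUID as the identifying piece of information (any stable ID or URL would do), with the DB kept alongside the MP3s:

# Sketch of the SQLite approach: one row per episode GUID.
import sqlite3

db = sqlite3.connect("episodes.db")   # lives next to the downloaded files
db.execute("CREATE TABLE IF NOT EXISTS seen (guid TEXT PRIMARY KEY)")

def is_new(guid: str) -> bool:
    """True if this episode hasn't been recorded yet."""
    return db.execute("SELECT 1 FROM seen WHERE guid = ?", (guid,)).fetchone() is None

def mark_seen(guid: str) -> None:
    """Record an episode after it's been downloaded and retagged."""
    db.execute("INSERT OR IGNORE INTO seen (guid) VALUES (?)", (guid,))
    db.commit()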
(DIR) Post #AyNogxNyBeZzHGd06a by Argonel@dice.camp
2025-09-20T02:14:41Z
0 likes, 0 repeats
@foone if you were evil you could do it the ai company way, spin up 10 bots to each download the entire catalog every 10 minutes and reprocess the files into very slightly different slurry by including the download timestamp as part of the processing.