Posts by glaurungo@niu.moe
(DIR) Post #A0T1R5aJ3nrT4j1PVI by glaurungo@niu.moe
2020-10-23T22:25:26Z
0 likes, 0 repeats
@wolf480pl @nik the point is I'm unsure whether the same notice, and the same timing standards, apply as if they were identifying a copyrighted work
(DIR) Post #A0T1fUxvUWMNqZwNnc by glaurungo@niu.moe
2020-10-23T22:26:46Z
0 likes, 0 repeats
@wolf480pl @nik It's feasible that such an argument could be taken to court. But does it justify expeditious removal of the work, even if there is no evidence supplied of copyright being violated?
(DIR) Post #A0T29xpOkIwu39TkGW by glaurungo@niu.moe
2020-10-23T22:31:32Z
0 likes, 0 repeats
@wolf480pl @nik yeah... Well, it is likely still against yt terms of use. So it would be a hard case, and focused mostly on that /these/ people shouldn't be able to do it /this/ way.
And youtube-dl will likely just get rehosted elsewhere
(DIR) Post #A0WvmZn2vjC4QO0uiu by glaurungo@niu.moe
2020-10-25T19:31:32Z
0 likes, 0 repeats
@wolf480pl @nik Alternatively, let's have someone else go through it for us:
https://www.youtube.com/watch?v=wZITscblMBA
(DIR) Post #A0ai431kktKiSaRPua by glaurungo@niu.moe
2020-10-27T13:54:59Z
0 likes, 0 repeats
@wolf480pl And so, a new update on youtube-dl: one of the companies who forked/used it as a dependency sued
https://www.youtube.com/watch?v=RCrJM-MrKyI
Also, who would have thought, youtube-dl is free software and is "widely used by journalists" instead of "having the clear purpose of circumventing copyright"
https://freedom.press/news/riaa-github-youtube-dl-journalist-tool/
(DIR) Post #A0aiawo8tsZuj7lLv6 by glaurungo@niu.moe
2020-10-27T15:32:01Z
0 likes, 0 repeats
@wolf480pl which ones? xd
Fun fact: we're so deep in the great "customization algorithms" that it's likely everyone is seeing a different set of comments (since comments aren't even displayed in chronological order)
(DIR) Post #A0aov4KJFqTnYwmols by glaurungo@niu.moe
2020-10-27T16:42:52Z
0 likes, 0 repeats
@wolf480pl Oh, yeah. But that's actually fairly subtle. Because it's easy to imagine a world where youtube just says "you must use the web UI", said UI has a login page, and everything that doesn't use the web UI is circumventing a technological measure. Even if youtube were to have a clear and documented API enabling you to do so.
But this is specifically about DRM and their videos. Especially since there exists journalistic usage, people downloading their own movies, and people recommending to their fans to use youtube-dl.
And the time-shifting case that was quoted.
(DIR) Post #A11qNrZyP69Z9u6oaG by glaurungo@niu.moe
2020-11-09T17:36:31Z
0 likes, 0 repeats
@lis elixir is kind of good
It introduces FP concepts (and agent-oriented programming) without the need for a deep understanding of monads
It has reasonable error messages (but a lot of error messages will be runtime ones, so you win some, you lose some)
In terms of object lifetimes, you're supposed to run code in short loops of "processes", of which you spawn thousands because they are cheap enough. Any of those processes can be paused independently for a gc sweep, so actual gc time is minuscule and pretty much negligible. I don't know how often it actually runs, but it could easily fire off after each message/loop iteration, removing just the few variables that you introduced in the meantime.
Seems like a good compromise between managing memory manually and having gc overhead.
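The "thousands of cheap processes, each a small message loop with short-lived local state" shape can be sketched as a loose Python analogy using asyncio tasks. The analogy is imperfect and labeled as such: asyncio tasks are cooperative and share one heap and one GC, while BEAM processes are preemptive with independent heaps; the doubling "work" is purely illustrative.

```python
import asyncio

async def worker(inbox: asyncio.Queue, results: list) -> None:
    # Like a BEAM process: loop, wait for a message, handle it with
    # only short-lived local state, repeat until told to stop.
    while True:
        msg = await inbox.get()
        if msg is None:          # shutdown signal
            return
        results.append(msg * 2)  # locals die at the end of each iteration

async def main() -> list:
    # Spawning a thousand tasks is cheap, much like BEAM processes
    # (though unlike BEAM, all tasks here share one heap and one GC).
    results: list = []
    queues = [asyncio.Queue() for _ in range(1000)]
    tasks = [asyncio.create_task(worker(q, results)) for q in queues]
    for i, q in enumerate(queues):
        q.put_nowait(i)          # one message of real work...
        q.put_nowait(None)       # ...then the stop signal
    await asyncio.gather(*tasks)
    return results

results = asyncio.run(main())
```

Each worker owns its queue, so no locking is needed; in BEAM the same shape additionally gets per-process GC, which is the point the post makes.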
(DIR) Post #A1PjNIIWiUMci82vB2 by glaurungo@niu.moe
2020-11-20T23:37:13Z
2 likes, 0 repeats
@newt you should show her https://rms.sexy/
It exists
Some steamy fiction also exists
(DIR) Post #A1cMw3HImOEgN9Yb9U by glaurungo@niu.moe
2020-11-27T07:58:54Z
0 likes, 1 repeats
Welcome to the future, lol
https://pbs.twimg.com/media/EnstKaCUYAAdrIV.jpg:large
https://pbs.twimg.com/media/EnstKZ_VEAMZ34y.jpg:large
(DIR) Post #A1cbsYyMUrIypEUarA by glaurungo@niu.moe
2020-11-27T11:18:18Z
0 likes, 0 repeats
@wolf480pl to be fair, it would be much stranger if this happened to you on linux
(DIR) Post #A1p4JKJIFRlxmbLu0u by glaurungo@niu.moe
2020-12-03T11:33:23Z
0 likes, 0 repeats
@wolf480pl That's actually really hard to do well. Different ISPs have different quality of service in different buildings, let alone in different cities. You can't do such a review well if you were to just "buy that connection yourself and check it out".
Quality of cabling in and outside the building (which could be offset by distributing tests over a large enough populace), concurrent network conditions (which could be offset by running tests over longer periods of time), the servers you are trying to connect to: they all matter.
(DIR) Post #A1p5uxpoMN372D4amW by glaurungo@niu.moe
2020-12-03T11:51:20Z
0 likes, 0 repeats
@wolf480pl Well, I do think it would be more useful if you could have an app that would do all of these tests, that people could then download and run in the background for, let's say, a month.
Or a website that would do the same thing (which would likely be a lot harder to make, but easier to "install"; not so easy on the "month" part though).
And then you could aggregate results and have something pretty useful.
But making people actually take part in such research would be non-trivial.
(DIR) Post #A1p6iV7NOypmRzuU1Q by glaurungo@niu.moe
2020-12-03T11:57:41Z
0 likes, 0 repeats
@wolf480pl TBH, the best metric would be having a group of websites that already measure clients' connection quality, like netflix, youtube, games and such, and have them publish those results in an anonymized way, then aggregate them. Or even better, access statistics on the level of your browser (or do a browser add-on collecting those statistics).
Technically speaking, if a browser or a website does collect that kind of data about you and doesn't anonymize it immediately, you could ask for all the data they have on you using RODO (the GDPR), and would receive that data too.
(DIR) Post #A1p71vgmpqYSMTjlNA by glaurungo@niu.moe
2020-12-03T12:03:52Z
0 likes, 0 repeats
@wolf480pl Well, there is no reason why clients of one ISP would have misconfigured wifi in a statistically significantly larger segment than clients of another ISP (unless the ISP delivers you a shitty router, but that is something that should still be included in the benchmark).
But yeah, ideally you would measure it on a cable connection.
(DIR) Post #A1p7ilPavCx2wI7XJw by glaurungo@niu.moe
2020-12-03T12:11:37Z
0 likes, 0 repeats
@wolf480pl you might not be able to get exact packet loss, but you could still get latency (which would likely be influenced by packet loss).
Of course, the lower you go, the more you can collect, but I think it's sane to just say "let's collect what we can on this level", and if someone wants to run a more specific benchmark (that interferes more deeply with the OS), all the power to them. But if something useful could be collected on a higher level, more people would likely be willing to participate (making the results more valuable).
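The "collect what we can on this level" idea can be sketched in Python: timing a plain TCP handshake needs no raw sockets or OS hooks, so it could run inside an ordinary unprivileged app. The port default and sample count here are illustrative choices, not part of any real collector.

```python
import socket
import statistics
import time

def tcp_latency_ms(host: str, port: int = 443, samples: int = 5) -> dict:
    """App-level latency probe: time a full TCP connect, no privileges
    needed. High variance across samples can hint at loss/retransmits
    happening underneath, even though we can't measure loss directly."""
    times = []
    for _ in range(samples):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=3):
                times.append((time.monotonic() - start) * 1000.0)
        except OSError:
            pass  # a failed connect is itself a useful data point
    if not times:
        return {"host": host, "ok": 0}
    return {
        "host": host,
        "ok": len(times),
        "median_ms": statistics.median(times),
        "jitter_ms": max(times) - min(times),  # crude spread estimate
    }
```

An aggregator would then collect many such records over weeks, per ISP and per location, which addresses the "longer periods of time" point from the earlier post.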
(DIR) Post #A1r2qgeqN0ML3a8EnA by glaurungo@niu.moe
2020-12-04T10:26:24Z
0 likes, 0 repeats
@wolf480pl @ayo at some point the optimizer decides that a full scan is better than doing N index searches (especially if those index entries aren't adjacent/relatively close to each other).
Though, that point being 11 sounds pretty dumb.
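The trade-off can be illustrated with a toy cost model; the page-cost constants below are made up for illustration (real planners also model caching, index correlation, and CPU cost):

```python
def cheapest_plan(rows_matched: int,
                  table_pages: int = 1000,
                  seq_page_cost: float = 1.0,
                  random_page_cost: float = 4.0) -> str:
    """Toy version of the planner's choice: N random index probes
    vs. one sequential pass over the whole table. Random pages are
    costed higher because seeks are slower than streaming reads."""
    index_cost = rows_matched * random_page_cost  # one random page per probe
    scan_cost = table_pages * seq_page_cost       # read everything once
    return "index" if index_cost < scan_cost else "seq_scan"
```

With these illustrative numbers the plan flips at 250 matched rows; with a small table or pessimistic constants the cutoff can be far lower, which is one way a planner ends up flipping at something as low as 11.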
(DIR) Post #A1zVvWP0NXeKdK3oGG by glaurungo@niu.moe
2020-12-08T12:29:54Z
0 likes, 0 repeats
@wolf480pl glorious
(DIR) Post #A2249csWsmVnWxgfDM by glaurungo@niu.moe
2020-12-09T16:52:27Z
0 likes, 0 repeats
@ayo Well, technically, if they save those queries, you could predict that they're going to use them, and keep a prepared cached response ready for when they do.
Nothing faster than that. Technically.
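A minimal sketch of that idea, with `run_query` standing in for the expensive database call (a hypothetical stand-in, not any real API):

```python
class PrecomputedCache:
    """If clients save queries to run later, precompute the answers
    ahead of time (e.g. off-peak) and serve them instantly on request."""

    def __init__(self, run_query):
        self.run_query = run_query  # the expensive call (hypothetical)
        self.ready = {}             # query -> precomputed response

    def precompute(self, saved_queries):
        # Runs before the client comes back, so they never wait.
        for q in saved_queries:
            self.ready[q] = self.run_query(q)

    def answer(self, q):
        if q in self.ready:
            return self.ready[q]   # nothing executes at request time
        return self.run_query(q)   # fallback for unpredicted queries
```

The catch, of course, is staleness: a real version would need to invalidate or recompute entries when the underlying data changes.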
(DIR) Post #A2j5OZIqRgLeLgFQMy by glaurungo@niu.moe
2020-12-30T12:09:21Z
0 likes, 0 repeats
@wolf480pl what about letting it crash with a full stacktrace and an uninformative error?
That's the most common way of handling it I've seen.
But yeah, I'd just use whatever is provided by the framework I'm using. I wouldn't go out of my way to make it excellent.
Though 400 with a descriptive error is likely the superior way. And obligatorily so if it's actually an API endpoint and not just a site.
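A minimal framework-free sketch of the "400 with a descriptive error" approach; the endpoint shape and field names are hypothetical, chosen only to show the contrast with letting the parse error crash into a 500:

```python
import json

def handle_create_user(raw_body: str) -> tuple:
    """Returns (status_code, response_body). Instead of letting a bad
    payload blow up with a stacktrace, validate and tell the caller
    exactly what was wrong."""
    try:
        data = json.loads(raw_body)
    except json.JSONDecodeError as e:
        return 400, json.dumps({"error": f"body is not valid JSON: {e.msg}"})
    if "email" not in data:
        return 400, json.dumps({"error": "missing required field: email"})
    return 201, json.dumps({"created": data["email"]})
```

For an API endpoint the machine-readable body is what matters: the caller can branch on it, whereas a stacktrace page is only useful to whoever reads the server logs.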