Subj : Re: Windows vs Linux
To   : tenser
From : boraxman
Date : Tue Apr 26 2022 00:38:42

te> With /etc/passwd? Nope. That doesn't work. Because
te> the parser is built into a library. And even on
te> systems where I can hack the library, I might use
te> something like LDAP or NIS, or even shell scripts and
te> rsync or rdist to copy those to machines where I can't
te> hack the library for some reason. So no, that doesn't
te> work for /etc/passwd.

te> Treat '/' as an escape character. Always. That's it.

te> Or did you mean for delimited text formats generally?
te> In which case, don't CSV files support quoted strings?

Yes, and quoted strings suck, because they may themselves contain
quotes. The parser has to know whether "," is a delimiter or part of
the data. You may have ," and ", sequences. Quotes are not universally
used either. The parser I had to write consumed CSV data where strings
containing spaces might or might not be quoted, within the same file.
That's a fault of whatever implementation wrote that data rather than
of the CSV format itself, but such inconsistencies are more likely
with CSV.

te> If one is going to appeal to authority or personal
te> experience, it's best if one checks one's priors.
te>
te> I learned compilers from Al Aho, and I've written
te> parsers for full programming languages with context
te> sensitive grammars. Some of them are Internet
te> facing and used daily by millions of users. So I
te> think I speak with some authority when I say that
te> CSV is not significantly harder than simple
te> delimited lines of text, which are themselves
te> trivial to parse.
te>
te> However, neither is very extensible. Consider what
te> happens when one needs to add a new field. To go
te> back to the /etc/passwd example, when this last
te> happened with both Linux and BSD, they had to invent
te> a new file format that lived in a separate file next
te> to the legacy V7 format file, and they had to develop
te> specialized tools to keep these in sync.
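To make the quoting ambiguity concrete, here is a minimal Python sketch (the record and its field values are made up); it shows why a parser must track whether it is inside a quoted field before treating "," as a delimiter:

```python
import csv
import io

# A record where a quoted field contains both a comma and an
# escaped quote ("" inside a quoted field means a literal ").
line = 'alice,"said ""hi"", then left",42\n'

# Naively splitting on the delimiter tears the quoted field apart.
naive = line.strip().split(",")
# -> ['alice', '"said ""hi""', ' then left"', '42']

# A real CSV parser tracks quoting state and recovers the fields.
parsed = next(csv.reader(io.StringIO(line)))
# -> ['alice', 'said "hi", then left', '42']
```

The naive split yields four pieces instead of three, which is exactly the "is this comma a delimiter or data?" problem described above.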
te>
te> Delimited lines of text are great because they're
te> simple to use and easy to get going. They work well
te> in Unix pipelines because most filters were evolved
te> to work best with that kind of textual data.
te>
te> They're not so great because they generally don't
te> evolve gracefully: too much is implicit in the format
te> itself ("field 3 is always an integer and it's always
te> the user ID number"). There are no universally
te> agreed upon formats to represent the full range of
te> representable data expressible on modern machines.
te>
te> This is why schematized structured formats are useful,
te> though they are harder to get started with. However,
te> once you start using those, informally specified
te> things like Unix filters start to break down because
te> they don't understand the structured format.
te>
te> This naturally led to the rise of things like PowerShell,
te> which attempt to fit a much richer data model into the
te> filter paradigm. Things like nushell, or even things like
te> Michael Greenberg's work on formal shell specifications
te> and smoosh are more recent advances.

And that is one definite improvement that PowerShell brings. Over
time, I'm sure Windows will have the same composability; at the
moment it is perhaps not being used to its full extent, because
using Windows that way is something relatively new.

te> You missed the point. Microsoft invested heavily in the
te> developer experience for Windows, and developers wanted to
te> use Windows.

Of course they do, but it doesn't change the deficiencies.
Developers cannot solve all problem domains. They can't do it with
shrink-wrapped software, and although most software developed is
bespoke, such software is generally built as its own system. What
you don't see is developers leveraging existing tools and existing
capabilities, stringing them together to solve problems. We get
closed-box solutions, usually a web-based app.
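The point about implicit schemas in delimited lines can be sketched in Python (the passwd-style record and the added field are hypothetical): splitting on the delimiter is trivial, but every consumer hard-codes the field positions and count, which is exactly what breaks when a field is added.

```python
# A /etc/passwd-style record: seven colon-delimited fields whose
# meaning is implicit in their positions.
line = "alice:x:1000:1000:Alice Example:/home/alice:/bin/sh"

# Parsing is trivial...
name, pw, uid, gid, gecos, home, shell = line.split(":")
uid = int(uid)  # "field 3 is always an integer and it's always the UID"

# ...but the schema lives in every consumer, not in the data. A
# consumer that hard-codes seven fields breaks the moment one is added.
extended = line + ":2025-01-01"  # hypothetical eighth field
try:
    name, pw, uid, gid, gecos, home, shell = extended.split(":")
except ValueError as e:
    problem = str(e)  # unpacking fails: too many values for 7 names
```

Nothing in the file itself says what changed or how old consumers should cope, which is why the /etc/passwd extensions mentioned above ended up in a separate companion file.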
te> More pragmatic than what, exactly?
te>
te> The interesting thing about a research system is that
te> it is designed to solve problems that are interesting
te> in some place at some point in time. Unix is one of
te> those very rare systems indeed where the research
te> interests coincided with commercial interests in such
te> a way that it could _successfully_ make the jump from
te> research to commercial development.
te>
te> However, that doesn't mean that the system doesn't owe
te> its origins -- not to mention its major design
te> principles -- to the research context it was created
te> in. The point is that Unix wasn't designed as a pragmatic
te> solution to production data processing problems as much
te> as it evolved to answer interesting research questions.
te>
te> What's even more interesting is that every system since
te> has similarly had the benefit of that research. To bring
te> this back to the original point -- again -- you may prefer
te> Linux, but truly, there's very little in there that cannot
te> be implemented on just about any other base system.

I'm sure a better operating system could be born, but a commercial
operating system has, as its primary goal, the enrichment of the
company. Large tech companies tend to impose their own vision, and
consider their internal vision to be what the rest of the world
needs as a solution. In part this is driven by feedback, but
feedback can be misleading. Users may not need functionality X on
the desktop at home, but it may be important, or of value, to a
company wanting to provide computers which enable people to do
their work efficiently and correctly.

I call it as I see it. The current computing paradigm is broken and
error prone, and these are errors that I deal with professionally.
Transcription errors, wrong information on a specification, unclear
status of documents: these are errors which result in costly
rejects.
When data is poorly managed, when it is difficult to consolidate,
to query, to ratify, mistakes happen. The opacity between the
different tools provides points of failure. As I said, errors come
about from people incorrectly typing data, data that has already
been entered and verified. It has to be retyped because the tools
don't allow the machine to do the transcription.

Tools that can exchange data, tools which don't necessarily belong
to a single commercial suite, a means to exchange that data, to
access the workflows and business logic: these would solve these
problems. A system which provides functions that can be strung
together, that can pass data and generate documents, could do this.
Maybe Windows can, but if so, it's now doing it late in its
development, and it will take a culture shift.

--- Mystic BBS v1.12 A47 2021/12/24 (Linux/64)
 * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)