[HN Gopher] Getting back into C programming for CP/M
       ___________________________________________________________________
        
       Getting back into C programming for CP/M
        
       Author : AlexeyBrin
       Score  : 148 points
       Date   : 2024-08-18 11:24 UTC (11 hours ago)
        
 (HTM) web link (kevinboone.me)
 (TXT) w3m dump (kevinboone.me)
        
       | Rochus wrote:
        | Interesting; but why C, actually, and not PL/M?
        
         | katzinsky wrote:
          | Yeah, C does not work well on these odd 8-bit ISAs. Pascal,
          | BASIC, and PL/M (and Fortran?) seem to have been way more
          | common, and Pascal environments on these were really on the
          | edge of what the contemporary hardware could handle.
        
           | PaulHoule wrote:
            | My take is that it was the other way around. In its own strange way
           | C was portable to machines with unusual word sizes like the
           | DEC PDP-10 with 36 bit words. I used C on Z-80 on CP/M and on
           | the 6809 with Microware's OS-9.
           | 
           | In the 1980s there were books on FORTRAN, COBOL and PASCAL. I
           | know compilers for the first two existed for micros but I
            | never saw them; they were mainly on minicomputers and
            | mainframes, and I didn't touch them until I was using 32-bit
            | machines in college.
           | 
           | There were academics who saw the popularity of BASIC as a
           | crisis and unsuccessfully tried to push alternatives like
            | PASCAL and LOGO, the first of which was an unmitigated
            | disaster because ISO Pascal gave you only what you needed to
            | work leetcode problems; even BASIC was better for "systems
            | programming" because at least you had PEEK and POKE, though
            | neither language would let you hook interrupts.
           | 
            | Early PASCALs for micros were also based on the atrociously
            | slow UCSD Pascal. Towards the end of the 1980s there was the
            | excellent Turbo Pascal for the 8086, which did what Niklaus
            | Wirth wouldn't, and I thought it was better than C, but I
            | switched to C because it was portable to 32-bit machines.
           | 
            | I'd also contrast chips like the Z-80 and 6809, which had
            | enough registers and addressing modes to compile code for,
            | with others like the 6502, where you are likely to resort to
            | virtual machine techniques right away; see
           | 
           | https://en.wikipedia.org/wiki/SWEET16
           | 
            | I saw plenty of spammy books on microcomputers in the late
            | 1970s and early 1980s that seemed to copy press releases from
            | vendors, and many of these said a lot about PL/M being a big
            | deal, although I never saw a compiler or source code, or knew
            | anybody who coded in it.
        
             | pvg wrote:
             | _Towards the end of the 1980s there was the excellent Turbo
             | Pascal for the 8086_
             | 
             | TurboPascal was released at the tail end of 1983 targeting
             | CP/M and the Z80. It was hugely popular on the platform.
        
               | PaulHoule wrote:
               | Correct.
               | 
               | My own experience in Turbo Pascal started with (I think)
               | version 4 when I got an 80286 machine in 1987. In that
               | time frame Borland was coming out with a new version
               | every year that radically improved the language, it got
               | OO functionality in 5.5, inline assembly in 6, etc. I
                | remember replacing many of the stdlib functions such as
                | move and copy with ones that were twice as fast because
                | they used 16-bit instructions that were faster on the
                | 80286. With the IDE and interactive debugger it was one of
                | my favorite programming environments ever.
        
             | adrian_b wrote:
             | Microsoft had FORTRAN and COBOL compilers for CP/M. I have
             | used them on both Intel 8080 and Zilog Z80.
             | 
             | The MS FORTRAN compiler was decent enough. It could be used
             | to make programs that were much faster than those using the
             | Microsoft BASIC interpreter.
             | 
             | Even if you preferred to write some program in assembly, if
             | that program needed to do some numeric computations it was
             | convenient to use the MS Fortran run-time library, which
             | contained most of the Fortran implementation work, because
             | the Fortran compiler generated machine code which consisted
             | mostly of invocations of the functions from the run-time
             | library.
             | 
             | However, for that you had to reverse-engineer the library
             | first, because it was not documented by Microsoft.
             | Nevertheless, reverse-engineering CP/M applications was
             | very easy, because an experienced programmer could read a
              | hexadecimal dump almost as easily as the assembly language
             | source code. Microsoft used a few code obfuscation tricks,
             | but those could not be very effective in such small
             | programs.
        
           | flyinghamster wrote:
           | I don't ever recall seeing PL/M compilers advertised by
           | anyone back in the day. I have a feeling that the few that
           | existed were offered at "meet our sales guy at the golf
           | course" pricing.
        
         | guestbest wrote:
         | PL/M is a less transferable skill and a 'dead' language. I
          | think Gary Kildall promoted PL/M, but honestly C is the best for
         | portability and popularity followed by Forth and Pascal.
        
       | pjmlp wrote:
        | A very good example of how C wasn't as portable or as
        | high-performance back in the 1980's as many nowadays think it was.
        
         | tyingq wrote:
         | Not on 8 bit machines, no. Look at what Perl did with C in that
         | timeframe though, and it's true for a different subset of
         | machines.
         | 
         | Within Perl 1.0, part of the Configure script, you can see the
         | list of machines it could build on here:
         | https://github.com/kaworu/perl1/blob/ba165bbde4eef698ff9cc69...
          | attrlist="mc68000 sun gcos unix ibm gimpel interdata tss os mert pyr"
          | attrlist="$attrlist vax pdp11 i8086 z8000 u3b2 u3b5 u3b20 u3b200"
          | attrlist="$attrlist ns32000 ns16000 iAPX286 mc300 mc500 mc700 sparc"
        
           | pjmlp wrote:
            | Not even on 16-bit home machines, which is why any serious
            | game, or winning demoscene entry, was written in Assembly,
            | until we reached the days of the 486 and DOS extenders.
           | 
           | As a read through the Amiga and PC literature of the time
           | will show.
        
             | kelsey98765431 wrote:
              | The problem here is much more the Unix wars and a lack of
              | confidence in BSD under legal fire than a lack of ability.
              | The principal concern of Unix vendors in the early PC era
              | was to maintain their market share in the mini and
              | mainframe sectors rather than to grow into the consumer
              | market. This spurred a rewrite of the BSD fragments tied to
              | the legacy Unix codebase, which fully portablized C and the
              | downstream GCC projects, and ended up benefiting the weird
              | hobby OS Linux disproportionately. Had it not had to be
              | written from scratch, we might have ended up with a
              | wonderful 286-BSD rather than a 486-BSD, which at the time
              | was still not fully clean-room FOSS and unburdened.
              | 
              | This was a time when large customers of OS products were
              | trying to squeeze all the performance juice out of existing
              | systems instead of looking at new paradigms. We have things
              | like the full SAST and warning-free release of SunOS around
              | this time, where Sun was focused on getting a rock-stable
              | platform to optimize around rather than on producing
              | products for the emerging micro market. We can see the
              | concept of a portable Unix system and C library as early as
              | Xenix on the Apple Lisa in 1984. That's only three short
              | years after the IBM collaboration for PC-DOS, showing what
              | even a rookie, uncoordinated, and low-technical-skill team
              | such as Microsoft could manage (paraphrasing Dave Cutler,
              | chief NT kernel lead -- Zachary, G. Pascal (2014).
              | Showstopper!: The Breakneck Race to Create Windows NT and
              | the Next Generation at Microsoft. Open Road Media. ISBN
              | 978-1-4804-9484-8).
        
               | pjmlp wrote:
                | Xenix was my introduction to UNIX; I wouldn't claim it
                | would win any performance prize, especially when
                | considering graphics programming.
               | 
                | Also my first C book was "A Book on C", which had a
                | type-in listing for a RatC dialect, like many other books
                | of the early 1980's, which were nothing more than a plain
                | macro assembler without opcodes, for all practical
                | purposes.
               | 
               | Compiler optimizations in those 8 and 16 bit compilers
                | were what someone nowadays would do in an introduction to
               | compilers, as the bare minimum, like constant propagation
               | and peephole optimizations.
        
             | tyingq wrote:
             | A fair amount of non-game Amiga scene, including the OS,
             | was either BCPL or C.
        
               | pjmlp wrote:
               | Sure, when performance didn't matter.
               | 
               | Just like on MS-DOS side, I did plenty of stuff on Turbo
               | BASIC, Turbo Pascal, Turbo C (quickly replaced by Turbo
               | C++), and Clipper, until Windows 3.x and OS/2 came to be.
               | 
                | Small utilities, or business applications, without big
                | resource demands.
        
               | tyingq wrote:
               | Or, where portability did matter. That was still true
               | much later...web servers were often mostly C, then inline
               | ASM for the SSL parts.
        
         | dboreham wrote:
         | As I experienced that era, C wasn't really a practical language
          | choice on 8-bit systems. OK, yes, you could _get_ a C compiler,
          | but it would typically need overlays and hence be very slow.
         | Assembler was pretty much where it was at on that generation of
         | systems, or special-purpose languages such as BASIC and PL /M.
         | 
         | C worked ok on a pdp-11/45, but that had 256K of memory and 10s
         | of MB of fixed disk. That level of hardware didn't appear for
         | micro systems until the 68k generation, or I suppose IBM PC,
         | but I don't remember the PC being too important in C coding
         | circles until the 386, much later.
        
           | pjmlp wrote:
           | Yeah, that was indeed the case, while I did some C and C++
           | even on MS-DOS, it was Assembly, Turbo BASIC, Turbo Pascal
           | and Clipper where I spent most of my time.
           | 
           | Even during my early days coding for Windows 3.x, I was doing
           | Turbo Pascal for Windows, before eventually changing into
           | Turbo C++ for Windows, as writing binding units for Win16
           | APIs, beyond what Borland provided, was getting tiresome, and
           | both had OWL anyway.
        
           | zabzonk wrote:
            | I did a lot of C programming on an IBM XT - 8088, 10 MB hard
            | disk, WordStar and DeSmet C. All worked very well.
        
       | jart wrote:
       | > Many programmers, including myself, have gotten out of the
       | habit of doing this on modern systems like Linux, because a
       | malloc() call always succeeds, regardless how much memory is
       | available.
       | 
       | Not if you use setrusage().
        
         | fuhsnn wrote:
         | >setrusage()
         | 
         | Is it old version of setrlimit()? Couldn't locate it in any of
         | the man.*bsd.org.
        
           | jart wrote:
           | My bad, that's what I intended to say.
        
             | fuhsnn wrote:
              | I did find plenty of docs and books mentioning setrusage()
              | though, like a proper Mandela Effect.
        
         | jmclnx wrote:
         | Not me :)
         | 
          | Whatever I develop, I always make sure to test on NetBSD and
          | OpenBSD.
         | That keeps me honest and those systems will find issues that
         | Linux does not care about. I found many issues by testing on
         | those systems.
         | 
         | Also, ignoring malloc() returns is dangerous if you want to
         | port your application to a UNIX like AIX.
        
           | antirez wrote:
            | Ignoring failures is a bad idea, but in many applications
            | quitting on malloc() returning NULL is the most sensible
            | thing to do. Many, but not all, kinds of applications.
        
         | guenthert wrote:
         | > Not if you use setrusage().
         | 
         | Or if memory overcommit is disabled or an 'unreasonable' amount
         | of memory was requested. So, no, malloc() doesn't always
         | succeed.
        
       | ghuysmans wrote:
       | Nice article, thanks!
       | 
        | Minor nitpick: PUN is not a device in Windows 11 (I haven't
        | tested on previous versions):
        | 
        |     > echo hello>pun:
        |     > type pun
        |     hello
       | 
       | In the section about paging, are there actual systems working in
       | the megabyte range?
        
         | layer8 wrote:
         | PUN wasn't even in MS-DOS. However, later versions of CP/M used
         | AUX instead of PUN, which DOS adopted and still exists in
         | Windows.
        
         | anonymousiam wrote:
         | It's been a lot of years, but as I recall, the raw I/O devices
         | had one set of names, and the logical devices had another. So
         | things like STAT PUN:=PTP: (if I remembered the syntax
         | correctly) would set the logical "punch" device to be the
         | physical paper tape punch, which was the default. I may also be
         | confusing CP/M I/O redirection syntax (which only worked if
         | your BIOS supported it), with DEC RT-11 syntax. It has been
         | over 40 years since I have used either one.
        
       | nils-m-holm wrote:
       | Funny how most of the article reads (to me) "back in the days
       | things were done in the obvious way, while now everything is
       | weird". In other words I still program like in the 1980's. :)
       | 
       | CP/M programming is a lot of fun, even these days! I have a
       | growing collection of retro machines running CP/M, my latest
       | compiler has a CP/M backend, and I have even written a book about
       | the design of a CP/M compiler: http://t3x.org/t3x/0/book.html
        
         | devjab wrote:
          | There have been an incredible number of principles and
          | practices added to our profession, most of which are silly.
          | Like Clean Code, which is just outright terrible in terms of
          | causing CPU cache misses, as well as getting you into vtable
          | indirection for your class hierarchies. Most modern developers
          | wouldn't know what an L1 cache is, though, so they don't think
          | too much about the cost.
         | What is worse is that people like uncle Bob haven't actually
         | worked in programming for several decades. Yet these are the
         | people who teach modern programmers how they are supposed to
         | write code.
         | 
         | I get it though, if what you're selling is "best practices"
          | you're obviously going to overcomplicate things. You're likely
         | also going to be very successful in marketing it to a
         | profession where things are just... bad. I mean, in how many
         | other branches of engineering is it considered natural that
         | things just flat out fail as often as they do in IT? So it's
         | easy to sell "best practices". Of course after three decades of
          | peddling various principles and strategies and so on, our
          | business is in an even worse state than it was before.
         | 
         | In my country we've spent a literal metric fuck ton of money
         | trying to replace some of the COBOL systems powering a lot of
          | our most critical financial systems. From the core of our tax
          | agency to banking. So far no one has been capable of doing it,
         | despite various major contractors applying all sorts of
         | "modern" strategies and tools.
        
           | ImHereToVote wrote:
           | The issue is that there is a vast chasm between software
           | written by some competent guy and software written by a
           | development team.
        
             | steveBK123 wrote:
             | Yes and "software written by some competent dev" is a thing
             | that stops scaling after an org reaches 100s or 1000s of
             | devs.
             | 
             | Management then moves to a model of minimizing outlier
             | behavior to reduce risk of any one dev doing stupid things.
             | However this process tends to squeeze the "some competent
              | dev" types out, as they are outliers on the positive side
              | of the scale.
        
             | devjab wrote:
             | True, but maybe we should utilise principles which don't
             | suck. Things like onion architecture, SOLID, DRY and
             | similar don't appear to scale well considering software is
             | still a mess. Because not only can't your hardware find
             | your functions and data, your developers can't either.
             | 
             | It's a balancing act of course, but I think a major part of
             | the issue with "best practices" is that there are no best
              | practices for everything. Clean Code will work well for
              | some things. If you're iterating through a list of a
              | thousand objects it's one and a half times slower than a
              | flat structure. If you were changing 4 properties in every
              | element it might be 20 times less performant though. So
             | obviously this wouldn't be a good place to split your code
             | out into four different files in 3 different projects. On
             | the flip side something like the single responsibility
             | principle is completely solid for the most part.
             | 
             | Maybe if people like Uncle Bob didn't respond with "they
             | misunderstood the principle" when faced with criticism we
             | might have some useful ways to work with code in large
             | teams. I'd like to see someone do research which actually
             | proves that the "modern" ways work as intended. As far as
             | I'm aware, nobody has been able to prove that something
             | like Clean Code actually works. You can really say the same
             | thing for something like by the book SCRUM or any form of
              | strategy. It's all a load of pseudoscience until we have
             | had evidence that it actually makes the difference it
             | claims to do.
             | 
             | That being said. I don't think it's unreasonable to expect
             | that developers know how a computer works.
        
           | bobmcnamara wrote:
           | The issue here is all these caches. Back in my day we didn't
           | have caches and memory access time was deterministic - and
           | expensive! We kept things in our 4-8 registers and we were
           | happy with it. Programs larger than that weren't meant to be
           | fast!
        
             | NikkiA wrote:
              | In reality those caches are going to be relatively
              | meaningless except for short bursts of speed, because the
              | 100,000 API calls and user/kernel switches that Windows
              | makes, because of absurd abstractions, in the time slices
              | your program isn't running will destroy any cache coloring
              | you attempt to code for.
        
       | zabzonk wrote:
       | I did a shedload of programming in CP/M back in the 80s, and
       | frankly I'd rather do it in Z80 assembler (assuming we were
       | targeting Z80-based systems) than the rather poor compilers (not
       | just C compilers) that were available. Using a compiler/linker on
       | a floppy-based CP/M machine was quite a pain, as the compiler
       | took up a lot more space than an assembler, and was typically
       | much slower.
       | 
       | And I like writing assembler!
        
         | julian55 wrote:
         | Yes, I agree. I did write some C software for Z80 but mostly I
         | used assembler.
        
       | PaulHoule wrote:
       | Personally I see the use of a cross-compiler and other dev tools
        | on a bigger machine as even more retro than running them on an
       | 8-bit micro because it is what many software vendors did at the
       | dawn of the microcomputer age.
       | 
       | Also if you like the Z80 you should try
       | 
       | https://en.wikipedia.org/wiki/Zilog_eZ80
       | 
        | which is crazy fast, not to mention the only 8-bit architecture
       | that got extended to 24-bit addressing in a sane way with index
       | registers. (Sorry the 65816 sucks)
        
         | whartung wrote:
         | > because it is what many software vendors did at the dawn of
         | the microcomputer age.
         | 
         | They really didn't have any choice if they wanted to actually
         | accomplish something.
         | 
          | The 8-bit machines of the day ran CP/M on 1-2 MHz 8080s or
          | 2-4 MHz Z80s, with next to no memory and glacial disk drives
          | (with not a lot of capacity).
         | 
         | Go ahead and fire up a CP/M simulator that lets you change the
         | clock rate, and dial it down to heritage levels (and even then
          | it's not quite the same; the I/O is still too fast). Watch the
          | clock tick by as you load the editor, load the file, make your
          | changes, quit the editor, load the compiler, load the linker,
          | test the program, then go back to the editor. There is friction
          | here; the process just drrraaagggsss.
         | 
          | Turbo Pascal was usable for small programs. In-memory editor,
          | compiling to memory, running from memory. Ziinng! Start writing
         | things to disk, and you were back to square one. The best thing
         | Turbo did was eliminate the linking step (at the cost of having
         | to INCLUDE and recompile things every time).
         | 
         | It was a different time.
         | 
         | As someone who lived through that, we simply didn't know any
         | better. Each generation got incrementally faster. There were
         | few leaps in orders of magnitude.
         | 
         | But going back, whoo boy. Amazing anything got accomplished.
        
           | pjmlp wrote:
            | Yeah, all my learning came from books (bought or from the
            | local library), computer magazines, and occasional demoscene
            | meetings.
           | 
           | I was able to connect to BBS only during the summer
           | internship I did, at the end of my vocational school training
           | in computer programming.
           | 
           | When I afterwards arrived into the university, Gopher was
           | still a thing.
           | 
           | Lots of paper based programming, and wild guessing, there was
           | no Stack Overflow to help us out.
        
           | andyjohnson0 wrote:
            | Android development with an emulator on my 8-core, 64 GB
            | desktop system feels a bit like this nowadays.
        
           | PaulHoule wrote:
           | I got paid to develop some software for a teacher at my
            | school and wrote it in some kind of BASIC (GWBASIC?) for my
           | IBM PC AT clone, then found out she had a CP/M machine.
           | 
           | I had just read in Byte magazine that there was a good CP/M
           | emulator that ran several times faster than any real CP/M
           | system already in 1988 or so. So I used that software to run
           | a CP/M environment and port the code to some BASIC variant
           | there.
        
       | jmclnx wrote:
       | >The Aztec C compiler would have originally be distributed on
       | floppy disks, and is very small by moden standards.
       | 
       | If I remember correctly, Aztec C was from Mark Williams. It was
        | also the basis for the C compiler that came with Coherent OS.
       | 
        | But yes, things were far easier in the 80s, even on the minis I
        | worked on back then. These days development is just a series of
       | Meetings, Agile Points, Scrums with maybe 2 hours of real work
       | per week. Many people now tend to do their real work off-hours, a
       | sad situation.
       | 
       | But I am looking for 1 more piece of hardware, then I can set up
       | a DOS Machine to play with myself :)
       | 
       | >The Aztec compiler pre-dates ANSI C, and follows the archaic
       | Kernigan & Ritchie syntax
       | 
       | I still do not like ANSI C standards after all these years.
        
         | nils-m-holm wrote:
         | > If I remember correctly, Aztec C was from Mark Williams. It
         | was also the basis for the c Compiler that came with Coherent
         | OS.
         | 
          | That would have been "Mark Williams C", also marketed as "Let's
          | C" for MS-DOS.
        
           | flyinghamster wrote:
           | Yup. Let's C was the cut-down version of MWC86, with no
           | large-model support. This limited you to 64K code and 64K
           | data. I got a copy of it one Christmas, but never used it
           | much because of this limitation.
        
           | jmclnx wrote:
            | Correct, that was it, "Let's C".
        
         | reaperducer wrote:
         | _These days development is just a series of Meetings, Agile
         | Points, Scrums with maybe 2 hours of real work per week._
         | 
         | Think about early video game development at large companies:
         | One person (maybe two), six months. The company gave them room
         | to practice their art, and the result sold a million copies.
         | 
         | These days everyone wants to cosplay Big Tech and worship
         | abstraction layers, so you can't get all of the "stakeholders"
         | in the same meeting in six months.
        
         | karmakaze wrote:
          | That sounds familiar, so I looked it up[0]. I used the Mark
          | Williams C compiler on the Atari ST -- _eventually settling on
          | Megamax C
         | as it ran better on my small floppy-based machine._
         | 
         | Computing was a smaller world back then, the company was
         | founded by Robert Swartz (father of Aaron Swartz) and named
         | after his father William Mark Swartz.
         | 
         | [0] https://en.wikipedia.org/wiki/Mark_Williams_Company
        
         | icedchai wrote:
         | Back in the early 90's, before Linux took off, I ran Coherent.
         | It came with incredible documentation, and I still remember the
         | huge book with the shell on it.
         | 
         | And you're absolutely right about all the agile bull...
        
         | YZF wrote:
         | While we're ranting don't forget developers in the 80's didn't
         | sit in a noisy open space!
         | 
         | This was totally me ~15 years ago in a Scrum place with an open
         | floor plan, doing most of my work after everyone left in the
         | evening or on holidays because it was quiet and I could finally
         | get some stuff done. I wrote big pieces of the product by
         | myself.
         | 
         | My first C compiler was on a VAX. I did have some C compiler
         | for my ZX Spectrum at some much later point but I don't
         | remember doing much with it. Then a series of compilers for
         | PCs. One random memory is some sort of REPL C, maybe
         | Interactive-C or something? But pretty quickly it was Microsoft
         | and Borland.
         | 
         | EDIT: On a more serious note re: meetings and such. Part of the
         | difference is that working in much larger teams and projects
         | becomes less efficient and requires more communication. Mature
         | projects also require less of the builder thing and more of the
         | maintainer thing. Software lasts a long time and inevitably
         | maintenance becomes the work most people end up doing.
        
       | stevekemp wrote:
       | I put together a simple CP/M emulator here:
       | 
       | https://github.com/skx/cpmulator/
       | 
       | Alongside that there is a collection of CP/M binaries, including
       | the Aztec C compiler:
       | 
       | https://github.com/skx/cpm-dist/
       | 
       | So you can easily have a stab at compiling code. I added a simple
       | file-manager, in C, along with other sources, to give a useful
        | demo. (Of course I spend more time writing code in Z80 assembler
        | or Turbo Pascal than in C.)
       | 
        | The author has a followup post here for those interested:
       | 
       | * Getting back into C programming for CP/M -- part 2 *
       | https://kevinboone.me/cpm-c2.html
        
       | anonymousiam wrote:
       | Many of the complaints by the author are in the context of
       | differences between C today and C back then, but back when CP/M
       | was in common use, C compilers typically did not do much
       | optimization, and K&R C was all there was.
       | 
       | I did not use Aztec C until a few years after I switched from
       | CP/M to DOS, but I really liked it, and used it for several 68k
       | bare-metal projects. I did poke around with BDS C on CP/M, but
       | was immediately turned off by the lack of standard floating point
       | support. (It did offer an odd BCD float library.)
       | 
       | https://www.bdsoft.com/dist/bdsc-guide.pdf
        
       | mark-r wrote:
       | > There's no obvious way to create a lower-case filename on CP/M
       | 
       | That's because the FAT file system used by CP/M didn't allow
       | lower case letters, at all. In this case "no obvious way" ==
       | "impossible".
       | 
       | The stack problems mentioned were real. The stack size was set at
       | compile time, and there was no way to extend it. Plus the stack
        | was not just used by your software, but also by hardware
        | interrupts and their handlers.
        
         | blueflow wrote:
         | Not FAT, but something even more rudimentary.
        
           | mark-r wrote:
           | Sorry, my bad.
        
         | whartung wrote:
         | > That's because the FAT file system used by CP/M didn't allow
         | lower case letters, at all.
         | 
         | That's not true.
         | 
         | You could use lowercase file names.
         | 
         | Just fire up MS-BASIC, and save your file as "test.bas". You
         | now have a lower case file name.
         | 
         | The problem is that all of the CCP utilities implicitly
         | upshifted everything from the command line. So, with the stock
         | set, you were out of luck.
         | 
         | You can go back into BASIC and KILL the file, so all was not
         | lost.
         | 
         | But, the file system was perfectly capable of coping with lower
         | case file names, just nothing else was.
        
         | nils-m-holm wrote:
         | > That's because the FAT file system used by CP/M didn't allow
         | lower case letters, at all.
         | 
         | Sure it did. Just start Microsoft BASIC on CP/M, type a program
         | and save it as "hello". It will appear in the directory as
         | "hello.BAS". Of course the CCP, the console command processor,
         | will convert all file names to upper case, so you can neither
         | type nor copy nor erase the file, but still it exists. You can
         | even load it from MBASIC using LOAD.
         | 
         | You can have any characters you like in your CP/M file names.
         | Sometimes I ended up with file names consisting of all blanks.
         | I usually used a disk editor to deal with those, but there were
         | lots of more convenient tools for the job.
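         | 
         | A minimal sketch (modern C, not the original 8080 code) of the
         | upshifting the CCP applies when it parses a command-line
         | filename into the 11-byte FCB name field, which is why a
         | lowercase name written by another program becomes unreachable
         | from the command line:

```c
#include <ctype.h>
#include <string.h>

/* Sketch of CCP-style filename parsing: the 8.3 name is blank-padded
   into 11 bytes (no dot is stored) and every character is upshifted,
   so "test.bas" can never be matched, only "TEST.BAS". */
void fill_fcb_name(char fcb_name[11], const char *arg)
{
    const char *dot = strchr(arg, '.');
    size_t stem_len = dot ? (size_t)(dot - arg) : strlen(arg);
    size_t i;

    memset(fcb_name, ' ', 11);              /* blank-padded fields */
    for (i = 0; i < 8 && i < stem_len; i++)
        fcb_name[i] = (char)toupper((unsigned char)arg[i]);
    if (dot)
        for (i = 0; i < 3 && dot[1 + i] != '\0'; i++)
            fcb_name[8 + i] = (char)toupper((unsigned char)dot[1 + i]);
}
```

         | The directory entry itself stores whatever bytes a program
         | puts in the FCB, which is how lowercase (or all-blank) names
         | get onto disk in the first place.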
        
         | CodeWriter23 wrote:
         | CP/M was FCB/Extents not FAT
        
         | fortran77 wrote:
         | CP/M was not fat.
        
       | jacknews wrote:
       | I would be more interested to see how modern techniques could
       | improve the then-state of the art.
       | 
       | A lot of the modern stack is layers of abstraction, which
       | probably wouldn't be appropriate for such limited machines, but
       | maybe superoptimizers and so on, and just more modern algorithms,
       | etc, could help show what's really possible on these old
       | machines. Sort of retro demoscene, but for useful apps.
        
         | 082349872349872 wrote:
         | I'm pretty sure this:
         | https://ourworldindata.org/grapher/historical-cost-of-comput...
         | has had way more to do with changes in the way we program than
         | anything we've learned about modern techniques.
        
       | dpb001 wrote:
       | This brought back some memories. Back in the day I couldn't
       | afford the Aztec compiler (or it wouldn't fit onto my dual floppy
       | 48K Heathkit H89, can't remember which). I ended up buying Leor
       | Zolman's BDS C compiler. Just looked him up and it looks like
       | he's still around!
       | 
       | https://www.bdsoft.com
        
       | kelsey98765431 wrote:
       | RIP Gary, you had so much more to give.
       | 
       | My next whiskey will be in your honor my man.
       | 
       | See you space cowboy
        
       | kragen wrote:
       | minor 1000x error: 'CP/M systems rarely had more than 64Mb of
       | RAM' should read 'CP/M systems rarely had more than 64 kibibytes
       | of RAM' (because memory addresses were 16 bits and there wasn't
       | much demand for bank-switching in cp/m's heyday, though later
       | 8-bit machines like the nes and the msx did use bank-switching
       | extensively)
       | 
       | (disclaimer, i never programmed in c on cp/m, and although i used
       | to use cp/m daily, i haven't used it for about 35 years)
       | 
       | he's using aztec c, but anyone who's considering this needs to
       | know that aztec c isn't under a free-software license. bds c is a
       | properly open-source alternative which seemed to be more popular
       | at the time (though it wasn't open source then)
       | 
       | https://www.aztecmuseum.ca/docs/az80106d.txt says
       | 
       | > _This compiler is both the MS-DOS cross-compiler and the native
       | mode CP /M 80 Aztec CZ80 Version 1.06d (C) Copyright Manx
       | Software Systems, Inc. and also includes the earlier Aztec CZ80
       | Version 1.05 for native mode CP/M 80. I cannot provide you with a
       | legally licenced copy._
       | 
       | > _I herewith grant you a non-exclusive conditional licence to
       | use any and all of my work included with this compiler for
       | whatever use you deem fit, provided you do not take credit for my
       | work, and that you leave my copyright notices intact in all of
       | it._
       | 
       | > _I believe everything I have written to be correct. Regardless,
       | I, Bill Buckels..._
       | 
       | but https://en.wikipedia.org/wiki/Aztec_C explains that manx
       | software 'was started by Harry Suckow, with partners Thomas
       | Fenwick, and James Goodnow II, the two principal developers (...)
       | Suckow is still the copyright holder for Aztec C.'
       | 
       | so it's not just that the source code has been lost; the
       | licensing situation is basically 'don't ask, don't tell'
       | 
       | bds c comes with some integration with an open-source (?) cp/m
       | text editor whose name i forget, so you can quickly jump to
       | compiler errors even though you don't have enough ram to have
       | both the compiler and the editor in memory at once. other ides
       | for cp/m such as turbo pascal and the f83 forth system do manage
       | this. f83 also has multithreading, virtual memory, and 'go to
       | definition' but it's even more untyped than k&r c
       | 
       | bds c is not quite a subset of k&r c, and i doubt boone's claim
       | that aztec c is a strict subset of k&r c as implemented by gcc
       | 
       | sdcc is another free-software compiler that can generate z80 code
       | https://sdcc.sourceforge.net/doc/sdccman.pdf#subsection.3.3....
       | but it can't run on a z80 itself; it's purely a cross-compiler
       | 
       | a thing that might not be apparent if you're using a modernized
       | system is how constraining floppy disks are. the data transfer
       | rate was about 2 kilobytes per second, the drive was obtrusively
       | loud, and the total disk capacity was typically 90 kilobytes (up
       | to over a megabyte for some 8-inchers). this means that if a
       | person needed data from the disk, such as wordstar's printing
       | overlay, you had to request it and then wait for the disk to find
       | it. so it wasn't a good idea to do this for no user-apparent
       | reason
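       | 
       | for a sense of scale, a quick helper using the figures above (a
       | sketch; the rate is the approximate ~2 kilobytes per second
       | mentioned, not a measured value):

```c
/* seconds to transfer a given number of kilobytes at the ~2 KB/s
   floppy rate cited above; reading a whole 90 KB disk end to end
   works out to about 45 seconds */
double floppy_seconds(double kilobytes)
{
    const double rate_kb_per_s = 2.0;   /* approximate transfer rate */
    return kilobytes / rate_kb_per_s;
}
```

       | so even a modest overlay load is a multi-second, audible event,
       | which is why you avoid touching the disk without a user-visible
       | reason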
       | 
       | with respect to
       | 
       |     int elems[5][300];
       |     ...
       |     int i, j;
       |     for (i = 0; i < m; i++)
       |       {
       |       for (j = 0; j < n; j++)
       |         {
       |         int elem = elems[i][j];
       |         ... process the value ...
       |         }
       |       }
       | 
       | if i wanted efficiency on a compiler that didn't do the
       | strength-reduction for me, i would write it as
       | 
       |     int elems[5][300];
       |     ...
       |     int i, *p, *end, elem;
       |     for (i = 0; i < m; i++) {
       |       end = elems[i+1];
       |       for (p = elems[i]; p != end; p++) {
       |         elem = *p;
       |         ... process the value ...
       |       }
       |     }
       | 
       | this avoids any multiplications in the inner loop while obscuring
       | the structure of the program less than boone's version
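       | 
       | a self-contained sketch of both loop shapes (the placeholder
       | body is replaced by a running sum, and the pointer version uses
       | `elems[i] + n` rather than `elems[i+1]` so it also stays
       | correct when n is less than the full 300-element row width):

```c
int elems[5][300];

/* indexed version: each elems[i][j] access implies a row-width
   multiply unless the compiler strength-reduces it */
long sum_indexed(int m, int n)
{
    long total = 0;
    int i, j;
    for (i = 0; i < m; i++)
        for (j = 0; j < n; j++)
            total += elems[i][j];
    return total;
}

/* pointer version: no multiplication in the inner loop, just a
   pointer increment and compare */
long sum_pointer(int m, int n)
{
    long total = 0;
    int i, *p, *end;
    for (i = 0; i < m; i++) {
        end = elems[i] + n;        /* one past the last wanted element */
        for (p = elems[i]; p != end; p++)
            total += *p;
    }
    return total;
}
```

       | both visit the same elements in the same order, so the
       | transformation is safe whenever the bounds are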
       | 
       | cp/m machines are interesting to me as being a good approximation
       | of the weakest computers on which self-hosted development is
       | tolerable. as boone points out, you don't have valgrind, you
       | don't have type-checking for subroutine arguments (in k&r c; you
       | do in pascal), the cpu is slow, the fcb interface is bletcherous,
       | and, as i said, floppy disks are very limited; but the machine is
       | big enough and fast enough to support high-level languages, a
       | filesystem, and full-screen tuis like wordstar, supercalc, turbo
       | pascal, the ucsd p-system, etc.
       | 
       | (second disclaimer: i say 'tolerable' but i also wrote a c
       | program in ed on my cellphone last night; your liver may vary)
       | 
       | on the other hand, if you want to develop on a (logically) small
       | computer, there are many interesting logically small computers
       | available today, including the popular and easy-to-use
       | atmega328p; the astounding rp2350; the popular and astonishing
       | arm stm32f103c8t6 (and its improved chinese clones such as the
       | gd32f103); the ultra-low-power ambiq apollo3; the 1.5¢
       | cy8c4045fni-ds400t, a 48-megahertz arm with 32 kibibytes of flash
       | and 4 kibibytes of sram; and the tiny and simple 1.8¢
       | pic12f-like ny8a051h. the avr and arm instruction sets are much
       | nicer than the z80 (though the ny8a051h isn't), and the hardware
       | is vastly cheaper, lower power, physically smaller, and faster.
       | and flash memory is also vastly cheaper, lower power, physically
       | smaller, and faster than a floppy disk
        
       | nj5rq wrote:
       | It's the first time I've ever seen function parameters declared
       | like this:
       | 
       |     int my_function (a, b)
       |     int a; char *b;
       |     {
       |     ... body of function ...
       |     }
       | 
       | What do you even call this?
        
         | omerhj wrote:
         | K&R C
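         | 
         | A minimal side-by-side sketch (the function body is made up
         | for illustration): in the old-style form, only the parameter
         | names appear in the list, their types are declared between
         | the list and the body, and call sites are not type-checked;
         | the ANSI C89 prototype form does both at once and is checked.

```c
/* old-style ("K&R") definition, as in the comment above; still
   accepted by most compilers before C23, usually with a warning */
int my_function(a, b)
int a;
char *b;
{
    return a + b[0];    /* placeholder body for illustration */
}

/* the equivalent ANSI C (C89) prototype form */
int my_function_ansi(int a, char *b)
{
    return a + b[0];
}
```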
        
       | opless wrote:
       | > CP/M systems rarely had more than 64Mb of RAM
       | 
       | True. But I think you meant KB.
       | 
       | Back in the days when CP/M was king, even 20MB Winchester hard
       | drives were rare.
        
       ___________________________________________________________________
       (page generated 2024-08-18 23:00 UTC)