[HN Gopher] Why and how COBOL is still used
___________________________________________________________________
Why and how COBOL is still used
Author : rbanffy
Score : 106 points
Date : 2021-11-14 23:44 UTC (23 hours ago)
(HTM) web link (medium.com)
(TXT) w3m dump (medium.com)
| asguy wrote:
| > Any video (sic) processing like encoding WAV-files to MP3,
| writing code for microprocessors like the STM32 or even writing
| an entire operating system with drivers is nothing that COBOL
| would ever be able to achieve.
|
| ... hold my beer
| giardini wrote:
| Of course not - that's what IBM Basic Assembly Language is for!
|
| Having the right tool for the job is always a joy.
| kjs3 wrote:
| Yes. Cobol is not C/C++. Nor does it try to be.
| bitwize wrote:
| BLIS/COBOL has entered the chat.
| js8 wrote:
| I work on mainframe, specifically on a tool for tracing large
| applications running on z/OS systems. I don't completely agree
| with the conclusion.
|
| It's true that the MF tech stack (z/OS, COBOL, CICS, DB2, ..)
| has many warts and is not fashionable. But I think you can write
| applications for it in the modern way if you want to; I think the
| backward compatibility allowed IBM to focus on continually
| improving the stack rather than chasing the latest fashion. As
| someone quipped, these little COBOL programs communicating in
| CICS MRO do resemble microservices or lambdas, and JCL/RDO is
| just infrastructure as code. So many of the "modern" concepts
| have been there for decades now, and they let you do transactions
| very reliably.
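|
| To give a flavour of what I mean (program and field names made
| up, just a rough sketch): one CICS program calling another looks
| a lot like one service calling another with a small request and
| response payload, e.g.
|
|            EXEC CICS LINK
|                 PROGRAM('ACCTUPD')
|                 COMMAREA(WS-REQUEST)
|                 LENGTH(LENGTH OF WS-REQUEST)
|            END-EXEC
|
| CICS routes the call (possibly to a program in another region
| over MRO), the called program fills in the COMMAREA and returns,
| and the caller carries on - not far from an RPC between two
| small services.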
|
| But I want to point out that, interestingly, what gives us a
| real headache are the modern applications that are NOT COBOL.
| They tend to call databases with dynamic SQL (basically the plan
| is created during execution), and this makes undesired
| application changes quite difficult to trace. In old-fashioned
| COBOL applications, you compile your query plans with your
| application, and that makes the performance more predictable.
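|
| Roughly (table and host variable names invented for the
| example), a static statement in a COBOL program is bound into a
| plan/package ahead of time:
|
|            EXEC SQL
|                SELECT BALANCE
|                  INTO :WS-BALANCE
|                  FROM ACCOUNTS
|                 WHERE ACCT_ID = :WS-ACCT-ID
|            END-EXEC
|
| whereas a dynamic statement is prepared while the program runs,
| so the access path only exists during execution:
|
|            MOVE 'UPDATE ACCOUNTS SET FLAG = ? WHERE ACCT_ID = ?'
|              TO WS-STMT-TEXT
|            EXEC SQL PREPARE S1 FROM :WS-STMT-TEXT END-EXEC
|            EXEC SQL EXECUTE S1 USING :WS-FLAG, :WS-ACCT-ID
|            END-EXEC
|
| With the first form the optimizer's decisions are recorded at
| BIND time and our tooling can see them; with the second they
| only show up while the statement executes.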
|
| So I wouldn't be surprised if traceability is a big thing that
| COBOL and MFs still have going for them. There seems to be more
| middleware tooling for that, and things have been standard for
| many years. Unix tools seem to be more fragmented, although I
| have no doubt there are a lot of good commercial tools. In the
| cloud, everything seems to be even more tied to a single vendor
| than mainframes - I am not even sure if it's possible to run a
| 3rd party utility on your deployment. It looks like if you really
| want traceability you have to do it yourself.
|
| Overall, I think MF stack is very decent (although not so shiny
| on the outside) and I wish IBM would promote it more.
| klelatti wrote:
| Presumably cost is the biggest issue preventing MF adoption for
| new users - hasn't the hardware and software been IBM's cash
| cow for a number of years?
| js8 wrote:
| That is true. IBM literally outlived its competitors, though.
| It's somewhat sad to see this technology fading away due to
| greed.
|
| I wonder if the same is going to happen with newer companies,
| with the software reinvented all over again. If copyright lasted
| only 20 years (like patents), it would be possible to legally
| run the first version of z/OS today.
| klelatti wrote:
| I have to admit that I don't fully understand why COBOL is so
| difficult to replace. If the systems are so vital surely there is
| a detailed specification and writing in an easier to maintain
| language would repay the effort?
|
| Is it just that there is so much COBOL and it's not worth the
| effort?
|
| Edit : thanks for all the great replies - HN at its best!
| wongarsu wrote:
| A lot of knowledge is probably institutional knowledge,
| especially about how the different systems interact (over the
| decades everything starts relying on implementation details and
| bugs of everything else). And the people who hold that
| institutional knowledge either left at some point in the last
| 60 years or have no interest in undermining their job security
| by telling you.
| jsmith45 wrote:
| Realistically, if there were detailed specifications, they were
| hard copy print-outs from the 80s that are probably no longer
| readable, and undoubtedly were thrown away in the intervening
| decades.
|
| Many of these old systems rarely require changes in most
| modules. (As in, of the hundreds of thousands of lines in the
| system, many subsections just never get touched).
|
| This means that for the parts that never get touched, there is a
| negative return on investment for rewriting them, so there has
| to be a correspondingly larger positive return on investment in
| the higher churn parts to make a rewrite worthwhile. Most of
| the times when this is studied, the return on investment from
| the rewrite is either tiny or net negative from this effect.
|
| Ah, you say, then just rewrite only the parts that tend to
| change a lot, and leave the other stuff as COBOL. That sounds
| good, right? Except that generally the cross-language interop
| code needed becomes an absolute nightmare. I've dealt with this
| in a near-ideal environment, and it was still a royal pain.
|
| This near-ideal environment was: COBOL code restricted to just
| a subset used by the company, which was being translated to C
| code by the company's own compiler, which allowed for a
| C-compatible ABI to be defined, allowing for surprisingly simple
| and direct C-to-COBOL interop. Yet even then the code was
| still a royal pain, just due to how COBOL data structures
| differ so much from the idiomatic data structures for C or C++
| code. My code was actually C# code interacting via C++/CLI with
| some C++ code that interacted with the COBOL. But the
| interop code was still some of the worst code to touch.
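|
| To give an idea of why the data structures clash so badly
| (field names invented, just a sketch), a fairly tame COBOL
| record looks like:
|
|        01  CUST-REC.
|            05  CUST-ID        PIC 9(8).
|            05  CUST-NAME      PIC X(30).
|            05  CUST-BALANCE   PIC S9(7)V99 COMP-3.
|            05  CUST-HISTORY   OCCURS 12 TIMES.
|                10  HIST-AMT   PIC S9(5)V99 COMP-3.
|
| Fixed-length blank-padded EBCDIC text, zoned and packed
| decimals with an implied decimal point, no null terminators
| anywhere. On the C side that arrives as a flat byte buffer you
| decode field by field, and every idiomatic C or C# type you
| actually want to use needs a conversion both ways.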
| tablespoon wrote:
| >> I have to admit that I don't fully understand why COBOL is
| so difficult to replace. If the systems are so vital surely
| there is a detailed specification and writing in an easier to
| maintain language would repay the effort?
|
| > Realistically If there were detailed specifications, they
| were hard copy print-outs from the 80s that are probably no
| longer readable, and undoubtedly were thrown away in the
| intervening decades.
|
| Or not kept up to date. My employer has some mainframe
| systems, and when they were built they had technical writers
| and created pretty good documentation, organized in binders.
|
| For the last two decades, at least, these systems have been
| "legacy" and the company stopped employing technical writers.
| During that time, they were still pretty critical, but I'm
| pretty sure no one kept that documentation up to date with
| whatever changes were made.
|
| Also, about a year before the pandemic, we had building
| reorganization, and a lot of those binders were thrown out
| during the move. They were kept on several large, neglected
| shared bookshelves, and I think anything anyone didn't
| specifically claim was discarded by facilities.
|
| The thing with legacy systems is they can be vital, but they
| also can be neglected. The gamble most businesses seem to
| take is: "as long as it works, starve it and hope it doesn't
| blow up on my watch."
| klelatti wrote:
| This is a key point. At a more organisational level, many
| big firms saw these systems as just a cost to be minimised,
| not as something central to their business to be invested
| in. Often maintenance would be outsourced. So now the
| documentation and expertise just isn't there.
| rbanffy wrote:
| > writing in an easier to maintain language would repay the
| effort
|
| COBOL is very easy to learn, but, as was pointed out in [1],
| it's not only COBOL, but COBOL and CICS. CICS is an extremely
| optimized transaction manager that's been continuously improved
| and has been coevolving with IBM hardware since the 60's.
|
| There are also the workflows around the very non-Unixy nature
| of z/OS. IBM has been doing an excellent job of introducing good
| operations practices and adding tools and functionality to
| facilitate that, but it's a long process.
|
| Also, remember that these machines run Linux incredibly well.
| They are unbelievably fast.
|
| 1- https://news.ycombinator.com/item?id=29229525
| MichaelRazum wrote:
| Basically it's very difficult, because no one, truly no one,
| understands the whole business logic implemented in COBOL. Take
| life insurance. You have contracts running for decades. You
| just don't want to break something. There are no tests and so
| on. If you could start from scratch you would be faster. So
| it's just very, very expensive, with a lot of downside risk and
| little upside.
| chasil wrote:
| Even if you understand your business logic with extreme
| clarity, if you don't know how the math is implemented, then
| your migration will fail.
|
| Too many people learn this specific COBOL fact the hard way.
|
| https://medium.com/the-technical-archaeologist/is-cobol-hold...
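|
| A sketch of the usual trap (numbers and names made up): COBOL
| arithmetic is fixed-point decimal with explicit rounding and
| truncation rules, e.g.
|
|        01  WS-PRINCIPAL  PIC S9(9)V99  COMP-3 VALUE 100000.00.
|        01  WS-RATE       PIC SV9(5)    COMP-3 VALUE 0.07250.
|        01  WS-INTEREST   PIC S9(9)V99  COMP-3.
|       *    ... later, in the PROCEDURE DIVISION:
|            COMPUTE WS-INTEREST ROUNDED =
|                WS-PRINCIPAL * WS-RATE / 12
|
| Re-implement that with binary floating point in the new
| language and the pennies drift; spread that over millions of
| accounts and the reconciliation never balances.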
| worewood wrote:
| IMO one of the reasons is the tight integration with other IBM
| products and the operating system. In short, you can do
| distributed, networked transactions seamlessly with COBOL,
| involving files, databases and messaging all at once, without
| the programmer even knowing what distributed transactions are
| or how to implement them "manually" - few do.
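|
| A rough sketch of what that looks like in practice (names
| invented): the program just updates DB2 and writes a record to
| a file, and the transaction manager makes the pieces commit or
| back out together (assuming the file is defined as recoverable):
|
|            EXEC SQL
|                UPDATE ACCOUNTS
|                   SET BALANCE = BALANCE - :WS-AMOUNT
|                 WHERE ACCT_ID = :WS-ACCT-ID
|            END-EXEC
|            EXEC CICS WRITE
|                 FILE('TXNLOG')
|                 FROM(WS-LOG-REC)
|                 RIDFLD(WS-LOG-KEY)
|            END-EXEC
|            EXEC CICS SYNCPOINT END-EXEC
|
| If anything fails before the SYNCPOINT (or end of task), both
| the database update and the file write are backed out. The
| programmer never writes two-phase-commit code; the stack does
| it.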
| klelatti wrote:
| So it's not really COBOL lock in - it's IBM lock in?
| ihumanable wrote:
| I have retired some COBOL in my life. I can't say that my
| experience is universal, but here's what happened during my
| time consulting for a Fortune 500 company sunsetting COBOL into
| Java (1.4).
|
| > If the systems are so vital surely there is a detailed
| specification
|
| This is hilarious, and wrong. There probably is a specification
| somewhere, but there is drift between the implementation and
| the specification. When that 4am bug popped up and stopped
| processing a billion dollars' worth of transactions because of a
| weird corner case, someone fixed it to get the system running
| again. Did they update the master specification? Maybe. Half of
| the job is figuring out how the system really works, all the
| bugs, all the quirks, all the things that other systems have
| come to expect.
|
| Sunsetting is often a bug-for-bug type of conversion, because
| the output of COBOL programs is similar to database tables.
| Other programs are built on top of this data and that consumer
| code introduces a bunch of expectations on how the output of
| your replacement must behave to keep everything working. The
| number of consumer programs that have some expectation on the
| data can range from dozens to thousands.
|
| So you can either read and understand and fix all the consumers
| as you make your replacement behave to spec instead of behaving
| identically, or you do what most sunset teams do and you make
| it produce identical output.
|
| This now means that you are reading the target code base and
| understanding hundreds of thousands of lines of COBOL, you are
| unearthing decades of extensions, bug fixes, hacks, and
| mistakes.
|
| Your job then is to make sure the replacement behaves exactly
| the same way over decades of input. This is the best test, of
| course: learn JCL and beg the mainframe operators to let you
| run a massive job that will process historical records with the
| target and replacement programs and compare the results.
|
| Your days begin to look like the following. Get in, get the
| report from last night's run, which tells you all the
| discrepancies. For each one you can run your code and the COBOL
| locally with some work. Formulate a theory for why some figure
| is wrong, fix it, move on to the next. Do that for 8 hours. At
| the end of the day kick off your massive JCL and do it all
| again tomorrow.
|
| Just do that every day for a few weeks or months or however long
| it takes until your replacement program behaves the same as the
| COBOL program and then you can swap them without breaking 1000
| other applications.
|
| Then you get the long tail of bug reports. The zipjack flim-
| florp report isn't working. So now you dig into that: the flim-
| florps are processed by the COBOL you replaced, so maybe the
| zipjack system had a requirement you didn't realize. Dig dig
| dig, days go by, your will to live eroding, but finally you
| find it, deep in the CVS history: bug-1376892. It looks like the
| target program accidentally added an extra hour to the business
| day in Seoul on leap years because the time zone data is wonky,
| and they have special code expecting this wonkiness and
| correcting for it on their side, because the mainframe
| programmers in 1994 decided it was too much work to fix on the
| mainframe side.
|
| Our program uses a different timezone database and it knows how
| to properly calculate this, so just add a data transform at the
| end to detect this scenario and fudge the numbers back to what
| the zipjack (and who knows how many other systems) expects.
|
| The COBOL systems are rarely used directly, they are batch
| processors taking in data and pumping out data. Applications
| are then written on top of their output and bind the behavior
| of the COBOL system with all their expectations. And this is
| where the complexity lies: the specification for how the system
| works is possibly written down somewhere, but the actual
| specification is in the COBOL code and the hundreds or
| thousands of consumer programs and all their expectations. To
| do the job well you have to write something that replaces the
| actual real-world specification encoded in all that other
| software, not the one jotted down in wordstar a few decades
| ago.
| hydroxideOH- wrote:
| This is hilarious and accurate, at least from my impression,
| though thankfully not experience, of talking with people who
| worked with a COBOL codebase at a large insurance company. At
| that company, they said 10 years ago that they'd replace the
| COBOL in 10 years, and they are still a decade away from
| replacing it.
| fma wrote:
| I've rewritten an old application that was itself a rewrite of
| a mainframe application (probably COBOL, I guess). There were
| quirks that we still had to carry over. Luckily there were some
| old hands who could sometimes tell you why, so we didn't always
| have to dig into the CVS history to find bug-1376892.
|
| For testing you'd run a few weeks of data through and make
| sure the outputs match exactly. In prod we'd run them in
| parallel for a bit for each report before decommissioning it.
|
| There'd always be something like: at the end of the 3rd quarter,
| if the last day is a Friday, some other junk happened and now
| your data looks different.
|
| After a few of these we got the hang of which things were
| likely to break and were able to have certain parts driven by
| configuration; many bugs could then be fixed by updating a
| config.
| grvdrm wrote:
| >Sunsetting is often a bug-for-bug type of conversion,
| because the output of COBOL programs are similar to database
| tables. Other programs are built on top of this data and that
| consumer code introduces a bunch of expectations on how the
| output of your replacement must behave to keep everything
| working. The number of consumer programs that have some
| expectation on the data can range from dozens to thousands.
|
| > So you can either read and understand and fix all the
| consumers as you make your replacement behave to spec instead
| of behaving identically, or you do what most sunset teams do
| and you make it produce identical output.
|
| So on point. And this problem exists not just in legacy
| systems built in COBOL, but also in much newer systems built
| with SQL in my line of work.
|
| The horror show I regularly encounter is a SQL function used
| in a production system that requires 50+ variables as input.
| 50+ fields as output. Any change in the order of the fields
| screws everything up. Makes me crazy.
| klelatti wrote:
| Thanks for the great reply. I had the pleasure of spending a
| year replacing an APL program on an IBM mainframe many years
| ago, so I understand a lot of the points you're making.
|
| I think that the missing element in my initial analysis is
| that many firms don't have much in-house expertise left on
| these systems. They've cut to the bone or outsourced.
|
| And for management, replacing is a huge risk for limited upside
| - even if the economics make sense.
| AnimalMuppet wrote:
| > If the systems are so vital surely there is a detailed
| specification and writing in an easier to maintain language
| would repay the effort?
|
| As others have said, no, there's probably not a complete
| specification. But even if there is, it's not nearly complete
| enough.
|
| That code may have 50 years of bug fixes and corner case fixes
| in it. How many times was the spec changed to reflect those
| fixes? Probably not always. If you rewrite it from the spec,
| how many of those issues are you going to re-introduce?
|
| And, how many _new_ bugs are you going to introduce? The old
| code has 50 years of "getting the bugs out". It may be hard to
| maintain, but for what it does, it probably does it pretty
| solidly. Will the new code be as solid?
|
| Re-writing isn't as easy as it looks...
| tkahnoski wrote:
| I had a brief exposure to a COBOL application and started to
| understand some of the technical hurdles that make this hard
| and largely lead to 'big system replacement' type projects
| which are extremely high-risk.
|
| If you've ever worked on a VisualBasic application, you'll have
| some inkling of what COBOL is like.
|
| One of the core things I saw with COBOL was strong coupling of
| application and data persistence. Further, where modern apps
| take a stateless approach for, say, a single API call, many
| COBOL applications predate this concept. So not only are
| application and persistence coupled, the applications are VERY
| stateful.
|
| This makes replacing a 'piece' of the application extremely
| difficult as there is a lot of shared state to tease out of the
| system. Nor can you start building a second system that owns
| some piece of the data model, as COBOL's persistence models
| aren't built for the shared write activity that a normal DB
| facilitates. You are still forced to write _through_ the COBOL
| data model.
|
| Clearly not impossible, just difficult.
| hindsightbias wrote:
| Here's the original spec in Interleaf v5 format and a few
| thousand pages of notes and emails. We might have a 25 year old
| PC we can boot to print that.
|
| That said, all that code has been running for 3 or 4 decades
| and anything that gets ported will have a shelf life of < 10
| years.
| nzach wrote:
| >If the systems are so vital surely there is a detailed
| specification
|
| In my experience those things don't always go together.
|
| I've seen "mission critical" software with basically zero
| formal specification.
|
| Maybe this also happens with COBOL?
| wongarsu wrote:
| I imagine this happened _especially_ in COBOL, a language
| designed to be easy to learn so that business people can
| write their own subroutines instead of solely relying on
| programmers.
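|
| The syntax really does read almost like the spec it was meant
| to replace (made-up example):
|
|            MULTIPLY HOURS-WORKED BY HOURLY-RATE
|                GIVING GROSS-PAY ROUNDED
|            SUBTRACT TAX-WITHHELD FROM GROSS-PAY
|                GIVING NET-PAY
|
| Whether business people ever actually maintained it themselves
| is another question, but you can see the intent.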
| rbanffy wrote:
| It was the same idea as DSLs. If you bring the language
| very close to the business domain, the program and the spec
| are very close to one another.
|
| Didn't quite happen in COBOL's case.
| mywittyname wrote:
| I bet there were numerous failed attempts at modernization on
| these platforms over the years, and after wasting enough
| money/resources on them, management made a blanket decision of
| No More.
| ahofmann wrote:
| I think you greatly overestimate how well software is
| documented. Just one example: When the Y2K problem was
| discussed in the late 90s, one of the big problems was that
| almost nobody understood the decades-old code base. They hired
| experts, of course, to make sure there were no difficulties
| when they switched to the year 2000. They made hair-raising
| discoveries in some code trees, such as a lot of the variables
| having completely useless names, like "Mickey Mouse".
| SteveMoody73 wrote:
| In the late 90s I was involved in replacing software for a
| company's financial system that was written in BASIC, of all
| languages. It was complex and had almost no documentation. All
| knowledge of the system was in the head of the programmer of
| that system, and he wasn't happy that his code was being
| replaced, so he was reluctant to help.
|
| The little documentation that did exist was hopelessly out of
| date. A lot of business was done on that software.
| jazzyjackson wrote:
| hah, sounds like a memory palace trick, I think I've heard a
| mathemagician use mickey mouse as a kind of variable name
| when doing mental math out loud.
| rdtsc wrote:
| In many cases these systems may process millions or billions of
| dollars of transactions a day. There may be a specification or
| there may not be, and there are probably a dozen corner cases
| that are not specified. Nobody wants to be the one who decided
| to replace it with a shiny kube cluster running node.js or
| something and then have it all go up in smoke, losing money,
| customers, and reputation.
| 0xcoffee wrote:
| Is there any higher-level language that compiles down to COBOL?
| Seems like there would be a market for such a thing.
| WesleyJohnson wrote:
| The company I work for has a massive COBOL legacy codebase which
| still handles the lion's share of our day-to-day operations. In
| the several years that I've been here, we've started migrating
| some functionality (reporting, etc) to a web-based application,
| but there are very real tradeoffs when you start looking to
| migrate interactive processes away from COBOL into a web
| application. Keyboard navigation and raw speed are two things
| that come to mind that we simply can't match.
|
| Curious if anyone has been tasked with replacing COBOL
| applications and what you eventually used to do so? Being of the
| web-programming persuasion, I was thinking Electron or React
| Native for Windows to build something where I could have more
| control over the navigation experience and then focus heavily on
| keyboard navigation in the app.
|
| The business likes more interactivity, more visual UIs, but
| sorely misses what I mentioned above about COBOL (or
| terminal-style) applications in general. Curious where a happy
| medium might land
| in terms of language/framework.
| a2tech wrote:
| You can absolutely make a web interface that is snappier (not
| quite like a terminal application). You need to spend a lot of
| time with your layouts and jettison a lot of your web
| assumptions (like using javascript for a lot of things,
| spending a lot of time on tab progression, and forget about
| responsive designs). At my current job, when I started, we
| maintained a web app for a medical research outfit. The person
| that developed it came from a COBOL/RPG background and applied
| that knowledge to this application (which required tons of data
| entry via web forms). It was a python backend, minimal JS on
| the front end, and our data entry folks could fly when they
| were entering data. The endpoints that the data entry screens
| pointed to were implemented with speed in mind, so saving a
| form was almost instant (from the user perspective).
| WesleyJohnson wrote:
| Thank you for the anecdote. I'll have to do some more
| research in this area. We're using Django on the backend and
| minimal JS on the frontend already, though we have introduced
| some React in certain areas.
|
| I think, as you stated, we just have to double down on
| layouts, tab progression, etc and rethink what we think we
| know about web applications.
| MeinBlutIstBlau wrote:
| Is COBOL somehow faster than any other back end language? I
| mean there is no way it's faster than C/C++.
| musicale wrote:
| > I would rather run 8,000 docker containers on it than a single
| COBOL application
|
| Making it 1000x worse than something that would already cause me
| headaches and nightmares.
| quietbritishjim wrote:
| That's the point, isn't it? E.g. you might say "I would rather
| stub my toe 1000 times than run COBOL once". The comparison
| only makes sense if it's something you consider to be bad.
| VWWHFSfQ wrote:
| haha that was my thought as well. That's just a different way
| of punishing yourself!
| pelasaco wrote:
| I worked at a financial institution, where I was responsible for
| the mainframe and mainframe security. I was 24, and I was
| surrounded by senior engineers already over 50. I had to go
| back home, to my father, to ask for help. He still teaches COBOL
| at a university today. It was a pleasure to learn this language,
| which to me was a kind of "Cucumber" from the 60s. However,
| looking into the details of the whole workflow around COBOL
| applications, I wasn't that happy from the security point of
| view. The code and solutions predate any security best practices
| that we have today.
| amitport wrote:
| In what way are the security best practices of the
| _programming language_ different?
|
| (I've worked at a security-related startup that worked with
| mainframes which used COBOL... The security practices of the
| runtime do not seem to me any less advanced than what we do in
| other languages.)
| mumphster wrote:
| COBOL (on mainframes; no clue if this applies elsewhere) is
| more than just a language, it's a whole stack and set of tools,
| basically for writing terminal UI programs that take inputs
| and do a lot of processing based on said inputs. You can
| easily run into COBOL code that assumes (rightfully so in
| most cases) that people will be nice and not send bad inputs or
| try to do the equivalent of SQLi. The attitude around safe
| and secure code just wasn't there when a lot of the COBOL
| codebases I've run into were initially written.
| pelasaco wrote:
| Exactly. At least until the 2000s there was no secure copy,
| just anonymous FTP or passwords read from plain text.
| Processed data was pushed to some public areas from where
| another job would read it. A nightmare... but it worked.
| tuatoru wrote:
| That doesn't match my experience.
|
| Over half of the code I worked with was input validation
| (format validity, value plausibility, authorisations).
|
| Most of the rest was post-facto batch checks for the same
| sorts of things, except more.
|
| Any CRUD code base that has been in use for a while is
| mostly this kind of thing.
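|
| For the curious, a lot of it is just this sort of thing, over
| and over (field and paragraph names made up):
|
|        01  WS-TXN-CODE          PIC XX.
|            88  VALID-TXN-CODE   VALUE 'DP' 'WD' 'TR'.
|       *    ...
|            IF NOT VALID-TXN-CODE
|                MOVE 'INVALID TRANSACTION CODE' TO WS-ERROR-MSG
|                PERFORM 9000-REJECT-RECORD
|            END-IF
|
| Multiply that by a few hundred fields across a few dozen
| screens and batch feeds and "over half of the code" arrives
| quickly.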
| chasil wrote:
| AFAIK, there is a little database embedded in COBOL, known as
| ISAM.
|
| I don't think there are any SQL-style grant/revoke directives
| within COBOL, and mainframe developers in decades past likely
| did not spend too much time worrying about system security
| when the best I/O was a dialup modem.
|
| And now we attach this old code to the internet.
|
| https://en.wikipedia.org/wiki/ISAM
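|
| For what it's worth, indexed file access in COBOL looks roughly
| like this (names invented), and there is no grant/revoke
| anywhere in it; whatever protection exists sits outside the
| program, at the dataset level (RACF profiles and the like):
|
|            SELECT CUSTOMER-FILE ASSIGN TO CUSTMAST
|                ORGANIZATION IS INDEXED
|                ACCESS MODE IS RANDOM
|                RECORD KEY IS CUST-ID.
|       *    ... and in the PROCEDURE DIVISION:
|            MOVE '00012345' TO CUST-ID
|            READ CUSTOMER-FILE
|                INVALID KEY PERFORM 9000-NOT-FOUND
|            END-READ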
| ElectricalUnion wrote:
| > AFAIK, there is a little database embedded in COBOL,
| known as ISAM.
|
| That is not a COBOL feature, it's an operating-system-level
| feature. IBM System/390 and newer store data in an indexed,
| record-based manner.
|
| > I don't think there are any SQL-style grant/revoke
| directives within COBOL
|
| I don't think there are any SQL-style grant/revoke
| directives within any language that is not SQL, Ada or
| Rust.
|
| To be fair, code dealing with grant/revoke should be done
| in migrations, not in the application code. If application
| code can do SQL grant/revokes by itself you have bigger
| issues anyways.
| scastiel wrote:
| Another good article about the same topic: "The Code That
| Controls Your Money" by Clive Thompson
|
| https://www.wealthsimple.com/en-ca/magazine/cobol-controls-y...
| tyingq wrote:
| For the most part, it's not the COBOL itself that makes it hard
| to port. It's the features of the surrounding OS for things like
| screen layouts, batch process management (not cron-like, but deep
| job dependencies, etc), database management, file transfer, user
| management, etc. So to port the app, you have to know quite a lot
| about what things like RACF, TSO, CICS, JCL, Tivoli,
| Adabas/IDMS/etc, are contributing to the overall app. Porting
| the COBOL is sometimes not even half the effort.
| prajwalshetty wrote:
| True, I started as a COBOL programmer (did a lot of TELON
| online screens, JCL, CICS, DB2...) as my first job in 2009, for
| one of the biggest retailers - personally it taught me a great
| deal about software engineering - having a background in
| electronics engineering helped, I guess.
|
| The whole stack, and the complex business built around it over
| many years, makes it so difficult for these large institutions
| to port their solutions onto a new platform, in my experience
| too.
|
| For a lot of these companies, these backend systems on
| mainframes just work, and I believe there isn't a proper
| incentive to move to a new platform anyway.
| joncp wrote:
| https://archive.md/B97v0
| teh_klev wrote:
| > I would rather run 8,000 docker containers on it than a single
| COBOL application
|
| The author surely realises that there are these things called
| LPARs and z/VM? And that you don't just run a "single COBOL
| application" on one of these boxes?
| SavantIdiot wrote:
| Banamex's core runs on VMs that still run COBOL.
|
| (Source: I met one of the debuggers while I was in Patagonia.
| He is in his 70s and sails around the world with his wife and
| fixes bugs when he's near WiFi. Either he was telling the
| truth, or he's a really good storyteller.)
| Digit-Al wrote:
| Back in the late 80's I actually had a COBOL compiler on my 8-bit
| CPC-464. The CPC-464 had a port at the back that you could
| connect an EEPROM box to. I had an assembly compiler, a COBOL
| compiler, a BCPL compiler, and a C compiler on it at various
| times. At the time it was awesome to be able to just switch on
| your computer and go straight into your compiler with absolutely
| no loading times at all.
| vajrabum wrote:
| I'm quite surprised to see no mention of CICS, which is the
| event-driven framework originally built for special IBM
| terminals. It is a highly performant transaction monitor and is
| still widely used today in large, established companies. By
| today's standards this framework isn't that difficult, but it's
| not something you're going to master in a day either.
| tuatoru wrote:
| For a long time it was also the largest program that had been
| formally verified. Not sure if that is still the case.
___________________________________________________________________
(page generated 2021-11-15 23:01 UTC)