https://retrocomputing.stackexchange.com/questions/27722/why-did-the-motorola-68000-processor-family-fall-out-of-use-in-personal-computer

Why did the Motorola 68000 processor family fall out of use in personal computers in the 21st century?

Asked Sep 25 by Biff Iam, edited Sep 26 by Toby Speight. Viewed 15k times. Score: 39.

In the '80s and '90s the Intel x86 and Motorola 68000 families were the two leading microcomputer architectures in the 16-bit/32-bit personal computer scene. The 68000 was even preferred by purists because of its orthogonal instruction set, while the Intel x86 family, although always the market leader, was criticized for its non-orthogonal instruction set and segmented addressing. When the Macintosh line switched to PowerPC, the Motorola 68000 family began to disappear as a contender for newly designed personal computers outside the "Wintel" ecosystem. New contenders, such as the PowerPC, ARM and MIPS, took the 68000 family's place.

I cannot find on the internet the reason why the 68000 family fell out of favor in the personal computer market. Can someone explain this or point me in the direction of the answer?

Tags: history, motorola-68000

Comments:

* High cost and low volume meant Motorola couldn't afford the engineering to keep making them faster, so by the early/mid '90s the x86s were both faster and cheaper. RISC CPUs, being much simpler and cheaper to build, took over everything not requiring Windows. - Chris Dodd, Sep 25
* The success of x86 is not just the CPU but the whole PC machine, e.g. the BIOS and the peripheral hardware: timers, DMA and interrupt controllers. For a few decades we kept duplicating and emulating the entire original PC machine, and the x86 ISA is just part of it. Even if someone designed a computer with an x86 core but a different set of peripherals, such that it wasn't compatible with PC versions of the available software, it wouldn't sell. - user3528438, Sep 25
* Processor architectures come and go. The history of the 68k architecture is perfectly normal; it's the x86 that's anomalous. I believe that is the result of Microsoft's peculiar inability to switch architectures. No other software company seems so stuck: they move opportunistically to whatever architecture suits their immediate needs. Apple has evolved 6502 -> 68k -> PPC -> x86 -> ARM. - John Doty, Sep 25
* But Microsoft has (or had) the ability to switch architectures. Windows has been available on MIPS, PowerPC, Alpha, Itanium, etc. Why MS no longer supports those is due to application unavailability: unless third-party apps are available for a platform, no-one wants to buy that platform, and therefore OS support is an undue burden on MS. (Granted, the argument here is parallel support rather than serial changes of ISA.) - another-dave, Sep 25
* @JohnDoty: Microsoft could and did switch architectures, as another-dave said, at least for their server OSes. The problem with making that commercially relevant for desktop/laptop home users is the third-party ecosystem that's designed around binary compatibility, not source; the strong backwards compatibility of x86, which is or was its reason for commercial success; and crufty third-party codebases that make all kinds of assumptions that aren't documented but are de facto true on x86 Windows, like little-endian byte order, stack-args calling conventions, etc. (Even moving to x86-64 was slow for some code.) - Peter Cordes, Sep 25

(14 more comments not shown.)

7 Answers

Answer by John Dallman (score 32, answered Sep 25, edited Sep 26):

The Apple-IBM-Motorola (AIM) alliance was created in 1991 to compete with the Windows/Intel market. Its main successes were the creation of the PowerPC instruction set, derived from IBM's POWER architecture, and Apple's Power Macintosh line of computers. IBM originated the idea, having seen that Windows on Intel was out-competing OS/2 and wanting to avoid being dependent on Intel. Apple joined, seeing the chance to grow out of its existing markets, and Motorola presumably saw it as a successor to the 68000, having failed comprehensively with the MC88000.

While the 68000 was used in the Macintosh series, Atari STs and Amigas, all the operating systems involved were quite different, so there was no unified software base. That meant there wasn't the sustained demand for the 68000 that could have paid for chip development on the scale required to keep it competitive with x86. The engineering workstation market had started with the 68000, but had already switched to RISC before AIM was created.

Comments:

* Good answer, however I have to object to "home computers, plus the Macintosh". The "home computers" (I assume you mean the Amiga and Atari ST) were on the same level as the 68k Macs - OK, the Amiga was held back by only having "TV-compatible" video output (so its high-resolution mode was interlaced and flickering), and the Atari ST was derided as the "Jackintosh" - but still, both of them were as capable as the 68k Macs and also saw professional use. - rob74, Sep 26
* @rob74: They had the hardware capabilities, but did they have the range and quality of software that the Macs did? See what you think of this edit. - John Dallman, Sep 26
* @JohnDallman It feels like one of the long, fruitless discussions we held back in the day :) High regard for Macintoshes was mostly an American thing. In Europe, the Amiga and Atari ST were seen as both less expensive and better for professional use. The killer applications were music and DTP for the Atari ST (it had built-in MIDI and was the host of the original digital audio workstation, Cubase) and video for the Amiga (due to the availability of inexpensive genlock hardware). - fdreger, Sep 27
* Bingo. Basic capitalism wins every time. If there's no demand for your product, it will necessarily be supplanted by products that consumers do want (or are willing to pay for). - Ian Kemp, Sep 27

Answer by Patrick Schluter (score 43, answered Sep 25, edited Sep 28):

The 68k family instruction set, as elegant as it appeared to the casual assembly programmer (been there), had several flaws that made it very difficult to make fast in hardware. Out-of-order or superscalar execution was very, very difficult to implement.

* Over-complex addressing modes, especially the memory-indirect ones introduced with the 68020: combined with virtual memory, they made it theoretically possible to get up to 16 page faults in a single instruction (a long move, indirect with displacement and shifted index, from an odd address touching two pages, etc. - see the sketch below). These indirect addressing modes were the first to be removed when the ColdFire ISA was defined.
* Exposure of the pipeline internals on traps and exceptions: the idea was that, on a trap, the instruction could be resumed after the cause of the error was fixed. This made it very difficult to get performance out of the kernel, as each generation wrote more and more data to the stack, and it also limited the internal state that could be saved. x86 was much more pragmatic and simply restarted the cancelled instruction from the beginning.
* Compatibility between successive family members was not as good as with Intel CPUs. If you want to compile a program that runs on the 68000 as well as on any of the 680[2346]0, you have to give up a lot of the newer features.

There is a famous newsgroup post from John Mashey explaining the fundamental issues with the 68k ISA in comparison to other ISAs of that time.
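To make the first bullet point concrete, here is a minimal sketch in C (not part of the original answer; the function name load_via_table is invented) of the kind of double-indirect access that a 68020 compiler could, in principle, fold into a single instruction using the memory-indirect pre-indexed addressing mode, roughly MOVE.L ([disp,An,Xn.L*4]),Dm, and of why that one instruction can touch several independently faulting pages.

/* Illustrative sketch only: one C expression, potentially one 68020
 * instruction, but several independent memory accesses:
 *   1. the opcode plus its extension words (the fetch may straddle a page),
 *   2. the 32-bit pointer loaded from disp + table + index*4 (possibly
 *      misaligned, so possibly straddling a page),
 *   3. the 32-bit data loaded through that pointer (again possibly
 *      straddling a page).
 * With virtual memory each of these accesses can fault separately, which is
 * the worst case the answer describes and part of why ColdFire dropped the
 * memory-indirect modes. */
#include <stdint.h>

int32_t load_via_table(int32_t **table, long index, long disp)
{
    /* effective address = disp + table + index * sizeof(pointer);
       the scale factor is 4 with 32-bit pointers, as on the 68020 */
    int32_t **slot = (int32_t **)((char *)table + disp) + index;
    return **slot;   /* fetch the pointer, then fetch the data through it */
}

A MOVE that uses memory-indirect modes on both its source and destination sides roughly doubles that count, which is how the worst case climbs toward the 16 page faults quoted in the answer.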
Comments:

* An excellent explanation! I'd like to argue a bit, though. 1. x86 also has instructions that can create lots of page faults, for example the block-move instructions; there is still a way to execute them slowly but correctly, for example from within a microcode engine. 2. Those incompatibilities between family members are at the same time a way to move forward while dropping a lot of no-longer-necessary legacy en route. - lvd, Sep 25
* @lvd: x86 rep movs is interruptible, though, so it can make partial progress through those page faults (same for the AVX2 / AVX-512 gather and scatter instructions). Patrick is talking about the worst-case number of pages that all have to be present for a program to make forward progress. In x86's case that's 6 (see "Do x86 instructions require their own encoding as well as all of their arguments to be present in memory at the same time?") for one step of a misaligned rep movsd with the 2-byte instruction itself spanning a page boundary. - Peter Cordes, Sep 25
* Or does m68k save partial execution progress ("pipeline internals") on the stack to resume after an exception? x86 interrupts happen strictly on instruction boundaries, so execution restarts from scratch after any fault. - Peter Cordes, Sep 25
* Perhaps worth pointing out that the 386 made x86 much more orthogonal than before, at least in 32-bit mode. (But if you're writing 16-bit code that cares about compatibility with the 8086, or even just the 286, you're in the same boat as the 68k in terms of leaving new features unused because the 68000 doesn't have them. Also, if the first hypothetical m68k Windows PCs had used the 68020, that would be the baseline for m68k PC software, not the 68000.) - Peter Cordes, Sep 25
* @PeterCordes: On the 68000, if memory address 0x1234FFFC holds the value 0x1234FFFE, code performs an indirect store of some 32-bit number through the address held at 0x1234FFFC, and a page fault occurs when accessing the least-significant halfword of the destination (0x12350000), that fault wouldn't occur until the 0xFFFE part of the address had already been overwritten, so the target address no longer existed anywhere in the universe outside the internal CPU state. - supercat, Sep 25

(3 more comments not shown.)

Answer by TEMLIB (score 13, answered Sep 25):

Motorola stopped investing in the MC68000 family when everyone thought that RISC was the future and that CISC CPUs would soon be uncompetitive, so it switched to the PowerPC. Even Intel thought this and developed RISC CPUs (i860, i960, ...), while reluctantly continuing to invest in x86.

For Motorola it was probably true: the last version, the MC68060, was competitive with the Pentium, but it was quickly surpassed because Intel's manufacturing superiority allowed lower dissipation and higher frequencies. Switching to simpler RISC CPUs was a way to stay in the race. Nowadays the difference between RISC and CISC (e.g. x86) is less relevant performance-wise, thanks to the possibility of putting a hundred times more transistors on a die.

Comments:

* Before Motorola joined the AIM alliance it tried its hand at creating its own RISC architecture, the 88000 line (m88k), consisting of two models shipped over the course of three years before being discontinued in 1991. - njuffa, Sep 25
* Yes, and AMD tried RISC with the AMD 29K. - TEMLIB, Sep 25
* The main difference between CISC and RISC is in the engineering cost and complexity. As long as you have enough volume (sales), that cost can be made up. x86 had the volume and the 68K did not. - Chris Dodd, Sep 26
* @ChrisDodd: Another detail is that executing RISC code from cache is faster than executing CISC code from cache, but if CISC code is smaller than RISC code, it can be fetched from main memory faster than RISC code. Having a system store CISC code in main memory but convert it to a RISC-like representation before storing it in the internal cache offers the best of both worlds. - supercat, Sep 28

Answer by Raffzahn (score 9, answered Sep 25):

"When the Macintosh line switched to PowerPC, the Motorola 68000 family began to disappear"

It was rather the other way around: Apple's switch was a result of Motorola losing the race.

"New contenders, such as the PowerPC, ARM and MIPS, took the 68000 family's place."
Not really. You also forget the NS32k family going away at the same time, perhaps less visible but at least as successful as the 68k.

"I cannot find on the internet the reason why the 68000 family fell out of favor in the personal computer market."

Cost, on both counts:

* Motorola wasn't able to keep up the investment needed to improve its CPUs the way Intel did.
* The resulting CPUs were considerably more expensive than Intel's offerings.

This is not only true for the upper-end offerings, with the '060 vs. the Pentium, but even more so for embedded use: basic 80(1)88-based systems could be delivered at considerably lower development and production cost.

Comments:

* Let me be doubtful about the NS32032's success. At the time I was a fan of NS products and worked in the MCU/MPU market for some time, and I never saw a single design with it; I knew it had been employed in some high-end (for the time) laser printer, but no idea which. The MC68K was simply everywhere; its assembly language, friendly to high-level languages, was unbeatable. Only heavyweight IBM could have decided, for marketing reasons, on a technically freakish CPU coming from a company that didn't even believe in microprocessors. - LuC, Sep 25
* Well, one example might be Siemens, which switched (ca. 1985) their Unix line (PC-MX) from Intel to NS. Those systems were, well into the late 1990s, the best-selling Unix systems in Europe, covering a range from a single 32032 CPU all the way to 16-way 32532 machines. For market share in that segment, 68k was at best a third. - Raffzahn, Sep 25
* Well, starting from the mid-eighties I would recall the Apple Macintosh, Commodore Amiga and Atari ST/TT lines, without forgetting the venerable Sinclair QL. I'm quite sure they sold a few units more than Siemens' devices. On the industrial side, I've also witnessed a few VME-bus systems with the MC68K, running Sys V too. - LuC, Sep 26
* @LuC Sure, but what's your point? You assumed there weren't any NS systems, which the MX family proves wrong. The same goes for embedded, especially in high-reliability environments. Also, do you really think National would have poured in the money needed to develop the series all the way to the 32641, a 1992 superscalar server CPU (not to be confused with the embedded 32x160 series, developed until 1997), if it wasn't successful and generating positive ROI? - Raffzahn, Sep 26
* I'm still convinced that, whatever the innovations in the NS32K series, it didn't have as much market success as you say. Out of curiosity, I found a historical fan site giving the sales figures of the Siemens MX 300 as 13,000 units over its life span. In the early '90s I collaborated with a small company (insignificant, let's say, compared to Siemens) that was selling that number of 68K systems every month, so I find it strange to read of comparable success. - LuC, Sep 26

Answer by ubfan1 (score 5, answered Sep 25):

The 68000 was a joy to program (compared to the segmented-memory Intel x86), but it simply didn't keep up in the clocking race.
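For readers who never dealt with real mode, here is a small sketch (not part of the answer; the helper name phys_8086 is invented) of the segment:offset arithmetic that made the x86 of that era awkward next to the 68000's single flat address space.

/* Illustrative sketch only: an 8086 real-mode physical address is
 * segment*16 + offset, giving a 20-bit address space seen through 64 KiB
 * windows, whereas a 68000 pointer is just one linear 32-bit value
 * (24 address pins on the original chip). */
#include <stdio.h>

/* 8086 real mode: physical address of a segment:offset pair */
static unsigned long phys_8086(unsigned seg, unsigned off)
{
    return ((unsigned long)(seg & 0xFFFFu) << 4) + (off & 0xFFFFu);
}

int main(void)
{
    /* 0x1234:0x5678 -> 0x12340 + 0x5678 = 0x179B8 */
    printf("0x1234:0x5678 -> 0x%05lX\n", phys_8086(0x1234u, 0x5678u));
    /* many different segment:offset pairs alias the same physical byte */
    printf("0x1790:0x00B8 -> 0x%05lX\n", phys_8086(0x1790u, 0x00B8u));
    return 0;
}

Because each offset wraps at 64 KiB and many segment:offset pairs alias the same byte, compilers of the era grew near/far/huge pointer dialects; on the 68000 a pointer was simply one linear address, which is a large part of the "joy to program" the answer mentions.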
Answer by Alex Danilo (score 4, answered Sep 26):

Motorola lost the race to 32-bit computing because of a simple engineering mistake.

In the era when the 68k and x86 were very popular, the shift to 32-bit CPUs was a race to mass production. No question, the 68020 was a cleaner design and almost destined to be the number-one choice for new machines. Friends of mine paid around $400 at the time for early 68020 chips to build test boards; x86 at the time was hopelessly behind.

But the first iteration of the 68020 had pushed the design parameters of the chip process to the point where yields were appalling, and every chip sold at a loss. Motorola then had to redesign all the masks, an 18-month engineering exercise. Those 18 months were the window that let x86 get ahead in the market, and that was the end of the 68k family's dominance. A shame.

Answer by Mark Morgan Lloyd (score 2, answered Sep 27):

This was purely economics. By about 1988 the ready availability of IBM AT clones had pushed the price of support hardware (cases, discs, etc.) down enormously, and even on the '286 there were UNIX variants that demonstrated such things were possible. The '386, when introduced, exploited that, and from that point onwards it became a race between what Intel (with a growing income) and Motorola et al. (with static incomes) could do with the available semiconductor technology. By about 1995 Intel's price/performance ratio was unassailable, and graphics accelerators that could operate in conjunction with PC hardware were starting to erode the market for specialist workstations.