[HN Gopher] I got almost all of my wishes granted with RP2350
       ___________________________________________________________________
        
       I got almost all of my wishes granted with RP2350
        
       Author : elipsitz
       Score  : 660 points
       Date   : 2024-08-08 13:03 UTC (1 day ago)
        
 (HTM) web link (dmitry.gr)
 (TXT) w3m dump (dmitry.gr)
        
       | elipsitz wrote:
       | Can't find an official announcement or datasheet yet, but
       | according to this post:
       | 
        | * 2x Cortex-M33F
        | * improved DMA
        | * more and improved PIO
        | * external PSRAM support
        | * variants with internal flash (2MB) and 80 pins (!)
        | * 512KiB RAM (double)
        | * some RISC-V cores? Low power maybe?
       | 
       | Looks like a significant jump over the RP2040!
        
         | zrail wrote:
         | This is pretty exciting. Can't wait for the datasheet!
        
         | dave78 wrote:
         | Pico 2, using the 2350, seems to be announced:
         | 
         | https://www.raspberrypi.com/products/raspberry-pi-pico-2/
        
           | jsheard wrote:
           | Only $1 more than the original Pico, that's an absolute
            | steal. The Pico 2 doesn't have PSRAM onboard, though, so
            | there's room for higher-end RP235x boards above it.
        
           | HeyLaughingBoy wrote:
           | Make one in an Arduino Uno form factor and double the price
           | and they'd make a killing :-)
           | 
            | I try to dissuade n00bs from starting their Arduino journey
           | with the ancient AVR-based devices, but a lot of the
           | peripherals expect to plug into an Uno.
        
             | moffkalast wrote:
              | Well there's the Uno R4 (Renesas) I suppose, but this would
             | be much cooler indeed. There's also the 2040 Connect in the
             | Nano form factor with the extra IMU.
        
             | ta988 wrote:
              | Look at the Adafruit Metro then. They just announced the
              | RP2350 version.
        
         | repelsteeltje wrote:
         | ... And RP2354A/B even has 2MB _built in_ flash!
        
           | andylinpersonal wrote:
            | Indeed, it's an in-package Winbond flash die, though.
        
         | KaiserPro wrote:
          | I'm hoping that it's got much better power management. That
         | would be really cool for me.
        
         | dgacmu wrote:
         | Small (0.5 bits effective) improvement to the ADC also, per the
         | datasheet.
        
       | jsheard wrote:
       | Speak of the devil: https://news.ycombinator.com/item?id=41156743
       | 
       | Very nice that the "3" turned out to mean the modern M33 core
       | rather than the much older M3 core. It has a real FPU!
        
         | dmitrygr wrote:
         | Yes, well-guessed
        
       | RA2lover wrote:
       | Is there any info on the analog capabilities compared to the
       | RP2040?
        
         | TomWhitwell wrote:
         | Looks like 4 x ADC channels again, no on-board DAC
        
           | RA2lover wrote:
           | the 80-pin version has 8.
        
       | limpbizkitfan wrote:
        | Is there an exhaustive list of STM32H7 errata? Has anyone
       | compiled a defect list?
        
         | dmitrygr wrote:
         | STM has an inexhaustible list of them, but does not list at
         | least a few QSPI ones that I am aware of. :/
        
           | limpbizkitfan wrote:
           | >:( hoping to play with a pico 2 soon so I can convince my
           | team to move off stm32h7
        
         | uticus wrote:
         | Apart from official ST documentation [0]?
         | 
         | For comparison, RP2350 errata in Appendix E of [1]
         | 
         | [0] https://www.st.com/en/microcontrollers-
         | microprocessors/stm32...
         | 
         | [1]
         | https://datasheets.raspberrypi.com/rp2350/rp2350-datasheet.p...
        
       | zrail wrote:
       | Looks like the SDK got updated a couple hours ago:
       | 
       | https://github.com/raspberrypi/pico-sdk/commit/efe2103f9b284...
        
       | blackkat wrote:
       | Some specs here: https://www.digikey.ca/en/product-
       | highlight/r/raspberry-pi/r...
       | 
       | Based on the RP2350, designed by Raspberry Pi in the United
       | Kingdom
       | 
       | Dual Arm M33s at 150 MHz with FPU
       | 
       | 520 KiB of SRAM
       | 
       | Robust security features (signed boot, OTP, SHA-256, TRNG, glitch
        | detectors and Arm TrustZone for Cortex®-M)
       | 
       | Optional, dual RISC-V Hazard3 CPUs at 150 MHz
       | 
       | Low-power operation
       | 
       | PIO v2 with 3 x programmable I/O co-processors (12 x programmable
       | I/O state machines) for custom peripheral support
       | 
       | Support for PSRAM, faster off-chip XIP QSPI Flash interface
       | 
       | 4 MB on-board QSPI Flash storage
       | 
       | 5 V tolerant GPIOs
       | 
       | Open source C/C++ SDK, MicroPython support
       | 
       | Software-compatible with Pico 1/RP2040
       | 
       | Drag-and-drop programming using mass storage over USB
       | 
       | Castellated module allows soldering directly to carrier boards
       | 
       | Footprint- and pin-compatible with Pico 1 (21 mm x 51 mm form
       | factor)
       | 
       | 26 multifunction GPIO pins, including three analog inputs
       | 
        | Operating temperature: -20°C to +85°C
       | 
       | Supported input voltage: 1.8 VDC to 5.5 VDC
        
         | synergy20 wrote:
         | Wow, can't wait. Love the 5V GPIO and security features.
        
           | Daneel_ wrote:
           | 5V GPIO is a huge deal for me - this immediately opens up a
           | huge range of integrations without having to worry about line
           | level conversion.
           | 
           | I can't wait to use this!
        
             | azinman2 wrote:
             | Does tolerant mean ok to do? Or it just won't fry your chip
             | but you should actually run at 3.3?
        
               | tredre3 wrote:
               | It usually means it's clamped so it might result in a
               | small amount of wasted energy/heat but no damage.
               | 
               | So yes it means it's okay but if you can you should go
               | for 3.3.
        
               | murderfs wrote:
               | 5V tolerant means that it'll accept 5V input (and
               | correctly interpret it as high), but output will still be
               | 3.3V.
        
             | HeyLaughingBoy wrote:
             | Be careful with assumptions though. Being 5V tolerant
             | doesn't mean that your 3V output can sufficiently drive an
             | input that expects 0-5V levels correctly.
             | 
             | I ran into this problem using an ESP32 to drive a Broadcom
             | 5V LED dot-matrix display. On paper everything looked fine;
             | in reality it was unreliable until I inserted an LS245
             | between the ESP and the display.
        
               | lloydatkinson wrote:
               | > LS245
               | 
               | Do you think that would be a good IC to drive these with
               | a RP2040? https://www.analog.com/en/products/max7219.html
        
               | HeyLaughingBoy wrote:
               | A better question might be why anyone is using a MAX7219
               | on a new design in 2024. There are so many other choices
               | for displays than a 20 year-old IC from a company that's
               | gone through two changes of ownership since.
               | 
               | Anyway, a 74LS245 isn't a level shifter, it's an octal
               | buffer. It just happened to be the right choice for my
               | needs. In your application, I'd suggest an actual level
               | shifter. You can find level shift breakout boards at
               | Sparkfun and Adafruit.
        
               | irdc wrote:
               | > Being 5V tolerant doesn't mean that your 3V output can
               | sufficiently drive an input that expects 0-5V levels
               | correctly.
               | 
               | It's fine for TTL (like your 74LS245 is), which registers
               | voltages as low as 2V as a logical 1. Being able to
               | directly interface with TTL eases up so many
               | retrocomputing applications.
        
               | HeyLaughingBoy wrote:
               | Which was... exactly the reason I chose it?
        
         | my123 wrote:
         | Hazard3 RTL: https://github.com/Wren6991/Hazard3
        
           | IshKebab wrote:
           | I wonder how well it's been verified.
        
             | pclmulqdq wrote:
             | This is a really big deal. Verifying a core is hard, and if
             | the repo doesn't come with a testbench, I'm very
             | suspicious.
        
               | IshKebab wrote:
               | Even if it does I'm suspicious. The open source RISC-V
               | verification systems are not very good at the moment:
               | 
                | * riscv-arch-tests: ok, but a very low bar. They don't
                | even test combinations of instructions, so no hazards
                | etc.
                | 
                | * riscv-tests: decent, but they're hand-written
                | directed tests so they aren't going to get great
                | coverage.
                | 
                | * TestRig: this is better - random instructions
                | directly compared against the Sail model, but it's
                | still fairly basic - the instructions are completely
                | random so you're unlikely to cover lots of things. Also
                | it requires some setup so they may not have run it.
               | 
               | The commercial options are much better but I doubt they
               | paid for them.
        
               | my123 wrote:
               | See https://github.com/Wren6991/Hazard3/tree/stable/test
               | for the test harnesses used. I wonder if they did release
               | all they used there.
        
         | moffkalast wrote:
         | > Low-power operation
         | 
         | Low power suspend? In a Pi Foundation product? Impossible.
        
           | thomasdeleeuw wrote:
            | Not sure why this is downvoted. The sleep and dormant pico
            | examples have quite a few issues, and they are still in
            | "extras" rather than "core" - so while documentation of
            | features is my personal favorite aspect of the pico, there
            | is still room for improvement here.
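            | 
            | For reference, the dormant flow from those extras examples
            | looks roughly like this (a sketch from memory - the names
            | come from pico-extras' pico/sleep.h, so double-check them
            | against your SDK version, and the wake pin is just a
            | placeholder):
            | 
            |   #include "pico/stdlib.h"
            |   #include "pico/sleep.h"  // pico-extras, not core SDK
            | 
            |   int main(void) {
            |       const uint WAKE_PIN = 10;  // placeholder pin
            | 
            |       // Run the clocks from the crystal so the PLLs can
            |       // stop, then go dormant until WAKE_PIN goes high.
            |       sleep_run_from_xosc();
            |       sleep_goto_dormant_until_pin(WAKE_PIN, true, true);
            | 
            |       // Clocks usually need restoring after wake-up.
            |       while (true) tight_loop_contents();
            |   }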
        
             | tssva wrote:
             | It is downvoted because it is a low effort sarcastic
             | comment which provides no real contribution to the
             | discussion. Your comment actually provides real feedback as
             | to where there are currently issues.
        
         | coder543 wrote:
         | I'm having trouble seeing where the datasheet actually says the
         | GPIO pins are 5V tolerant.
         | 
         | EDIT: okay, section 14.8.2.1 mentions two types of digital
         | pins: "Standard Digital" and "Fault Tolerant Digital", and the
         | FT Digital pins might be 5V tolerant, it looks like.
        
           | sowbug wrote:
           | Page 13: "GPIOs are 5 V-tolerant (powered), and 3.3
           | V-failsafe (unpowered)"
        
             | coder543 wrote:
             | Yep, I edited a few minutes ago to mention a reference I
             | found in the datasheet. It's cool, but the reality seems a
             | little more nuanced than that quote would indicate, since
             | that only appears to work for GPIO-only pins, not just pins
             | being used as GPIO. (So, if a pin supports analog input,
             | for example, it will not be 5V tolerant.)
        
         | jayyhu wrote:
         | Edit: See comment below; The RP2350 _can_ be powered by a 5V
         | supply.
        
           | giantg2 wrote:
           | I'd rather have it run on the lower voltage - generally
            | easier to step down than boost up. Either way, the modules are
           | pretty cheap, small, and easy to find.
        
           | skykooler wrote:
           | How much tolerance does that have - can it run directly off a
           | 3.7v lithium ion battery?
        
             | jayyhu wrote:
             | Yep, they explicitly call out that the onboard voltage
             | regulator can work with a single lithium ion cell.
        
               | dvdkon wrote:
               | The regulator can take that, but as far as I can see it's
               | only for DVDD, the core voltage of 1.1 V. You also need
               | at least IOVDD, which should be between 1.8 V and 3.3 V.
               | So you'll need to supply some lower voltage externally
               | anyway.
               | 
               | I suppose the main draw of the regulator is that the DVDD
               | rail will consume the most power. 1.1 V is also much more
               | exotic than 3.3 V.
        
           | Findecanor wrote:
           | To clarify: You can connect a 5V power source by connecting
           | it to the VSYS pin which leads into the on-board voltage
           | regulator.
           | 
           | But the uC itself runs on 3.3V and is not totally 5V-capable.
           | You'd need _level converters_ to interface with 5V.
        
             | jayyhu wrote:
             | You're right, after re-reading the Power section on the
             | datasheet it seems connecting 5V to the VREG_VIN should
             | suffice to power the digital domains, but if you want to
              | use the ADC, you still need an external 3.3V source.
        
               | dvdkon wrote:
               | Maybe not even that:
               | 
               | > A separate, nominally 3.3 V, low noise supply
               | (VREG_AVDD) is required for the regulator's analogue
               | control circuits.
               | 
               | It seems it would be painful trying to run this without
               | 3.3 V.
        
               | snvzz wrote:
               | See section on physical pin gpio electrical tolerances.
               | 
               | The TL;DR is that 3.3v must be fed into IOVDD for 5.5v
               | tolerance to work.
        
               | crote wrote:
               | It's quite a bit more complicated.
               | 
               | The chip needs a) 1.1V to power the cores, b) 1.8V-3.3V
               | to power IO, and c) 3.3V to properly operate USB and ADC.
               | 
               | The chip has one onboard voltage regulator, which can
               | operate from 2.7V-5.5V. Usually it'll be used to output
               | 1.1V for the cores, but it _can_ be used to output
               | anything from 0.55V to 3.3V. The regulator requires a
               | 3.3V reference input to operate properly.
               | 
               | So yeah, you could feed the regulator with 4-5V, but
               | you're still going to need an external 5V->3.3V converter
               | to make the chip actually operate...
        
             | snvzz wrote:
             | >You'd need level converters to interface with 5V.
             | 
              | The GPIOs are CMOS, but some of them are 5V-tolerant, and
              | TTL considers 2V a HIGH, thus it is possible to interface
              | with some 5V hardware directly.
        
       | maccam912 wrote:
       | Can anyone speak about plans for a Pico 2 W (or Pico W 2)? I've
       | been playing around recently with mine and even just syncing with
       | the current time over wifi opens up a lot of possibilities.
        
         | coder543 wrote:
         | Jeff Geerling said the Pico 2 W is coming later this year:
         | https://youtu.be/oXF_lVwA8A4?t=445
        
         | MarcScott wrote:
         | It's in this post - https://www.raspberrypi.com/news/raspberry-
         | pi-pico-2-our-new...
        
       | katzinsky wrote:
        | I suppose this isn't the first time a company that started out
        | as a hobbyist board manufacturer has produced really amazing
        | microcontrollers, but man is it insane how far they've knocked
        | the ball out of the park.
        
       | synergy20 wrote:
       | You can pick either ARM cores or RISC-V cores on the same die?
        | Never saw a design like this before. Will this impact price and
       | power consumption?
       | 
       | "The Hazard3 cores are optional: Users can at boot time select a
       | pair of included Arm Cortex-M33 cores to run, or the pair of
       | Hazard3 cores. Both options run at 150 MHz. The more bold could
       | try running one RV and one Arm core together rather than two RV
       | or two Arm.
       | 
       | Hazard3 is an open source design, and all the materials for it
       | are here. It's a lightweight three-stage in-order RV32IMACZb*
       | machine, which means it supports the base 32-bit RISC-V ISA with
       | support for multiplication and division in hardware, atomic
       | instructions, bit manipulation, and more."
        
         | bri3d wrote:
         | This "switchable cores" thing has been appearing in some
         | products for a few years now, for example Sipeed SG2002
         | (LicheeRV). The area occupied by the actual instruction core is
         | usually pretty small compared to peripherals and internal
         | memories.
        
           | zer00eyz wrote:
           | The MilkV Duo also has this feature I believe...
           | https://milkv.io/duo
        
             | Teknoman117 wrote:
             | It's the same SoC as the LicheeRV (SG2002)
        
         | geerlingguy wrote:
         | Apparently (this is news to me), you can also choose to run 1+1
         | Arm/RISC-V, you don't have to switch both cores either/or.
         | 
         | Eben Upton: "They're selectable at boot time: Each port into
         | the bus fabric can be connected either to an M33 or a Hazard3
         | via a mux. You can even, if you're feeling obtuse, run with one
         | of each."
         | 
         | Source:
         | https://www.theregister.com/2024/08/08/pi_pico_2_risc_v/
        
           | ravetcofx wrote:
            | But not 2+2? That's too bad - it would be nice to have each
            | architecture run code based on its strengths for quad-core
            | workloads.
        
             | simcop2387 wrote:
              | Yeah, I was hoping for 2+2 myself, but I suspect it's
              | because the setup doesn't have the ability to mediate
              | peripherals between the cores in a way that'd let that
              | work. I.e. trying to turn on both the RISC-V and Arm #1
              | cores means there'd be bus conflicts. It'd be cool if
              | you could disable the IO on the RISC-V cores and do all
              | hardware IO through Arm (or vice versa) so you can use
              | the unconnected ones for pure compute tasks (say, run
              | WS2812B LED strips with the Arm cores but run
              | Python/JavaScript/Lua on the RISC-V cores to generate
              | frames to display without interrupting the hardware IO).
        
             | nine_k wrote:
             | Why not both: power distribution and cooling? Having to
              | route twice as many wide buses, and put in twice as much
              | L0 cache?
        
             | ebenupton wrote:
             | We did look at this, but the AHB A-phase cost of putting a
             | true arbiter (rather than a static mux) on each fabric port
             | was excessive. Also, there's a surprising amount of impact
             | elsewhere in the system design (esp debug).
        
           | jaeckel wrote:
           | Would've been cool for safety applications if the second core
           | could be run in lockstep mode.
        
             | 4gotunameagain wrote:
             | afaik that is a whole different rodeo on the silicon level
        
               | KaiserPro wrote:
               | yeah lockstep requires a whole bunch of things to verify
               | and break deadlocks. I suspect you need three processors
               | to do that as well (so you know which one has fucked up.)
        
         | jononor wrote:
         | This seems like a great way to test the waters before a
          | potential full-on transition to RISC-V. It allows them to
          | validate both the technology and the market reception, for a
          | much lower cost than taping out an additional chip.
        
           | MBCook wrote:
           | Fun for benchmarking too.
           | 
           | You're limited to those two exact kinds of cores, but you
           | know every other thing on the entire computer is 100%
           | identical.
           | 
            | It's not like SBC 1 vs SBC 2, where they have different RAM
            | chips, this one has a better cooler, but that one has better
            | WiFi.
        
             | phire wrote:
             | I really hope people don't do this. Or at least not try to
             | sell it as ARM vs RISC-V tests.
             | 
             | Because what you are really testing is the Cortex-M33 vs
             | the Hazard 3, and they aren't equivalent.
             | 
             | They might both be 3 stage in-order RISC pipelines, but
             | Cortex-M33 is technically superscalar, as it can dual-issue
             | two 16bit instructions in certain situations. Also, the
             | Cortex-M33 has a faster divider, 11 cycles with early
             | termination vs 18 or 19 cycles on the Hazard 3.
        
               | snvzz wrote:
               | It'd help to know how much area each core takes within
               | the die.
               | 
               | I would expect the ARM cores to be much larger, as well
               | as use much more power.
        
               | phire wrote:
               | Hard to tell.
               | 
               | If you ignore the FPU (I _think_ it can be power gated
               | off) the two cores should be roughly the same size and
               | power consumption.
               | 
               | Dual issue sounds like it would add a bunch of
               | complexity, but ARM describe it as "limited" (and that's
               | about all I can say, I couldn't find any documentation).
               | The impression I get is that it's really simple.
               | 
                | Something along the lines of "if two 16-bit
                | instructions are 32-bit aligned, they go down different
                | pipelines, and they aren't dependent on each other,"
                | then execute both. There might be limitations such that
                | the second instruction can't access registers at all
                | (for example, a branch instruction), or that it must
                | only access registers from a separate register file
                | bank, meaning you don't even have to add extra
                | read/write ports to the register file.
               | 
               | If the feature is limited enough, you could get it down
               | to just a few hundred gates in the instruction decode
               | stage, taking advantage of resources in later stages that
               | would have otherwise been idle.
               | 
               | According to ARM's specs, the Cortex-M33 takes the exact
               | same area as the Cortex-M4 (the rough older equivalent
               | without dual-issue, and arguably equal to the Hazard3),
               | uses 2.5% less power and gets 17% more performance in the
               | CoreMark benchmark.
        
               | pclmulqdq wrote:
               | That is exactly what the "limited dual issue" is - two
               | non-conflicting pre-decoded instructions (either 16b+16b
               | or if a stall has occurred) can be sent down the
               | execution pipe at the same time. I believe that must be a
               | memory op and an ALU op.
        
               | KaiserPro wrote:
               | The ARM cores are probably much larger, but I don't think
               | that translates into better power efficiency
               | _automatically_.
        
           | GordonS wrote:
           | My thoughts exactly - a risc-free (hurhur) way to get RISC-V
           | in the hands of many, many devs.
        
           | kelnos wrote:
           | I do wonder if the unavailability of some of the security
           | features and -- possibly a big deal for some applications --
           | the accelerated floating point on the RISC-V cores would skew
           | that experiment, though.
        
           | askvictor wrote:
            | Indeed, though I'm curious about the rationale behind it.
            | Is it a 'plan B' in case their relationship with ARM
            | sours? Is it aiming for cost-cutting in the future? (I
            | can't imagine the ARM licences are costing them much given
            | the price of the RP2040, but maybe they're absorbing it to
            | get market share.)
        
             | snvzz wrote:
             | Embracing RISC-V, the high-quality open-source ISA that is
             | rapidly growing the strongest ecosystem, does make a lot of
             | sense.
        
         | GeorgeTirebiter wrote:
         | Hazard3 pointer https://github.com/Wren6991/Hazard3
         | 
         | I think it's cool as a cucumber that we can choose fully open-
         | source RISC-V if we want. My guess is the RV cores are slower
          | clock-for-clock than the M33 cores; that is, benchmark scores
          | for the M33s will be better, as Hazard3 is only a 3-stage
          | pipeline - but so is the M33. Can't wait for the benchmarks.
        
       | numpad0 wrote:
       | https://www.raspberrypi.com/products/rp2350/
       | 
        | 4 variants? "A" and "B" variants in QFN60 and QFN80, and "2350"
        | and "2354" variants without and with 2MB flash, respectively.
        | The CPU can be switched between dual RISC-V @ 150MHz or dual
        | Cortex-M33 @ 300MHz by software or in one-time-programmable
        | memory (=permanently).
       | 
       | Datasheet, core switching details, most of docs are 404 as of
       | now; I guess they didn't have embargo date actually written in
       | `crontab`.
       | 
       | e: and datasheet is up!
        
       | doe_eyes wrote:
       | I think it's a good way to introduce these chips, and it's a
       | great project, but the author's (frankly weird) beef with STM32H7
       | is detracting from the point they're trying to make:
       | 
       | > So, in conclusion, go replan all your STM32H7 projects with
       | RP2350, save money, headaches, and time.
       | 
       | STM32H7 chips can run much faster and have a wider selection of
       | peripherals than RP2350. RP2350 excels in some other dimensions,
        | including the number of (heterogeneous) cores. Either way, this is
       | nowhere near apples-to-apples.
       | 
       | Further, they're not the only Cortex-M7 vendor, so if the
       | conclusion is that STM32H7 sucks (it mostly doesn't), it doesn't
       | follow that you should be instead using Cortex-M33 on RPi. You
       | could be going with Microchip (hobbyist-friendly), NXP (preferred
       | by many commercial buyers), or a number of lesser-known
       | manufacturers.
        
         | dmitrygr wrote:
         | 1. Nobody has a wider selection of peripherals than a chip with
         | 3 PIOs.
         | 
          | 2. And my beef is personal - I spent months ( _MONTHS_ of my
          | life) debugging the damn H7, only to find a set of huge bugs in
          | the feature that was the main reason I had been trying to use
          | it (QSPI RAM support), showed it to the manufacturer, and had
          | them do nothing. Later they came back and, without admitting I
          | was right about the bugs, said "another customer is seeing the
          | same issues, what was the workaround you said you found?" I
          | told them that I'll share the workaround when they admit the
          | problem. Silence since.
         | 
         | I fully reserve the right to be pissy at shitty companies in
         | public _on my website_!
        
           | doe_eyes wrote:
           | I'm not arguing you can't be angry with them, I'm just saying
           | that to me, it detracts from the point about the new
           | platform. Regarding #1, I'm sure you know that peripherals in
           | the MCU world mean more than just digital I/O. Further, even
           | in the digital domain, the reason PIO isn't more popular is
           | that most people don't want to DIY complex communication
           | protocols.
        
           | uticus wrote:
            | [edit: I retract this - I see you've secretly had one in
            | your possession to play with for over a year. You lucky
            | dog.]
           | 
           | > I have been anti-recommending STM's chips to everyone for a
           | few years now due to STM's behaviour with regards to the
           | clearly-demonstrated-to-them hardware issues.
           | 
            | You certainly reserve the right. However, it is unclear to
            | me why the recommendation, in response to complaints over
            | a months-long period, is a product that has just been
            | released.
           | 
           | Trying to ask in a very unbiased way since as a hobbyist I'm
           | looking into ST, Microchip, and RP2040. For my part I've had
           | two out of four RP2040 come to me dead on arrival, as part of
           | two separate boards from different vendors - one being Pi
           | Pico from Digilent. Not a ton of experience with Microchip
           | but I hear they have their own problems. Nobody's perfect,
           | the question is how do the options compare.
        
             | limpbizkitfan wrote:
             | I don't think the issue is QA related, ST released a chip
             | that says it can perform X when the reality is it can not
             | perform X.
        
             | naikrovek wrote:
             | they're complaining _now_ because they still feel the pain
              | _now_. While writing the article, they're thinking of
              | how things would have been different on previous
              | projects if they had had this chip, and that is digging
              | up pain, and they felt it should be expressed.
             | 
             | I don't know what's so unclear. Have you never had a strong
             | opinion about someone else's stuff? Man, I have.
        
           | 15155 wrote:
           | > 1. Nobody has a wider selection of peripherals than a chip
           | with 3 PIOs.
           | 
           | NXP FlexIO says "Hello!"
        
             | spacedcowboy wrote:
             | FlexIO is (I think) powerful, however... I'm not sure if
             | it's me or the way they describe it with all the bit-
             | serialisers/shifters interacting - but I grok the PIO
             | assembly a damn sight easier than FlexIO.
             | 
             | Maybe it's just me. Maybe.
        
         | Archit3ch wrote:
         | > STM32H7 chips can run much faster
         | 
         | STM32H7 tops out at 600MHz. This has 2x 300MHz at 2-3 cycles/op
         | FP64. So maybe your applications can fit into this?
        
           | spacedcowboy wrote:
           | I'm seeing several statements of 2x300MHz, but the page [1]
           | says 2x150MHz M33's..
           | 
            | I know the RP2040s overclock _a lot_, but these are
            | significantly more complex chips; it seems less likely
            | they'll overclock to 2x the base frequency.
           | 
           | [1] https://www.raspberrypi.com/news/raspberry-pi-pico-2-our-
           | new...
        
             | mrandish wrote:
              | TFA states extensive 300MHz OC with no special effort (and
             | he's been evaluating pre-release versions for a year).
             | 
             |  _" It overclocks insanely well. I've been running the
             | device at 300MHz in all of my projects with no issues at
             | all."_
             | 
             | Also
             | 
             |  _" Disclaimer: I was not paid or compensated for this
             | article in any way. I was not asked to write it. I did not
             | seek or obtain any approval from anyone to say anything I
             | said. My early access to the RP2350 was not conditional on
             | me saying something positive (or anything at all) about it
             | publicly."_
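              | 
              | For anyone wanting to try the 300MHz claim themselves,
              | the usual SDK incantation is something like this (just a
              | sketch - 300MHz is outside the rated spec, and whether
              | the RP2350 needs the core-voltage bump the RP2040
              | usually does is an assumption to verify):
              | 
              |   #include "pico/stdlib.h"
              |   #include "hardware/vreg.h"
              | 
              |   int main(void) {
              |       // Optionally raise the core voltage a notch
              |       // before overclocking (an RP2040 habit).
              |       vreg_set_voltage(VREG_VOLTAGE_1_15);
              |       sleep_ms(10);
              |       set_sys_clock_khz(300000, true);
              | 
              |       stdio_init_all();
              |       while (true) tight_loop_contents();
              |   }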
        
               | spacedcowboy wrote:
               | Thanks, I missed that.
        
           | 15155 wrote:
           | The STM32H7 and other M7 chips have caches - performance is
           | night and day between 2x300MHz smaller, cacheless cores and
           | chips with L1 caches (and things like TCM, etc.)
           | 
           | The SRAM in that H7 is running at commensurately-high speeds,
           | as well.
           | 
           | Comparing an overclocked 2xM33 to a non-overclocked M7 is
           | also probably a little inaccurate - that M7 will easily make
           | more than the rated speed (not nearly as much as the RP2040
           | M0+, though.)
        
           | mordae wrote:
           | It's 6 cycles for dadd/dsub, 16 for dmul, 51 for ddiv.
        
             | vardump wrote:
             | > 6 cycles for dadd/dsub
             | 
             | I guess it depends whether you store to X (or Y), normalize
             | & round (NRDD; is it really necessary after each addition?)
             | and load X back every time.
             | 
             | Both X and Y have 64 bits of mantissa, 14 bits of exponent
              | and 4 bits of flags, including sign. That's some
              | headroom compared to IEEE 754 fp64's 53-bit mantissa and
              | 11-bit exponent, so I'd assume normalization might not
              | be necessary after every step.
             | 
             | The addition (X = X + Y) itself presumably takes 2 cycles;
             | running coprocessor instructions ADD0 and ADD1. 1 cycle
             | more if normalization is always necessary. And for the
             | simplest real world case, 1 cycle more for loading Y.
             | 
             | Regardless, there might be some room for hand optimizing
             | tight fp64 loops.
             | 
             | Edit: This is based on my current understanding of the
             | available documentation. I might very well be wrong.
        
           | adrian_b wrote:
           | As other posters have mentioned, this has 2 Cortex-M33 cores
           | @ 150 MHz, not @ 300 MHz.
           | 
           | Cortex-M7 is in a different size class than Cortex-M33, it
           | has a speed about 50% greater at the same clock frequency and
           | it is also available at higher clock frequencies.
           | 
           | Cortex-M33 is the replacement for the older Cortex-M4 (while
           | Cortex-M23 is the replacement for Cortex-M0+ and Cortex-M85
           | is the modern replacement for Cortex-M7).
           | 
           | While for a long time the Cortex-M MCUs had been available in
           | 3 main sizes, Cortex-M0+, Cortex-M4 and Cortex-M7, for their
           | modern replacements there is an additional size, Cortex-M55,
           | which is intermediate between Cortex-M33 and Cortex-M85.
        
         | limpbizkitfan wrote:
          | ST is a zillion-dollar company that should be hiring the
          | talent capable of delivering products that match the features
          | in their sales pamphlets. Integration is tricky, but a
          | company with ST's deep pockets should be able to root-cause
          | or at least help troubleshoot an issue, not ask for a fix
          | like some nepotism hire.
        
           | HeyLaughingBoy wrote:
           | They should also be hiring people that can write clearly in
           | their datasheets, but here we are, so...
        
           | doe_eyes wrote:
           | I'm not an ST fanboy and they're not a vendor I use, but they
            | are _very_ popular in the 32-bit Cortex-M space, so they're
           | clearly doing something right. Meanwhile, companies like
           | Microchip that put effort into accessible documentation and
           | tooling are getting table scraps.
        
       | kaycebasques wrote:
       | Official news post: https://news.ycombinator.com/item?id=41192341
       | 
       | Official product page:
       | https://news.ycombinator.com/item?id=41192269
        
         | kjagiello wrote:
         | Datasheet:
         | https://datasheets.raspberrypi.com/rp2350/rp2350-datasheet.p...
        
         | dang wrote:
         | Thanks! Macroexpanded:
         | 
         |  _Raspberry Pi Pico 2, our new $5 microcontroller board, on
         | sale now_ - https://news.ycombinator.com/item?id=41192341 - Aug
         | 2024 (71 comments)
        
       | jonathrg wrote:
       | Can someone explain the benefit of having essentially 4 cores (2
       | ARM + 2 RISC-V) on the chip but only having 2 able to run
       | simultaneously? Does this take significantly less die space than
       | having all 4 available at all times?
        
         | coder543 wrote:
         | Coordinating access to the memory bus and peripherals is
         | probably not easy to do when the cores weren't ever designed to
         | work together. Doing so could require a power/performance
         | penalty at all times, even though most users are unlikely to
         | want to deal with two completely different architectures across
         | four cores on one microcontroller.
         | 
         | Having both architectures available is a cool touch. I believe
         | I criticized the original RP2040 for not being bold enough to
         | go RISC-V, but now they're offering users the choice. I'll be
         | very curious to see how the two cores compare... I suspect the
         | ARM cores will probably be noticeably better in this case.
        
           | swetland wrote:
           | They actually let you choose one Cortex-M33 and one RISC-V
           | RV32 as an option (probably not going to be a very common use
           | case) and support atomic instructions from both cores.
        
             | coder543 wrote:
             | All of the public mentions of this feature that I've seen
             | indicated it is an either/or scenario, except the datasheet
             | confirms what you're saying:
             | 
             | > The ARCHSEL register has one bit for each processor
             | socket, so it is possible to request mixed combinations of
             | Arm and RISC-V processors: either Arm core 0 and RISC-V
             | core 1, or RISC-V core 0 and Arm core 1. Practical
             | applications for this are limited, since this requires two
             | separate program images.
             | 
             | That is fascinating... so, likely what dmitrygr said about
             | the size of the crossbar sounds right to me:
             | https://news.ycombinator.com/item?id=41192580
        
               | moffkalast wrote:
               | Did Dr. Frankenstein design this SoC? Igor, fetch me the
               | cores!
        
               | ebenupton wrote:
               | It's aliiiiiive!
        
               | geerlingguy wrote:
               | It was also confirmed by Eben Upton in an interview in
               | The Register[1], and I believe Adafruit's livestream also
               | mentioned it.
               | 
               | [1]
               | https://www.theregister.com/2024/08/08/pi_pico_2_risc_v/
        
         | dmitrygr wrote:
          | Cores are high-bandwidth bus masters. Making a crossbar that
          | supports 5 high-bandwidth masters (4x core + DMA) is likely
          | harder, larger, and higher power than one that supports 3.
        
           | ebenupton wrote:
           | It's actually 10 masters (I+D for 4 cores + DMA read + DMA
           | write) versus 6 masters. Or you could pre-arbitrate each pair
           | of I and each pair of D ports. But even there the timing
           | impact is unpalatable.
        
             | dmitrygr wrote:
             | Which is even more impressive yet :)
        
         | blihp wrote:
         | Beyond the technical reasons for the limit, it provides for a
         | relatively painless way to begin to build out/for RISC-V[1]
         | without an uncomfortable transition. For those who just want a
         | better next iteration of the controller, they have it. For
         | those who build tools, want to A/B test the architectures, or
         | just do whatever with RISC-V, they have that too. All without
         | necessarily setting the expectation that both will continue to
         | coexist long term.
         | 
          | [1] While it's possible they are envisioning dual
          | architecture indefinitely, it's hard to imagine why this
          | would be desirable long term, esp. when one architecture can
          | be royalty-free and the other not, plus power efficiency,
          | paying for dark silicon, etc.
        
         | networked wrote:
         | I see a business decision here. Arm cores have licensing fees
         | attached to them. Arm is becoming more restrictive with
         | licensing and wants to capture more value [1]:
         | 
         | > The Financial Times has a report on Arm's "radical shake-up"
         | of its business model. The new plan is to raise prices across
         | the board and charge "several times more" than it currently
         | does for chip licenses. According to the report, Arm wants to
         | stop charging chip vendors to make Arm chips, and instead wants
         | to charge device makers--especially smartphone manufacturers--a
         | fee based on the overall price of the final product.
         | 
         | Even if the particular cores in the RP2350 aren't affected, the
         | general trend is unfavorable to Arm licensees. Raspberry Pi has
         | come up with a clever design that allows it to start
         | commoditizing its complement [2]: make the cores a commodity
         | that is open-source or available from any suitable RISC-V chip
         | designer instead of something you must go to Arm for. Raspberry
         | Pi can get its users accustomed to using the RISC-V cores--for
         | example, by eventually offering better specs and more features
         | on RISC-V than Arm. In the meantime, software that supports the
         | Raspberry Pi Pico will be ported to RISC-V with no disruption.
         | If Arm acts up and RISC-V support is good enough or when it
         | becomes clear users prefer RISC-V, Raspberry Pi can drop the
         | Arm cores.
         | 
         | [1] https://arstechnica.com/gadgets/2023/03/risc-y-business-
         | arm-...
         | 
         | [2] https://gwern.net/complement
        
         | tredre3 wrote:
          | Each Arm/RISC-V set likely shares cache and register space
          | (which takes up most of the die space by far), resulting in
          | being unable to use them both simultaneously.
        
           | formerly_proven wrote:
           | Considering that these are off-the-shelf Cortex-M designs I
           | doubt that Raspi was able or would be allowed to do that. I'd
           | expect most of the die to be the 512K SRAM, some of the
           | analog and power stuff and a lot of it just bond pads.
        
             | ebenupton wrote:
             | That's correct. The Arm and RISC-V cores are entirely
             | separate, sharing no logic.
        
       | mmmlinux wrote:
       | And still no USB C on the official devboard.
        
         | jsheard wrote:
         | There's plenty of alternatives right out of the gate, at least:
         | 
         | https://www.raspberrypi.com/for-industry/powered-by/product-...
         | 
         | Pimoroni has a maxed-out pin-compatible version with 16MB
         | flash, 8MB PSRAM, and USB-C:
         | 
         | https://shop.pimoroni.com/products/pimoroni-pico-plus-2
        
           | moffkalast wrote:
           | Unless the USB-C connector costs $7-10, these are beyond
           | ridiculously overpriced compared to the official dev board.
           | At least throw in an IMU or something if you plan to sell low
           | volumes at high prices jeez.
        
             | jsheard wrote:
             | The cheapest one I've seen so far is the XIAO RP2350, which
             | is $5, same as the official Pico board. I'm sure there will
             | be more cheap options once more Chinese manufacturers get
             | their hands on the chips, no-name USB-C RP2040 boards are
             | ridiculously cheap.
        
         | naikrovek wrote:
         | > And still no USB C on the official devboard.
         | 
         | Do you live in a universe where micro-USB cables are not
         | available, or something? There's gonna be something or other
         | that needs micro-USB for the next decade, so just buy a few and
         | move on. They're not expensive.
         | 
         | [later edit: I bet it has to do with backwards compatibility.
         | They don't want people to need to rework case designs to use
         | something that is meant as a drop-in replacement for the Pi
         | Pico 1.]
        
           | ewoodrich wrote:
           | Personally I have about three dozen USB-A to USB-C cables
           | lying around and the thought of actually spending money to
           | acquire extra Micro USB cables in 2024 is very unappealing.
           | 
           | I (deliberately) haven't bought a consumer electronic device
           | that still uses Micro USB in years so don't accumulate those
           | cables for free anymore like with USB-C.
           | 
            | Of course, ubiquitous USB-C dev boards/breakout boards
            | without 5.1 kΩ resistors for C-C power are their own
            | frustration... But I can tolerate that, having so many
            | extra USB-A chargers and cables. Trigger boards are great
            | because they necessarily
           | support PD without playing the AliExpress C-C lottery.
        
             | naikrovek wrote:
             | > I (deliberately) haven't bought a consumer electronic
             | device that still uses Micro USB in years so don't
             | accumulate those cables for free anymore like with USB-C.
             | 
             | I guess you're not gonna be buying a Pi Pico 2, then. So
             | why are you complaining about something you aren't going to
             | use?
        
               | ewoodrich wrote:
               | I think you misread what I wrote: _consumer electronic
               | device_
               | 
               | Dev boards or niche specialized hardware are about the
               | only thing I've willingly bought with Micro USB in 4+
               | years. As much as I try to avoid it given my preference
               | for USB-C, sometimes I don't have a good alternative
               | available.
               | 
               | > So why are you complaining about something you aren't
               | going to use?
               | 
               | Because it looks like a great upgrade to my RP2040-Zero
               | boards that I would like to buy but I really dislike the
               | choice of connector? What is wrong with that?
        
               | Dylan16807 wrote:
               | Even if you interpreted that sentence right, that's not a
               | reasonable rebuttal. If a feature stops someone from
               | buying a product, then it makes sense to complain about
               | the feature. Their non-purchase doesn't invalidate the
                | complaint. It's only when someone _isn't interested in
               | the category at all_ that complaints lose their value.
        
           | wolrah wrote:
           | > Do you live in a universe where micro-USB cables are not
           | available, or something? There's gonna be something or other
           | that needs micro-USB for the next decade, so just buy a few
           | and move on. They're not expensive.
           | 
           | I live in a universe where type C has been the standard
           | interface for devices for years, offering significant
           | advantages with no downsides other than a slightly higher
           | cost connector, and it's reasonable to be frustrated at
           | vendors releasing new devices using the old connector.
           | 
           | It's certainly not as bad as some vendors of networking
           | equipment who still to this day release new designs with
           | Mini-B connectors that are actually officially deprecated,
           | but it's not good nor worthy of defending in any way.
           | 
           | > I bet it has to do with backwards compatibility. They don't
           | want people to need to rework case designs to use something
           | that is meant as a drop-in replacement for the Pi Pico 1.
           | 
           | Your logic is likely accurate here, but that just moves the
           | stupid choice back a generation. It was equally dumb and
           | annoying to have Micro-B instead of C on a newly designed and
           | released device in 2021 as it is in 2024.
           | 
           | The type C connector was standardized in 2014 and became
           | standard on phones and widely utilized on laptops starting in
           | 2016.
           | 
           | IMO the only good reason to have a mini-B or micro-B
           | connector on a device is for physical compatibility with a
           | legacy design that existed prior to 2016. Compatibility with
           | a previous bad decision is not a good reason, fix your
           | mistakes.
           | 
           | Type A on hosts will still be a thing for a long time, and
           | full-size type B still makes sense for large devices that are
           | not often plugged/unplugged where the size is actually a
           | benefit, but the mini-B connector is deprecated and the
           | micro-B connector should be.
        
             | bigstrat2003 wrote:
             | Micro-B is fine. This is such an overblown non-issue I am
             | shocked that people are making a big deal of it.
        
               | crote wrote:
               | It's not a huge deal, but it's still a very strange
               | choice on a product released in 2024.
               | 
               | Pretty much everyone has a USB-C cable lying around on
               | their desk because they use it to charge their
               | smartphone. I probably have a Micro-B cable lying around
                | in a big box of cables _somewhere_, last used several
               | years ago. Even cheap Chinese garbage comes with USB-C
               | these days.
               | 
               | Sure, Micro-B is technically just fine, but why did
               | Raspberry Pi go out of their way to make their latest
               | product more cumbersome to use?
        
       | kvemkon wrote:
       | > 1 x USB 1.1 controller and PHY, with host and device support
       | 
        | Sure, after integrating USB 2.0 HS or 1Gb Ethernet the Pico 2
        | board would cost more than $5. So integrated high-speed
        | interfacing with a PC was not considered a nice-to-have option
        | (for a special chip flavor)?
        
         | solidninja wrote:
         | I think the target here is low-power peripherals rather than
         | speedy peripherals, and the price is very nice for that :)
        
           | kvemkon wrote:
            | Here [1] someone asked for the RP1 I/O chip to be made
            | available, but the RP2350 is actually what would fit the
            | purpose.
           | 
           | [1] https://www.raspberrypi.com/news/rp1-the-silicon-
           | controlling...
        
         | rasz wrote:
         | >USB 2.0 HS
         | 
         | 480 Mbps SERDES
         | 
         | > or 1Gb-Ethernet
         | 
         | 1.25 Gbps SERDES
        
           | kvemkon wrote:
            | The RP1 I/O chip on the RPi 5 has so many high-speed
            | interfaces. I've been thinking the RP2350 could be a smart
            | I/O chip for PCs/notebooks/network-attached computers
            | (with only one high-speed connection necessary).
        
           | namibj wrote:
            | GigE (1000BASE-T) is actually 4D-PAM5 at 125 MBaud per
            | pair.
        
             | kvemkon wrote:
             | 1.25 Gbps would be needed for low-pin-count SGMII interface
             | to a PHY chip.
        
       | tecleandor wrote:
       | Not off topic but a bit tangentially...
       | 
       | How difficult would be emulating an old SRAM chip with an RP2040
       | or an RP2350? It's an early 80s (or older) 2048 word, 200ns
       | access time CMOS SRAM that is used to save presets on an old
       | Casio synth. It's not a continuous memory read, it just reads
       | when loading the preset to memory.
       | 
       | I feel like PIO would be perfect for that.
        
         | dmitrygr wrote:
         | I did that, not just SRAM but also ROM, to fool a MC68EZ328
         | successfully. It works well. PIO + DMA does it well.
          | Specifically I replaced ROM & RAM in an old Palm Pilot with an
         | RP2040:
         | 
         | https://photos.app.goo.gl/KabVe5CrfckqnFEt7
         | 
         | https://photos.app.goo.gl/LGAkp6HoYAJc3Uft7
         | 
          | Edit: I have not yet updated the rePalm article, but much
          | about that is in the Palm Discord. https://discord.gg/qs8wQ4Bf
         | 
         | see #repalm-project channel
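          | 
          | For anyone curious, the general shape of the DMA plumbing is
          | roughly this - a very rough, untested sketch, not the actual
          | rePalm code. It assumes a PIO program (omitted here) that
          | latches each read strobe, prepends the buffer's base address
          | to the sampled address pins, pushes the resulting pointer,
          | and drives whatever arrives in its TX FIFO onto the data
          | pins:
          | 
          |   #include "hardware/pio.h"
          |   #include "hardware/dma.h"
          | 
          |   // Aligned so the PIO can form a pointer by shifting the
          |   // base in above the 11 address bits from the pins.
          |   static uint8_t
          |       sram_image[2048] __attribute__((aligned(2048)));
          | 
          |   void sram_emu_dma_init(PIO pio, uint sm) {
          |       int a = dma_channel_claim_unused_channel(true);
          |       int b = dma_channel_claim_unused_channel(true);
          | 
          |       // A: each pointer popped from the RX FIFO is
          |       // written into B's read-address trigger register.
          |       dma_channel_config ca =
          |           dma_channel_get_default_config(a);
          |       channel_config_set_read_increment(&ca, false);
          |       channel_config_set_dreq(&ca,
          |           pio_get_dreq(pio, sm, false));
          |       dma_channel_configure(a, &ca,
          |           &dma_hw->ch[b].al3_read_addr_trig,
          |           &pio->rxf[sm], 0x0FFFFFFF, true);
          | 
          |       // B: copy one byte from the image into the TX FIFO,
          |       // where the PIO drives it onto the data bus.
          |       dma_channel_config cb =
          |           dma_channel_get_default_config(b);
          |       channel_config_set_transfer_data_size(
          |           &cb, DMA_SIZE_8);
          |       channel_config_set_read_increment(&cb, false);
          |       channel_config_set_dreq(&cb,
          |           pio_get_dreq(pio, sm, true));
          |       dma_channel_configure(b, &cb, &pio->txf[sm],
          |           sram_image, 1, false);
          |   }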
        
           | tecleandor wrote:
           | Ooooh, that looks cool and the PCB seems simple (at least
           | from this side). Congrats!
           | 
           | Do you have anything published?
        
         | HeyLaughingBoy wrote:
         | If it's not an academic question and you have an actual need
         | for the SRAM, what's the p/n? I have some old parts stock and
         | may have what you need.
        
           | tecleandor wrote:
           | Oh! Thanks!
           | 
           | I wanted to do a clone or two of said cartridges, that use,
           | IIRC (I'm not in my workshop right now) a couple Hitachi
           | HM6116FP each.
           | 
           | I've also seen some clones from back in the day using a
           | CXK5864PN-15L, that's 8 kilowords, and getting 4 switchable
           | "memory banks" out of it...
        
             | HeyLaughingBoy wrote:
             | Thought I had more than this, but it's been literally
             | decades...
             | 
             | I found (1) HM6116, (4) HM65256's (1) HM6264 and wonder of
             | wonders, a Dallas battery-backed DS1220, although after 20+
             | years the battery is certainly dead. All in DIP packages of
             | course.
             | 
             | And a couple of 2114's with a 1980 date code! that I think
             | are DRAM's.
             | 
             | If any of this is useful to you, PM me an address and I'll
             | pop them in the mail.
        
         | bogantech wrote:
          | For that you could just use some FRAM with minimal effort
         | 
         | https://www.mouser.com/ProductDetail/877-FM16W08-SG
        
       | swetland wrote:
       | Lots of nice improvements here. The RISC-V RV32I option is nice
       | -- so many RV32 MCUs have absurdly tiny amounts of SRAM and very
       | limited peripherals. The Cortex M33s are a biiig upgrade from the
        | M0+s in the RP2040. Real atomic operations. An FPU. I'm excited.
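        | 
        | e.g. plain C11 atomics can now compile down to the M33's real
        | exclusive load/store instructions instead of the spinlock
        | fallback the M0+ needed (quick sketch, nothing RP2350-specific
        | in it):
        | 
        |   #include <stdatomic.h>
        | 
        |   static atomic_uint events;
        | 
        |   // Bump the counter from an IRQ or the other core.
        |   void count_event(void) {
        |       atomic_fetch_add_explicit(&events, 1,
        |                                 memory_order_relaxed);
        |   }
        | 
        |   // Drain the counter from the main loop.
        |   unsigned take_events(void) {
        |       return atomic_exchange(&events, 0);
        |   }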
        
         | guenthert wrote:
         | Many people seem excited about the FPU. Could you help me
          | understand what hardware floating point support is needed in
          | an MCU for? I remember DSPs using (awkward word-size)
          | fixed-point arithmetic.
        
       | fouronnes3 wrote:
       | Curious about the low-power and sleep mode improvements!
        
         | geerlingguy wrote:
         | Me too; I had a little trouble with MicroPython lightsleep and
         | deepsleep in the pre-release version I was testing.
         | 
         | I will re-test and try to get better sleep state in my code
         | either today or tomorrow!
        
       | bschwindHN wrote:
       | Alright, what's the max image resolution/framerate someone is
       | going to pump out with the HSTX peripheral?
        
         | spacedcowboy wrote:
         | Unfortunately the starter example [1] hasn't made it into the
         | public tree (yet ?)
         | 
         | [1] https://github.com/raspberrypi/pico-
         | examples/blob/master/dvi...
        
           | bschwindHN wrote:
           | I wonder what other uses people will find for it. It's one-
           | way data transfer; I wonder if it could be hooked up to a USB
           | 2.0 or USB 3.0 peripheral, or an Ethernet PHY, or something
           | else.
        
             | spacedcowboy wrote:
             | Pretty sure I'm going to link it up with an FPGA at some
             | point - as long as the data is unidirectional, this is a
             | promise of 2400 Mbit/sec - which for a $1 microcontroller
             | is _insane_. If it overclocks like the processor, you're up
             | to 4800 Mbit/sec... _stares into the distance_
             | 
             | I can use PIO in the other direction, but this has DDR, so
             | you'll never get the same performance. It's a real shame
             | they didn't make it bi-directional, but maybe the use-case
             | here is (as hinted by the fact it can do TMDS internally)
             | for DVI out.
             | 
             | If they had made it bidirectional, I could see networks of
             | these little microcontrollers transmitting/receiving at
             | gigabit rates... Taken together with PIO, XMOS would have
             | to sit up straight pretty quickly...
        
               | bschwindHN wrote:
               | Right? Bidirectional capability at those speeds would be
               | incredible for the price of this chip.
               | 
               | Either way, still looking forward to seeing what people cook
               | up with it, and hopefully I'll find a use for it as well.
               | Maybe combine it with some cheap 1920x1080 portable
               | monitors to have some beautiful dashboards around the
               | house or something...
        
               | vardump wrote:
               | 1920x1080 30 Hz DVI would require running RP2350 at least
               | at 311 MHz ((1920 * 1080 * 30Hz * 10) / 2). Probably a
               | bit more to account for minimal horizontal and vertical
               | blanking etc. Multiplier 10 comes from 8b10b encoding.
               | 
               | To fit in 520 kB of RAM, the framebuffer would need to be
               | just 1 bpp, 2 colors (1920 * 1080 / 8 = 259,200 bytes).
               | 
               | From PSRAM I guess you could achieve 4 bpp, 16 colors.
               | 24-bit RGB full color would be achievable at 6 Hz refresh
               | rate.
               | 
               | I guess you might be able to store framebuffer as YUV
               | 4:2:0 (=12 bits per pixel) and achieve 12 Hz refresh
               | rate? The CPU might be just fast enough to compute
               | YUV->RGB in real time. (At 1920x1080@12Hz 12 clock cycles
               | per pixel per core @300 MHz.)
               | 
               | (Not sure whether the displays can accept very low
               | refresh rates.)
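               | 
               | If anyone wants to play with the numbers, here is the
               | back-of-the-envelope math as a tiny C program (same
               | assumptions as above, minimum blanking ignored):
               | 
               |   #include <stdio.h>
               | 
               |   // TMDS is 10 bits per pixel per lane and HSTX shifts
               |   // out 2 bits per clock (DDR), hence the *10 / 2.
               |   int main(void) {
               |       double w = 1920, h = 1080, hz = 30;
               |       double pix_clk = w * h * hz;          // ~62.2 MHz
               |       double sys_clk = pix_clk * 10 / 2;    // ~311 MHz
               |       double fb_1bpp = w * h / 8;           // 259200 B
               |       double fb_yuv420 = w * h * 12 / 8;    // ~3 MB
               |       printf("clock: %.1f MHz\n", sys_clk / 1e6);
               |       printf("1bpp fb: %.0f bytes\n", fb_1bpp);
               |       printf("YUV420 fb: %.0f KiB\n", fb_yuv420 / 1024);
               |       return 0;
               |   }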
        
               | bschwindHN wrote:
               | Hmmm yeah, I haven't done the math and was maybe a bit
               | too ambitious/optimistic.
        
       | SethTro wrote:
       | This has 2 of the 3 features that were keeping me on ESP32 (float
       | support, faster clock), plus more PIO. For projects that need
       | WiFi and can tolerate the random interrupts, I'll stick with
       | ESP32.
        
       | jononor wrote:
       | Aha, the 3 is for M33, not Cortex M3 (as some speculated based on
       | the name). That makes a lot more sense! Integrated FPU is a big
       | improvement over the RP2040, and M33 is a modern but proven core.
        
       | gchadwick wrote:
       | This looks awesome, a really great step up from the RP2040. I'm a
       | big fan of the original and I'm excited to see all the
       | improvements and upgrades.
       | 
       | I imagine with the new secure boot functionality they've got a
       | huge new range of customers to tempt.
       | 
       | Also exciting to see them dip their toe into the open silicon
       | waters with the Hazard3 RISC-V core:
       | https://github.com/Wren6991/Hazard3.
       | 
       | Of course, if they'd used Ibex (https://github.com/lowrisc/ibex),
       | the RISC-V core we develop and maintain at lowRISC, that would
       | have been even better, but you can't have everything ;)
        
       | rowanG077 wrote:
       | Would the PIO now support sane Ethernet using RMII, for example?
        
         | rscott2049 wrote:
         | I'm assuming you've looked at the pico-rmii-ethernet library?
         | If so, I feel your pain - I've been fixing issues, and am about
         | halfway done. (This is for the DECstation2040 project,
         | available on GitHub). Look for a release in late Aug/early Sep.
         | (Maybe with actual LANCE code? Dmitry??) The RP2350 will make
         | RMII slightly easier - the endless DMA allows elimination of
         | the DMA reload channel(s).
        
           | rowanG077 wrote:
           | I looked at it and dismissed it as too hacky for production.
           | I don't remember the real reason why. I would have to look
           | through my notes. The main question is whether the RP2350
           | will change that. As in, is it actually possible to do it
           | bug-free, without weird hacks?
        
             | rscott2049 wrote:
             | Agreed. I re-wrote the Rx PIO routine to do a proper
             | interrupt at EOF, and added a DMA driven ring buffer, which
             | eliminated a lot of the hackiness...
        
       | nimish wrote:
       | Gross, the dev board uses micro-USB. It's 2024! Otherwise amazing
       | work. Exactly what's needed to compete with the existing giants.
        
         | sowbug wrote:
         | Perhaps the unfortunate choice of micro USB is to discourage
         | real consumer products from being built with the dev board.
        
           | user_7832 wrote:
           | I wonder if it is more about simply shaving a few cents off.
           | Full USB-C protocol implementation may be much more
           | difficult.
        
             | hypercube33 wrote:
             | USB-C doesn't require anything special USB-wise, as it's
             | decoupled from the versioned standard. It just has more
             | pins and works with all modern cables. Ideally the cables
             | won't wear out like Mini and Micro and get loosey goosey in
             | the ports.
        
               | ewoodrich wrote:
               | Yep, a USB-C connector is more or less a drop in
               | replacement for MicroUSB if you don't need USB3 or USB-
               | PD. With one aggravating exception: it requires adding
               | two 5.1 kΩ pulldown resistors to be compatible with C-C
               | cables, signaling to a charger that the sink is a legacy
               | non-PD device requesting 5V.
               | 
               | Which is apparently an impossible ask for manufacturers
               | of dev boards or cheap devices in general. It's
               | _slightly_ more understandable for a tried and true dev
               | board that's just been connector swapped to USB-C (and
               | I'll happily take it over dealing with Micro) but
               | inexcusable for a new design.
               | 
               | My hope is Apple going USB-C only on all their charging
               | bricks and now even C-C cables for the iPhone will
               | eventually force Chinese OEMs to build standard compliant
               | designs. Or deal with a 50% Amazon return rate for
               | "broken no power won't charge".
        
               | Brusco_RF wrote:
               | As someone who just picked micro USB over USB-C for a
               | development card, there is a significant price and
               | footprint size difference between the two.
        
               | Findecanor wrote:
               | For a device, USB-C requires two resistors that older USB
               | ports don't.
               | 
               | Declaring yourself as a host/device is also a bit
               | different: USB-C hardware can switch. Micro USB has an
               | "On-the-go" (OTG) indicator pin to indicate host/device.
               | 
               | The USB PHY in RP2040 and the RP2350 is actually capable
               | of being a USB host but the Micro USB port's OTG pin is
               | not connected to anything.
        
               | rvense wrote:
               | Hm, I've used mine as a USB host with an adapter? Not
               | sure of the details, I suppose OTG is the online/runtime
               | switching and I was just running as fixed host?
        
           | Findecanor wrote:
           | For the _microcontroller_ however, the use in commercial
           | products is encouraged.
           | 
           | There are one-time programmable registers for Vendor,
           | Product, Device and Language IDs that the bootloader would
           | use instead of the default. It would be interesting to see if
           | those are fused on the Pico 2.
        
           | Teknoman117 wrote:
           | I would assume it's in order to maintain mechanical
           | compatibility with the previous Pico.
        
         | janice1999 wrote:
         | It saves cost and none of the features of USB-C (speed, power
         | delivery etc) are supported. Makes sense.
        
           | str3wer wrote:
           | the price difference from micro USB to USB-C is less than 2 cents
        
             | refulgentis wrote:
             | You would be surprised at the amount of effort and success
             | $0.01 represents at BigCo. Even when projected sales are in
             | the 6-figure range.
        
             | rldjbpin wrote:
             | Devil's advocate: cables are a different story for the
             | average user, not to forget the vast range of cables
             | already out there.
             | 
             | Also, "proper" USB-C support is another can of worms, and
             | maybe sticking to an older standard gives you freedom from
             | all that.
        
               | g15jv2dp wrote:
               | You're confusing USB C and USB 3.1+. USB C is just the
               | physical spec. You can design a cheap device that will
               | only support USB 2 if you just connect ground, Vbus, D+
               | and D- and _gasp_ add two resistors. It will work just as
               | well as the micro-usb plug.
        
               | kmeisthax wrote:
               | A USB-C port that only supports USB2 data and power only
               | needs a few resistors across some pins to trigger legacy
               | modes and disable high current/voltage operation. All the
               | extra bits are the things that jack up the cost.
               | 
               | USB3 and altmodes require extra signal lines and
               | tolerances in the cable.
               | 
               | High-voltage/current requires PD negotiation (over the CC
               | pins AFAIK)
               | 
               | Data and power role swaps require muxes and dual-role
               | controllers.
               | 
               | That's all the stuff that makes USB-C a pain in the ass,
               | and it's all the sort of thing RPi Picos don't support.
        
           | throwaway81523 wrote:
           | How about connections not becoming flaky after you've plugged
           | in the cable a few times. Micro USB was horribly unreliable.
           | USB-C isn't great either, but it's an improvement. Maybe they
           | will get it right some day.
        
             | guax wrote:
             | I always hear that, but I've never had a micro USB port
             | fully fail on me. My phone's USB-C ports, though, are lint
             | magnets that get super loose and refuse to work. When that
             | happened with micro it was usually the cable tabs being a
             | bit worn, but the cable always worked.
        
         | hoherd wrote:
         | FWIW the Pimoroni Tiny 2040 and Tiny 2350 use usb-c, but as
         | mentioned by other commenters, the cost for these usb-c boards
         | is higher.
         | 
         | I love having usb-c on all my modern products, but with so many
         | micro-usb cords sitting around, I don't mind that the official
         | Pico and Pico 2 are micro-usb. At least there are options for
         | whichever port you prefer for the project you're using it in.
        
           | magicalhippo wrote:
           | The Pico Plus 2 [1] has USB-C, and seems quite reasonably
           | priced to me given you get 16MB of Flash and 8MB PSRAM.
           | 
           | [1]: https://shop.pimoroni.com/products/pimoroni-pico-plus-2
        
         | nine_k wrote:
         | USB-C is way more complicated, even if you're not trying to
         | push 4K video or 100W power through it. The interface chip
         | ought to be more complex, and thus likely more expensive.
         | 
         | You can still find a number of cheap gadgets with micro-USB on
         | Aliexpress. Likely there's some demand, so yes, you can build a
         | consumer product directly on the dev board, depending on your
         | customer base.
        
           | nimish wrote:
           | Chinese boards are both cheaper and have USB Type-C
           | implemented correctly and in spec, so that's no real excuse
           | for Raspberry Pi.
        
           | 15155 wrote:
           | How are they "way more complicated?" You have to add two
           | resistors and short another pair of DP/DM lines?
        
             | nine_k wrote:
             | Yes, indeed, I've checked, and apparently you don't need
             | anything beyond this if you don't want super speed or power
             | delivery (past 5V 3A).
             | 
             | I did not realize how many pins in a USB-C socket are
             | duplicated to make this possible. (For advanced features,
             | you apparently still need to consider the orientation of
             | the inserted cable.)
        
           | Ductapemaster wrote:
           | You can use a USB C connector with standard USB, no interface
           | chip required. It's simply a connector form-factor change.
        
       | ChrisArchitect wrote:
       | Related:
       | 
       |  _Raspberry Pi Pico 2, our new $5 microcontroller board, on sale
       | now_
       | 
       | https://news.ycombinator.com/item?id=41192341
        
       | weinzierl wrote:
       | This is fantastic news. Is there information on power
       | consumption? Power draw unfortunately rules out a good deal of
       | use cases for the RP2040, so any improvement here would be
       | welcome. Or maybe the RP chips are just not made for ultra low
       | power?
        
         | ebenupton wrote:
         | Significant improvements to flat-out power (switcher vs LDO)
         | and to idle power (low quiescent current LDO for retention).
         | Still not a coin-cell device, but heading in the right
         | direction.
        
       | TaylorAlexander wrote:
       | This is very exciting! For the last several years I have been
       | developing a brushless motor driver based on the RP2040 [1]. The
       | driver module can handle up to 53 volts at 30A continuous, 50A
       | peak. I broke the driver out to a separate module recently which
       | is helpful for our farm robot and is also important for driver
       | testing as we improve the design. However, this rev seems pretty
       | solid, so I might build a low-cost single-board integrated motor
       | driver with the RP2350 soon! With the RP2040 the loop rate was
       | 8 kHz, which is totally fine for big farm robot drive motors, but
       | some high-performance drivers with floating point run a 50 kHz
       | loop rate.
       | 
       | My board runs SimpleFOC, and people on the forum have been
       | talking about building a flagship design, but they need support
       | for sensorless control as well as floating point, so if I use the
       | new larger pinout variant of the RP2350 with 8 ADC pins, we can
       | measure three current signals and three bridge voltages to make a
       | nice sensorless driver! It will be a few months before I can have
       | a design ready, but follow the git repo or my twitter profile [2]
       | if you would like to stay up to date!
       | 
       | [1] https://github.com/tlalexander/rp2040-motor-controller
       | 
       | [2] https://twitter.com/TLAlexander
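       | 
       | For a sense of why the FPU matters here: the inner FOC current
       | loop is basically a Clarke/Park transform plus two PI updates
       | per cycle, all floating point, and at a 50 kHz loop rate it has
       | to finish in 20 us. A stripped-down illustration of one
       | iteration (not my actual firmware, just a sketch of the math):
       | 
       |   #include <math.h>
       | 
       |   typedef struct { float kp, ki, integral; } pi_ctrl_t;
       | 
       |   static inline float pi_update(pi_ctrl_t *c, float err, float dt) {
       |       c->integral += err * dt;
       |       return c->kp * err + c->ki * c->integral;
       |   }
       | 
       |   // One FOC current-loop step: transform two measured phase
       |   // currents into the rotor frame, run the d/q PI controllers,
       |   // and return the voltage commands. Dozens of float ops plus
       |   // sin/cos -- slow in software FP, cheap with a hardware FPU.
       |   void foc_step(float ia, float ib, float angle, float dt,
       |                 pi_ctrl_t *pid_d, pi_ctrl_t *pid_q,
       |                 float id_ref, float iq_ref, float *vd, float *vq) {
       |       float ialpha = ia;                             // Clarke
       |       float ibeta = (ia + 2.0f * ib) * 0.57735027f;  // 1/sqrt(3)
       |       float s = sinf(angle), c = cosf(angle);
       |       float id =  c * ialpha + s * ibeta;            // Park
       |       float iq = -s * ialpha + c * ibeta;
       |       *vd = pi_update(pid_d, id_ref - id, dt);
       |       *vq = pi_update(pid_q, iq_ref - iq, dt);
       |   }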
        
         | sgu999 wrote:
         | > for our farm robot
         | 
         | That peaked my interest, here's the video for those who want to
         | save a few clicks: https://www.youtube.com/watch?v=fFhTPHlPAAk
         | 
         | I absolutely love that they use bike parts for the feet and
         | wheels.
        
           | HeyLaughingBoy wrote:
           | I have given some thought to a two-wheeled electric tractor
           | for dealing with mud -- horse paddocks turn into basically a
           | 1-foot deep slurry after heavy rain and it can be easier to
           | deal with something small that sinks through the mud, down to
           | solid ground, than something using large flotation tires. An
           | additional problem with large tires is that they tend to
           | throw mud around, making everyone nearby even dirtier.
           | 
           | I haven't actually built anything (been paying attention to
           | Taylor's work, though), but I came to the same conclusion
           | that bike wheels & tires would probably be a good choice. It
           | also doesn't hurt that we have many discarded kids' bikes all
           | over the place.
        
             | littlestymaar wrote:
             | Your description fits what I've seen for rice farming, whose
             | machines usually use bike-like tires.
        
               | vintagedave wrote:
               | I'm curious about that. I've seen rice paddies plowed in
               | Vietnam and the tractors used wide paddle-like wheels. I
               | saw two varieties: one with what looked like more normal
               | wheels but much wider, and one which was of metal with
               | fins, very much akin to a paddle steamer, though still
               | with some kind of flat surface that must have distributed
               | weight.
               | 
               | Would they be more effective with thin wheels? Both
               | humans and cattle seem to sink in a few inches and stop;
               | I don't know what's under the layer of mud and what makes
               | up a rice paddy.
        
               | littlestymaar wrote:
               | What I saw was in Taiwan, but I guess it must depend on
               | the depth of the mud and the nature of what's below.
        
           | tuatoru wrote:
           | * piqued
        
             | GeorgeTirebiter wrote:
             | yes, piqued. English, so weird! ;-)
             | 
             | (Although, interest peaking is possible!)
        
               | speed_spread wrote:
               | > English, so weird
               | 
               | Borrowed from just-as-weird French "piquer" - to stab or
               | jab.
        
               | teleforce wrote:
               | As are more than 30% of English words [1]:
               | 
               | [1] Is English just badly pronounced French [video]:
               | 
               | https://news.ycombinator.com/item?id=40495393
        
               | funnybeam wrote:
               | No, French is badly pronounced French - the English
               | (Norman) versions are often closer to the original
               | pronunciation
        
               | inanutshellus wrote:
               | All this reminds me of the now-famous quote about English
               | "borrowing" words...
               | 
               | > The problem with defending the purity of the English
               | language is that English is about as pure as a cribhouse
               | whore. We don't just borrow words; on occasion, English
               | has pursued other languages down alleyways to beat them
               | unconscious and rifle their pockets for new vocabulary.
               | 
               | (quote swiped from
               | https://en.wikiquote.org/wiki/James_Nicoll)
        
               | bee_rider wrote:
               | It is kind of funny that both of the incorrect versions,
               | peaked or peeked, sort of make more sense just based on
               | the definitions of the individual words. "Peaked my
               | interest" in particular could be interpreted as "reached
               | the top of my interest."
               | 
               | Way better than stabbing my interest, in a French fashion
               | or otherwise.
        
               | jhugo wrote:
               | Right, but that meaning isn't quite right. To pique your
               | interest is to arouse it, leaving open the possibility
               | that you become even more interested, a possibility which
               | peaking of your interest does not leave open.
        
               | digging wrote:
               | However, in the case where someone means "This interested
               | me so much that I stopped what I was doing and looked up
               | more information," peaked is almost _more_ correct,
               | depending on how one defines "interest" in this context
               | (e.g. "capacity for interest"? probably no; "current
               | attention"? probably yes).
        
               | littlestymaar wrote:
               | > Borrowed from just-as-weird French "piquer" - to stab
               | or jab.
               | 
               | Literally _«piquer»_ means "to sting" or "to prick" more
               | than stab or jab; it's never used to describe inter-human
               | aggression.
               | 
               | And _piquer_ is colloquially used to mean "to steal" (and
               | it's probably the most common way of using it in French
               | after describing mosquito bites).
               | 
               | Edit: and I forgot to mention that we already use it for
               | curiosity; in fact the sentence "it piqued my curiosity"
               | was taken directly from French _«ça a piqué ma
               | curiosité»_.
        
         | qdot76367 wrote:
         | Ah, it's good to see you continuing your work with types of
         | robots that start with f.
        
           | TaylorAlexander wrote:
           | Hah, that's right. I did get some parts to try to update the
           | other one you are referring to, but given all my projects it
           | has not made it near the top of the queue yet.
        
         | roshankhan28 wrote:
         | I am not an engineer type of person, but to even think that
         | someone is trying to create a motor is really impressive. When
         | I was a kid, I used to break my toy cars and pull the motors
         | out of them, and felt like I really did something. Good ol'
         | days.
        
           | throwaway81523 wrote:
           | The motor controller is impressive, but it sounds like a
           | motor controller (as it says), rather than a motor. That is,
           | it's not mechanical, it's electrical, it sends inputs to the
           | motor telling it when to turn the individual magnets on and
           | off. That is a nontrivial challenge since it has to monitor
           | the motor speeds under varying loads and send pulses at
           | exactly the right time, but it's software and electronics,
           | not machinery.
        
         | Rinzler89 wrote:
         | _> I have been developing a brushless motor driver based on the
         | RP2040_
         | 
         | Can I ask why? There are dedicated MCUs for BLDC motor control
         | out there that have the peripherals to get the best and easiest
         | sensored/sensorless BLDC motor control, plus the supporting
         | application notes and code samples. The RP2040 is not equipped
         | to be good at this task.
        
           | TaylorAlexander wrote:
           | > dedicated MCU for BLDC motor control
           | 
           | During the chip shortage, specialized chips like this were
           | very hard to find. Meanwhile the RP2040 was the highest
           | stocked MCU at Digi-Key and most other places that carried it.
           | The farm robot drive motors don't need high speed control
           | loops or anything. We just needed a low cost flexible system
           | we could have fabbed at JLCPCB. The RP2040 also has very nice
           | documentation and in general is just very lovely to work
           | with.
           | 
           | Also SimpleFOC was already ported to the RP2040, so we had
           | example code etc too. Honestly the CPU was the easy part. As
           | we expected, getting a solid mosfet bridge design was the
           | challenging part.
        
         | technofiend wrote:
         | Taylor, wow! I think you're the only person I've actually seen
         | implement WAAS to boost GPS precision. So cool!
        
       | brcmthrowaway wrote:
       | Why would I pick this over esp32 if I need to get shit done
        
       | ryukoposting wrote:
       | I can't imagine someone using an RP2040 in a real product, but
       | the RP2350 fixes enough of my complaints that I'd be really
       | excited to give it a shot.
       | 
       | There's a lot going for the 2040, don't get me wrong. TBMAN is a
       | really cool concept. It overclocks like crazy. PIO is truly
       | innovative, and it's super valuable for boatloads of companies
       | looking to replace their 8051s/whatever with a daughterboard-
       | adapted ARM core.
       | 
       | But, for every cool thing about the RP2040, there was a bad
       | thing. DSP-level clock speeds but no FPU, and no hardware integer
       | division. A USB DFU function embedded in boot ROM is flatly
       | undesirable in an MCU with no memory protection. PIO support is
       | extremely limited in third-party SDKs like Zephyr, which puts a
       | low ceiling on its usefulness in large-scale projects.
       | 
       | The RP2350 fixes nearly all of my complaints, and that's really
       | exciting.
       | 
       | PIO is a really cool concept, but relying on it to implement
       | garden-variety peripherals like CAN or SDMMC immediately puts
       | RP2350 at a disadvantage. The flexibility is very cool, but if I
       | need to get a product up and running, the last thing I want to do
       | is fiddle around with a special-purpose assembly language. My
       | hope is that they'll eventually provide a library of ready-made
       | "soft peripherals" for common things like SD/MMC, MII, Bluetooth
       | HCI, etc. That would make integration into Zephyr (and friends)
       | easier, and it would massively expand the potential use cases for
       | the chip.
        
         | petra wrote:
         | For high-volume products, given the low cost of this chip, it
         | would make sense to bother with the PIO or its open-source
         | libraries.
        
         | TaylorAlexander wrote:
         | > My hope is that they'll eventually provide a library of
         | ready-made "soft peripherals"
         | 
         | Perhaps they could be more ready-made, but there are loads of
         | official PIO examples that are easy to get started with.
         | 
         | https://github.com/raspberrypi/pico-examples/tree/master/pio
        
           | ryukoposting wrote:
           | These examples are cute, but this isn't a comprehensive
           | collection. Not even close.
           | 
           | Given that PIO's most compelling use case is replacing legacy
           | MCUs, I find it disappointing that they haven't provided PIO
           | boilerplate for the peripherals that keep those archaic
           | architectures relevant. Namely: Ethernet MII and CANbus.
           | 
           | Also, if RP2xxx is ever going to play ball in the wireless
           | space, then they need an out-of-box Bluetooth HCI
           | implementation, and it needs sample code, and integration
           | into Zephyr.
           | 
           | I speak as someone living in this industry: the only reason
           | Nordic has such a firm grip on BLE product dev is because
           | they're the only company providing a bullshit-free Bluetooth
           | stack out of the box. Everything else about nRF sucks. If I
           | could strap a CYW4343 to an RP2350 with some example code as
           | easily as I can get a BT stack up and running on an nRF52840,
           | I'd dump Nordic overnight.
        
             | TaylorAlexander wrote:
             | Well open source CAN and MII implementations do exist.
             | Perhaps you can help provide a pull request to the official
             | repo that checks in appropriate versions of that code, or
             | file an issue requesting them to do it.
             | 
             | https://github.com/KevinOConnor/can2040
             | 
             | https://github.com/sandeepmistry/pico-rmii-ethernet
             | 
             | My biggest issue with their wireless implementation is that
             | I get my boards made at JLCPCB and Raspberry Pi chose a
             | specialty wireless chip for the Pico W which is not widely
             | available, and is not available at JLCPCB.
        
             | bboygravity wrote:
             | Just feed the boilerplate templates to Claude and ask it to
             | "write a CANbus driver, use boiler plate as example" and
             | done?
        
               | TaylorAlexander wrote:
               | I have never had even the slightest luck getting any of
               | the AI services to generate something as specialized as
               | embedded system drivers.
        
               | defrost wrote:
               | I can't even get them to make me a sandwich :(
        
             | vardump wrote:
             | Just pick whatever fits best in your application. No uC is
             | going to solve everything for everyone.
             | 
             | RP2350 is about the best I can think of for interfacing
             | with legacy protocols. Well, short of using FPGAs anyways.
        
             | pkaye wrote:
             | How much do those Nordic controllers cost? Are they as
             | affordable as the RP2350?
        
           | crote wrote:
           | I feel like the PIO is just _slightly_ too limited for that.
           | You can already do some absolute magic with it, but it's
           | quite easy to run into scenarios where it becomes really
           | awkward to use due to the limited instruction count, lack of
           | memory, and absence of a direct clock input.
           | 
           | Sure, you _can_ work around it, but that often means making
           | significant sacrifices. Good enough for some hacking, not
           | quite there yet to fully replace hard peripherals.
        
             | vardump wrote:
             | Any concrete examples?
             | 
             | PIO is surprisingly flexible, even more so in RP2350.
        
               | crote wrote:
               | You run into issues if you try to implement something
               | like RMII, which requires an incoming 50MHz clock.
               | 
               | There's an implementation out there which feeds the clock
               | to a GPIO clock input - but because it can't feed the PLL
               | from it and the PIO is driven from the system clock that
               | means your entire chip runs at 50MHz. This has some nasty
               | implications, such as being unable to transmit at 100meg
               | and having to do a lot of postprocessing.
               | 
               | There's another implementation which oversamples the
               | signal instead. This requires overclocking the Pico to
               | 250MHz. That's nearly double the design speed, and close
               | to some peripherals no longer working.
               | 
               | A third implementation feeds the 50MHz clock into the XIN
               | input, allowing the PLL to generate the right clock. This
               | works, except that you've now completely broken the
               | bootloader as it assumes a 12MHz clock when setting up
               | USB. It's also not complete, as the 10meg half duplex
               | mode is broken due to there not being enough space for
               | the necessary PIO instructions.
        
               | TaylorAlexander wrote:
               | Just to clarify, and it sounds like the answer is yes,
               | this is a problem even with an external 50MHz clock
               | signal?
        
               | tomooot wrote:
               | As far as I understood the explanation, the incoming
               | ("external") 50Mhz clock signal is a core requirement of
               | the spec: all of those workarounds are just what is
               | required to meet that spec, and be able to TX/RX using
               | the protocol at all.
        
               | rscott2049 wrote:
               | Almost correct - the third implementation does generate
               | the clock, but it isn't necessary to drive the clock
               | directly from the system clock, as there are m/n clock
               | dividers available. I use a 300 MHz system clock, and
               | divide down to 50 MHz which works well. (I've also
               | addressed a few other shortcomings of this library, but
               | am not done yet...) Haven't looked at the 10 MHz half
               | duplex mode, though.
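               | 
               | For reference, generating that 50 MHz reference clock
               | with the SDK is just a couple of calls. Something like
               | this sketch (GPIO21 is one of the GPOUT-capable pins;
               | divider and overclock values as described above):
               | 
               |   #include "pico/stdlib.h"
               |   #include "hardware/clocks.h"
               | 
               |   int main(void) {
               |       // 300 MHz system clock (may also need a core
               |       // voltage bump via vreg_set_voltage), divided
               |       // by 6 onto a clock-output pin -> 50 MHz REF_CLK.
               |       set_sys_clock_khz(300000, true);
               |       clock_gpio_init(21,
               |           CLOCKS_CLK_GPOUT0_CTRL_AUXSRC_VALUE_CLK_SYS, 6);
               |       while (true) tight_loop_contents();
               |   }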
        
         | my123 wrote:
         | > no hardware integer division
         | 
         | It did have it, but as an out-of-ISA extension
        
           | GeorgeTirebiter wrote:
           | Not only that, single- and double-precision FP were provided
           | as optimized subroutines. I was never hampered by inadequate FP
           | performance for simple control tasks.
        
         | wrs wrote:
         | I haven't dug into the RP2xxx but I presumed there would be a
         | library of PIO implementations of the standard protocols from
         | RP themselves. There really isn't?
         | 
         | Edit: I see, there are "examples". I'd rather have those be
         | first-class supported things.
        
         | robomartin wrote:
         | For my work, the lack of flash memory integration on the 2040
         | is a deal breaker. You cannot secure your code. Not sure that
         | has changed with the new device.
        
           | ebenupton wrote:
           | It has: you can encrypt your code, store a decryption key in
           | OTP, and decrypt into RAM. Or if your code is small and
           | unchanging enough, store it directly in OTP.
        
             | XMPPwocky wrote:
             | What stops an attacker from uploading their own firmware
             | that dumps out everything in OTP?
        
               | ebenupton wrote:
               | Signed boot. Unless someone at DEF CON wins our $10k
               | bounty of course.
        
               | phire wrote:
               | Do you have any protection against power/clock glitching
               | attacks?
        
               | geerlingguy wrote:
               | I was reading in I believe the Register article that yes,
               | that's one of the protections they've tested... will be
               | interesting to see if anyone can break it this month!
        
               | ebenupton wrote:
               | Yes, several approaches. More here:
               | https://x.com/ghidraninja/status/1821570157933912462
        
               | colejohnson66 wrote:
               | Unrolled for those without accounts:
               | https://threadreaderapp.com/thread/1821570157933912462.html
        
             | robomartin wrote:
             | Nice! Thanks for the direct communication BTW.
             | 
             | I guess you are very serious about competing with
             | industrial MCU's.
             | 
             | We had to use a 2040 shortly after it came out because it
             | was impossible to get STM32's. Our customer accepted the
             | compromise provided we replaced all the boards (nearly
             | 1000) with STM32 boards as soon as the supply chain
             | normalized.
             | 
             | I hope to also learn that you now have proper support for
             | development under Windows. Back then your support engineers
             | were somewhat hostile towards Windows-based development
             | (just learn Linux, etc.). The problem I don't think they
             | understood was that it wasn't a case of not knowing Linux
             | (using Unix before Linux existed). A product isn't just the
             | code inside a small embedded MCU. The other elements that
             | comprise the full product design are just as important, if
             | not more. Because of this and other reasons, it can make
             | sense to unify development under a single platform. I can't
             | store and maintain VM's for 10 years because one of the 200
             | chips in the design does not have good support for Windows,
             | where all the other tools live.
             | 
             | Anyhow, I explained this to your engineers a few years ago.
             | Not sure they understood.
             | 
             | I have a project that I could fit these new chips into, so
             | long as we don't have to turn our workflow upside down to
             | do it.
             | 
             | Thanks again.
        
               | ebenupton wrote:
               | It's a fair comment. Give our VSCode extension a go: the
               | aspiration is to provide uniform developer experience
               | across Linux, Windows, and MacOS.
        
               | robomartin wrote:
               | I will when I get a break. I'll bring-up our old RP2040
               | project and see what has changed.
               | 
               | I remember we had to use either VSC or PyCharm (can't
               | remember which) in conjunction with Thonny to get a
               | workable process. Again, it has been a few years and we
               | switched the product to STM32, forgive me if I don't
               | recall details. I think the issue was that debug
               | communications did not work unless we used Thonny (which
               | nobody was interested in touching for anything other than
               | a downloader).
               | 
               | BTW, that project used MicroPython. That did not go very
               | well. We had to replace portions of the code with ARM
               | assembler for performance reasons; we simply could not
               | get efficient communications with MicroPython.
               | 
               | Thanks again. Very much a fan. I mentored our local FRC
               | robotics high school team for a few years. Lots of
               | learning by the kids using your products. Incredibly
               | valuable.
        
             | ryukoposting wrote:
             | You can certainly do that, sure, but any Cortex-M MCU can
             | do that, and plenty of others have hardware AES
             | acceleration that would make the process much less asinine.
             | 
             | Also, 520K of RAM wouldn't be enough to fit the whole
             | application + working memory for any ARM embedded firmware
             | I've worked on in the last 5 years.
        
               | TickleSteve wrote:
               | 520K RAM is _huge_ for most typical embedded apps. Most
               | micros are typically in the 48K-128K SRAM range.
        
               | ryukoposting wrote:
               | Define "typical."
               | 
               | To my recollection, every piece of Cortex-M firmware I've
               | worked on professionally in the last 5 years has had at
               | least 300K in .text on debug builds, with some going as
               | high as 800K. I wouldn't call anything I've worked on in
               | that time "atypical." Note that these numbers don't
               | include the bootloader - its size isn't relevant here
               | because we're ramloading.
               | 
               | If you're ram-loading encrypted firmware, the code and
               | data have to share RAM. If your firmware is 250K, that
               | leaves you with 270K left. That seems pretty good, but
               | remember that the 2040 and 2350 are dual-core chips. So
               | there's probably a second image you're loading into RAM
               | too. Let's be generous and imagine that the second core
               | is running something relatively small - perhaps a state
               | machine for a timing-sensitive wireless protocol. Maybe
               | that's another 20K of code, and 60K in data. These aren't
               | numbers I pulled out of my ass, by the way - they're
               | the actual .text and .data regions used by the off-the-
               | shelf Bluetooth firmware that runs on the secondary core
               | of an nRF5340.
               | 
               | So now you're down to 190K in RAM available for your 250K
               | application. I'd call that "normal," not _huge_ at all.
               | And again, this assumes that whatever you're running is
               | smaller than anything I've worked on in years.
        
               | dmitrygr wrote:
               | > Also, 520K of RAM wouldn't be enough to fit the whole
               | application + working memory for any ARM embedded
               | firmware I've worked on in the last 5 years.
               | 
               | What are you smoking? I have an entire DECstation 3100
               | system emulator that fits into 4K of code and 384 bytes of
               | RAM. I boot PalmOS in 400KB of RAM. If you cannot fit your
               | "application" into half a meg, maybe it's time to take up
               | JavaScript and let someone else do embedded?
        
               | vardump wrote:
               | There are plenty of embedded applications that require
               | megabytes or even gigabytes.
               | 
               | For example medical imaging.
               | 
               | As well as plenty that require 16 bytes of RAM and a few
               | hundred bytes of program memory. And everything in
               | between.
        
               | my123 wrote:
               | If it's in the gigabyte range it's just not an MCU by any
               | stretch. And if it has a proper (LP)DDR controller it's
               | not one either, really.
        
               | vardump wrote:
               | Yes and no. Plenty of such applications that use a uC +
               | an FPGA. FPGA interfaces with some DDR memory and
               | CMOS/CCD/whatever.
               | 
               | Up to you what you call it.
        
               | misiek08 wrote:
               | So you make hardware costing tens of thousands of dollars
               | and complain about the memory on a $5 chip? That explains
               | a lot about why so many medical and industrial devices (I
               | haven't touched military hardware) are so badly designed,
               | with sad proprietary protocols, and dead a few months
               | after the warranty passes. Today I've learned!
        
               | vardump wrote:
               | As if that depends on the engineers!
               | 
               | When you make 1M thingies, $5 savings each means $5M CEO
               | comp.
        
               | ryukoposting wrote:
               | I'm smoking multiprotocol wireless systems for
               | industrial, medical, and military applications. To my
               | recollection, the very smallest of those was around 280K
               | in .text, and 180K in .data. Others have been 2-3x larger
               | in both areas.
               | 
               | I would sure hope a DECstation 3100 emulator is small.
               | After all, it's worthless unless you actually run
               | something within the emulator, and that will inevitably
               | be much larger than the emulator itself. I wouldn't know,
               | though. Believe it or not, nobody pays me to emulate
               | computers from 1989.
        
         | __s wrote:
         | RP2040 shows up in a lot of qmk keyboards, for real product use
        
           | Eduard wrote:
           | > RP2040 shows up in a lot of qmk keyboards
           | 
           | as niche as it gets
        
             | tssva wrote:
             | But real products.
        
         | alex-robbins wrote:
         | > A USB DFU function embedded in boot ROM is flatly undesirable
         | in an MCU with no memory protection.
         | 
         | Are you saying DFU is not useful without an MMU/MPU? Why would
         | that be?
        
           | ryukoposting wrote:
           | It's certainly useful, but having it embedded within the
           | hardware with no way to properly secure it makes the RP2040 a
           | non-starter for any product I've ever written firmware for.
        
             | TickleSteve wrote:
             | it has secure boot and TrustZone.
        
               | crest wrote:
               | Not the RP2040. That chip has no boot security from
               | anyone with physical access to the QSPI or SWD pins.
        
         | anymouse123456 wrote:
         | We're using 2040's in a variety of "real" products for an
         | industrial application.
         | 
         | PIO is a huge selling point for me and I'm thrilled to see them
         | leaning into it with this new version.
         | 
         | It's already as you hoped. Folks are developing PIO drivers for
         | various peripherals (e.g., CAN, WS2812, etc.).
        
           | ryukoposting wrote:
           | Oh, I'm sure it's great for industrial, as long as you can
           | live with the hardware security issues. In college, my first
           | serious task as an intern was to take a Cortex-M0+ and make
           | it pretend to be an 8051 MCU that was being obsoleted.
           | Unsurprisingly, this was for an industrial automation firm.
           | 
           | I mimicked the 16-bit data bus using hand-written assembly to
           | make sure the timings were as close as possible to the real
           | chip. It was a pain in the ass. It would have been amazing to
           | have a chip that was designed specifically to mimic
           | peripherals like that.
           | 
           | It's great that there's a community growing around the RPi
           | microcontrollers! That's a really good sign for the long-term
           | health of the ecosystem they're trying to build.
           | 
           | What I'm looking for is a comprehensive library of PIO
           | drivers that are maintained by RPi themselves. There would be
           | a lot of benefits to that as a firmware developer: I would
           | know the drivers have gone through some kind of QA. If I'm
           | having issues, I could shoot a message to my vendor/RPi and
           | they'll be able to provide support. If I find a bug, I could
           | file that bug and know that someone is going to receive it
           | and fix it.
        
         | nrp wrote:
         | We ship a large quantity of RP2040's in real products, and
         | agreed that the RP2350 looks great too!
         | 
         | Part of the reason we went with RP2040 was the design
         | philosophy, but a lot of it was just easy availability coming
         | out of the chip crunch.
        
         | tliltocatl wrote:
         | >>> extremely limited in third-party SDKs like Zephyr
         | 
         | So is almost any non-trivial peripheral feature. Autonomous
         | peripheral operation, op-amps, comparators, capture/compare
         | timers...
         | 
         | Zephyr tries to provide a common interface like desktop OSes do
         | and this doesn't really work. On desktop having just the least
         | common denominator is often enough. On embedded you choose your
         | platform because you want the uncommon features.
        
         | crest wrote:
         | > no hardware integer division
         | 
         | The RP2040 SIO block contains one hardware divider per CPU
         | core.
        
         | valdiorn wrote:
         | > I can't imagine someone using an RP2040 in a real product
         | 
         | Why not? It's a great chip, even if it has some limitations. I
         | use it in several of my pro audio products (a MIDI controller,
         | a Eurorack module, and a series of guitar pedals). They are
         | absolutely perfect as utility chips, the USB stack is good, the
         | USB bootloader makes it incredibly easy for customers to update
         | the firmware without me having to write a custom bootloader.
         | 
         | I've shipped at least a thousand "real" products with an RP2040
         | in them.
        
       | jackwilsdon wrote:
       | I'm most excited for the partition and address translation
       | support - partitions can be mapped to the same address for A/B
       | boot slots (and it supports "try before you buy" to boot into a
       | slot temporarily). No more compiling two copies for the A and B
       | slots (at different addresses)!
        
       | vardump wrote:
       | RP2040 had Doom ported to it.
       | https://kilograham.github.io/rp2040-doom/
       | 
       | RP2350 looks very much like it could potentially run _Quake_.
       | Heck, some of the changes almost feel like they're designed for
       | this purpose.
       | 
       | FPU, two cores at 150 MHz, overclockable beyond 300 MHz and it
       | supports up to 16 MB of PSRAM with hardware R/W paging support.
        
         | chipxsd wrote:
         | While outputting DVI! I wouldn't be surprised.
        
           | rvense wrote:
           | Mouser have 64 megabyte PSRAMs.
           | 
           | I really want a Mac System 7 grade operating system for this
           | chip...
        
             | dmitrygr wrote:
             | No they do not. 64megabit
        
               | rvense wrote:
               | Did you bother to check? It's octal, not QSPI, so I don't
               | know if it's compatible. (edit - and 1.8V, inconvenient)
        
               | andylinpersonal wrote:
               | Octal PSRAM usually cannot fall back to quad mode like
               | some octal flash can.
        
               | rvense wrote:
               | Actually reading the datasheet[1] it doesn't look like
               | it.
               | 
               | [1] https://www.mouser.dk/datasheet/2/1127/APM_PSRAM_OPI_
               | Xccela_...
        
               | dmitrygr wrote:
               | Did you? It doesn't do QSPI mode.
        
               | rvense wrote:
               | Of course I did. If you also did, you would know that
               | they do in fact have 64 megabyte PSRAMs as I stated. So a
               | helpful comment would have been "they're not compatible,
               | though". Your reply as it stands just makes it sound like
               | you maybe assumed that I don't know the difference
               | between megabits and megabytes.
        
             | geerlingguy wrote:
             | I'd settle for Mac 512K ;)
             | 
             | https://github.com/evansm7/pico-mac/issues/7
        
         | refulgentis wrote:
         | If we can get flutter running on these...
        
       | v1ne wrote:
       | Hmm, it's really nice that they fixed so many complaints. But
       | honestly, reading the errata sheet, I had to chuckle that Dmitry
       | didn't tear this chip to pieces.
       | 
       | I mean, there are errata about obscure edge cases, about
       | minuscule bugs. Sure, mistakes happen. And then there's this:
       | internal pull-downs don't work reliably.
       | 
       | Workaround: disconnect the digital input and only connect it
       | while you're reading the value. Well, great! Now it takes 3
       | instructions to read data from a port, significantly reducing the
       | rate at which you can read data!
       | 
       | I guess it's just rare to need pull-downs, so that naturally
       | mitigates the issue a bit.
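       | 
       | For the curious, the suggested workaround looks roughly like
       | this with the SDK (a sketch of the "connect only while reading"
       | dance, not an official recipe):
       | 
       |   #include "hardware/gpio.h"
       | 
       |   // Pull-down erratum workaround sketch: keep the pad's input
       |   // buffer disabled, enable it only around the read.
       |   static inline bool read_pin_workaround(uint pin) {
       |       gpio_set_input_enabled(pin, true);
       |       bool level = gpio_get(pin);
       |       gpio_set_input_enabled(pin, false);
       |       return level;
       |   }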
        
         | colejohnson66 wrote:
         | You can also use external resistors.
        
       | urbandw311er wrote:
       | I absolutely love this guy's enthusiasm.
        
       | kaycebasques wrote:
       | Big day for my team (Pigweed)! Some of our work got mentioned in
       | the main RP2350/Pico2 announcement [1] but for many months we've
       | been working on a new end-to-end SDK [2] built on top of Bazel
       | [3] with support for both RP2040 and RP2350, including
       | upstreaming Bazel support to the Pico SDK. Our new "Tour of
       | Pigweed" [4] shows a bunch of Pigweed features working together
       | in a single codebase, e.g. hermetic builds, on-device unit tests,
       | RPC-centric comms, factory-at-your-desk testing, etc. We're over
       | in our Discord [5] if you've got any questions.
       | 
       | [1] https://www.raspberrypi.com/news/raspberry-pi-pico-2-our-
       | new...
       | 
       | [2] https://opensource.googleblog.com/2024/08/introducing-
       | pigwee...
       | 
       | [3] https://blog.bazel.build/2024/08/08/bazel-for-embedded.html
       | 
       | [4] https://pigweed.dev/docs/showcases/sense/
       | 
       | [5] https://discord.gg/M9NSeTA
        
         | dheera wrote:
         | I hate Bazel. A build system for C/C++ should not require a
         | Java JVM. Please keep Java out of the microcontroller
         | ecosystem, please -__--
        
           | jaeckel wrote:
           | And only Discord on top, but maybe I'm simply not hip enough.
        
             | kaycebasques wrote:
             | I forwarded your feedback to the team and we are now
             | vigorously debating which other comms channels we can all
             | live with
        
           | kaycebasques wrote:
           | We realize Bazel is not the right build system for every
           | embedded project. The "Bazel for Embedded" post that came out
           | today (we co-authored it) talks more about why we find Bazel
           | so compelling: https://blog.bazel.build/2024/08/08/bazel-for-
           | embedded.html
        
             | actionfromafar wrote:
             | Bazel is great for _some_ enterprises. Try it somewhere
             | Azure rules and behold the confused looks everywhere.
        
             | bobsomers wrote:
             | In my experience, Bazel is great if you are a Google-sized
             | company that can afford to have an entire team of at least
             | 5-10 engineers doing nothing but working on your build
             | system full time.
             | 
             | But I've watched it be insanely detrimental to the
             | productivity of smaller companies and teams who don't
             | understand the mountain of incidental complexity they're
             | signing up for when adopting it. It's usually because a
             | startup hires an ex-Googler who raves about how great Blaze
             | is without understanding how much effort is spent
             | internally to make it great.
        
               | kaycebasques wrote:
               | Thanks for the discussion. What was the timeframe of your
               | work in these Bazel codebases (or maybe it's ongoing)?
               | And were they embedded systems or something else?
        
           | clumsysmurf wrote:
           | Maybe there is a way to create a native executable with
           | GraalVM...
        
           | TickleSteve wrote:
           | I have to admit, Bazel as a build system would mean it
           | wouldn't even be considered by me; it has to fit in with
           | everything else, which typically means Makefiles, like it
           | or not.
           | 
           | TBH, Java + Bazel + Discord makes it seem like it's out of
           | step with the embedded world.
        
         | snvzz wrote:
         | Is RISC-V supported?
         | 
         | I am surprised the Pigweed announcement makes no mention of
         | this.
        
         | simfoo wrote:
         | Pretty awesome. I love Bazel and it seems you're making good
         | use of it. It's such a difference seeing everything
         | hermetically integrated with all workflows boiling down to a
         | Bazel command.
        
       | boznz wrote:
       | > I got almost all of my wishes granted with RP2350
       | 
          | I got all mine. These guys really listened to the (minor)
          | criticisms of the RP2040 on their forums and knocked them
          | out of the park. Can't wait to get my hands on real
          | hardware. Well done, guys.
        
         | ebenupton wrote:
         | Thank you. It's been a major effort from the team, and I'm very
         | proud of what they've accomplished.
        
           | vardump wrote:
           | Thanks for a great product!
           | 
           | A (small?) ask. Can we have instruction timings please? Like
           | how many cycles SMLAL (signed multiply long, with accumulate)
           | takes?
           | 
           | Will there be an official development board with all 48 GPIOs
           | exposed?
        
             | ebenupton wrote:
             | Cortex-M33 timings aren't documented, but one of our
             | security consultants has made a lot of progress reverse
             | engineering them to support his work on trace stacking for
             | differential power analysis of our AES implementation. I've
             | asked him to write this up to go in a future rev of the
             | datasheet.
             | 
             | No official 48 GPIO board, I think: this is slightly
             | intentional because it creates market space for our
             | partners to do something.
        
               | vardump wrote:
               | > I've asked him to write this up to go in a future rev
               | of the datasheet.
               | 
               | Thanks!
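               | 
               | For the impatient, a rough way to eyeball a figure in
               | the meantime - a sketch that times SMLAL in a loop
               | using the SDK's time_us_32()/clock_get_hz() (Arm build
               | only; loop overhead is included, so it's an upper
               | bound, not a datasheet number):
               | 
               |     #include <stdio.h>
               |     #include "pico/stdlib.h"
               |     #include "hardware/clocks.h"
               |     
               |     int main(void) {
               |         stdio_init_all();
               |     
               |         int32_t a = 12345, b = -6789;
               |         int32_t lo = 0, hi = 0;
               |         const uint32_t iters = 1000000;
               |     
               |         uint32_t t0 = time_us_32();
               |         for (uint32_t i = 0; i < iters; i++) {
               |             // SMLAL RdLo, RdHi, Rn, Rm
               |             asm volatile("smlal %0, %1, %2, %3"
               |                          : "+r"(lo), "+r"(hi)
               |                          : "r"(a), "r"(b));
               |         }
               |         uint32_t t1 = time_us_32();
               |     
               |         // cycles/iter = us * (cycles per us) / iters
               |         float cyc = (float)(t1 - t0)
               |                     * (clock_get_hz(clk_sys) / 1e6f)
               |                     / iters;
               |         printf("~%.2f cycles/iter (incl. loop)\n", cyc);
               |         return 0;
               |     }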
        
       | begriffs wrote:
       | I see CMSIS definitions for the RP2040 at
       | https://github.com/raspberrypi/CMSIS-RP2xxx-DFP but none for
       | RP2350. Maybe they'll eventually appear in that repo, given its
       | name is RP2xxx? I thought vendors were legally obligated to
       | provide CMSIS definitions when they license an ARM core.
        
       | tibbon wrote:
       | Thanks for making the DEF CON badge! 10000x cooler than last year
        
       | GeorgeTirebiter wrote:
       | What is the process node used? Who is fabbing this for them?
       | Given that the new chip is bigger, my guess is the same (old)
       | process node is being used. RP2040 is manufactured on a 40nm
       | process node.
       | 
       | Whoops, I read the fine print: RP2350 is manufactured on a 40nm
       | process node.
        
       | ckemere wrote:
       | I wish there were a way to share memory with a Pi. The PIO
       | looks great for high-speed custom I/O, but a 100 Mbit-scale
       | interface to/from a Pi is still quite hard/unsolved.
        
       | boznz wrote:
       | This is amazing IP; it makes you wonder if the RPi foundation
       | could be a target for acquisition by one of the big
       | microcontroller manufacturers.
        
         | spacedcowboy wrote:
         | $deity I hope not. Then we lose the price/performance of these
         | little beauties.
        
       | 294j59243j wrote:
       | But still USB-micro instead of USB-C. Raspberry Picos are
       | literally the one and only reason why I still own any older USB
       | cables.
        
       | Taniwha wrote:
       | New BusPirate 5XL & 6 also dropping today - they use the RP2350
       | 
       | https://buspirate.com/
        
       | endorphine wrote:
       | Can someone explain what projects this can be used for?
        
         | vardump wrote:
          | It's a pretty generic uC, especially well suited to projects
          | that require high-speed (GP)I/O. It can replace an FPGA in
          | some cases.
          | 
          | DSP extensions and FPU support are also decent, so it's good
          | for robotics, (limited) AI, audio, etc.
          | 
          | Also great for learning embedded systems. Very low barrier
          | to entry: just download the IDE and connect the board with a
          | USB cable.
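          | 
          | To give a flavour of that low barrier, here's the classic
          | minimal blink in Pico SDK terms (a sketch; it assumes a
          | board that defines PICO_DEFAULT_LED_PIN, as the plain Pico 2
          | does):
          | 
          |     #include "pico/stdlib.h"
          |     
          |     int main(void) {
          |         // Drive the on-board LED pin as an output.
          |         gpio_init(PICO_DEFAULT_LED_PIN);
          |         gpio_set_dir(PICO_DEFAULT_LED_PIN, GPIO_OUT);
          |     
          |         while (true) {
          |             gpio_put(PICO_DEFAULT_LED_PIN, 1);  // LED on
          |             sleep_ms(250);
          |             gpio_put(PICO_DEFAULT_LED_PIN, 0);  // LED off
          |             sleep_ms(250);
          |         }
          |     }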
        
       | anyfoo wrote:
       | > > I was not paid or compensated for this article in any way
       | 
       | > However the Raspberry Pi engineer in question WAS compensated
       | for the samples, in the form of a flight over downtown Austin in
       | Dmitry's Cirrus SR22.
       | 
       | Hahah, I've been in that plane. Only in my case, it was a flight
       | to a steak house in central California, and I didn't actually do
       | anything to get "compensated", I was just at the right place at
       | the right time.
       | 
       | Anyway, I am extremely excited about this update, RPi are
       | knocking it out of the park. That there is a variant with flash
       | now is a godsend by itself, but the updates to the PIO and DMA
       | engines make me dream up all sorts of projects.
        
       | hashtag-til wrote:
       | The disclaimer is brutally honest. I love it.
        
       | 1oooqooq wrote:
       | > It overclocks insanely well
       | 
       | says the guy with engineering samples and creme-de-la-creme
       | silicon parts... I expect that won't be the case for most of
       | the parts actually available once they're back to their usual
       | schedule of scraping the literal bottom of the barrel to keep
       | their perpetually empty stocks.
        
       | TheCipster wrote:
       | While I completely agree with the content of the post, I still
       | think that QFN packages in general, and RP2350's in particular,
       | are very hobbyist-averse.
       | 
       | Moving all the GND pins to the bottom pad makes this chip
       | usable only by people with a reflow oven. I had really hoped to
       | see at least one variant released in a (T)QFP package.
        
         | dgacmu wrote:
         | Isn't the hobbyist solution to just build a board to which you
         | can attach an entire pico board? That does preclude some things
         | and adds $3, but it makes for a pretty easy prototyping path.
        
         | jsfnaoad wrote:
          | Hard disagree. A TQFP package this dense is still quite
          | challenging for a hobbyist. Just use a breakout board or dev
          | board, or get the QFN assembled for you at JLCPCB.
        
         | mastax wrote:
         | My reflow oven is a $5 hot plate and a $15 toaster oven. I
         | don't know if that is _very_ hobbyist averse.
        
       | amelius wrote:
       | > So, in conclusion, go replan all your STM32H7 projects with
       | RP2350, save money, headaches, and time.
       | 
       | Except the STM32H7 series goes up to 600 MHz.
       | 
       | Overclocking is cool, but you can't rely on it in most
       | commercial projects.
        
       | amelius wrote:
       | How easy is it to share memory between two of these processors?
        
         | sounds wrote:
         | Hmm, a 4-core cluster?
         | 
          | Easiest would be to wire up the two chips with bidirectional
          | links and use a fault handler to transfer small blocks of
          | memory across. You'd be reimplementing a poor man's MESIF:
          | https://stackoverflow.com/questions/31876808
        
           | amelius wrote:
           | This is something I'd like to see an article about on HN :)
        
       | andylinpersonal wrote:
       | In terms of security features, it lacks the on-the-fly external
       | memory (flash and PSRAM) encryption/decryption that the ESP32
       | and some newer STM32s have. Decrypting via a custom OTP
       | bootloader and running entirely from internal SRAM may be too
       | limiting for larger firmware.
        
       | ralferoo wrote:
       | I presume the article's headline was edited after it was
       | submitted to HN, but it's interesting that it doesn't match the
       | HN title. It's still a subjective but positive title, yet it
       | somehow has a different tone from the one on HN:
       | 
       | HN: "I got almost all of my wishes granted with RP2350"
       | 
       | Article: "Why you should fall in love with the RP2350"
       | 
       | title tag: "Introducing the RP2350"
        
       | mastax wrote:
       | It's a bit surprising that they put so much effort into
       | security for the second microcontroller from a young, consumer-
       | oriented* company. My first instinct was to distrust its
       | security, simply due to the lack of experience. However, the
       | "experienced" vendors' secure micros have lots of known
       | security bugs and, more crucially, a demonstrated desire to
       | sweep them under the rug. Two security architecture audits, a
       | $10k bug bounty, and designing the DEF CON badge as a glitching
       | target show a pretty big commitment to security. I'm curious
       | about how the Redundancy Coprocessor works. I still wouldn't be
       | surprised if someone breaks it, at least partially.
       | 
       | * By perception at least. They have been prioritizing industrial
       | users from a revenue and supply standpoint, it seems.
        
       ___________________________________________________________________
       (page generated 2024-08-09 23:01 UTC)