[HN Gopher] The Orange Pi 5 Plus
___________________________________________________________________
The Orange Pi 5 Plus
Author : rcarmo
Score : 88 points
Date : 2024-01-20 18:31 UTC (4 hours ago)
(HTM) web link (taoofmac.com)
(TXT) w3m dump (taoofmac.com)
| user_7832 wrote:
| > Keeping in mind that the i7 runs at nearly six times the
| wattage, this is pretty good, and the key point here is that the
| Orange Pi 5+ generated the output at a usable pace - slower than
| what you'd get online from ChatGPT, but still fast enough that it
| is close to "reading speed" - whereas the Raspberry Pi 4 was too
| slow to be usable.
|
| > This also means I could probably (if I could find a suitable
| model that fit into 4GB RAM) use the Orange Pi 5+ as a back-end
| for a "smart speaker" for my home automation setup (and I will
| probably try that in the future).
|
| This is pretty interesting for me. I had (wrongly, I suppose)
| assumed that hardware requirements for LLMs were "have a recent
| NVidia GPU" but this proves otherwise.
| rcarmo wrote:
| Hi! Author here. Mind that I had to test with relatively small,
| "dumb" LLMs. I have no doubt I can run whisper.cpp on the
| RK3588 _and_ a tuned LLM to handle intents, but it won't be a
| very smart one (I am hoping to find a good way to run quantized
| Mixtral, but given the RAM constraints on the 4GB board I
| didn't even try).
|
| Edited to add: I'm looking for something like
| https://news.ycombinator.com/item?id=38704982 (LLM in a Flash)
| even if I find something with 16/32GB of RAM, which is why I
| looked at OnnxStream as well (but of course the inference in
| LLMs is different, so I can't leverage the NVMe just yet).
| user_7832 wrote:
| Thanks! How "bad" are these LLMs though, especially for smart
| home-esque basic tasks? I'd imagine them to be alright-ish?
| rcarmo wrote:
| Well, the ones I tried that would fit into 4GB RAM have
| trouble following directions and produce inconsistent output. I
| can't really tell you (yet) which would fit into, say, 16GB
| RAM and work consistently (and fast enough to, say, turn a
| light off faster than it would take you to get up and reach
| for the switch), but I'll eventually get to it...
| user_7832 wrote:
| Thank you, that makes sense. I think I'll stick with my
| google hub for now haha
| FergusArgyll wrote:
| Dolphin-phi (an uncensored, tweaked version of Microsoft's
| Phi) has been pretty good; I've only been testing it for a
| couple of days. It's 2.7B params, so depending on how much
| RAM the OS is using you might be able to run it on 4GB. I
| run it on 8GB Windows using WSL2/Ollama and the OS takes up
| around 5GB (I think), so maybe....
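|
| A rough sketch of what trying that looks like (assuming
| Ollama is installed; dolphin-phi is the tag in the Ollama
| model library):
|     # see how much RAM is actually free before pulling anything
|     $ free -h
|     # pull the quantized model and chat with it interactively
|     $ ollama run dolphin-phi
|     >>> Turn off the kitchen light and say "done".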
| timnetworks wrote:
| Inference can be done entirely in software (e.g. INT8), but
| it's very slow compared to a GPU or APU. Nvidia cornered the
| market because everything (TensorFlow and everything after) is
| optimized for it, but you can get good results on AMD now, and
| on Arc too in some cases. You can also get slow results entirely
| in software (CPU + RAM), which for personal and non-constant use
| may be just fine too.
| user_7832 wrote:
| Thanks! Do you have any guides/websites/github repos for
| running these models on CPUs?
| rcarmo wrote:
| Ollama will (nearly always) work provided you have enough
| RAM. I was actually pretty surprised that it didn't work on
| my N5105 (which has 16GB) because it relies on AVX
| instructions...
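|
| (A quick way to check whether a given x86 box has the needed
| extensions, as a sketch:
|     $ grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u
| An empty result on something like the N5105 explains why it
| didn't work there.)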
| user_7832 wrote:
| Thanks! Someone else mentioned llama.cpp but it appears
| that ollama is just a gui frontend for llama (which is
| good because I find guis easier). I'll hopefully set it
| up soon!
| FergusArgyll wrote:
| It's not a GUI, it's a CLI, but it's very easy to use:
| "ollama run {model}". You can also run `ollama serve`,
| which serves an API, and then you can use or build a
| simple GUI.
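|
| To make that concrete, a minimal sketch (the model name is
| just an example from the Ollama library):
|     $ ollama run mistral        # interactive chat in the terminal
|     $ ollama serve              # HTTP API on localhost:11434
|     $ curl http://localhost:11434/api/generate \
|         -d '{"model": "mistral", "prompt": "Why is the sky blue?"}'
| Any GUI or home-automation hook can then just talk to that
| local API.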
| nabakin wrote:
| Over the past year or so various projects have made it possible
| to run LLMs on just about anything. Some GPUs are still better
| than others (Nvidia GPUs remain the best for token throughput,
| via TensorRT-LLM), but AMD GPUs are competitive (via vLLM) and
| even CPUs can run LLMs at decent speeds (via llama.cpp).
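|
| For the CPU route, the starting point is roughly (the model
| file name is an example; any quantized GGUF works):
|     $ git clone https://github.com/ggerganov/llama.cpp
|     $ cd llama.cpp && make
|     $ ./main -m models/mistral-7b-instruct-v0.2.Q4_K_M.gguf \
|         -p "Explain MQTT in one sentence." -n 128 -t 8
| The -t flag sets the number of CPU threads; a Q4-quantized 7B
| model needs on the order of 4-5GB of RAM.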
| user_7832 wrote:
| Thank you!
| 38 wrote:
| is their website dead or something?
|
| http://www.orangepi.org/html/hardWare/computerAndMicrocontro...
| rcarmo wrote:
| Well, it was there when I revised the draft early this
| morning... I'm betting having the link here won't help their
| uptime :)
| 38 wrote:
| I'm not removing it. with all due respect, without a working
| URL the article is kind of useless to me. ultimately I want
| to buy or see the official pages on the hardware, which
| currently doesn't seem possible.
| rcarmo wrote:
| I wasn't asking you to, just pointing out the impact :)
| jerrysievert wrote:
| orange pi's are officially available on amazon or
| aliexpress. they do not have their own online shop: https:/
| /www.amazon.com/stores/OrangepiCreateNewFuture/page/F...
|
| unfortunately, there are too many sellers selling them on
| aliexpress for me to find their store there, so you will
| have to wait until they recover from the hug. prices are
| the same on each, though if you are purchasing from the
| united states.
|
| (edited for clarity)
| politelemon wrote:
| Is it running on an OrangePi perhaps?
| rcarmo wrote:
| https://archive.is/eLIyr
| ComputerGuru wrote:
| I can't take any of the benchmarks seriously when he is using
| very different hardware across the tests. I can _somewhat_
| understand comparing Orange Pi NVMe to the RPi 4 SATA because
| that's what ships out-of-the-box (but there's an NVMe HAT
| available), even though it'll be rate-limited to USB 3.0 speeds.
| But I can't understand comparisons to the u59 micro that are
| actually run on an Intel machine and then _not_ using an NVMe in
| the Intel for comparison.
|
| This abounds across all tests, from the very first I/O tests that
| show the Orange Pi 5+ beating both Intel configurations to the
| OnnxStream test that shows Intel beating the Orange Pi 5+ _even
| though the Intel unit has to load/stream the model from its
| paltry SATA disk_ while the Orange Pi 5+ is outfitted with an NVMe
| drive.
| rcarmo wrote:
| Hi, author here. I tested with what I have and what I currently
| use to work and test with, which is in the general
| price/performance range. If that wasn't obvious, then I have to
| apologize and make an explicit note of it.
|
| My u59 ships with SATA SSDs, as it happens.
|
| I do have an Intel i7 13th Gen with PCIe 4.0 NVMes (and several
| modern Macs), but that would be so far off base (and so
| expensive) that it isn't even comparable. The i7-6700 is much
| closer in "value", if you will.
|
| However, you are mis-reading the way the OnnxStream test works.
| It is still CPU-bound for the most part.
| adrian_b wrote:
| I did not understand your complaints, so I looked up the
| specifications of the Beelink u59.
|
| This is a small computer with the previous generation of Intel
| Atom-class CPUs (Jasper Lake), and it happens to support only
| SATA SSDs, so your suggestion of using an NVMe SSD would have
| been impossible.
|
| Even with the current generation of cheap Intel CPUs, i.e.
| Alder Lake-N (for instance the N100), the CPUs have very few
| PCIe lanes. Most cheap computers do not have an M.2 socket that
| works at the full PCIe 3 x4 speed of 32 Gb/s like the SSD of
| the tested Orange Pi; they have sockets with only 2 lanes or
| only 1 lane, which work at half or a quarter of that speed.
|
| Most computers with the RK3588 have a full-speed M.2 M-key SSD
| socket, and this is one of their advantages over most other
| computers in this price range.
|
| Since OnnxStream performance depends on both SSD and CPU
| performance, it is no surprise that an Intel Skylake CPU using
| AVX2 instructions is so much faster than a lower-clocked
| Cortex-A76 that it wins the benchmark despite the slower SSD.
|
| The only benchmarks more informative than these would have
| included comparisons with a computer using the direct
| competitor of the RK3588, i.e. the Intel N100 (which is faster
| for CPU-limited tasks, but not necessarily for those involving
| I/O or video), but it appears that the author does not have
| such a newer computer.
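|
| (One quick way to see what link a given M.2 socket actually
| negotiated, as a sketch assuming the drive shows up as nvme0:
|     $ cat /sys/class/nvme/nvme0/device/current_link_speed
|     $ cat /sys/class/nvme/nvme0/device/current_link_width
| A full PCIe 3 x4 slot like the RK3588's reports 8.0 GT/s and
| a width of x4; an x1 or x2 socket shows up accordingly.)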
| rcarmo wrote:
| You are 200% correct. I intended to use the N5105 for
| comparison, but it lacks the right instruction set--and I do
| end the article mentioning the N100 as something I'd like to
| compare with.
|
| The RK3588 designs stood out to me as having a very nice PCIe
| layout (the RK3588s, for instance, doesn't), and that is one
| of the main reasons I wanted to test the Orange Pi 5+.
| jerrysievert wrote:
| i use an orange pi 5+ as a read replica for all of my home logs,
| open telemetry, system metrics (gathered via snmp), and
| environmental data.
|
| i have it configured at 16gb of ram, a 2tb nvme, connected to my
| network at 1gbit, and to my nas to run iscsi at 2.5gbit.
|
| it is a very nice little system, and has been rock steady running
| ubuntu 22.04. i plan on making it my primary database server, but
| that's a later project.
|
| it's been in service for 8 months now, and has been quite
| impressive. highly recommended for those into small compute home
| databases.
| rcarmo wrote:
| Incidentally, one of the LXC containers I have running on mine
| has a copy of my IoT metrics database (2 years, roughly 10GB in
| SQLite, a bit over that when imported into Postgres). Queries
| are lightning fast compared to the Pi, even doing aggregations
| -- which is why I mention at the bottom of the article that I
| intend to move my home automation stuff to it.
| applied_heat wrote:
| What is your IOT metrics stack?
| rcarmo wrote:
| Homegrown. Essentially Node-RED and some Go/Python.
| charcircuit wrote:
| There was no mention that this costs >$100, which does limit its
| usage compared to a $35 RPi.
| jsheard wrote:
| At current rates the best Pi you can get for $35 is a Pi4 with
| just 1GB of RAM, which isn't in the same league as the OP5+
| 4GB.
| charcircuit wrote:
| The article mentions that the author wanted to see if the
| Orange Pi 5 Plus could be the successor to the Pi 4.
| rcarmo wrote:
| Not so much a successor but an alternate path. I didn't
| want to get into pricing because a Pi (regardless of
| number) doesn't ship with a PSU, an NVMe slot, or even two
| 2.5GbE interfaces. It's not something I want to directly
| compare with price-wise, really (although I suspect adding
| up all the bits might be comparable...)
| moffkalast wrote:
| The Pi 5 + heatsink + customs fees also comes up to just over
| $100.
| tonymet wrote:
| I'm a longtime Pi fan but I'm really dubious of the single-board
| computer market.
|
| At $5-$10 (Pi Zero / W) and $35 (Pi 3 / Pi 4) these little boards
| made a ton of sense.
|
| Pushing into ~$120-$150 doesn't make any sense to me. You can
| get an 8GB/16GB N100 at that price point, complete with a case.
|
| I saw his point on the per-watt performance, which is valid, but
| are people running a room full of these things? Why spend so much
| to save 10 watts?
|
| Someone please enlighten me on how this segment still remains
| viable.
| stefan_ wrote:
| But the per-watt performance of these chips is usually
| terrible, made as they are on older silicon processes. Any
| recent-ish laptop CPU will have no problem beating them.
| nic547 wrote:
| The Intel N100 is an Alder Lake-N chip, based on Intel 7. It's
| four of the small cores you find on 12th-gen laptop processors,
| made with the same process.
|
| Its successor architecture, Meteor Lake, has only recently
| launched, so the N100 shouldn't be too far off in terms of
| efficiency.
| beebeepka wrote:
| these small intel cores aren't known for their power
| efficiency
| nic547 wrote:
| Mostly because the high-end Intel processors are run at
| power limits where they're getting extremely diminishing
| returns, trying to eke out every last bit of performance.
|
| In lower-power scenarios the little cores can be more
| efficient than the large ones. And at a 6W TDP the N100 is
| a low-power scenario.
| https://chipsandcheese.com/2022/01/28/alder-lakes-power-
| effi...
| edvinbesic wrote:
| This resonates with me. I have a few Zeros, 3's and 4's for
| random things (Nerves projects, DNS, a bastion host, etc.) but
| the price point now puts it squarely in the "I better have a
| good use case for this" rather than the "I want to tinker
| around and if it ends up in a drawer no big deal" camp.
| rcarmo wrote:
| It's an interesting segment, especially now that Intel N100
| machines are out. But it essentially depends on what you want
| to do, and what use/need you have for ARM hardware.
|
| Me, I write fairly low-level stuff that runs on ARM (for
| kicks), so having one of these as a server/development sandbox
| makes a ton of sense (even though I have Macs and VMs and
| whatnot, some things you can only do with hardware).
|
| I grant that you won't find normal people filling their closets
| with these. But when you walk past, say, a phone exchange, a 5G
| base station, or any of hundreds of other invisible machines
| out there, they'll be running a variation of these boards
| (perhaps slower and dumber, but soon ramping up to this kind of
| thing), because Intel lost the embedded market years back.
| abraae wrote:
| It's a good question. Here's my specific use case, I'm sure
| there are plenty more.
|
| I have a golf simulator running in the cloud (AWS) on a
| g4dn.xlarge EC2 instance.
|
| At home, I use a raspberry pi 5 as a thin client. It plugs into
| a 4k projector and streams down the display of the cloud PC.
|
| Because it's cheap and reliable, I can leave it in place
| sitting up on the ceiling attached to the projector. I wouldn't
| want to devote a more expensive laptop to the job - the
| raspberry pi 5 is just man enough for the job, powerful enough
| but only just.
| rcarmo wrote:
| I did mostly the same for a while, but with a Pi 4:
| https://taoofmac.com/space/blog/2022/10/23/1700
| abraae wrote:
| Sounds interesting (but a bit above my pay grade). AWS uses
| a streaming tech called NICE DCV which runs in a browser
| (chromium) on my raspberry pi.
|
| I was using a RP 4 before, but the performance is
| noticeably better with the RP 5.
| jhot wrote:
| I recently bought a mini pc with an AMD 5500u (6 cores, 12
| threads, 15W), 16 GB DDR4, and a 512 GB nvme SSD for $225 (on
| sale a bit). I suspect it would run laps around the orange pi
| despite the similar price and wattage.
| plagiarist wrote:
| Which mini PC was that? That sounds great.
| eropple wrote:
| Beelink sells a bunch of them, as do a few other vendors
| (who are all probably rebadging from the same manufacturers
| somewhere). If you're willing to spend more, the ones with
| Ryzen 7840HS processors are particularly impressive.
| SSLy wrote:
| Don't Beelink mini PCs have terrible power supplies that
| make them randomly go silent?
| eropple wrote:
| Some of them have something weird on them; most of them
| use a normal barrel jack. I can't speak to the weird one;
| I don't have one of those.
| jhot wrote:
| The brand is GenMachine. Bought from Newegg and shipped
| from China, so you can probably just order from their
| website.
| eropple wrote:
| I've got a Ryzen 5560U mini-PC in my k8s cluster at home, and
| it's great. It is faster than the OPi5's that are also in the
| cluster; those are around the same speed as a few-year-old
| Celeron or so (edit: originally I said an N5105 but I'm not
| actually _that_ sure). But I have them in my cluster because
| CPU perf isn't the only axis that matters. They're also
| cheaper, they're fanless, they're physically small, they're
| ARM so I can use them for arm64 builds at speeds faster than
| qemu offers, and they use less power. I guess I'm here for
| heterogeneous hardware.
| varispeed wrote:
| > Why spend so much to save 10 watts ?
|
| That's the wrong mindset to have. How are we going to improve
| the environment if we are careless about energy?
|
| As the saying goes: if you look after the pennies, the pounds
| will look after themselves.
| jadamson wrote:
| How much energy was spent on raw materials, components,
| assembly, and shipping?
| varispeed wrote:
| Production and shipping are a one-time cost and most likely
| in the same ballpark regardless of how much energy the device
| uses.
|
| Power consumption, on the other hand, is recurring and it adds
| up. Multiply the difference by hundreds of thousands of devices
| or more and it is no longer trivial.
| dns_snek wrote:
| They might not consume a lot of power, but they're also
| not doing much work. How does their performance per watt
| compare to a modern low power laptop, an intel NUC, the
| N100, etc.?
| georgyo wrote:
| I used to think this way as well, but then I started doing the
| actual math.
|
| My last power bill was ~$103 for exactly 256 kWh, or put another
| way, about $0.40/kWh. For context, this is in NYC. I'm sure
| other people have cheaper power elsewhere.
|
| 0.01 kW * 24 hours * 30 days * $0.40/kWh = $2.88 a month, or
| about $35 a year.
|
| If something is going to be on constantly, the ROI on a 10-watt
| savings can quickly outpace the initial investment.
|
| And that is for every 10 watts. Something using 100 watts
| continuously is 10 times that.
|
| This affected a bunch of my other thinking as well. Having a
| Raspberry Pi in my home as an always-on server costs as much as
| a small Linode instance and is much less reliable.
| Macha wrote:
| Similarly, I upgraded the GPU in my server from a decade old
| formerly high end gaming GPU to a modern lower-mid range
| because I wanted new video encoders, and a smoother Linux
| driver experience. But it's idle 99% of the time. The
| difference in idle consumption (80% lower on the new GPU)
| works out to EUR50/year, which means even if I didn't use any
| of the other features of the GPU, it would pay for itself in
| 3 years.
|
| Video encoding power draw is also 86% lower and even if I
| found something to max the new card out, it's still 40% lower
| than maxing out the old card (for a lot more compute power
| than the ten year old card).
| unethical_ban wrote:
| Wow, that's a lot of money. I thought my electricity was high
| because it's gone up a lot, to $0.12/kWh, in South Texas.
| MuffinFlavored wrote:
| > or put another way about $0.40/kwh.
|
| I pay $0.16/kwh net of everything (all taxes + fees). That's
| insane.
| kornhole wrote:
| If it is running 24/7 for years, watts add up. Running cool and
| quiet is another benefit.
|
| My Orange Pi 5 has been running Nextcloud, Mastodon, Jellyfin,
| XMPP, CryptPad, Vaultwarden, and about a dozen other
| services/sites for about a year. I love it. Some apps only run
| on x86, and I install those on a VPS.
| kristopolous wrote:
| There's some ARM-only stuff that's fun to work with, like
| RISC OS and AOSP. Also, if you're trying to fill out a support
| matrix, having an ARM machine is a useful thing to have around.
|
| It's also cheaper and probably easier to work with than Apple's
| ARM systems for these purposes (although the used market for
| the M1 will probably cross below a new Pi within 2-3 years).
| MuffinFlavored wrote:
| > You can get 8gb/16gb N100 at that price point complete with a
| case.
|
| The last time I read a post like this I immediately rushed to
| buy an N100 that I now have sitting in my living room doing
| literally nothing, lol
|
| It's funny how huge the market is for people (like myself)
| addicted to having the latest and greatest tech gadgets.
|
| I have multiple of every Raspberry Pi... doing nothing.
| 404mm wrote:
| Pi 5 is the first Pi I did not buy. I was always an early
| adopter of every Pi, but that ended with the 4.
|
| For me, the killer was a combo of 3 things: 1. Too expensive
| relative to performance. 2. Availability of quite decent N100
| (and similar) boxes with expandable memory and storage. 3. Not
| interested in a Pi that needs active cooling.
|
| I always wished the Pi had eMMC storage but it never happened for
| the non-compute module versions.
|
| I still have a few Pi's around the house and they are plenty
| powerful for their purpose.
| hammyhavoc wrote:
| Checked out RISC-V yet? That's becoming more interesting than
| ARM for me.
| moffkalast wrote:
| I suspect the Pi Foundation was getting really annoyed at other
| Pi-compatible boards capturing the high-end ARM SBC niche.
| Their flagship Pi 4's Cortex-A72 is now almost a decade old,
| not even fast enough to run more than a very basic lightweight
| desktop, and completely out of the question for that market
| segment.
|
| Each Pi so far was about a 30% jump in power consumption; this
| time it's over 130%. They couldn't get the performance they
| needed, so they cranked the Pi 5 TDP beyond what was sensible
| to compensate. I mean, 5A over 5V USB-C is borderline non-
| standard and basically maxes out what the port can carry
| without needing a regulator. It's really funny seeing the N100,
| a CISC chip for fuck's sake, get 2-3x the performance while
| pulling 2 watts less under load. This is their AMD Bulldozer
| moment.
| qwertox wrote:
| I feel like these boards now have only one way to improve, and
| that is by adding AI capabilities. Like being able to load
| Whisper onto a board and use it for transcribing mic input.
|
| Because the bigger-faster-hungrier race is putting them in
| direct competition with x64 boards, where you then realize
| that for a couple of watts more you'll be able to get a real
| PCI slot or two to plug in whatever you want, and use the RAM
| you want.
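|
| The Whisper half of that is already easy to try on this class
| of board via whisper.cpp; a sketch, using the model and sample
| the repo itself ships helpers for:
|     $ git clone https://github.com/ggerganov/whisper.cpp
|     $ cd whisper.cpp && make
|     $ ./models/download-ggml-model.sh base.en
|     $ ./main -m models/ggml-base.en.bin -f samples/jfk.wav
| The smaller ggml models are the ones most likely to keep up
| with live mic input on an RK3588-class CPU.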
| nosebear wrote:
| This thing would be great for a low-power NAS, but no mainline
| kernel support, no buy :/
| squarefoot wrote:
| These links might be of interest.
|
| https://forum.armbian.com/topic/33306-trying-to-use-opi5plus...
|
| https://www.armbian.com/orangepi-5/
| rcarmo wrote:
| Yeah, well, I do mention near the end that I would like a
| couple of SATA ports for it. I suppose one could stick a
| controller into the NVMe or Wi-Fi slot...
| roughly wrote:
| The idle power draw at 5W is a little surprising to me, actually
| - the M1 Mac mini draws ~7W at idle. The max is a whole lot
| higher, but I'd also suspect the max performance you're getting
| is a whole lot higher, too.
|
| I really like the Pi Zeros for low-power budget computing, but I
| think once you're getting into this kind of power envelope,
| you're kicking into "real computer" territory and I'm not sure
| how much benefit the SBCs are giving you.
| eropple wrote:
| You can bring it down to around 3W with a few config tweaks.
| Mine averages about that. (edit: wrong board! I was thinking of
| my OPi5, not the Plus. My OPi5+ looks like it's around the same
| as the author.)
|
| An M1 Mac is more powerful for sure, but for my use cases (my
| OPi5+ is being used as a video capture relay with its onboard
| HDMI capture and my two OPi5's are k8s nodes running Github
| Actions jobs) it would also be a lot more expensive.
| rcarmo wrote:
| Did you change the CPU governor, or were there any more
| tweaks? I did measure this at the wall, and there's always a
| bit more overhead in that situation, but I'm curious.
| eropple wrote:
| Nope, I just remembered the wrong one. 3W on the OPi5, not
| the Plus. Mea culpa.
| roughly wrote:
| I'm curious how much more expensive, though - the low-end
| mini is $600. I don't know how much each of the OPis is once
| you've added all the accessories to make them functional, but
| it wouldn't surprise me if all of performance/$,
| performance/watt, and actual total cost winds up being better
| with the mini once you've got more than one or two of the
| OPis running.
| eropple wrote:
| A 16GB RAM OPi5 is $140 and a 16GB OPi5+ is $180, outside
| of sales. Both of my OPi5's have a small microSD boot
| volume and a 1TB NVMe drive that cost about $50, because
| when they're not doing arm64 builds for my GitHub projects,
| they also do some Longhorn volume replication in my home
| k8s cluster. My OPi5+ has a 2TB NVMe drive for video
| recording-to-disk. (I designed and 3D printed my own cases,
| so I didn't factor that in.)
|
| A Mac Mini that matches the important specifications here--
| and CPU performance isn't one of them, but memory capacity
| and disk storage are--is _twelve hundred dollars_. Before
| you add a capture card or the additional terabyte of
| storage for the video capture box. Also, then I'd have to
| fight with Asahi Linux or something, because my workflows,
| while probably portable to macOS, already exist on Linux.
|
| I have no problem buying Macs, I have plenty. The Mini is
| not a replacement for the needs I described. The more
| general Ryzen mini-PCs are better competitors, and if you
| need more and faster compute are a better call at ~$230 to
| $400--a far cry from the Mac mini's pricing.
| rcarmo wrote:
| Hi, author here. I live inside a Mac mini M2 Pro, and love the
| fact that it only draws around 21W while running Windows on ARM
| inside a VM with a Teams call, so yes, there is that. But they
| are different animals altogether.
| johnchristopher wrote:
| > I live inside a Mac mini M2 Pro
|
| How cozy is it :) ?
| rcarmo wrote:
| I gave it huge windows :)
| squarefoot wrote:
| Before someone starts the usual yadda yadda about the RPi's
| bigger community, the OS not having long-term support, etc., I
| would repeat one more time: _do not rely on board vendor
| supplied images_; this is valid for pretty much all boards.
| Just go to the Armbian or DietPi pages and you'll almost
| certainly find one or more images that work on your board, and
| forums to discuss them with very knowledgeable people.
|
| https://www.armbian.com/download/
|
| https://dietpi.com/#download
|
| Those projects are well worth a contribution, as they don't have
| a giant like Broadcom behind them.
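|
| Flashing one of those images is the same as for any SBC; as a
| sketch, with the file name and target device as placeholders:
|     $ xzcat Armbian_<version>_Orangepi5-plus_<release>.img.xz |
|         sudo dd of=/dev/sdX bs=4M status=progress conv=fsync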
| rcarmo wrote:
| Yep. That is why I went with Armbian (even though I did test
| the OrangePi images while I waited for my NVMe). Can't wait for
| them to ship kernel 6.x support for this board.
| hrldcpr wrote:
| Mind elaborating on this? I've always used Raspbian and am
| interested in hearing about the downsides.
| bemusedthrow75 wrote:
| Not to mention that if you really want to _tinker_ you can
| use pi-gen to customise builds from a desktop without all
| that much difficulty:
|
| https://github.com/RPi-Distro/pi-gen
|
| Worth a play.
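|
| The basic loop is roughly (the Docker wrapper avoids having to
| match the host distro):
|     $ git clone https://github.com/RPi-Distro/pi-gen
|     $ cd pi-gen
|     $ echo 'IMG_NAME=custom' > config
|     $ ./build-docker.sh
| Dropping your own scripts into the stage*/ directories is how
| you bake extra packages or config into the image.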
| squarefoot wrote:
| I was referring to other boards, not the Raspberries, which
| have a well-supported OS; apologies for not being clear about
| that. Other board manufacturers often publish distros which
| have been cobbled together with old kernels and proprietary
| blobs, then abandon them when the board is declared obsolete.
| This is not the case for the Raspberry Pi, of course, but for
| other boards I would first check the above-mentioned distros
| before installing anything from the vendors.
| qwertox wrote:
| Armbian is great. Without it I would have never again bought
| non-Raspi SBCs.
|
| Board vendors believe that it is OK to host their images on
| dubious download sites, with zero information on what the image
| is built with.
| qchris wrote:
| Just adding in a link to their Donation page at
| https://www.armbian.com/donate/
|
| I'm not affiliated with them or anything, but also appreciate
| their efforts and have a small recurring donation set up in the
| hopes of seeing it continue. Especially for groups like this
| that have image hosting and hardware costs, even a few dollars
| can make the maintainers' load lighter and help them continue
| doing this kind of quiet, important work.
| bemusedthrow75 wrote:
| Why not rely on them if they do the job? What is the objection?
|
| Or why not take advantage of the absolutely trivial deployment
| that the Raspberry Pi Imager offers?
|
| This is like the place the 3D printing world has been in for
| the last two or three years. Why is it not OK to want to just
| do stuff and not think about performance-tuning the hardware
| before you do stuff?
|
| Some of us just want to make stuff, not tinker with the tools.
| ThatPlayer wrote:
| I agree they're worth a contribution, but sometimes I just want
| a device to work and even Armbian isn't as well supported as
| Raspberry Pi OS. For example 2 days ago I tried to get my
| Orange Pi 4 working. It turns out Armbian's Orange Pi 4 builds
| have had broken HDMI for months. There's value in having
| something just work.
|
| There's already a post about this issue on the forums, and a
| fix: https://forum.armbian.com/topic/26818-opi-4-lts-no-hdmi-
| outp... . But the precompiled version offered isn't for my
| board (I have the non-LTS version), so I'll have to compile it
| myself.
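|
| For reference, building your own image is a supported path in
| Armbian's build system; roughly (board/branch/release values
| here are illustrative, so check them against the Armbian docs):
|     $ git clone --depth=1 https://github.com/armbian/build
|     $ cd build
|     $ ./compile.sh BOARD=orangepi4 BRANCH=current RELEASE=bookworm
| The script fetches its own toolchain and applies the patch set,
| so it mostly just needs time and disk space.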
| dima55 wrote:
| Why wouldn't you just use stock Debian? The first rpi used a
| weird CPU, which didn't map well to the CPUs supported by the
| stock OSs, but that hasn't been true since that very first one.
| beebeepka wrote:
| heh, because normal linux distros are finally somewhat usable
| with the rpi 5, let alone the older versions. neat little
| headless servers? sure. capable desktops, not quite there
| yet.
| justin66 wrote:
| > I would repeat one more time: do not rely on board vendor
| supplied images; this is valid for pretty much all boards
|
| It's not valid, it's not even an argument.
| qwertox wrote:
| Which reminds me to mention that the Radxa ZERO 3E [0], which was
| announced last November, is now for sale [1] (since last week).
|
| It's basically a Raspberry Pi Zero with the difference that it
| has a gigabit ethernet port instead of WiFi+Bluetooth.
|
| This is not an ad; I've ordered two because my OpenVPN server,
| which runs on a Raspberry Pi B+ (1st gen, 9 Mbit/s throughput on
| Bookworm), needs upgraded hardware.
|
| In that context, it's remarkable that Bookworm still runs on a
| device as old and weak as the 1st-gen Raspi.
|
| [0] https://radxa.com/products/zeros/zero3e/
|
| [1] https://arace.tech/products/radxa-zero-3e
| rcarmo wrote:
| I got one for review in December. Had a few initial issues with
| it (this was way before release), hope to be able to test it
| fully soon.
| qwertox wrote:
| What does one have to do to get such review devices?
| rcarmo wrote:
| I have been working in the space for a while...
| bicebird wrote:
| Can't remember the exact name, but I remember seeing a similar
| Pi Zero-sized board with PoE Ethernet and an M.2 slot for Wi-Fi,
| and thinking it'd be perfect for a WAP.
|
| Unfortunately it was an industrial vendor, so I don't think you
| can buy it in low quantities, and the price is probably way too
| high for what it is.
|
| I feel like there must be a market for something like that
| though: a board with the bare essentials to make it cheap enough
| to have a few around the house / office, leaving it up to the
| customer to find a wireless card that could be upgraded down
| the line.
| fisian wrote:
| I was interested in this board because of the HDMI input.
| However, I couldn't find anyone reviewing/testing that (last
| searched a few weeks ago).
|
| I have another board (Khadas VIM4) with HDMI input. But the HDMI
| input only recently got support in their vendor-provided Linux
| image and is finicky. In the Armbian image I couldn't get it to
| work for more than a few frames of input video (tried with
| GStreamer and FFmpeg).
|
| Additionally, I couldn't find any information on HDMI input in
| Linux (it seems like everyone uses USB capture cards that use
| UVC with V4L2).
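|
| For what it's worth, the usual way to poke at an HDMI input
| that is exposed through V4L2 looks roughly like this (device
| node and resolution are assumptions):
|     $ v4l2-ctl -d /dev/video0 --list-formats-ext
|     $ ffmpeg -f v4l2 -framerate 30 -video_size 1920x1080 \
|         -i /dev/video0 -c:v libx264 -preset ultrafast capture.mkv
| If capture stalls after a few frames, --list-formats-ext at
| least shows whether the driver negotiated a sane format first.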
| rcarmo wrote:
| Hi, author here. That is something I intend to test. I know it
| works under Android (which I've yet to test), and under Linux I
| can see the audio part of it using lshw:
|     *-sound:3
|          description: rockchiphdmiin
|          physical id: 6
|          logical name: card3
|          logical name: /dev/snd/controlC3
|          logical name: /dev/snd/pcmC3D0c
|
| I can't see anything interesting in the USB bus:
|     $ lsusb -t
|     /:  Bus 06.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 5000M
|         |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/4p, 5000M
|     /:  Bus 05.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 480M
|         |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/4p, 480M
|     /:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=ohci-platform/1p, 12M
|     /:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=ohci-platform/1p, 12M
|     /:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
|     /:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
|
| (other than the ludicrous bandwidth available, that is)
|
| ...but I am using the Armbian 5.x image, so maybe I am missing
| some driver or ARM device tree (DTB) entry.
| chazeon wrote:
| Can I use a regular m.2 PC Wi-Fi card on these ARM SBCs? Suppose
| the SBC's CPU is mainline supported / armbian supported, and the
| PC Wi-Fi card works on x86 machines under Linux.
| ThatPlayer wrote:
| Generally yes. M.2 wifi cards are just PCI-E (except for Intel
| CNVi). Jeff Geerling tried a bunch of different PCI-E cards
| with the Raspberry Pi 4 Compute Module which does expose the
| PCI-E interface: https://pipci.jeffgeerling.com/
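|
| A quick sanity check once a card is seated (a sketch; the grep
| just narrows the output to network devices):
|     $ lspci -nnk | grep -iA3 'network\|wireless'
| If the card enumerates and a "Kernel driver in use" line shows
| up, the rest is just whether the driver/firmware package exists
| for arm64, which it usually does in mainline distros.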
| bemusedthrow75 wrote:
| People who think the Raspberry Pi 5 is underwhelming
|
| a) don't understand the market or its needs particularly well
|
| b) aren't really paying attention to the underlying trends from
| the foundation or the trading company
|
| c) are willing to write off any absurdly arcane, poorly-
| documented things they had to do to get a competing board to
| offer a stable, supported alternative to the Pi 5.
|
| But the most interesting thing about the Pi 5 is what it tells
| you about what is coming.
|
| Look at the (astonishingly) successful RP2040, and then look at
| the Pi 5's RP1 Southbridge, and then scratch your chin and think
| for a bit.
|
| It's not really _incremental_. Something quite big has happened
| here; we just don't see the product of it yet.
| hackernudes wrote:
| Care to share your opinion more clearly? PCIE is eating the
| world? More modular computing systems? None of that seems "big"
| to me so I am probably missing your point.
| 542458 wrote:
| Both the RP2040 and the Pi 5 southbridge are in-house custom
| silicon, and fairly good at that. I think the parent post is
| alluding to the Pi Foundation eventually building their own
| processors, and in the process hopefully shrugging off many
| of the Pi's longstanding limitations.
| SeasonalEnnui wrote:
| I agree with what the parent is alluding to; the introduction
| of the RP1 is very understated but perhaps it's more
| interesting to SBC engineers rather than the end users.
|
| In other words:
|
| 1. The RP1 (implemented on TSMC 40LP) contains all the power
| hungry/high bandwidth IO that is difficult to do on smaller
| process nodes. This allows the main processor to be moved to
| smaller nodes or even a different vendor/architecture in
| future boards. Easier to target better power efficiency in
| the future.
|
| 2. Going forwards, the IO feature set will now be consistent
| and reliable, by reusing RP1. It is no longer a requirement
| to try to get these peripherals on the main processor.
| bemusedthrow75 wrote:
| Yes, it is this -- and what the sibling comment says.
|
| It's clear that _at least_ these things have changed:
|
| 1) there is now independence from the "old smartphone
| processor" model
|
| Because the RP1 allows them to take control of the very
| bits of the puzzle that the Pi pioneered and apply them
| more broadly (including to x86 hardware if they chose to;
| they clearly did this in the development process)
|
| 2) nothing in particular stops them selling the RP1 as-is
| (except that they are not going to).
|
| There have been some interesting allusions very recently as
| to what the success of the RP2040 and the RP1 might mean
| for a future microcontroller lineup, but my guess would be
| a mid-sized processor optimised for very small educational
| computers and emulating larger machines.
|
| I would expect to see an RP2040 successor board based
| around something like the RP1 with USB-C and more
| concessions towards DVI/HDMI for one thing.
|
| 3) they now don't have all their eggs in the one basket
| (which is better for the foundation)
|
| 4) they could now choose a "partnership" model where
| something like the RP1 turns up in other people's hardware;
| there are already SBCs on the market using RP2040s for
| GPIO.
|
| Essentially, what has happened is not an incremental
| change. It's not even particularly incremental in the Pi 5,
| which is architecturally new.
|
| It is a step change on the design level but also on the
| business level.
| TheChaplain wrote:
| I always found it odd that many seem to belittle the RPi and/or
| say it's lacking in power or features.
|
| As I've understood it since the beginning, the RPi is a teaching
| and learning tool, not your 32GB home server running Git,
| Nextcloud, Plex, Portainer and 15 other services. So faulting it
| for something it was never intended to be seems a bit unfair?
| NikkiA wrote:
| The RK3588 is great, but I _really_ want a Dimensity 9300 SBC.
___________________________________________________________________
(page generated 2024-01-20 23:01 UTC)