Hello again HN, I'm bunnie! Unfortunately, time zones strike again...I'll check back when I can, and respond to your questions.
smackeyacky 11 hours ago [-]
I will forever be grateful to Bunnie, he pointed me in the direction of murmurhash when I needed something to help with the integrity of a section of memory in a microcontroller. Legend.
kev009 24 hours ago [-]
Have you looked at TI's PRU at all?
bsder 21 hours ago [-]
Emulating the RPI PIOs instead of the TI PRUs is really a miss.
The PRUs really get a bunch right. Very specifically, the ability to broadside dump the ENTIRE register file in a single cycle from one PRU to the other is gigantic. It's the single thing that allows you to transition the data from a hard real-time domain to a soft real-time domain and enables things like the industrial Ethernet protocols or the BeagleLogic, for example.
aa-jv 10 hours ago [-]
Tooling for the RPI PIO design is probably a bit more accessible than the TI PRU situation. I'd say it's not really a miss - more of a necessity given bunnie's proclivity towards open/available tools. Getting access to architecture details of the TI PRU would necessitate an NDA, would it not?
dmitrygr 1 days ago [-]
very cool. tiny processors everywhere. but be nice to PIO. PIO is good :)
bunnie 1 days ago [-]
Agreed! The PIO is great at what it does. I drew a lot of inspiration from it.
dmitrygr 1 days ago [-]
What are your thoughts on efficiency? BIO vs PIO implementing, say, a 68k 16-bit-wide bus slave. I know I can support a 66MHz 68K bus clock with PIO at 300MHz. How much clock speed would BIO need?
bunnie 1 days ago [-]
It depends a lot upon where the processing is happening. For example, you could do something where all the data is pre-processed and you're just blasting bits into a GPIO register with a pair of move instructions - in which case you could get north of 60MHz. But I think that's sort of cheating: you'll run out of pre-processed data pretty quickly, and then you have to take a delay to generate more data.
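As a back-of-envelope model of the trade-off described here (all numbers are placeholders, not Baochip specs - real rates also depend on wait states and how fast data can be generated):

```python
# Toy throughput model for a tight bit-banging loop: the output period is
# just (instructions per period) x (cycles per instruction) core clocks.
def max_toggle_mhz(core_mhz, instrs_per_period, cycles_per_instr=1):
    """Highest square-wave rate a pre-processed blast loop can sustain."""
    return core_mhz / (instrs_per_period * cycles_per_instr)

# e.g. two moves plus a 2-cycle branch at a hypothetical 400 MHz core clock
print(max_toggle_mhz(400, 4))  # 100.0
```

The moment the loop also has to generate data, `instrs_per_period` grows and the rate drops accordingly - which is the "cheating" caveat above.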
The 25MHz number I cite as the performance expectation is "relaxed": I don't want to set unrealistic expectations on the core's performance, because I want everyone to have fun and be happy coding for it - even relatively new programmers.
However, with a combination of overclocking and optimization, higher speeds are definitely on the horizon. Someone on the Baochip Discord thought up a clever trick I hadn't considered that could potentially get toggle rates into the hundreds of MHz. So, there's likely a lot to be discovered about the core that I don't even know about, once it gets into the hands of more people.
dmitrygr 1 days ago [-]
I specified slave specifically because slave is a LOT harder. Master is always easy. Waiting for someone else’s clock and then capturing and replying asap is the hard part. Especially if as a slave you need to simulate a read.
On rp2350 it is pio (wait for clock) -> pio (read address bus) -> dma (addr into lower bits of dma source for next channel) -> dma (Data from SRAM to PIO) -> pio (write data to data bus) chain and it barely keeps up.
bunnie 14 hours ago [-]
If there's a single rising edge on the bus that you can use as a quantum trigger, then the reads turn into a series of moves into a FIFO, and the response can be quite fast. The quantum-trigger-on-GPIO was provided to solve exactly the problem you described.
dmitrygr 4 hours ago [-]
Awesome thank you.
MayeulC 4 hours ago [-]
Hey, glad to see you here. I'm a huge fan of your projects, and the Baochip was one I didn't see coming. Very nice surprise!
I ordered a few, thinking it would make a good logic analyzer (before the details of the BIO were published). Obviously, it's going to be a stretch with multiple cycles per instruction, and a reduced instruction set. I'll see how far I can push it if I rely on multiple BIOs, perhaps with some tricks such as relying on an external clock signal.
At first glance, they seemed to be perfect for doing some basic RLE or Huffman compression on-the-fly, but I am less sure now; I will have to play with it. Bit-packing may be somewhat expensive to perform, too.
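For what it's worth, the kind of on-the-fly RLE being considered is only a few operations per sample in software - a rough Python model (not BIO code; names are invented for illustration):

```python
# Byte-oriented run-length encoding of logic-analyzer-style samples:
# collapse repeats into (value, run_length) pairs, capping runs at 255
# so each pair still fits in two bytes.
def rle_encode(samples):
    out = []
    for s in samples:
        if out and out[-1][0] == s and out[-1][1] < 255:
            out[-1][1] += 1
        else:
            out.append([s, 1])
    return [(v, n) for v, n in out]

def rle_decode(pairs):
    """Inverse transform, to sanity-check the encoder."""
    return [v for v, n in pairs for _ in range(n)]
```

The per-sample work is one compare and one increment in the common case; as noted above, the expensive part on a small core is more likely the bit-packing of the output pairs.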
One thing stood out to me in this design: the liberal use of the 16 extra registers. It's a very clever trick, but wouldn't some of these be better exposed as memory addresses? Or do you foresee applications where they are in the hot path (where the inability to write immediate values may matter)? Stuff like core ID, debug, or even GPIO direction could be hard-wired to memory addresses, leaving space for some extra features (not sure which? General purpose registers? More queues? More GPIOs? A special-purpose HW block?).
I really like the "snap to quantum" mechanism: as you wrote, it is good for portability, though there should be a way to query frequency, if portability is really a goal.
Anyway, it's plenty for a v1, plenty of exciting things to play with, including the MMU of the main core!
bunnie 4 hours ago [-]
The core ID definitely didn't need to be in a register, but the elapsed clocks since reset is actually really handy. Having this in the hot path allows me to build a captouch sensor using the BIO, because the clock increment is 1.42ns and even though the rise time of the pad is microseconds you get plenty of resolution at that counting rate.
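The resolution argument can be checked with quick arithmetic (the 1.42 ns per count is from the comment above; the ~3 µs rise time is just an example value):

```python
# Counts of the free-running cycle counter that elapse during a pad's
# rise time - i.e. the measurement resolution of the captouch approach.
TICK_NS = 1.42

def counts_for_rise(rise_us):
    return rise_us * 1000.0 / TICK_NS  # convert µs to ns, divide by tick

# a ~3 µs rise gives on the order of two thousand discrete steps
print(round(counts_for_rise(3.0)))  # 2113
```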
I think it will be interesting to see what people end up doing with it and what are the pain points. As you say, it's a v1 - with any luck there will be a v2, so we could consider the time starting now as a deliberation period for what goes into v2.
The good news is that it also all compiles into an FPGA, so proposed patches can be tested & vetted in hardware, albeit at a much slower clock rate.
MayeulC 3 hours ago [-]
Ah, thank you for the example, I understand how a linearly-increasing counter can be useful, if you use it that way. It would obviously be more versatile with write access & configurable clock dividers, pre-setters, counting direction, etc. The current design probably allows re-using the counter across cores & minimizes space, so it makes sense to me. I should dig into the RTL when I have a bit of time… Maybe I'll make it my bedside reading?
You could also say it's up to the user to implement a fully-fledged timer/counter in a BIO coprocessor if they need one, though ideally there would be a shared register (or a way to configure the FIFOs depth + make them non-blocking) to communicate the result.
Small cores like these are really fun to play with: the constraints easily fit in your head, and finding some clever way to use the existing HW is very rewarding. Who needs Zachtronics games when you have a BIO or PIO?
Lerc 8 hours ago [-]
I'm currently elbow deep in making a PIO+DMA sprite and tile display renderer.
Losing the high maximum data rate is quite a cost, but in my use case BIO would be the clear winner. Indexed pixel format conversion on PIO means shifting out the high bits of the palette address, then the index, then some zeros. That goes to a FIFO which is read by a DMA simply to write it to the readaddr+trigger of another DMA, which feeds into another FIFO (which is the program doing the transparency).
That, I suspect, becomes a much simpler task with BIO.
It is an interesting case, where just knowing that the higher potential rate of the PIO is there is a kind of comfort even when you don't currently need it.
Although for those higher rates it is very rarely reactive and most often just wiggling wires in a predetermined fashion.
I wonder if having a register that can be DMA'd to could perform the equivalent function of side-set to play a fixed sequence to some pins at full clock speed. Like playing macros.
I guess another approach: a 32-bit register could shift out 4 bits of side-set per clock cycle. Then you could pre-program the next 8 cycles in a single 32-bit write. It would give you breathing space to drive the main data while the side-set does fixed-pattern signaling.
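The nibble idea above sketches out like this in software (illustrative only - there is no such register in the current BIO, and the names are made up):

```python
# Unpack one 32-bit "side-set macro" word into eight 4-bit pin states,
# one nibble per clock, least-significant nibble first.
def unpack_sideset(word):
    return [(word >> (4 * i)) & 0xF for i in range(8)]

# a single 32-bit write pre-programs the next 8 cycles of pin wiggling
print(unpack_sideset(0x87654321))  # [1, 2, 3, 4, 5, 6, 7, 8]
```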
bunnie 6 hours ago [-]
I suspect there are tricks to get higher rates, for sure. And hopefully once we see a library of applications forming, we can make informed decisions about what extensions and features would be necessary to enable the next level of I/O performance.
jononor 9 hours ago [-]
Very much looking forward to playing with the BIO functionality on the Baochips that I have ordered. Thanks for the nice write up!
It is fascinating to see how widely applicable the "just throw a RISC-V core or 4 in there" design pattern is. The range of standardized CPU designs, the number of mature open source implementations, the lack of royalty fees, and the ready-to-run programming toolchains really drive this to a new level. And CPUs are small in die area anyway compared to SRAM! It was cool to see on the RP2350 how they just threw in another two RISC-V cores next to the ARMs.
For the reasons above, I think this trend will continue. For example, in my specialization of edge machine learning, we are seeing MEMS sensors that integrate user-programmable DSP+ML+CPU right there on the sensor chip.
mrlambchop 1 days ago [-]
I loved this article and had wanted to play with PIO for a long time (or at least, learn from it through playing!).
One thing jumped out here - I assumed the CISC-style PIO had a mental model of "one instruction per cycle" and thus it was pretty easy to reason about the underlying machine (including any delay slots etc...).
For this RISC model using C, we are now reasoning about compiled code, which has somewhat variable instruction timing (1-3 cycles), and that introduces an uncertainty: the compiler, and understanding its implementation.
I think this means that the PIO is timing-first, as timing == waveform where BIO is clarity-first with C as the expression and then explicit hardware synchronization.
I like both models! I am wondering, however, about the quantum delays that are being used to set the deadlines - here, human-derived wait delays use knowledge of the compiled instructions to set the timing.
Might there not be a model of 'preparing the next hardware transaction' and then 'waiting for an external synchronization', such as an external signal or internal clock, so we don't need to count the instruction cycles so precisely? On the external signal side, I guess the instruction is 'wait for GPIO change' or something, so the value is immediately ready (int i = GPIO_read_wait_high(23) or something), and the output side does the same but synchronizing (GPIO_write_wait_clock(24, CLOCK_DEF)) as an alternative to the explicit quantum delays.
This might be a shadow register / latch model in more generic terms - prep the work in shadow, latch/commit on trigger.
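The shadow/latch model proposed here can be expressed as a tiny software model (a hypothetical mechanism, not an existing BIO or PIO feature):

```python
# Shadow register pattern: stage the next value with no timing pressure,
# then commit it to the live (pin-visible) register on a trigger event.
class ShadowReg:
    def __init__(self, value=0):
        self.live = value    # what the pins currently see
        self.shadow = value  # staged next value

    def prepare(self, value):
        self.shadow = value  # done at leisure, off the critical path

    def trigger(self):
        self.live = self.shadow  # atomic commit on the sync event
```

The appeal is that only `trigger()` needs to be cycle-accurate; `prepare()` can be ordinary compiled code of variable timing.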
Anyway, great work Bunnie!
bunnie 1 days ago [-]
The idea of the wait-to-quantum register is that it gets you out of cycle-counting hell at the expense of sacrificing a few cycles as rounding errors. But yes, for maximum performance you would be back to cycle counting.
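The rounding error is easy to quantify; the 32-cycle quantum below is purely illustrative, not the BIO's actual quantum length:

```python
# Idle cycles lost to snapping the end of a work burst to the next
# quantum boundary - the "rounding error" of wait-to-quantum.
def quantum_wait_cost(work_cycles, quantum=32):
    return (-work_cycles) % quantum

print(quantum_wait_cost(37))  # 27: finish 5 cycles into a quantum,
                              # idle out the remaining 27
```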
That being said - one nice thing about the BIO being open source is you can run the verilog design in Verilator. The simulation shows exactly how many cycles are being used, and for what. So for very tight situations, the open source RTL nature of the design opens up a new set of tools that were previously unavailable to coders. You can see an example of what it looks like here: https://baochip.github.io/baochip-1x/ch00-00-rtl-overview.ht...
Of course, there's a learning curve to all new tools, and Verilator has a pretty steep curve in particular. But, I hope people give the Verilator simulations a try. It's kind of neat just to be able to poke around inside a CPU and see what it's thinking!
drob518 1 days ago [-]
You could always get around the compiler uncertainty using a RV assembler, no? These IO programs are not long or terribly sophisticated.
bunnie 14 hours ago [-]
Correct - actually, most programs I've written for the BIO are in assembly.
The C compiler support is a relatively recent addition, mostly to showcase the possibilities of doing high-level protocol offloading into the BIO, and the tooling benefits of sticking with a "standard" instruction set.
throwa356262 12 hours ago [-]
The large area usage was a surprise. But is the real PIO also this huge?
My point is, maybe this is one of those designs that blow up in FPGA. Or maybe the open source version of the PIO is simply not as area-efficient as the RPi version?
phire 12 hours ago [-]
Barrel shifters are one of those things that end up a lot bigger in FPGAs than ASICs. Not really because a barrel shifter is harder, but because FPGAs optimise for most other common building blocks and barrel shifters are kind of left behind.
But even on the real RP2040, PIO is not small.
Take a look at the annotated die shot [0]. The PIO blocks are notably bigger than the PROC blocks.
[0] https://assets.raspberrypi.com/static/floorplan@2x-a25341f50...
It's hard to know for sure, because we don't have access to the PIO's implementation, but I suspect that the PIO is "not small".
That being said - size isn't everything. At these small geometries you have gates to burn, and having access to multiple shifts in a single cycle really does help in a range of serialization tasks.
alex7o 1 days ago [-]
This is actually super cool: you can use those as both math accelerators and as I/O, and with them being in lockstep you can kind of use them as int-only shader units. I don't know how this is useful yet.
Btw, I am curious about edge cases. Maybe I missed it in the article, but what is the size of the FIFO?
Or the more dangerous part: timing now becomes complex to determine in cases where, say, each read from the FIFO is an ISR and you only have until the next FIFO read's worth of instructions before you stall the system - that looks too hard to debug to me.
bunnie 14 hours ago [-]
FIFO is 8-deep. I did fail to mention that explicitly in the article, I think. The depth is so automatic to me that I forget other people don't know it.
The deadlock possibilities with the FIFO are real. It is possible to check the "fullness" of a FIFO using the built-in event subsystem, which allows some amount of non-blocking backpressure to be had, but it does incur more instruction overhead.
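The non-blocking alternative described here amounts to checking fullness before pushing; a toy model of the idea (the event subsystem's actual API is not shown - this is just the control flow):

```python
from collections import deque

# Software model of a BIO-style FIFO (8-deep, per the comment above)
# with a non-blocking push instead of a potentially deadlocking one.
class Fifo:
    def __init__(self, depth=8):
        self.q = deque()
        self.depth = depth

    def full(self):
        return len(self.q) >= self.depth

    def try_push(self, word):
        """Return False rather than stalling when the FIFO is full."""
        if self.full():
            return False
        self.q.append(word)
        return True
```

The extra instruction overhead mentioned above is the `full()` check on every push, versus a blocking push that simply stalls.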
guenthert 1 days ago [-]
I appreciate the intro, motivation and comparison to the PIO of the RP2040/2350. How would this compare to the (considerably older, slower, but more flexible) Parallax P8X32A ("Propeller")?
t-3 23 hours ago [-]
The Propeller 2 would be an interesting comparison as well, with its own smart pins playing a similar role.
crest 22 hours ago [-]
IIRC the Propeller is an eight thread barrel CPU with the same number of pipeline stages. So it "retires" just one instruction per cycle. All PIO state machines can run every cycle so they should be considered very small CPU cores. You can think of them as channel I/O co-processors for a microcontroller instead of a mainframe.
> Above is the logic path isolated as one of the longest combination paths in the design, and below is a detailed report of what the cells are.
which is an argument that "fpga_pio" is badly implemented or that PIO is unsuitable for FPGA impls. Real silicon does not need to use a shitton of LUT4s to implement this logic and it can be done much more efficiently and closes timing at higher clocks (as we know since PIO will run near a GHz)
bunnie 1 days ago [-]
As a side note about speed comparisons - please keep in mind the faster speeds cited for the PIO are achieved through overclocking.
The BIO should also be able to overclock. It won't overclock as well as the PIO, for sure - the PIO stores its code in flip-flops, whose performance scales very well with elevated voltages. The BIO uses a RAM macro, which is essentially an analog part at its heart, and responds differently to higher voltages.
That being said, I'm pretty confident that the BIO can run at 800MHz for most cases. However, as the manufacturer I have to be careful about frequency claims. Users can claim a warranty return on a BIO that fails to run at 700MHz, but you can't do the same for one that fails to run at 800MHz - thus whenever I cite the performance of the BIO, I always stick it at the number that's explicitly tested and guaranteed by the manufacturing process, that is, 700MHz.
Third-party overclockers can do whatever they want to the chip - of course, at that point, the warranty is voided!
Retr0id 1 days ago [-]
PIO is unsuitable for FPGA impls, that's what the article says.
> If you’re thinking about using it in an FPGA, you’d be better off skipping the PIO and just implementing whatever peripherals you want directly using RTL.
drob518 1 days ago [-]
Yea, I think the point is that if you’re implementing in FPGA in any case, a dedicated state machine is going to be a lot smaller than PIO or BIO. But if you’re making a standard part with hardcoded functionality then BIO is going to be smaller than PIO.
dmitrygr 1 days ago [-]
Yes, my point is that the article throws a lot of shade at PIO while the real issue is that the author is trying to shove a third-party FPGA reimpl of it into a place it never belonged. PIO itself is a perfectly good design for what it does and where it does it.
bunnie 1 days ago [-]
Actually, the PIO does what it does very well! There is no "worse" or "better" - just different.
Because it does what it does so well, I use the PIO as the design study comparison point. This requires taking a critical view of its architecture. Such a review doesn't mean its design is bad - but we try to take it apart and see what we can learn from it. In the end, there are many things the PIO can do that the BIO can't do, and vice-versa. For example, the BIO can't do the PIO's trick of bit-banging DVI video signals; but the PIO isn't going to be able to do protocol processing either.
In terms of area, the larger area numbers hold for both an ASIC flow as well as the FPGA flow. I ran the design through both sets of tools with the same settings, and the results are comparable. However, it's easier to share the FPGA results because the FPGA tools are NDA-free and everyone can replicate it.
That being said, I also acknowledge in the article that it's likely there are clever optimizations in the design of the actual PIO that I did not implement. Still, barrel shifters are a fairly expensive piece of hardware whether in FPGA or in ASIC, and the PIO requires several of them, whereas the BIO only has one. The upshot is that the PIO can do multiple bit-shifts in a single clock cycle, whereas the BIO requires several cycles to do the same amount of bit-shifting. Again, neither good nor bad - just different trade-offs.
phire 7 hours ago [-]
> The upshot is that the PIO can do multiple bit-shifts in a single clock cycle... it's likely there are clever optimizations in the design of the actual PIO that I did not implement
I was curious, so I looked into this. From what I can tell, PIO can only actually do a maximum of two shifts per cycle. That's one IN, OUT, or SET instruction plus a side-set.
And the side-set doesn't actually require a full barrel shifter. It only ever needs to shift a maximum of 5 bits (to 32 positions), which is going to cut down its size. With careful design, you could probably get away with only a single 32-bit barrel shifter (plus the 5-bit side-set shifter).
Interestingly, Figure 48 in the RP2040 Datasheet suggests they actually use separate input and output shifters (possibly because IN and OUT rotate in opposite directions?). It also shows the interface between the state machine input/output mapping, pointing out the two separate output channels.
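One way to see the area point: a full 32-bit barrel shifter decomposes into log2(32) = 5 cascaded mux stages, while a side-set shifter that only needs to place a 5-bit value at one of 32 positions can be much shallower. A software model of the staged structure (illustrative, not the PIO's actual netlist):

```python
# A 32-bit logical right shift decomposed into 5 conditional stages -
# each stage corresponds to one mux layer of a hardware barrel shifter.
def staged_shift_right(x, amount):
    x &= 0xFFFFFFFF
    for stage in (16, 8, 4, 2, 1):
        if amount & stage:  # one bit of the shift amount drives each stage
            x >>= stage
    return x
```

Five full 32-bit mux layers per shifter is why needing "several of them" adds up quickly in area.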
raphlinus 1 days ago [-]
Thanks btw for saying clearly that BIO is not suitable for DVI output. I was curious about this and was planning to ask on social media.
I've done some fun stuff in PIO, in particular the NRZI bit stuffing for USB (12Mbps max). That's stretching it to its limit. Clearly there will be things for which BIO is much better.
I suspect that a variant of BIO could probably do DVI by optimizing for that specific use case (in particular, configuring shifters on the output FIFO), but I'm not sure it's worth the lift.
bunnie 14 hours ago [-]
USB 12Mbps is one of the envisioned core use cases - the Baochip doesn't have a host USB interface, so being able to emulate a full-speed USB host with a BIO core opens the possibility of things like having a keyboard that you can plug into the device. CAN is another big use case: once there is a CAN bus emulator, there's a bunch of things you can do. Another one is 10/100Mbit Ethernet - it's not fast, but good for extremely long runs (think repeaters for lighting protocols across building-scale deployments).
When considering the space of possibilities, I focused on applications where I could see actual product being sold that relies upon the feature. The problem with DVI is that while it's a super-clever demo, I don't see volume products going to market relying upon that feature. The moment you connect to an external monitor, you're going to want an external DRAM chip to run the sorts of applications that effectively utilize all those pixels. I could be wrong and have misjudged the utility of the demo, but if you do the analysis on the bandwidth and RAM available in the Baochip, I feel you could do a retro-gaming emulator with the chip; you wouldn't, for example, be replacing a video kiosk with it. Running DOOM on a TV would be cool, but also, you're not going to sell a video game kit that just runs DOOM and nothing else.
The good news is there's plenty of room to improve the performance of the BIO. If adoption is robust for the core, I can make the argument to the company that's paying for the tape-outs to give me actual back-end resources and I can upgrade the cores to something more capable, while improving the DMA bandwidth, allowing us to chase higher system frequencies. But realistically, I don't see us ever reaching a point where, for example, we're bit-banging USB high speed at 480Mbps - if not simply because the I/Os aren't full-swing 3.3V at that point in time.
awjlogan 12 hours ago [-]
My feeling about programmable IOs is they’re fun, but not the right choice for commodity high speed interfaces like USB. You obviously can make them work, but they’re large compared to what you would need for a dedicated unit. The DVI over PIO is a good example: showed something interesting (and that’s great!) but not widely useful. Also, a lot of protocols, even slow ones, have failure and edge cases that would need to be covered. Not to mention the physical characteristics, like you’ve said for high speed USB.
MayeulC 4 hours ago [-]
This is true, but only relevant if you order enough units (>100 k? Depending on price & margin of course) to customize your die. Otherwise, you have to find a chip with the I/Os that you want, all the rest being equal. Good luck with that if you need something specific (8 UARTs for instance) or obscure.
raphlinus 12 hours ago [-]
Yes, I can see BIO being really good at USB host. With 4k of SRAM I can see it doing a lot more of the protocol than just NRZI; easily CRC and the 1kHz SOF heartbeat, and I wouldn't be surprised if it could even do higher level things like enumeration.
You may be right about not much scope for DVI in volume products. I should be clear I'm just playing with RP2350 because it's fun. But the limitation you describe really has more to do with the architectural decision to use a framebuffer. I'm interested in how much rendering you can get done racing the beam, and have come to the conclusion it's quite a lot. It certainly includes proportional fonts, tiles'n'sprites, and 4bpp image decompression (I've got a blog post in the queue). Retro emulators are a sweet spot for sure (mostly because their VRAM fits neatly in on-chip SRAM), but I can imagine doing a kiosk.
Definitely agree that bit-banging USB at 480Mbps makes no sense, a purpose-built PHY is the way to go.
Retr0id 1 days ago [-]
It didn't read that way, to me.
RS-232 21 hours ago [-]
> The build script compiles C code down to a clang intermediate assembly, which is then handed off to a Python script that translates it into a Rust macro which is checked into Xous as a buildable artifact using its pure-Rust toolchain.
Ah yes, the good ol “we solved the C problem by turning it into four other problems” pipeline
[0] https://assets.raspberrypi.com/static/floorplan@2x-a25341f50...
That being said - size isn't everything. At these small geometries you have gates to burn, and having access to multiple shifts in a single cycle really does help in a range of serialization tasks.
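To make the serialization point concrete, here is a generic C sketch (not BIO or PIO code) of clocking a word out MSB-first. Each bit costs a shift plus a pin write, so a core that can shift and drive a pin in the same cycle roughly halves the inner-loop work:

```c
#include <assert.h>
#include <stdint.h>

/* Serialize a 32-bit word MSB-first into an output array, standing
 * in for a GPIO pin. One shift per bit: on a core without combined
 * shift-and-output, that shift is a whole extra cycle per bit. */
static void serialize_msb_first(uint32_t word, uint8_t out[32]) {
    for (int i = 0; i < 32; i++) {
        out[i] = (word >> 31) & 1; /* drive the current top bit */
        word <<= 1;                /* one shift per bit */
    }
}
```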
Btw, I am curious about edge cases. Maybe I missed it in the article, but what is the size of the FIFO?
The more dangerous part is that timing becomes hard to determine in complex cases: if each read from the FIFO triggers an ISR, you only have until the next FIFO read's worth of instructions, otherwise you stall the system - and that looks too hard to debug to me.
The deadlock possibilities with the FIFO are real. It is possible to check the "fullness" of a FIFO using the built-in event subsystem, which allows some amount of non-blocking backpressure to be had, but it does incur more instruction overhead.
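The non-blocking pattern might look like the following C sketch. The FIFO depth and the `fifo_full` check are stand-ins for whatever the event subsystem actually exposes; none of this is the real BIO API:

```c
#include <assert.h>
#include <stdint.h>

#define FIFO_DEPTH 8 /* assumed depth, for illustration only */

typedef struct {
    uint32_t data[FIFO_DEPTH];
    unsigned head, count;
} fifo_t;

/* Stand-in for reading the event subsystem's fullness flag. */
static int fifo_full(const fifo_t *f) { return f->count == FIFO_DEPTH; }

/* Non-blocking push: returns 0 instead of stalling when full, so
 * the caller can apply backpressure - at the cost of the extra
 * check instructions on every iteration. */
static int fifo_try_push(fifo_t *f, uint32_t v) {
    if (fifo_full(f)) return 0;
    f->data[f->head] = v;
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count++;
    return 1;
}
```

The trade-off is exactly the instruction overhead mentioned above: a blocking push is one instruction, while the checked version costs a test and branch each time.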
Have some on the way! Can't wait!
which is an argument either that "fpga_pio" is badly implemented or that PIO is unsuitable for FPGA implementations. Real silicon does not need a shitton of LUT4s to implement this logic - it can be done much more efficiently, and it closes timing at higher clocks (as we know, since PIO will run near a GHz).
The BIO should also be able to overclock. It won't overclock as well as the PIO, for sure - the PIO stores its code in flip-flops, whose performance scales very well with elevated voltages. The BIO uses a RAM macro, which is essentially an analog part at its heart, and responds differently to higher voltages.
That being said, I'm pretty confident that the BIO can run at 800MHz for most cases. However, as the manufacturer I have to be careful about frequency claims. Users can claim a warranty return on a BIO that fails to run at 700MHz, but you can't do the same for one that fails to run at 800MHz - thus whenever I cite the performance of the BIO, I always stick to the number that's explicitly tested and guaranteed by the manufacturing process, that is, 700MHz.
Third-party overclockers can do whatever they want to the chip - of course, at that point, the warranty is voided!
> If you’re thinking about using it in an FPGA, you’d be better off skipping the PIO and just implementing whatever peripherals you want directly using RTL.
Because it does what it does so well, I use the PIO as the design study comparison point. This requires taking a critical view of its architecture. Such a review doesn't mean its design is bad - but we try to take it apart and see what we can learn from it. In the end, there are many things the PIO can do that the BIO can't, and vice-versa. For example, the BIO can't do the PIO's trick of bit-banging DVI video signals; but the PIO isn't going to be able to do protocol processing, either.
In terms of area, the larger area numbers hold for both an ASIC flow as well as the FPGA flow. I ran the design through both sets of tools with the same settings, and the results are comparable. However, it's easier to share the FPGA results because the FPGA tools are NDA-free and everyone can replicate it.
That being said, I also acknowledge in the article that it's likely there are clever optimizations in the design of the actual PIO that I did not implement. Still, barrel shifters are a fairly expensive piece of hardware whether in FPGA or in ASIC, and the PIO requires several of them, whereas the BIO only has one. The upshot is that the PIO can do multiple bit-shifts in a single clock cycle, whereas the BIO requires several cycles to do the same amount of bit-shifting. Again, neither good nor bad - just different trade-offs.
I was curious, so looked into this. From what I can tell, PIO can only actually do a maximum of two shifts per cycle. That's one IN, OUT, or SET instruction plus a side-set.
And the side-set doesn't actually require a full barrel shifter. It only ever needs to shift a maximum of 5 bits (to 32 positions), which is going to cut down its size. With careful design, you could probably get away with only a single 32-bit barrel shifter (plus the 5-bit side-set shifter).
Interestingly, Figure 48 in the RP2040 Datasheet suggests they actually use separate input and output shifters (possibly because IN and OUT rotate in opposite directions?). It also shows the interface between the state machine input/output mapping, pointing out the two separate output channels.
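For anyone wondering why a 32-bit barrel shifter counts as "expensive," it's essentially log2(32) = 5 layers of 32-bit 2:1 muxes. Here's a behavioral C model of that stage structure (an illustration of the general technique, not the PIO's actual RTL):

```c
#include <assert.h>
#include <stdint.h>

/* Behavioral model of a 32-bit barrel rotate-right: five mux stages,
 * each conditionally rotating by 1, 2, 4, 8, or 16 bits depending on
 * one bit of the shift amount. In hardware, every stage is a row of
 * 32 2:1 muxes - that's where the area goes, and why needing several
 * independent shifters per cycle adds up. */
static uint32_t barrel_ror(uint32_t x, unsigned amt) {
    for (unsigned stage = 0; stage < 5; stage++) {
        unsigned s = 1u << stage; /* 1, 2, 4, 8, 16 */
        if (amt & s)
            x = (x >> s) | (x << (32u - s)); /* rotate right by s */
    }
    return x;
}
```

By the same logic, a 5-bit side-set shifter needs far fewer mux rows, which is why it's so much cheaper than a full 32-bit barrel shifter.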
I've done some fun stuff in PIO, in particular the NRZI bit stuffing for USB (12Mbps max). That's stretching it to its limit. Clearly there will be things for which BIO is much better.
I suspect that a variant of BIO could probably do DVI by optimizing for that specific use case (in particular, configuring shifters on the output FIFO), but I'm not sure it's worth the lift.
When considering the space of possibilities, I focused on applications where I could see actual products being sold that rely upon the feature. The problem with DVI is that while it's a super-clever demo, I don't see volume products going to market relying on it. The moment you connect to an external monitor, you're going to want an external DRAM chip to run the sorts of applications that effectively utilize all those pixels. I could be wrong and have misjudged the utility of the demo, but if you do the analysis on the bandwidth and RAM available in the Baochip, I feel you could do a retro-gaming emulator with the chip - but you wouldn't, for example, be replacing a video kiosk with it. Running DOOM on a TV would be cool, but also, you're not going to sell a video game kit that just runs DOOM and nothing else.
The good news is there's plenty of room to improve the performance of the BIO. If adoption is robust for the core, I can make the argument to the company that's paying for the tape-outs to give me actual back-end resources, and I can upgrade the cores to something more capable while improving the DMA bandwidth, allowing us to chase higher system frequencies. But realistically, I don't see us ever reaching a point where, for example, we're bit-banging USB high speed at 480Mbps - if only because the I/Os aren't full-swing 3.3V at that point in time.
You may be right about not much scope for DVI in volume products. I should be clear I'm just playing with RP2350 because it's fun. But the limitation you describe really has more to do with the architectural decision to use a framebuffer. I'm interested in how much rendering you can get done racing the beam, and have come to the conclusion it's quite a lot. It certainly includes proportional fonts, tiles'n'sprites, and 4bpp image decompression (I've got a blog post in the queue). Retro emulators are a sweet spot for sure (mostly because their VRAM fits neatly in on-chip SRAM), but I can imagine doing a kiosk.
Definitely agree that bit-banging USB at 480Mbps makes no sense, a purpose-built PHY is the way to go.
Ah yes, the good ol' "we solved the C problem by turning it into four other problems" pipeline