ovgn / openhbmc
Open-source high-performance AXI4-based HyperRAM memory controller
License: Apache License 2.0
Narrow bursts with axlen >= 1 are not supported yet. This is quite a rare operation mode, though I plan to support this feature too, to keep the IP core 100% AXI4 compliant.
Hello
Does OpenHBMC support Infineon HyperRAM chips?
Hello,
I have an XC7Z Zynq as an equivalent to the Artix-7 and want to disable the DDR and use a HyperRAM in some run modes for power optimization. The Zynq-7000 modules aren't part of the compatibility list, but they should still work, right?
Best Regards
What frequency should the iserdes clock be? The same as HyperBus clock phase 0?
Hello
I use a 64 Mbit HyperRAM as MicroBlaze main memory and an SREC bootloader to load the program from a QSPI flash memory. Most of the time everything is OK, but in some cases, after loading the HyperRAM with the application file, reading the first instruction from the HyperRAM fails and the program halts.
I captured the issue with an ILA core and saw that the problem is a wrong value (all ones) at the output of the RWDS ISERDESE2. This happens on the first read after power-up. Sometimes, two clock cycles after reset deassertion, an output of all ones comes out of the RWDS hbmc_iobuf,
which causes the DRU unit to capture an extra 16-bit word before the actual burst transfer begins.
According to UG471:
"After deassertion of reset, the output is not valid until after two CLKDIV cycles."
So I added an iserdes_q_invalid flag to ignore the ISERDESE2 output for two cycles of iserdes_clkdiv after the arst falling edge.
diff --git a/OpenHBMC/hdl/hbmc_iobuf.v b/OpenHBMC/hdl/hbmc_iobuf.v
index 1df7ccf..77296db 100644
--- a/OpenHBMC/hdl/hbmc_iobuf.v
+++ b/OpenHBMC/hdl/hbmc_iobuf.v
@@ -55,6 +55,9 @@ module hbmc_iobuf #
wire iserdes_d;
wire [5:0] iserdes_q;
wire iserdes_ddly;
+
+ reg arst_shift_reg [0:1];
+ wire iserdes_q_invalid;
/*----------------------------------------------------------------------------------------------------------------------------*/
@@ -221,6 +224,24 @@ module hbmc_iobuf #
.SHIFTIN2 ( 1'b0 )
);
+/*----------------------------------------------------------------------------------------------------------------------------*/
+
+ /* ISERDESE2 reset extender
+ * According to UG471, ISERDESE2 output is invalid for two clock cycles after reset deassertion.
+ */
+
+ always @(posedge iserdes_clkdiv or posedge arst) begin
+ if (arst) begin
+ arst_shift_reg[0] <= 1'd1;
+ arst_shift_reg[1] <= 1'd1;
+ end else begin
+ arst_shift_reg[0] <= 1'd0;
+ arst_shift_reg[1] <= arst_shift_reg[0];
+ end
+ end
+
+ assign iserdes_q_invalid = arst_shift_reg[1];
+
/*----------------------------------------------------------------------------------------------------------------------------*/
/* Register ISERDESE2 output */
@@ -228,7 +249,11 @@ module hbmc_iobuf #
if (arst) begin
iserdes_o <= {6{1'b0}};
end else begin
- iserdes_o <= iserdes_q;
+ if (iserdes_q_invalid) begin
+ iserdes_o <= {6{1'b0}};
+ end else begin
+ iserdes_o <= iserdes_q;
+ end
end
end
This seems to work, and the boot problem is now resolved.
@OVGN To solve this I had to dive deep into the code, and I have to say you have done a wonderful job; thank you very much for sharing it!
Hello!
This is some kind of discussion about DRU operation.
I think the internals of this thread will soon be added to the IP core documentation.
Feel free to ask questions!
@OVGN
Hi, first I want to thank you for your great work and for making it open source, which is very valuable.
I saw a problem in my system using OpenHBMC and found a workaround for it, so I decided to report it here for others facing it. Actually, the problem is still not very clear to me and I don't know exactly where the cause is, so I just report my observations without any conclusion.
I have three VDMAs and a MicroBlaze with I-Cache and D-Cache enabled, connected to OpenHBMC by an AXI Interconnect IP (Vivado's default suggestion is AXI SmartConnect, but it uses a lot of resources!). This is part of my system in Vivado:
MicroBlaze is configured with an 8 KB D-Cache and an 8 KB I-Cache, each with a Line Length parameter of 16 for better performance. When the VDMAs' Memory Map Data Width parameter is configured automatically, which gives 64 bits, the memory test (template from Vitis with no changes) fails. This failure is not always the same: sometimes it fails only for the 32-bit test, sometimes for all tests, and sometimes MicroBlaze stalls. However, when I change the VDMAs' Memory Map Data Width parameter manually and set it to 32 bits, everything is OK and the memory test passes.
My system spec is this:
Hi,
Has anyone got OpenHBMC working on the Trenz TE0725 board?
I have been trying to get this to work for days, but no luck; the Cypress chip onboard (8M) needs 0 and 180 deg clock phases... but the IP block takes 0 and 90.
Is it possible to use OpenHBMC with the Cypress chip?
Chip P/N: S27KS0641DPBHI000
After the IP block has been added in Vivado 2020.2 following the instructions on the OpenHBMC project page, it synthesizes / implements, and the 8M is by default added at memory address 0x76000000 in the MicroBlaze memory map. After exporting the board specs and the .bin / .mmi files to XSDK (Vitis), the debugger in XSDK (Vitis) gets an immediate failure if I try to access memory location 0x76000000 and above.
Oh, and the clocks p & n to the IP block plus the iserdes are:
clk_0 = 88.888... MHz
clk_90 = 88.888 MHz, 90 deg relative to clk_0
clk_iserdes = 266 MHz, 0 deg relative to clk_0
All generated from the same MMCM clk_wiz block.
MicroBlaze and everything else, like AXI, runs on a 100 MHz clock from the same clk_wiz instance.
Any ideas?
Thanks!
ISERDESE3 doesn't support a 6:1 ratio... one could go with a 4x clock and 8:1, but that seems like overkill. Is there a reason behind going with a 3x clock rather than 2x?
source C:\some_your_path\OpenHBMC\examples\mb_single_ram mb_single_ram.tcl
doesn't work on Windows; it generates "ERROR: [Common 17-165] too many positional options when parsing".
I had to use:
C:/some_your_path/OpenHBMC/examples/mb_single_ram/mb_single_ram.tcl
no spaces, no backslashes, and it works fine.
Hi, I have downloaded your OpenHBMC IP.
I do not know what frequency and phase I should attach to clk_iserdes.
I am trying to run the HyperRAM at 100 MHz.
Best regards
Lasse Eriksson
Not an issue, more of a question. I stumbled upon your core (thanks for putting it out into the community) while doing some research. I have a design with an STM32H microcontroller interfaced to a Kintex-7, and plan to have the STM as the HyperBUS master. Cypress supplies a Xilinx HyperBUS controller, but only in master mode. Before I started looking at a level of effort to attempt to come up with a controller of my own, I started poking around to see if someone else had done it yet. Your design seemed close, but I do not believe it can act as a slave, correct? Over the course of implementing this, did you ever run across a core that could act in slave mode?
TIA
Hi,
we have implemented OpenHBMC on a custom board, and it seemed to work well; the memory test passed, passed, passed...
but I did run the memory test manually a few more times, and it seems that about one out of ten times the 32-bit test is failing. Is this maybe the KNOWN BUG with an initial transaction that fails?
UPDATE: sometimes the 16-bit test fails, so it is not the first-word issue. Confirmed: 1 out of 10 is a stable fail of either the 32- or 16-bit test. The 8-bit test seems to always pass; I have not seen it fail yet.
UPDATE 2: the FAILING design had a 100 MHz AXI bus clock; changing the AXI clock to 75 MHz seems to fix the problem. So the issue only appears when the AXI bus clock equals the HyperRAM clock!
UPDATE 3: with a 75 MHz AXI clock the issue is less frequent, but it still happens, so there is a bug with AXI.
UPDATE 4: with an 81.81818 MHz AXI clock there seem to be fewer failures.
STATUS: the 32- and 16-bit memory tests fail if executed in a loop. There seems to be a relation to the AXI bus clock: most failures at 100 MHz, fewer at 75 MHz, and even fewer at 81.81818 MHz,
but the failure rate is way too high for the IP core to be useful; this is a bad bug!
UPDATE 5:
it seems the issue is also related to the BUFG clocking mode. We changed the FAILING design to use BUFIO/BUFR clocking, and now we do not see issues: the memory tests are executed in an infinite loop without fail, also with a 100 MHz AXI clock. No failures.
The loop has been running for 4+ days without failures, so the BUFIO mode works well!
This is now a very bad issue. Sorry folks. After switching from BUFG mode to BUFR/BUFIO mode we did see it working well, but just in case we let the loop test keep running. The first time, it ran about a week until it failed. We are now running it all the time and check every day whether it has failed. And we are seeing failures almost every day. So there is a likelihood that OpenHBMC fails within 24 hours of continuous testing.
This is bad; it actually means that OpenHBMC cannot be used in real products, as a once-a-day failure cannot be tolerated. If it works, it should work, and not fail every other day.
We are sure that our target hardware is near ideal for HyperRAM testing: all HyperBus signals are LESS than 4 mm long! This is an amazing layout; the HyperRAM sits below the FPGA and the wires are really all in the range of 2..4 mm! It can't get better than this. So it is for sure not a signal integrity issue.
Argh! I recall that we have reports from another HyperRAM IP vendor that some HyperRAM devices themselves have failures, like real memory content losses. This would mean that the error is really in the HyperRAM device itself. But how to verify it? Right now we just run the memory tests in a forever loop; the error only tells us the memory width of the failing test, and we don't see what data was read or written. That is really not helpful for debugging.
As of now we also do not know whether the problem is still related to the Xilinx FIFO and is essentially the same failure as with the BUFG versions, just happening less often.
We would be really happy to assist in debugging this problem; let us know if we can try something to rule out some possible causes. I myself have few ideas what we could try.
One interesting option to test the IP would be using CRUVI loopback adapter, but for this testing we would need a HyperRAM emulation model, I am guessing the model offered by Cypress would not work well in FPGA :(
Anyway we are happy to assist with this issue. We really would like to see that HyperRAM would work, and well more than 24 hours!
UPDATE: there is a 3-year-old forum entry about errors that happen every 10..20 hours, with different hardware and a different IP core:
https://forum.trenz-electronic.de/index.php/topic,1320.0.html
So there is a chance that there is something bad with the HyperRAM chip itself.
UPDATE 2: a different HyperRAM chip, different hardware, a different IP core, and also data corruption:
https://community.infineon.com/t5/Hyper-RAM/HyperRAM-Memory-Corruption/td-p/281115
It would be really nice to see WHAT type of errors happen here in our testing...
Looks like HyperRAM 3.0 is upcoming:
https://www.infineon.com/cms/en/product/memories/psram-pseudostatic-dram/
We are going to get twice the data bus width, i.e. 16 bits at 200 MHz DDR = 800 MB/s per chip.
Part datasheet: https://www.infineon.com/dgdl/Infineon-Data_Sheet_HYPERRAM-DataSheet-v01_00-EN.pdf?fileId=8ac78c8c80027ecd018050aef7544588
OpenHBMC is going to support this new standard!
The current version does not work with Vivado 2022.2. Steps to reproduce:
take the Git files, create a project in 2020.2, change constraints, compile
test with real hardware CR00107: all working
open the project in 2022.2, upgrade the IP
run the flow
Error running propagate TCL procedure...
Any chance this issue will be resolved in the near future? It would be nice to see support in a recent release of Vivado!
UPDATE: the issue seems to be related to the ADDRESS MAP. If OpenHBMC is "unassigned" in the memory map view, then everything runs to bitgen without errors (well, the design will not work, but it also emits no error). Assigning the memory regions triggers the error, and the memory regions are automatically unassigned again.
Hi, I am using Vivado 2020.2 and testing the HyperRAM controller with an Arty S7 board and a Pmod board with HyperRAM. I am able to pass the memory test up to 108 MHz. When I go to 128 MHz, the memory test fails. I am looking at various possibilities for the failures and have noticed that the timing report shows unconstrained paths, specifically missing input path delays on the data bus and the rwds signal, and also output path delays on the same signals. Is this a concern? I know that you are oversampling with a 3x clock.