Atlys MIG notes
- Create new design, component name ddr2
- Don't care about pin-compatible parts. Skip.
- Atlys has DDR2 on Bank 3, so set that. I didn't use AXI for the FFT interface, so I won't bother with it here either. Note to self - find out whether AXI is easier to use than the native port interface!
- Set part to MT47H64M16xx-25E
- Digilent say that the boards are tested to 800 MHz, but it's not really clear what this means. UG388 states that 800 MHz is double the maximum DDR2 data rate! Either way, I'll leave the memory clock period at 3200 ps (312.5 MHz), which is as fast as it goes without extended MCB performance mode anyway.
- Leave all MCB options as default
- For the port configuration, I'm not entirely sure what I want. I suppose I want one port for loading data in and one for getting it out again. The data width is 4 x 14 bits, so two 64-bit interfaces seem like a good fit anyway. I didn't have any particular reason to select either mapping, so I picked the second one.
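- For reference, the packing I have in mind is just concatenation with zero padding. A sketch only - the fft_re*/fft_im* and p0_wr_data names are made up, not from any generated code, and the real design still needs handshaking with the write FIFO:
wire [13:0] fft_re0, fft_im0, fft_re1, fft_im1;  // four 14-bit values per sample set
wire [63:0] p0_wr_data;
// 4 x 14 = 56 data bits, zero-padded up to the 64-bit port width
assign p0_wr_data = {8'd0, fft_im1, fft_re1, fft_im0, fft_re0};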
- Default timeslot options
- Termination: the Atlys manual states that address and control signals are terminated, but the schematic shows that RZQ (L6)/ZIO (C2) calibration pins are present. I'll try using the calibrated input termination and see what happens.
- System clock is single ended (both the 100 MHz oscillator and the 50 MHz clock from my ADC)
Now, to make sense of it all!
Implementation
- Generate clocks: the SP601 example has a 200 MHz oscillator and generates 625 MHz (2x the frequency specified in the core generator) plus 78.125 MHz for traffic generation and soft calibration. The same values can be generated from the Atlys' 100 MHz oscillator (arithmetic after the parameters below). Can't actually see how the clock is anything but 400, though!
localparam C3_CLKOUT0_DIVIDE = 1;
localparam C3_CLKOUT1_DIVIDE = 1;
localparam C3_CLKOUT2_DIVIDE = 8;
localparam C3_CLKOUT3_DIVIDE = 4;
localparam C3_CLKFBOUT_MULT = 25;
localparam C3_DIVCLK_DIVIDE = 4;
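- Sanity check on the arithmetic, assuming I've read the PLL dividers right: VCO = 100 MHz x 25 / 4 = 625 MHz; CLKOUT0/1 (divide by 1) = 625 MHz, the 2x memory clock, which gets divided by two on its way to the memory, giving the 312.5 MHz (3200 ps) memory clock; CLKOUT2 (divide by 8) = 78.125 MHz; CLKOUT3 (divide by 4) = 156.25 MHz.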
- Write a simple test bench to verify that the PLL is working in simulation - it is. After ~160 ns, the DDR2CLK_* outputs start toggling at 312.5 MHz.
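- The test bench was nothing fancy - roughly this, where 'top' and its port names are placeholders for whatever your own top level looks like:
`timescale 1ns / 1ps
module tb_clocks;
    reg clk100 = 1'b0;
    reg rst    = 1'b1;

    always #5 clk100 = ~clk100;   // 100 MHz board oscillator

    // Placeholder instantiation - substitute your own top-level module and ports
    top uut (
        .clk_100mhz (clk100),
        .reset      (rst)
    );

    initial begin
        #200  rst = 1'b0;         // release reset
        #2000 $finish;            // long enough to see the PLL lock (~160 ns above)
    end
endmodule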
- Now try sending commands to the memory!
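- By "commands" I mean poking the port-0 user interface, roughly like this. The c3_p0_* names are from the generated wrapper and the instruction encodings are per the MCB user guide, but the sequencing here is my first guess, so treat it as a sketch; all the c3_p0_* signals would be regs driven by the test bench (widths assume the 64-bit port configuration):
// First guess at a single write followed by a read-back on port 0
initial begin
    c3_p0_cmd_en = 1'b0;
    c3_p0_wr_en  = 1'b0;
    wait (c3_calib_done);                    // nothing useful happens before calibration

    // push one word into the write FIFO
    @(posedge c3_clk0);
    c3_p0_wr_en   <= 1'b1;
    c3_p0_wr_mask <= 8'h00;                  // 0 = write every byte
    c3_p0_wr_data <= 64'hDEAD_BEEF_CAFE_F00D;
    @(posedge c3_clk0);
    c3_p0_wr_en   <= 1'b0;

    // write command: instr 000 = write, bl = burst length - 1, byte address 0
    c3_p0_cmd_en        <= 1'b1;
    c3_p0_cmd_instr     <= 3'b000;
    c3_p0_cmd_bl        <= 6'd0;
    c3_p0_cmd_byte_addr <= 30'd0;
    @(posedge c3_clk0);

    // read command: instr 001 = read, same address
    c3_p0_cmd_instr <= 3'b001;
    @(posedge c3_clk0);
    c3_p0_cmd_en    <= 1'b0;
end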
- This doesn't work - attempt to simulate the UG416 example and see what that's doing differently.
- Load the ISE Design Suite Command Prompt
- Go to project/ipcore_dir/corename/example_design/par and run ise_flow.bat (if you want to synthesise the whole thing), or go to project/ipcore_dir/corename/sim/functional/ and run isim.bat.
- sim_tb_top.v does the initial setup and has some interesting Verilog test bench constructs that I didn't know about
- Resets:
initial begin
    c3_sys_rst = 1'b0;
    #20000;                 // hold low for 20,000 time units, then release
    c3_sys_rst = 1'b1;
end
// C3_RST_ACT_LOW is a parameter, 0 in the example, so c3_sys_rst_n is just
// ~c3_sys_rst here (high for the first 20,000 time units, then low)
assign c3_sys_rst_n = C3_RST_ACT_LOW ? c3_sys_rst : ~c3_sys_rst;
- The Ztek example assigns reset based on calibration and some other stuff. Maybe it never calibrates if it hasn't been reset? Or maybe they didn't use that code in the end.
- What about c3_rst0? That one seems to be an output - presumably the reset, synchronised to c3_clk0, for the user-side logic.
- My UUT is either not calibrating or not coming out of reset. These may be necessary for calibration to succeed:
PULLDOWN zio_pulldown3 (.O(zio3));
PULLDOWN rzq_pulldown3 (.O(rzq3));
- Still no good. Check all of the pins and make sure they're wired up - a few of them are Z in my sim but not in the example.
- Setting .C3_SIMULATION to "TRUE" helps a bit. ISim says "The 200 us wait period required before CKE goes active has been skipped in Simulation", like the XAPP simulation does, and I get some activity after 33 us, though no calib_done.
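- That's just a parameter override on the instantiation - something like this, with 'ddr2' being my component name from the core generator:
ddr2 # (
    .C3_SIMULATION ("TRUE")    // skips the 200 us power-up wait in simulation
) u_ddr2 (
    // ... port connections as before ...
);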
- Make some pins inout - this seems to help a lot - there are now many more interesting things going on - but I still don't get any debug messages or calib_done after 100 us.
- Interestingly, while our memory clocks are the same (3200 ps), my c3_clk0 period is 12.8 ns (78.125 MHz) and XAPP's is 25.6 ns (~39 MHz)
- AHH! The simulation uses ddr2_model_c3.v, which emulates a Micron DDR2 device. The MIG is trying to calibrate the memory, but nothing happens in my test bench because the model isn't there. Pretty obvious, really. Add the source file ipcore_dir/corename/user_design/sim/functional/ddr2_model_c3.v to your project and connect it to the MIG core's memory-side pins - now it works.
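- The connection mirrors what sim_tb_top.v does - something like the sketch below for the x16 part. The port list is the standard Micron model's and the mcb3_dram_* nets are the same ones wired to the MIG's memory-side pins in the test bench (i.e. the pins that go to the DDR2 chip on the board), so double-check the names against your own generated files:
ddr2_model_c3 u_mem_c3 (
    .ck      (mcb3_dram_ck),
    .ck_n    (mcb3_dram_ck_n),
    .cke     (mcb3_dram_cke),
    .cs_n    (1'b0),                              // chip select tied active
    .ras_n   (mcb3_dram_ras_n),
    .cas_n   (mcb3_dram_cas_n),
    .we_n    (mcb3_dram_we_n),
    .dm_rdqs ({mcb3_dram_udm, mcb3_dram_dm}),     // upper/lower byte data masks
    .ba      (mcb3_dram_ba),
    .addr    (mcb3_dram_a),
    .dq      (mcb3_dram_dq),
    .dqs     ({mcb3_dram_udqs, mcb3_dram_dqs}),
    .dqs_n   ({mcb3_dram_udqs_n, mcb3_dram_dqs_n}),
    .rdqs_n  (),
    .odt     (mcb3_dram_odt)
);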
- Lots of timing violation errors, caused by a large TCK_MIN value. This comes from ddr2_model_parameters_c3.vh, which defines these constants based on `ifdefs. Where are those defined?
- Ah, in ddr2.prj, there's the line, verilog work ./ddr2_model_c3.v -d x1Gb -d sg25E -d x16 -i ./
- Can't work out how to set these for my test bench (only) - putting the `defines in my test bench doesn't work. Just add the following to the top of ddr2_model_parameters_c3.vh and it works. These parameters are correct for the Atlys:
`define x1Gb
`define sg25E
`define x16
- Much better, but still not working. I notice that my c3_p0_cmd_clk is Z, while in the XAPP simulation it's toggling at 25.6 ns (39 MHz).
- This is an input (to the IP core) - ah, but c3_clk0 is an output (12.8 ns period) - wire these two together! Another misunderstanding from the Ztek article.
- Now, my commands aren't working. Compare the test bench waveforms to see what I'm doing differently.
- Oh, that would be because the c3_p0_rd_clk and c3_p0_wr_clk (FIFO clocks) aren't wired up to anything. Connect those to c3_clk0 (as well as the c3_p1_*_clk, though I might want to have those in a different clock domain later)
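- In the MIG instantiation that just means feeding the core's own c3_clk0 output back into the port clocks, roughly as below ('ddr2' is my component name and the c3_* port names are from the generated user_design wrapper, so adjust to taste):
ddr2 u_ddr2 (
    // ...
    .c3_clk0       (c3_clk0),       // user-side clock output from the core
    .c3_rst0       (c3_rst0),
    .c3_calib_done (c3_calib_done),

    .c3_p0_cmd_clk (c3_clk0),       // command and FIFO clocks all from c3_clk0 for now
    .c3_p0_wr_clk  (c3_clk0),
    .c3_p0_rd_clk  (c3_clk0),

    .c3_p1_cmd_clk (c3_clk0),       // port 1 could move to its own clock domain later
    .c3_p1_wr_clk  (c3_clk0),
    .c3_p1_rd_clk  (c3_clk0)
    // ...
);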
- It works! Now to write this up as an informative article.