
Meaning of set_input_delay and set_output_delay in SDC timing constraints

Introduction

The Synopsys Design Constraints (SDC) format has been adopted by Xilinx (in Vivado, as .xdc files), by Intel FPGA (in Quartus, as .sdc files), and by other FPGA vendors as well. Despite the wide use of this format, there seems to be some confusion regarding the constraints that are used to define I/O timing.

This post defines what these constraints mean, and then shows how Vivado and Quartus calculate the timing in order to validate these constraints. The detailed calculations are shown on two separate pages, one for Vivado and one for Quartus. The output of these tools demonstrates the meaning of the timing constraints, by pointing at the relevant parts of the timing reports.

So there's no need to take my word for it. These examples also show how to check that your own constraints did what they were supposed to do.

The set_input_delay and set_output_delay commands have several flags and options, which are not covered here. This post is about the basics. Advanced options are explained in the tools' documentation.

And yes, surprisingly enough, the meaning of these constraints is the same with Vivado and Quartus. Compatibility. Unbelievable, but true.

What they mean

In short, set_input_delay defines the allowed range of delays of the data toggle at an input port after a clock edge, and set_output_delay defines the range of delays of the clock edge after a data toggle at an output port. The -max and -min flags set the upper and lower ends of these ranges.

Note that if neither -min nor -max is used as a flag on these constraints, the command behaves like two constraints, one with -min and one with -max, both carrying the same value. This is probably not what you want.

These definitions are confusing, because the two commands measure the delay in opposite directions. Presumably, the rationale behind them is that they make it possible to copy numbers from the datasheets of the external components, and use these numbers directly with these constraints.
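
For example, here is a hypothetical sketch of this rationale (the clock name, the port names and the numbers below are made up for this illustration, and the delays of the board's traces are assumed to be negligible). Suppose that the component that drives an input pin has a clock-to-output delay between 1 ns and 3 ns according to its datasheet, and that the component that receives an output pin requires a setup time of 5 ns and a hold time of 1 ns:

# Hypothetical clock name, port names and numbers, for illustration only.
# The data at the input pin toggles between 1 and 3 ns after the clock
# edge, because of the driving component's clock-to-output delay:
set_input_delay -clock the_clock -max 3 [get_ports data_in]
set_input_delay -clock the_clock -min 1 [get_ports data_in]

# The receiving component's setup time goes into -max, and its hold
# time goes into -min with a negated sign:
set_output_delay -clock the_clock -max 5 [get_ports data_out]
set_output_delay -clock the_clock -min -1 [get_ports data_out]

If the delays of the board's traces are significant, the maximal trace delay is added to the -max values, and the minimal trace delay is added to the -min values.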

Always use both min and max

It may seem pointless to bother with the -min and -max flags. For example, a simple set_output_delay with neither flag sets the setup requirement correctly, and sets the hold requirement to a negative value, which is incorrect. But why bother? All this mistake does is to allow the output port to change before the clock edge, and that couldn't happen, could it?
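
To illustrate this, here is a sketch that uses the clock and port names of the example that appears later in this post:

# Without -min or -max, the single value applies to both, so this is
# like applying both "-max 8" and "-min 8":
set_output_delay -clock theclk 8 [get_ports test_out]

The -max part yields a sensible setup requirement, but the -min part allows test_out to change as much as 8 ns before theclk's edge. In other words, the hold requirement is -8 ns, so the output port is allowed to change well before the clock.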

Well, actually, the output can change before the clock edge. For example, it's quite common to let a PLL inside the FPGA (or a similar component) generate the internal clock from a clock at some input pin (i.e. the clock that is visible on the board). This allows the PLL to align the clock on the FPGA's internal clock network with the input clock: the PLL moves (shifts) the clock slightly to compensate for the delay of the clock distribution network.

In fact, the implementation tools are free to move this clock slightly earlier than the board's clock, in order to make it easier to achieve timing. For example, a slow path from the logic to an output pin may violate the maximal allowed delay from clock to output; moving the clock earlier in time fixes this.

But when the FPGA's internal clock is earlier than the clock on the board, the FPGA's outputs may change before the edge of the clock on the board. This can lead to a violation of the hold time of the component that receives these outputs. Nothing prevents this from happening, except for a -min output delay constraint.
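
For example, a minimal sketch (the value of 0 ns is hypothetical, and is different from the value used in the example below):

# Hypothetical value: require that the output port doesn't change
# before the clock edge at all:
set_output_delay -clock theclk -min 0 [get_ports test_out]

With this constraint in place, the tools must ensure that the delay from the clock to the output pin is large enough, even if the internal clock has been shifted to earlier than the board's clock.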

A simple example design

We’ll assume that test_clk is the input clock, test_in is an input pin, and test_out is an output pin, with the following relationship:

   always @(posedge test_clk)
     begin
       test_samp <= test_in;
       test_out <= test_samp;
     end

No PLL is used to align the internal clock with the board’s test_clk, so there’s a significant clock delay.

The following timing constraints are applied in the SDC / XDC file:

create_clock -name theclk -period 20 [get_ports test_clk]
set_output_delay -clock theclk -max 8 [get_ports test_out]
set_output_delay -clock theclk -min -3 [get_ports test_out]
set_input_delay -clock theclk -max 4 [get_ports test_in]
set_input_delay -clock theclk -min 2 [get_ports test_in]
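
In other words (this is an interpretation that is based upon the definitions above, and not a quote from the timing reports): theclk's period is 20 ns, the data at test_in may toggle anywhere between 2 ns and 4 ns after theclk's edge, test_out must settle no later than 20 - 8 = 12 ns after the clock edge that launches it (that is, 8 ns before the next clock edge), and test_out must not change until at least 3 ns after the clock edge.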

As the tools' timing reports are rather long, they are shown on separate pages: one page for Vivado and one page for Quartus.
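
If you want to produce similar reports for your own design, commands along these lines do the job in Vivado's Tcl console (this is a sketch; Quartus' Timing Analyzer has a report_timing command as well, but with different options):

# Vivado only: show the worst paths that end at test_out and that start
# at test_in, for both max delay (setup) and min delay (hold) analysis:
report_timing -to [get_ports test_out] -delay_type min_max -max_paths 2
report_timing -from [get_ports test_in] -delay_type min_max -max_paths 2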
