25. Rasterization
Rasterization is the process by which a primitive is converted to a two-dimensional image. Each discrete location of this image contains associated data such as depth, color, or other attributes.
Rasterizing a primitive begins by determining which squares of an integer grid in framebuffer coordinates are occupied by the primitive, and assigning one or more depth values to each such square. This process is described below for points, lines, and polygons.
A grid square, including its (x,y) framebuffer coordinates, z (depth), and associated data added by fragment shaders, is called a fragment. A fragment is located by its upper left corner, which lies on integer grid coordinates.
Rasterization operations also refer to a fragment's sample locations, which are offset by fractional values from its upper left corner. The rasterization rules for points, lines, and triangles involve testing whether each sample location is inside the primitive. Fragments need not actually be square, and rasterization rules are not affected by the aspect ratio of fragments. Display of non-square grids, however, will cause rasterized points and line segments to appear fatter in one direction than the other.
We assume that fragments are square, since it simplifies antialiasing and texturing. After rasterization, fragments are processed by fragment operations.
Several factors affect rasterization, including the members of VkPipelineRasterizationStateCreateInfo and VkPipelineMultisampleStateCreateInfo.
The VkPipelineRasterizationStateCreateInfo structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkPipelineRasterizationStateCreateInfo {
    VkStructureType                            sType;
    const void*                                pNext;
    VkPipelineRasterizationStateCreateFlags    flags;
    VkBool32                                   depthClampEnable;
    VkBool32                                   rasterizerDiscardEnable;
    VkPolygonMode                              polygonMode;
    VkCullModeFlags                            cullMode;
    VkFrontFace                                frontFace;
    VkBool32                                   depthBiasEnable;
    float                                      depthBiasConstantFactor;
    float                                      depthBiasClamp;
    float                                      depthBiasSlopeFactor;
    float                                      lineWidth;
} VkPipelineRasterizationStateCreateInfo;

sType is the type of this structure.
pNext is NULL or a pointer to a structure extending this structure.
flags is reserved for future use.
depthClampEnable controls whether to clamp the fragment's depth values as described in Depth Test. Enabling depth clamp will also disable clipping primitives to the z planes of the frustum as described in Primitive Clipping.
rasterizerDiscardEnable controls whether primitives are discarded immediately before the rasterization stage.
polygonMode is the triangle rendering mode. See VkPolygonMode.
cullMode is the triangle facing direction used for primitive culling. See VkCullModeFlagBits.
frontFace is a VkFrontFace value specifying the front-facing triangle orientation to be used for culling.
depthBiasEnable controls whether to bias fragment depth values.
depthBiasConstantFactor is a scalar factor controlling the constant depth value added to each fragment.
depthBiasClamp is the maximum (or minimum) depth bias of a fragment.
depthBiasSlopeFactor is a scalar factor applied to a fragment's slope in depth bias calculations.
lineWidth is the width of rasterized line segments.
// Provided by VK_VERSION_1_0
typedef VkFlags VkPipelineRasterizationStateCreateFlags;

VkPipelineRasterizationStateCreateFlags is a bitmask type for setting a mask, but is currently reserved for future use.
The VkPipelineMultisampleStateCreateInfo structure is defined as:
// Provided by VK_VERSION_1_0
typedef struct VkPipelineMultisampleStateCreateInfo {
    VkStructureType                          sType;
    const void*                              pNext;
    VkPipelineMultisampleStateCreateFlags    flags;
    VkSampleCountFlagBits                    rasterizationSamples;
    VkBool32                                 sampleShadingEnable;
    float                                    minSampleShading;
    const VkSampleMask*                      pSampleMask;
    VkBool32                                 alphaToCoverageEnable;
    VkBool32                                 alphaToOneEnable;
} VkPipelineMultisampleStateCreateInfo;

sType is the type of this structure.
pNext is NULL or a pointer to a structure extending this structure.
flags is reserved for future use.
rasterizationSamples is a VkSampleCountFlagBits value specifying the number of samples used in rasterization.
sampleShadingEnable can be used to enable Sample Shading.
minSampleShading specifies a minimum fraction of sample shading if sampleShadingEnable is set to VK_TRUE.
pSampleMask is a pointer to an array of VkSampleMask values used in the sample mask test.
alphaToCoverageEnable controls whether a temporary coverage value is generated based on the alpha component of the fragment's first color output as specified in the Multisample Coverage section.
alphaToOneEnable controls whether the alpha component of the fragment's first color output is replaced with one as described in Multisample Coverage.
Each bit in the sample mask is associated with a unique sample index as defined for the coverage mask. Each bit b for mask word w in the sample mask corresponds to sample index i, where i = 32 × w + b. pSampleMask has a length equal to ⌈rasterizationSamples / 32⌉ words. If pSampleMask is NULL, it is treated as if the mask has all bits set to 1.
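The word/bit addressing and the NULL-mask behavior described above can be sketched as plain arithmetic. This is an illustrative helper, not part of the Vulkan API; the name sample_mask_test is an assumption made for the example.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Test whether sample index i passes the sample mask, treating a NULL
 * mask as "all bits set". Bit b of word w covers index i = 32*w + b. */
static int sample_mask_test(const uint32_t *pSampleMask, uint32_t i)
{
    if (pSampleMask == NULL)
        return 1; /* NULL behaves as if every bit were 1 */
    uint32_t word = i / 32;
    uint32_t bit  = i % 32;
    return (int)((pSampleMask[word] >> bit) & 1u);
}
```

For example, a mask word of 0x5 admits sample indices 0 and 2 of the first word but rejects index 1.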
// Provided by VK_VERSION_1_0
typedef VkFlags VkPipelineMultisampleStateCreateFlags;

VkPipelineMultisampleStateCreateFlags is a bitmask type for setting a mask, but is currently reserved for future use.
The elements of the sample mask array are of type VkSampleMask, each representing 32 bits of coverage information:

// Provided by VK_VERSION_1_0
typedef uint32_t VkSampleMask;
Rasterization only generates fragments which cover one or more pixels inside the framebuffer. Pixels outside the framebuffer are never considered covered by a fragment. Fragments which would be produced by application of any of the primitive rasterization rules described below, but which lie outside the framebuffer, are not produced, nor are they processed by any later stage of the pipeline, including any of the fragment operations.
Surviving fragments are processed by fragment shaders. Fragment shaders determine associated data for fragments, and can also modify or replace their assigned depth values.
25.1. Discarding Primitives Before Rasterization
Primitives are discarded before rasterization if the rasterizerDiscardEnable member of VkPipelineRasterizationStateCreateInfo is enabled. When enabled, primitives are discarded after they are processed by the last active shader stage in the pipeline before rasterization.
25.2. Rasterization Order
Within a subpass of a render pass instance, for a given (x,y,layer,sample) sample location, the following operations are guaranteed to execute in rasterization order, for each separate primitive that includes that sample location:

1. Fragment operations, in the order defined
2. Blending, logic operations, and color writes

Execution of these operations for each primitive in a subpass occurs in primitive order.
25.3. Multisampling
Multisampling is a mechanism to antialias all Vulkan primitives: points, lines, and polygons. The technique is to sample all primitives multiple times at each pixel. Each sample in each framebuffer attachment has storage for a color, depth, and/or stencil value, such that per-fragment operations apply to each sample independently. The color sample values can be later resolved to a single color (see Resolving Multisample Images and the Render Pass chapter for more details on how to resolve multisample images to non-multisample images).
Vulkan defines rasterization rules for single-sample modes in a way that is equivalent to a multisample mode with a single sample in the center of each fragment.
Each fragment includes a coverage mask with a single bit for each sample in the fragment, and a number of depth values and associated data for each sample. An implementation may choose to assign the same associated data to more than one sample. The location for evaluating such associated data may be anywhere within the fragment area, including the fragment's center location (x_{f},y_{f}) or any of the sample locations. When rasterizationSamples is VK_SAMPLE_COUNT_1_BIT, the fragment's center location must be used. The different associated data values need not all be evaluated at the same location.
It is understood that each pixel has rasterizationSamples locations associated with it. These locations are exact positions, rather than regions or areas, and each is referred to as a sample point. The sample points associated with a pixel must be located inside or on the boundary of the unit square that is considered to bound the pixel. Furthermore, the relative locations of sample points may be identical for each pixel in the framebuffer, or they may differ.
If the current pipeline includes a fragment shader with one or more variables in its interface decorated with Sample and Input, the data associated with those variables will be assigned independently for each sample. The values for each sample must be evaluated at the location of the sample. The data associated with any other variables not decorated with Sample and Input need not be evaluated independently for each sample.
A coverage mask is generated for each fragment, based on which samples within that fragment are determined to be within the area of the primitive that generated the fragment.

Single-pixel fragments have one set of samples. Multi-pixel fragments defined by setting the fragment shading rate have one set of samples per pixel. Each set of samples has a number of samples determined by VkPipelineMultisampleStateCreateInfo::rasterizationSamples. Each sample in a set is assigned a unique sample index i in the range [0, rasterizationSamples). Each sample in a fragment is also assigned a unique coverage index j in the range [0, n × rasterizationSamples), where n is the number of sets in the fragment. If the fragment contains a single set of samples, the coverage index is always equal to the sample index.
If the fragment shading rate is set, the coverage index j is determined as a function of the pixel index p, the sample index i, and the number of rasterization samples r as:

j = i + r × ((f_{w} × f_{h}) − 1 − p)

where the pixel index p is determined as a function of the pixel's framebuffer location (x,y) and the fragment size (f_{w},f_{h}):

p_{x} = x % f_{w}
p_{y} = y % f_{h}
p = p_{x} + (p_{y} × f_{w})
(The table here illustrates the pixel index for multi-pixel fragments of sizes 1x1, 1x2, 1x4, 2x1, 2x2, 2x4, 4x1, 4x2, and 4x4, as computed by the equations above.)
The coverage mask includes B bits packed into W words, defined as:

B = n × rasterizationSamples
W = ⌈B/32⌉

Bit b in coverage mask word w is 1 if the sample with coverage index j = 32 × w + b is covered, and 0 otherwise.
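Taken together, the pixel-index, coverage-index, and bit-packing rules above reduce to plain arithmetic. The following is an illustrative sketch (the function names are not Vulkan API):

```c
#include <assert.h>
#include <stdint.h>

/* Pixel index p for the pixel at framebuffer (x, y) in an fw x fh fragment. */
static uint32_t pixel_index(uint32_t x, uint32_t y, uint32_t fw, uint32_t fh)
{
    uint32_t px = x % fw;
    uint32_t py = y % fh;
    return px + py * fw;
}

/* Coverage index j for sample i of pixel index p, with r rasterization
 * samples: j = i + r * ((fw*fh) - 1 - p). */
static uint32_t coverage_index(uint32_t p, uint32_t i, uint32_t r,
                               uint32_t fw, uint32_t fh)
{
    return i + r * ((fw * fh) - 1u - p);
}

/* Bit b of word w in the coverage mask covers coverage index j = 32*w + b. */
static int coverage_bit(const uint32_t *mask, uint32_t j)
{
    return (int)((mask[j / 32] >> (j % 32)) & 1u);
}
```

Note that for a single-pixel fragment (fw = fh = 1, so p = 0) the coverage index collapses to the sample index, matching the text above.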
If the standardSampleLocations member of VkPhysicalDeviceLimits is VK_TRUE, then the sample counts VK_SAMPLE_COUNT_1_BIT, VK_SAMPLE_COUNT_2_BIT, VK_SAMPLE_COUNT_4_BIT, VK_SAMPLE_COUNT_8_BIT, and VK_SAMPLE_COUNT_16_BIT have sample locations as listed in the following table, with the ith entry in the table corresponding to sample index i. VK_SAMPLE_COUNT_32_BIT and VK_SAMPLE_COUNT_64_BIT do not have standard sample locations. Locations are defined relative to an origin in the upper left corner of the fragment.

Sample count    Sample locations
1               (0.5, 0.5)
2               (0.75, 0.75), (0.25, 0.25)
4               (0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)
8               (0.5625, 0.3125), …
16              (0.5625, 0.5625), …
25.4. Fragment Shading Rates
The features advertised by VkPhysicalDeviceFragmentShadingRateFeaturesKHR allow an application to control the shading rate of a given fragment shader invocation.
The fragment shading rate strongly interacts with Multisampling, and the set of available rates for an implementation may be restricted by sample rate.
To query available shading rates, call:
// Provided by VK_KHR_fragment_shading_rate
VkResult vkGetPhysicalDeviceFragmentShadingRatesKHR(
    VkPhysicalDevice                           physicalDevice,
    uint32_t*                                  pFragmentShadingRateCount,
    VkPhysicalDeviceFragmentShadingRateKHR*    pFragmentShadingRates);

physicalDevice is the handle to the physical device whose properties will be queried.
pFragmentShadingRateCount is a pointer to an integer related to the number of fragment shading rates available or queried, as described below.
pFragmentShadingRates is either NULL or a pointer to an array of VkPhysicalDeviceFragmentShadingRateKHR structures.
If pFragmentShadingRates is NULL, then the number of fragment shading rates available is returned in pFragmentShadingRateCount. Otherwise, pFragmentShadingRateCount must point to a variable set by the user to the number of elements in the pFragmentShadingRates array, and on return the variable is overwritten with the number of structures actually written to pFragmentShadingRates. If pFragmentShadingRateCount is less than the number of fragment shading rates available, at most pFragmentShadingRateCount structures will be written, and VK_INCOMPLETE will be returned instead of VK_SUCCESS, to indicate that not all the available fragment shading rates were returned.
The returned array of fragment shading rates must be ordered from largest fragmentSize.width value to smallest, and each set of fragment shading rates with the same fragmentSize.width value must be ordered from largest fragmentSize.height to smallest. Any two entries in the array must not have the same fragmentSize values.
For any entry in the array, the following rules also apply:

- The value of fragmentSize.width must be less than or equal to maxFragmentSize.width.
- The value of fragmentSize.width must be greater than or equal to 1.
- The value of fragmentSize.width must be a power of two.
- The value of fragmentSize.height must be less than or equal to maxFragmentSize.height.
- The value of fragmentSize.height must be greater than or equal to 1.
- The value of fragmentSize.height must be a power of two.
- The highest sample count in sampleCounts must be less than or equal to maxFragmentShadingRateRasterizationSamples.
- The product of fragmentSize.width, fragmentSize.height, and the highest sample count in sampleCounts must be less than or equal to maxFragmentShadingRateCoverageSamples.
Implementations must support at least the following shading rates:

sampleCounts                                     fragmentSize
VK_SAMPLE_COUNT_1_BIT | VK_SAMPLE_COUNT_4_BIT    {2,2}
VK_SAMPLE_COUNT_1_BIT | VK_SAMPLE_COUNT_4_BIT    {2,1}
~0                                               {1,1}

If framebufferColorSampleCounts includes VK_SAMPLE_COUNT_2_BIT, the required rates must also include VK_SAMPLE_COUNT_2_BIT.
Note: Including the {1,1} fragment size is done for completeness; it has no actual effect on the support of rendering without setting the fragment size. All sample counts are supported for this rate.
The VkPhysicalDeviceFragmentShadingRateKHR structure is defined as:

// Provided by VK_KHR_fragment_shading_rate
typedef struct VkPhysicalDeviceFragmentShadingRateKHR {
    VkStructureType       sType;
    void*                 pNext;
    VkSampleCountFlags    sampleCounts;
    VkExtent2D            fragmentSize;
} VkPhysicalDeviceFragmentShadingRateKHR;

sType is the type of this structure.
pNext is NULL or a pointer to a structure extending this structure.
sampleCounts is a bitmask of sample counts for which the shading rate described by fragmentSize is supported.
fragmentSize is a VkExtent2D describing the width and height of a supported shading rate.
Fragment shading rates can be set at three points, with the three rates combined to determine the final shading rate.
25.4.1. Pipeline Fragment Shading Rate
The pipeline fragment shading rate can be set on a per-draw basis by either setting the rate in a graphics pipeline, or dynamically via vkCmdSetFragmentShadingRateKHR.
The VkPipelineFragmentShadingRateStateCreateInfoKHR structure is defined as:

// Provided by VK_KHR_fragment_shading_rate
typedef struct VkPipelineFragmentShadingRateStateCreateInfoKHR {
    VkStructureType                       sType;
    const void*                           pNext;
    VkExtent2D                            fragmentSize;
    VkFragmentShadingRateCombinerOpKHR    combinerOps[2];
} VkPipelineFragmentShadingRateStateCreateInfoKHR;

sType is the type of this structure.
pNext is NULL or a pointer to a structure extending this structure.
fragmentSize specifies a VkExtent2D structure containing the fragment size used to define the pipeline fragment shading rate for drawing commands using this pipeline.
combinerOps specifies VkFragmentShadingRateCombinerOpKHR values determining how the pipeline, primitive, and attachment shading rates are combined for fragments generated by drawing commands using the created pipeline.
If the pNext chain of VkGraphicsPipelineCreateInfo includes a VkPipelineFragmentShadingRateStateCreateInfoKHR structure, then that structure includes parameters that control the pipeline fragment shading rate. If this structure is not present, fragmentSize is considered to be equal to (1,1), and both elements of combinerOps are considered to be equal to VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR.
If a pipeline state object is created with VK_DYNAMIC_STATE_FRAGMENT_SHADING_RATE_KHR enabled, the pipeline fragment shading rate and combiner operations are set by the command:
// Provided by VK_KHR_fragment_shading_rate
void vkCmdSetFragmentShadingRateKHR(
    VkCommandBuffer                             commandBuffer,
    const VkExtent2D*                           pFragmentSize,
    const VkFragmentShadingRateCombinerOpKHR    combinerOps[2]);

commandBuffer is the command buffer into which the command will be recorded.
pFragmentSize specifies the pipeline fragment shading rate for subsequent drawing commands.
combinerOps specifies a VkFragmentShadingRateCombinerOpKHR determining how the pipeline, primitive, and attachment shading rates are combined for fragments generated by subsequent drawing commands.
25.4.2. Primitive Fragment Shading Rate
The primitive fragment shading rate can be set via the PrimitiveShadingRateKHR built-in in the last active vertex processing shader stage. The rate associated with a given primitive is sourced from the value written to PrimitiveShadingRateKHR by that primitive's provoking vertex.
25.4.3. Attachment Fragment Shading Rate
The attachment shading rate can be set by including VkFragmentShadingRateAttachmentInfoKHR in a subpass to define a fragment shading rate attachment. Each pixel in the framebuffer is assigned an attachment fragment shading rate by the corresponding texel in the fragment shading rate attachment, according to:

x' = floor(x / region_{x})
y' = floor(y / region_{y})

where x' and y' are the coordinates of a texel in the fragment shading rate attachment, x and y are the coordinates of the pixel in the framebuffer, and region_{x} and region_{y} are the size of the region each texel corresponds to, as defined by the shadingRateAttachmentTexelSize member of VkFragmentShadingRateAttachmentInfoKHR.
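As a sketch of the mapping above (the helper name is illustrative, not Vulkan API), unsigned integer division performs the floor:

```c
#include <assert.h>
#include <stdint.h>

/* Which shading-rate-attachment texel (tx, ty) covers framebuffer pixel
 * (x, y), for a texel region of region_x by region_y pixels. */
static void rate_texel_coords(uint32_t x, uint32_t y,
                              uint32_t region_x, uint32_t region_y,
                              uint32_t *tx, uint32_t *ty)
{
    *tx = x / region_x; /* integer division == floor for non-negative values */
    *ty = y / region_y;
}
```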
If multiview is enabled and the shading rate attachment has multiple layers, the shading rate attachment texel is selected from the layer determined by the ViewIndex built-in. If multiview is disabled, and both the shading rate attachment and the framebuffer have multiple layers, the shading rate attachment texel is selected from the layer determined by the Layer built-in. Otherwise, the texel is unconditionally selected from the first layer of the attachment.
The fragment size is encoded into the first component of the identified texel as follows:

size_{w} = 2^{((texel/4) & 3)}
size_{h} = 2^{(texel & 3)}

where texel is the value in the first component of the identified texel, and size_{w} and size_{h} are the width and height of the fragment size, decoded from the texel. If no fragment shading rate attachment is specified, this size is calculated as size_{w} = size_{h} = 1. Applications must not specify a width or height greater than 4 by this method.
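Written out as code, the decode places the width exponent in bits [3:2] of the texel and the height exponent in bits [1:0]. The encode_rate inverse is a hypothetical helper for this example; only the decode direction is defined by the text above:

```c
#include <assert.h>
#include <stdint.h>

/* Decode the fragment size from a shading-rate-attachment texel value:
 * size_w = 2^((texel/4) & 3), size_h = 2^(texel & 3). */
static void decode_rate(uint32_t texel, uint32_t *w, uint32_t *h)
{
    *w = 1u << ((texel >> 2) & 3u);
    *h = 1u << (texel & 3u);
}

/* Hypothetical inverse: pack a power-of-two fragment size into a texel. */
static uint32_t encode_rate(uint32_t w, uint32_t h)
{
    uint32_t we = 0, he = 0;
    while ((1u << we) < w) we++; /* log2 for power-of-two sizes */
    while ((1u << he) < h) he++;
    return (we << 2) | he;
}
```

For example, a texel value of 9 decodes to a 4x2 fragment size.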
The Fragment Shading Rate enumeration in SPIR-V adheres to the above encoding.
25.4.4. Combining the Fragment Shading Rates
The final rate (C_{xy}') used for fragment shading must be one of the rates returned by vkGetPhysicalDeviceFragmentShadingRatesKHR for the sample count used by rasterization. If any of the following conditions are met, C_{xy}' must be set to {1,1}:

- Sample Shading is enabled.
- The fragmentShadingRateWithSampleMask limit is not supported, and VkPipelineMultisampleStateCreateInfo::pSampleMask contains a zero value in any bit used by fragment operations.
- The fragmentShadingRateWithShaderSampleMask limit is not supported, and the fragment shader has SampleMask in the input or output interface.
- The fragmentShadingRateWithShaderDepthStencilWrites limit is not supported, and the fragment shader declares the FragDepth built-in.
Otherwise, each of the specified shading rates is combined and then used to derive the value of C_{xy}'. As there are three ways to specify shading rates, two combiner operations are specified: one between the pipeline and primitive shading rates, and one between the result of that and the attachment shading rate.
The equation used for each combiner operation is defined by VkFragmentShadingRateCombinerOpKHR:

// Provided by VK_KHR_fragment_shading_rate
typedef enum VkFragmentShadingRateCombinerOpKHR {
    VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR = 0,
    VK_FRAGMENT_SHADING_RATE_COMBINER_OP_REPLACE_KHR = 1,
    VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MIN_KHR = 2,
    VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MAX_KHR = 3,
    VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MUL_KHR = 4,
} VkFragmentShadingRateCombinerOpKHR;

VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR specifies a combiner operation of combine(A_{xy},B_{xy}) = A_{xy}.
VK_FRAGMENT_SHADING_RATE_COMBINER_OP_REPLACE_KHR specifies a combiner operation of combine(A_{xy},B_{xy}) = B_{xy}.
VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MIN_KHR specifies a combiner operation of combine(A_{xy},B_{xy}) = min(A_{xy},B_{xy}).
VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MAX_KHR specifies a combiner operation of combine(A_{xy},B_{xy}) = max(A_{xy},B_{xy}).
VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MUL_KHR specifies a combiner operation of combine(A_{xy},B_{xy}) = A_{xy} × B_{xy}.

where combine(A_{xy},B_{xy}) is the combine operation, and A_{xy} and B_{xy} are the inputs to the operation.
If fragmentShadingRateStrictMultiplyCombiner is VK_FALSE, using VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MUL_KHR with values of 1 for both A and B in the same dimension results in the value 2 being produced for that dimension. See the definition of fragmentShadingRateStrictMultiplyCombiner for more information.
These operations are performed in a component-wise fashion. This is used to generate a combined fragment area using the equation:

C_{xy} = combine(A_{xy},B_{xy})

where C_{xy} is the combined fragment area result, and A_{xy} and B_{xy} are the fragment areas of the fragment shading rates being combined.
Two combine operations are performed, first with A_{xy} equal to the pipeline fragment shading rate and B_{xy} equal to the primitive fragment shading rate, with the combine() operation selected by combinerOps[0]. A second combination is then performed, with A_{xy} equal to the result of the first combination and B_{xy} equal to the attachment fragment shading rate, with the combine() operation selected by combinerOps[1]. The result of the second combination is used as the final fragment shading rate, reported via the ShadingRateKHR built-in.
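The two-stage, component-wise combination can be sketched as follows. This is an illustrative model with abbreviated enum names (OP_KEEP stands in for VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR, and so on), assuming the strict multiply combiner (no special-case doubling):

```c
#include <assert.h>
#include <stdint.h>

typedef enum { OP_KEEP, OP_REPLACE, OP_MIN, OP_MAX, OP_MUL } CombinerOp;

/* combine(A, B) for one dimension, per the table above. */
static uint32_t combine1(CombinerOp op, uint32_t a, uint32_t b)
{
    switch (op) {
    case OP_KEEP:    return a;
    case OP_REPLACE: return b;
    case OP_MIN:     return a < b ? a : b;
    case OP_MAX:     return a > b ? a : b;
    case OP_MUL:     return a * b;
    }
    return a;
}

/* Stage 1: pipeline (A) with primitive (B) via combinerOps[0];
 * stage 2: that result with the attachment rate via combinerOps[1]. */
static void combine_rates(const CombinerOp ops[2],
                          uint32_t pipe_x, uint32_t pipe_y,
                          uint32_t prim_x, uint32_t prim_y,
                          uint32_t att_x, uint32_t att_y,
                          uint32_t *cx, uint32_t *cy)
{
    uint32_t x = combine1(ops[0], pipe_x, prim_x);
    uint32_t y = combine1(ops[0], pipe_y, prim_y);
    *cx = combine1(ops[1], x, att_x);
    *cy = combine1(ops[1], y, att_y);
}
```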
Implementations may clamp the C_{xy} result of each combiner operation separately, or only after the second combiner operation.
If the final combined rate is one of the rates returned by vkGetPhysicalDeviceFragmentShadingRatesKHR for the sample count used by rasterization, C_{xy}' = C_{xy}. Otherwise, C_{xy}' is selected from the rates returned by vkGetPhysicalDeviceFragmentShadingRatesKHR for the sample count used by rasterization. From this list of supported rates, the following steps are applied in order, to select a single value:

1. Keep only rates where C_{x}' ≤ C_{x} and C_{y}' ≤ C_{y}.
   Implementations may also keep rates where C_{x}' ≤ C_{y} and C_{y}' ≤ C_{x}.
2. Keep only rates with the highest area (C_{x}' × C_{y}').
3. Keep only rates with the lowest aspect ratio (C_{x}' + C_{y}').

In cases where a wide (e.g. 4x1) and a tall (e.g. 1x4) rate remain, the implementation may choose either rate. However, it must choose this rate consistently for the same shading rates and combiner operations for the lifetime of the VkDevice.
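A sketch of these selection steps, under stated simplifications: the optional transposed matches of step 1 and the wide-versus-tall tie-break are omitted, and {1,1} is used as the fallback (it is always a valid rate). The names are illustrative, not Vulkan API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint32_t w, h; } Rate;

/* From the supported rates, keep those no larger than the combined rate C
 * in each dimension, then prefer the largest area, then the smallest
 * width + height sum. */
static Rate clamp_rate(const Rate *supported, size_t n, Rate c)
{
    Rate best = {1, 1};
    uint32_t best_area = 1, best_sum = 2;
    for (size_t k = 0; k < n; k++) {
        Rate r = supported[k];
        if (r.w > c.w || r.h > c.h)
            continue; /* step 1: must not exceed C in either dimension */
        uint32_t area = r.w * r.h, sum = r.w + r.h;
        /* step 2: highest area wins; step 3: lowest w+h breaks ties */
        if (area > best_area || (area == best_area && sum < best_sum)) {
            best = r;
            best_area = area;
            best_sum = sum;
        }
    }
    return best;
}
```

For example, if {2,4} is not supported, a combined rate of {2,4} falls back to {2,2} rather than {1,4}, because both have area 4 but {2,2} has the smaller width + height sum.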
25.5. Sample Shading
Sample shading can be used to specify a minimum number of unique samples to process for each fragment. If sample shading is enabled, an implementation must provide a minimum of max(⌈minSampleShadingFactor × totalSamples⌉, 1) unique associated data for each fragment, where minSampleShadingFactor is the minimum fraction of sample shading, and totalSamples is the value of VkPipelineMultisampleStateCreateInfo::rasterizationSamples specified at pipeline creation time. These are associated with the samples in an implementation-dependent manner. When minSampleShadingFactor is 1.0, a separate set of associated data are evaluated for each sample, and each set of values is evaluated at the sample location.
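The max(⌈minSampleShadingFactor × totalSamples⌉, 1) formula above can be written directly (an illustrative helper, not Vulkan API):

```c
#include <assert.h>

/* Minimum number of unique sets of associated data per fragment:
 * max(ceil(minSampleShading * samples), 1). */
static unsigned min_unique_samples(float minSampleShading, unsigned samples)
{
    unsigned n = (unsigned)(minSampleShading * samples);
    if ((float)n < minSampleShading * samples)
        n++; /* round up to the ceiling */
    return n > 1 ? n : 1;
}
```

For example, minSampleShading of 0.25 with 8 rasterization samples requires at least 2 unique sets.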
Sample shading is enabled for a graphics pipeline:

- If the interface of the fragment shader entry point of the graphics pipeline includes an input variable decorated with SampleId or SamplePosition. In this case minSampleShadingFactor takes the value 1.0.
- Else if the sampleShadingEnable member of the VkPipelineMultisampleStateCreateInfo structure specified when creating the graphics pipeline is set to VK_TRUE. In this case minSampleShadingFactor takes the value of VkPipelineMultisampleStateCreateInfo::minSampleShading.

Otherwise, sample shading is considered disabled.
25.6. Points
A point is drawn by generating a set of fragments in the shape of a square centered around the vertex of the point. Each vertex has an associated point size that controls the width/height of that square. The point size is taken from the (potentially clipped) shader built-in PointSize written by:

- the geometry shader, if active;
- the tessellation evaluation shader, if active and no geometry shader is active;
- the vertex shader, otherwise;

and clamped to the implementation-dependent point size range [pointSizeRange[0], pointSizeRange[1]]. The value written to PointSize must be greater than zero.

Not all point sizes need be supported, but the size 1.0 must be supported. The range of supported sizes and the size of evenly-spaced gradations within that range are implementation-dependent. The range and gradations are obtained from the pointSizeRange and pointSizeGranularity members of VkPhysicalDeviceLimits. If, for instance, the size range is from 0.1 to 2.0 and the gradation size is 0.1, then the sizes 0.1, 0.2, …, 1.9, 2.0 are supported. Additional point sizes may also be supported. There is no requirement that these sizes be equally spaced. If an unsupported size is requested, the nearest supported size is used instead.
25.6.1. Basic Point Rasterization
Point rasterization produces a fragment for each fragment area group of framebuffer pixels with one or more sample points that intersect a region centered at the point's (x_{f},y_{f}). This region is a square with side equal to the current point size. Coverage bits that correspond to sample points that intersect the region are 1, other coverage bits are 0.

All fragments produced in rasterizing a point are assigned the same associated data, which are those of the vertex corresponding to the point. However, the fragment shader built-in PointCoord contains point sprite texture coordinates. The s and t point sprite texture coordinates vary from zero to one across the point horizontally left-to-right and vertically top-to-bottom, respectively. The following formulas are used to evaluate s and t:

s = 1/2 + (x_{p} − x_{f}) / size
t = 1/2 + (y_{p} − y_{f}) / size

where size is the point's size; (x_{p},y_{p}) is the location at which the point sprite coordinates are evaluated, which may be the framebuffer coordinates of the fragment center, or the location of a sample; and (x_{f},y_{f}) is the exact, unrounded framebuffer coordinate of the vertex for the point.
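As a sketch, the s and t evaluation can be written directly from the standard definitions s = 1/2 + (x_p − x_f)/size and t = 1/2 + (y_p − y_f)/size (an illustrative helper, not Vulkan API):

```c
#include <assert.h>

/* Point sprite coordinates: s and t run from 0 to 1 across the point's
 * square of side `size`, centered at (xf, yf). */
static void point_coord(float size, float xp, float yp, float xf, float yf,
                        float *s, float *t)
{
    *s = 0.5f + (xp - xf) / size;
    *t = 0.5f + (yp - yf) / size;
}
```

At the point's center the coordinates are (0.5, 0.5); at the right edge of a size-2 point, s reaches 1.0.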
25.7. Line Segments
Each line segment has an associated width. The line width is specified by the VkPipelineRasterizationStateCreateInfo::lineWidth property of the currently active pipeline, if the pipeline was not created with VK_DYNAMIC_STATE_LINE_WIDTH enabled. Otherwise, the line width is set by calling vkCmdSetLineWidth:

// Provided by VK_VERSION_1_0
void vkCmdSetLineWidth(
    VkCommandBuffer                             commandBuffer,
    float                                       lineWidth);

commandBuffer is the command buffer into which the command will be recorded.
lineWidth is the width of rasterized line segments.

Not all line widths need be supported for line segment rasterization, but width 1.0 antialiased segments must be provided. The range and gradations are obtained from the lineWidthRange and lineWidthGranularity members of VkPhysicalDeviceLimits. If, for instance, the size range is from 0.1 to 2.0 and the gradation size is 0.1, then the sizes 0.1, 0.2, …, 1.9, 2.0 are supported. Additional line widths may also be supported. There is no requirement that these widths be equally spaced. If an unsupported width is requested, the nearest supported width is used instead.
25.7.1. Basic Line Segment Rasterization
Rasterized line segments produce fragments which intersect a rectangle centered on the line segment. Two of the edges are parallel to the specified line segment; each is at a distance of one-half the current width from that segment in directions perpendicular to the direction of the line. The other two edges pass through the line endpoints and are perpendicular to the direction of the specified line segment. Coverage bits that correspond to sample points that intersect the rectangle are 1, other coverage bits are 0.
Next we specify how the data associated with each rasterized fragment are obtained. Let p_{r} = (x_{d}, y_{d}) be the framebuffer coordinates at which associated data are evaluated. This may be the center of a fragment or the location of a sample within the fragment. When rasterizationSamples is VK_SAMPLE_COUNT_1_BIT, the fragment center must be used. Let p_{a} = (x_{a}, y_{a}) and p_{b} = (x_{b},y_{b}) be initial and final endpoints of the line segment, respectively. Set

t = ((p_{r} − p_{a}) · (p_{b} − p_{a})) / ‖p_{b} − p_{a}‖²

(Note that t = 0 at p_{a} and t = 1 at p_{b}. Also note that this calculation projects the vector from p_{a} to p_{r} onto the line, and thus computes the normalized distance of the fragment along the line.)

The value of an associated datum f for the fragment, whether it be a shader output or the clip w coordinate, must be determined using perspective interpolation:

f = ((1 − t) f_{a}/w_{a} + t f_{b}/w_{b}) / ((1 − t)/w_{a} + t/w_{b})

where f_{a} and f_{b} are the data associated with the starting and ending endpoints of the segment, respectively; w_{a} and w_{b} are the clip w coordinates of the starting and ending endpoints of the segment, respectively.

Depth values for lines must be determined using linear interpolation:

z = (1 − t) z_{a} + t z_{b}

where z_{a} and z_{b} are the depth values of the starting and ending endpoints of the segment, respectively.
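The projection of p_r onto the segment, perspective interpolation of an associated datum, and linear interpolation of depth can be sketched as follows (illustrative helpers, not Vulkan API; the t formula assumes the standard projection ((p_r − p_a) · (p_b − p_a)) / ‖p_b − p_a‖²):

```c
#include <assert.h>

/* Normalized distance t of the evaluation point (xd, yd) along the
 * segment from (xa, ya) to (xb, yb). */
static float line_t(float xa, float ya, float xb, float yb,
                    float xd, float yd)
{
    float dx = xb - xa, dy = yb - ya;
    return ((xd - xa) * dx + (yd - ya) * dy) / (dx * dx + dy * dy);
}

/* Perspective-correct interpolation of a datum f between endpoint values
 * fa and fb with clip w coordinates wa and wb. */
static float perspective_interp(float t, float fa, float fb,
                                float wa, float wb)
{
    return ((1 - t) * fa / wa + t * fb / wb) /
           ((1 - t) / wa + t / wb);
}

/* Linear interpolation, as used for depth: z = (1 - t) za + t zb. */
static float depth_interp(float t, float za, float zb)
{
    return (1 - t) * za + t * zb;
}
```

When w_a = w_b, perspective interpolation reduces to the same linear blend as depth.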
The NoPerspective and Flat interpolation decorations can be used with fragment shader inputs to declare how they are interpolated. When neither decoration is applied, perspective interpolation is performed as described above. When the NoPerspective decoration is used, linear interpolation is performed in the same fashion as for depth values, as described above. When the Flat decoration is used, no interpolation is performed, and outputs are taken from the corresponding input value of the provoking vertex corresponding to that primitive.
The above description documents the preferred method of line rasterization,
and must be used when the implementation advertises the strictLines
limit in VkPhysicalDeviceLimits as VK_TRUE
.
When strictLines
is VK_FALSE
, the edges of the lines are
generated as a parallelogram surrounding the original line.
The major axis is chosen by noting the axis in which there is the greatest
distance between the line start and end points.
If the difference is equal in both directions then the X axis is chosen as
the major axis.
Edges 2 and 3 are aligned to the minor axis and are centered on the
endpoints of the line as in Non strict lines, and each is
lineWidth
long.
Edges 0 and 1 are parallel to the line and connect the endpoints of edges 2
and 3.
Coverage bits that correspond to sample points that intersect the
parallelogram are 1, other coverage bits are 0.
Samples that fall exactly on the edge of the parallelogram follow the polygon rasterization rules.
Interpolation occurs as if the parallelogram was decomposed into two triangles where each pair of vertices at each end of the line has identical attributes.
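A sketch of the parallelogram construction in C, following the axis-selection rule just described. The vec2 type and function name are invented for illustration; the numbering of the returned corners is arbitrary rather than matching the edge labels above.

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y; } vec2;

/* Build the parallelogram for a non-strict wide line (strictLines ==
 * VK_FALSE): edges aligned to the minor axis are centered on the
 * endpoints and lineWidth long. Illustrative sketch only. */
void nonstrict_parallelogram(vec2 p0, vec2 p1, float lineWidth, vec2 out[4])
{
    float dx = fabsf(p1.x - p0.x);
    float dy = fabsf(p1.y - p0.y);
    /* X is chosen as the major axis when the spans are equal. */
    int x_major = dx >= dy;
    float hw = lineWidth * 0.5f;
    vec2 off = x_major ? (vec2){0.0f, hw} : (vec2){hw, 0.0f};

    out[0] = (vec2){p0.x - off.x, p0.y - off.y};
    out[1] = (vec2){p0.x + off.x, p0.y + off.y};
    out[2] = (vec2){p1.x + off.x, p1.y + off.y};
    out[3] = (vec2){p1.x - off.x, p1.y - off.y};
}
```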
Only when strictLines is VK_FALSE may implementations deviate from the non-strict line algorithm described above, in the following ways:

Implementations may instead interpolate each fragment according to the formula in Basic Line Segment Rasterization using the original line segment endpoints.

Rasterization of non-antialiased non-strict line segments may be performed using the rules defined in Bresenham Line Segment Rasterization.
25.7.2. Bresenham Line Segment Rasterization
Non-strict lines may also follow these rasterization rules for non-antialiased lines.
Line segment rasterization begins by characterizing the segment as either x-major or y-major. x-major line segments have slope in the closed interval [−1,1]; all other line segments are y-major (slope is determined by the segment’s endpoints). We specify rasterization only for x-major segments except in cases where the modifications for y-major segments are not self-evident.
Ideally, Vulkan uses a diamond-exit rule to determine those fragments that are produced by rasterizing a line segment. For each fragment f with center at framebuffer coordinates x_{f} and y_{f}, define a diamond-shaped region R_{f} that is the intersection of four half planes:

$R_{f} = \{ (x, y) \mid |x - x_{f}| + |y - y_{f}| < \tfrac{1}{2} \}$
Essentially, a line segment starting at p_{a} and ending at p_{b} produces those fragments f for which the segment intersects R_{f}, except if p_{b} is contained in R_{f}.
To avoid difficulties when an endpoint lies on a boundary of R_{f} we (in principle) perturb the supplied endpoints by a tiny amount. Let p_{a} and p_{b} have framebuffer coordinates (x_{a}, y_{a}) and (x_{b}, y_{b}), respectively. Obtain the perturbed endpoints p_{a}' given by (x_{a}, y_{a}) − (ε, ε^{2}) and p_{b}' given by (x_{b}, y_{b}) − (ε, ε^{2}). Rasterizing the line segment starting at p_{a} and ending at p_{b} produces those fragments f for which the segment starting at p_{a}' and ending at p_{b}' intersects R_{f}, except if p_{b}' is contained in R_{f}. ε is chosen to be so small that rasterizing the line segment produces the same fragments when δ is substituted for ε for any 0 < δ ≤ ε.
When p_{a} and p_{b} lie on fragment centers, this characterization of fragments reduces to Bresenham’s algorithm with one modification: lines produced in this description are "halfopen," meaning that the final fragment (corresponding to p_{b}) is not drawn. This means that when rasterizing a series of connected line segments, shared endpoints will be produced only once rather than twice (as would occur with Bresenham’s algorithm).
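The half-open behavior can be sketched with a standard integer Bresenham walk that excludes the final fragment. This illustrative code counts fragments rather than writing coverage, and assumes endpoints on integer fragment centers.

```c
#include <assert.h>
#include <stdlib.h>

/* Half-open Bresenham: walks from (x0, y0) toward (x1, y1) but does
 * not emit the final fragment, so shared endpoints of connected
 * segments are produced exactly once. Returns the fragment count. */
int bresenham_half_open(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    int count = 0;
    while (x0 != x1 || y0 != y1) {   /* final endpoint excluded */
        count++;                     /* emit fragment (x0, y0) here */
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
    return count;
}
```

An x-major segment spanning four columns emits exactly four fragments; rasterizing the continuation segment then produces the shared endpoint only once.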
Implementations may use other line segment rasterization algorithms, subject to the following rules:

The coordinates of a fragment produced by the algorithm must not deviate by more than one unit in either x or y framebuffer coordinates from a corresponding fragment produced by the diamond-exit rule.

The total number of fragments produced by the algorithm must not differ from that produced by the diamond-exit rule by more than one.

For an x-major line, two fragments that lie in the same framebuffer-coordinate column must not be produced (for a y-major line, two fragments that lie in the same framebuffer-coordinate row must not be produced).

If two line segments share a common endpoint, and both segments are either x-major (both left-to-right or both right-to-left) or y-major (both bottom-to-top or both top-to-bottom), then rasterizing both segments must not produce duplicate fragments. Fragments also must not be omitted so as to interrupt continuity of the connected segments.
The actual width w of Bresenham lines is determined by rounding the line width to the nearest integer, clamping it to the implementation-dependent lineWidthRange (with both values rounded to the nearest integer), then clamping it to be no less than 1.
Bresenham line segments of width other than one are rasterized by offsetting them in the minor direction (for an x-major line, the minor direction is y, and for a y-major line, the minor direction is x) and producing a row or column of fragments in the minor direction. If the line segment has endpoints given by (x_{0}, y_{0}) and (x_{1}, y_{1}) in framebuffer coordinates, the segment with endpoints $(x_{0}, y_{0} - \frac{w-1}{2})$ and $(x_{1}, y_{1} - \frac{w-1}{2})$ is rasterized, but instead of a single fragment, a column of fragments of height w (a row of fragments of length w for a y-major segment) is produced at each x (y for y-major) location. The lowest fragment of this column is the fragment that would be produced by rasterizing the segment of width 1 with the modified coordinates.
The preferred method of attribute interpolation for a wide line is to generate the same attribute values for all fragments in the row or column described above, as if the adjusted line was used for interpolation and those values replicated to the other fragments, except for FragCoord, which is interpolated as usual. Implementations may instead interpolate each fragment according to the formula in Basic Line Segment Rasterization, using the original line segment endpoints.
When Bresenham lines are being rasterized, sample locations may all be treated as being at the pixel center (this may affect attribute and depth interpolation).
Note
The sample locations described above are not used for determining coverage, they are only used for things like attribute interpolation. The rasterization rules that determine coverage are defined in terms of whether the line intersects pixels, as opposed to the point sampling rules used for other primitive types. So these rules are independent of the sample locations. One consequence of this is that Bresenham lines cover the same pixels regardless of the number of rasterization samples, and cover all samples in those pixels (unless masked out or killed). 
25.8. Polygons
A polygon results from the decomposition of a triangle strip, triangle fan, or a series of independent triangles. As with points and line segments, polygon rasterization is controlled by several variables in the VkPipelineRasterizationStateCreateInfo structure.
25.8.1. Basic Polygon Rasterization
The first step of polygon rasterization is to determine whether the triangle is back-facing or front-facing. This determination is made based on the sign of the (clipped or unclipped) polygon’s area computed in framebuffer coordinates. One way to compute this area is:

$a = \frac{1}{2} \sum_{i=0}^{n-1} \left( x_{f}^{i}\, y_{f}^{i \oplus 1} - x_{f}^{i \oplus 1}\, y_{f}^{i} \right)$

where $x_{f}^{i}$ and $y_{f}^{i}$ are the x and y framebuffer coordinates of the ith vertex of the n-vertex polygon (vertices are numbered starting at zero for the purposes of this computation) and i ⊕ 1 is (i + 1) mod n.
The interpretation of the sign of a is determined by the VkPipelineRasterizationStateCreateInfo::frontFace property of the currently active pipeline. Possible values are:
// Provided by VK_VERSION_1_0
typedef enum VkFrontFace {
VK_FRONT_FACE_COUNTER_CLOCKWISE = 0,
VK_FRONT_FACE_CLOCKWISE = 1,
} VkFrontFace;

VK_FRONT_FACE_COUNTER_CLOCKWISE specifies that a triangle with positive area is considered front-facing.
VK_FRONT_FACE_CLOCKWISE specifies that a triangle with negative area is considered front-facing.
Any triangle which is not front-facing is back-facing, including zero-area triangles.
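A minimal C sketch of the facing determination, using the area formula above. The helper names are invented, and frontFaceCCW stands in for a frontFace value of VK_FRONT_FACE_COUNTER_CLOCKWISE.

```c
#include <assert.h>
#include <math.h>

/* Signed area of a triangle in framebuffer coordinates:
 * a = 1/2 * sum over i of (x_i * y_{i+1} - x_{i+1} * y_i). */
float triangle_signed_area(const float x[3], const float y[3])
{
    float a = 0.0f;
    for (int i = 0; i < 3; i++) {
        int j = (i + 1) % 3;            /* i ⊕ 1 */
        a += x[i] * y[j] - x[j] * y[i];
    }
    return 0.5f * a;
}

/* With frontFaceCCW nonzero, positive area is front-facing; otherwise
 * negative area is. Zero-area triangles are always back-facing. */
int is_front_facing(float area, int frontFaceCCW)
{
    return frontFaceCCW ? (area > 0.0f) : (area < 0.0f);
}
```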
Once the orientation of triangles is determined, they are culled according to the VkPipelineRasterizationStateCreateInfo::cullMode property of the currently active pipeline. Possible values are:
// Provided by VK_VERSION_1_0
typedef enum VkCullModeFlagBits {
VK_CULL_MODE_NONE = 0,
VK_CULL_MODE_FRONT_BIT = 0x00000001,
VK_CULL_MODE_BACK_BIT = 0x00000002,
VK_CULL_MODE_FRONT_AND_BACK = 0x00000003,
} VkCullModeFlagBits;

VK_CULL_MODE_NONE specifies that no triangles are discarded.
VK_CULL_MODE_FRONT_BIT specifies that front-facing triangles are discarded.
VK_CULL_MODE_BACK_BIT specifies that back-facing triangles are discarded.
VK_CULL_MODE_FRONT_AND_BACK specifies that all triangles are discarded.
Following culling, fragments are produced for any triangles which have not been discarded.
// Provided by VK_VERSION_1_0
typedef VkFlags VkCullModeFlags;
VkCullModeFlags is a bitmask type for setting a mask of zero or more VkCullModeFlagBits.
The rule for determining which fragments are produced by polygon rasterization is called point sampling. The two-dimensional projection obtained by taking the x and y framebuffer coordinates of the polygon’s vertices is formed. Fragments are produced for any fragment area groups of pixels for which any sample points lie inside of this polygon. Coverage bits that correspond to sample points that satisfy the point sampling criteria are 1, other coverage bits are 0. Special treatment is given to a sample whose sample location lies on a polygon edge. In such a case, if two polygons lie on either side of a common edge (with identical endpoints) on which a sample point lies, then exactly one of the polygons must result in a covered sample for that fragment during rasterization. As for the data associated with each fragment produced by rasterizing a polygon, we begin by specifying how these values are produced for fragments in a triangle.
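One common way to implement the point-sampling test is with edge functions. The sketch below checks whether a sample lies inside a triangle’s two-dimensional projection; it deliberately does not model the tie-breaking rule for samples that lie exactly on a shared edge.

```c
#include <assert.h>

/* Signed distance-like edge function: positive when (px, py) is on
 * one side of the directed edge a->b, negative on the other. */
float edge_fn(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* A sample is inside the triangle when all three edge functions share
 * a sign (accepting either winding). Samples exactly on an edge are
 * reported as outside here; the spec's shared-edge rule is not modeled. */
int sample_inside_triangle(const float x[3], const float y[3],
                           float sx, float sy)
{
    float e0 = edge_fn(x[0], y[0], x[1], y[1], sx, sy);
    float e1 = edge_fn(x[1], y[1], x[2], y[2], sx, sy);
    float e2 = edge_fn(x[2], y[2], x[0], y[0], sx, sy);
    return (e0 > 0 && e1 > 0 && e2 > 0) || (e0 < 0 && e1 < 0 && e2 < 0);
}
```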
Barycentric coordinates are a set of three numbers, a, b, and c, each in the range [0,1], with a + b + c = 1. These coordinates uniquely specify any point p within the triangle or on the triangle’s boundary as

p = a p_{a} + b p_{b} + c p_{c}
where p_{a}, p_{b}, and p_{c} are the vertices of the triangle. a, b, and c are determined by:

$a = \frac{A(p\, p_{b}\, p_{c})}{A(p_{a}\, p_{b}\, p_{c})}, \quad b = \frac{A(p\, p_{a}\, p_{c})}{A(p_{a}\, p_{b}\, p_{c})}, \quad c = \frac{A(p\, p_{a}\, p_{b})}{A(p_{a}\, p_{b}\, p_{c})}$
where A(lmn) denotes the area in framebuffer coordinates of the triangle with vertices l, m, and n.
Denote an associated datum at p_{a}, p_{b}, or p_{c} as f_{a}, f_{b}, or f_{c}, respectively.
The value of an associated datum f for a fragment produced by rasterizing a triangle, whether it be a shader output or the clip w coordinate, must be determined using perspective interpolation:

$f = \dfrac{a\, f_{a}/w_{a} + b\, f_{b}/w_{b} + c\, f_{c}/w_{c}}{a/w_{a} + b/w_{b} + c/w_{c}}$
where w_{a}, w_{b}, and w_{c} are the clip w coordinates of p_{a}, p_{b}, and p_{c}, respectively. a, b, and c are the barycentric coordinates of the location at which the data are produced; this must be the location of the fragment center or the location of a sample. When rasterizationSamples is VK_SAMPLE_COUNT_1_BIT, the fragment center must be used.
Depth values for triangles must be determined using linear interpolation:

z = a z_{a} + b z_{b} + c z_{c}
where z_{a}, z_{b}, and z_{c} are the depth values of p_{a}, p_{b}, and p_{c}, respectively.
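Both triangle interpolation formulas can be sketched in C, taking the barycentric coordinates and per-vertex values as inputs. These are illustrative helpers, not Vulkan API functions.

```c
#include <assert.h>
#include <math.h>

/* Perspective-correct interpolation of an attribute over a triangle
 * from barycentric coordinates (a, b, c), per-vertex attribute values
 * f[], and per-vertex clip w coordinates w[]. */
float interp_tri_perspective(float a, float b, float c,
                             const float f[3], const float w[3])
{
    float num = a * f[0] / w[0] + b * f[1] / w[1] + c * f[2] / w[2];
    float den = a / w[0] + b / w[1] + c / w[2];
    return num / den;
}

/* Depth values use plain linear interpolation: z = a*za + b*zb + c*zc. */
float interp_tri_linear(float a, float b, float c, const float z[3])
{
    return a * z[0] + b * z[1] + c * z[2];
}
```

When all three vertices share the same w, the perspective-correct result reduces to the linear one.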
The NoPerspective and Flat interpolation decorations can be used with fragment shader inputs to declare how they are interpolated. When neither decoration is applied, perspective interpolation is performed as described above. When the NoPerspective decoration is used, linear interpolation is performed in the same fashion as for depth values, as described above. When the Flat decoration is used, no interpolation is performed, and outputs are taken from the corresponding input value of the provoking vertex corresponding to that primitive.
For a polygon with more than three edges, such as are produced by clipping a triangle, a convex combination of the values of the datum at the polygon’s vertices must be used to obtain the value assigned to each fragment produced by the rasterization algorithm. That is, it must be the case that at every fragment

$f = \sum_{i=1}^{n} a_{i} f_{i}$

where n is the number of vertices in the polygon and f_{i} is the value of f at vertex i. For each i, 0 ≤ a_{i} ≤ 1 and $\sum_{i=1}^{n} a_{i} = 1$. The values of a_{i} may differ from fragment to fragment, but at vertex i, a_{i} = 1 and a_{j} = 0 for j ≠ i.
Note
One algorithm that achieves the required behavior is to triangulate a polygon (without adding any vertices) and then treat each triangle individually as already discussed. A scanline rasterizer that linearly interpolates data along each edge and then linearly interpolates data across each horizontal span from edge to edge also satisfies the restrictions (in this case the numerator and denominator of perspective interpolation are iterated independently, and a division is performed for each fragment). 
25.8.2. Polygon Mode
Possible values of the VkPipelineRasterizationStateCreateInfo::polygonMode property of the currently active pipeline, specifying the method of rasterization for polygons, are:
// Provided by VK_VERSION_1_0
typedef enum VkPolygonMode {
VK_POLYGON_MODE_FILL = 0,
VK_POLYGON_MODE_LINE = 1,
VK_POLYGON_MODE_POINT = 2,
} VkPolygonMode;

VK_POLYGON_MODE_POINT specifies that polygon vertices are drawn as points.
VK_POLYGON_MODE_LINE specifies that polygon edges are drawn as line segments.
VK_POLYGON_MODE_FILL specifies that polygons are rendered using the polygon rasterization rules in this section.
These modes affect only the final rasterization of polygons: in particular, a polygon’s vertices are shaded and the polygon is clipped and possibly culled before these modes are applied.
25.8.3. Depth Bias
The depth values of all fragments generated by the rasterization of a polygon can be offset by a single value that is computed for that polygon. This behavior is controlled by the depthBiasEnable, depthBiasConstantFactor, depthBiasClamp, and depthBiasSlopeFactor members of VkPipelineRasterizationStateCreateInfo, or by the corresponding parameters to the vkCmdSetDepthBias command if depth bias state is dynamic.
// Provided by VK_VERSION_1_0
void vkCmdSetDepthBias(
VkCommandBuffer commandBuffer,
float depthBiasConstantFactor,
float depthBiasClamp,
float depthBiasSlopeFactor);

commandBuffer is the command buffer into which the command will be recorded.
depthBiasConstantFactor is a scalar factor controlling the constant depth value added to each fragment.
depthBiasClamp is the maximum (or minimum) depth bias of a fragment.
depthBiasSlopeFactor is a scalar factor applied to a fragment’s slope in depth bias calculations.
If depthBiasEnable is VK_FALSE at draw time, no depth bias is applied and the fragment’s depth values are unchanged.
depthBiasSlopeFactor scales the maximum depth slope of the polygon, and depthBiasConstantFactor scales the minimum resolvable difference of the depth buffer. The resulting values are summed to produce the depth bias value, which is then clamped to a minimum or maximum value specified by depthBiasClamp. depthBiasSlopeFactor, depthBiasConstantFactor, and depthBiasClamp can each be positive, negative, or zero.
The maximum depth slope m of a triangle is

$m = \sqrt{\left(\frac{\partial z_{f}}{\partial x_{f}}\right)^{2} + \left(\frac{\partial z_{f}}{\partial y_{f}}\right)^{2}}$

where (x_{f}, y_{f}, z_{f}) is a point on the triangle. m may be approximated as

$m = \max\left(\left|\frac{\partial z_{f}}{\partial x_{f}}\right|, \left|\frac{\partial z_{f}}{\partial y_{f}}\right|\right)$
The minimum resolvable difference r is a parameter that depends on the depth buffer representation. It is the smallest difference in framebuffer coordinate z values that is guaranteed to remain distinct throughout polygon rasterization and in the depth buffer. All pairs of fragments generated by the rasterization of two polygons with otherwise identical vertices, but z_{f} values that differ by r, will have distinct depth values.
For fixed-point depth buffer representations, r is constant throughout the range of the entire depth buffer. Its value is implementation-dependent but must be at most

r = 2 × 2^{−n}

for an n-bit buffer. For floating-point depth buffers, there is no single minimum resolvable difference. In this case, the minimum resolvable difference for a given polygon is dependent on the maximum exponent, e, in the range of z values spanned by the primitive. If n is the number of bits in the floating-point mantissa, the minimum resolvable difference, r, for the given primitive is defined as

r = 2^{e−n}
If no depth buffer is present, r is undefined.
The bias value o for a polygon is

$o = \begin{cases} m \times \mathtt{depthBiasSlopeFactor} + r \times \mathtt{depthBiasConstantFactor} & \mathtt{depthBiasClamp} = 0\ \text{or NaN} \\ \min(m \times \mathtt{depthBiasSlopeFactor} + r \times \mathtt{depthBiasConstantFactor},\ \mathtt{depthBiasClamp}) & \mathtt{depthBiasClamp} > 0 \\ \max(m \times \mathtt{depthBiasSlopeFactor} + r \times \mathtt{depthBiasConstantFactor},\ \mathtt{depthBiasClamp}) & \mathtt{depthBiasClamp} < 0 \end{cases}$
m is computed as described above. If the depth buffer uses a fixed-point representation, m is a function of depth values in the range [0,1], and o is applied to depth values in the same range.
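A sketch of the bias computation in C, under the reading above that depthBiasClamp acts as an upper bound when positive and a lower bound when negative, and has no effect when zero. The function name and parameter spelling are illustrative.

```c
#include <assert.h>
#include <math.h>

/* Depth bias: o = m * depthBiasSlopeFactor + r * depthBiasConstantFactor,
 * then clamped by depthBiasClamp when that value is non-zero. m is the
 * maximum depth slope of the polygon; r is the minimum resolvable
 * difference of the depth buffer. */
float depth_bias(float m, float r,
                 float slopeFactor, float constantFactor, float clampValue)
{
    float o = m * slopeFactor + r * constantFactor;
    if (clampValue > 0.0f && o > clampValue) o = clampValue;  /* upper bound */
    if (clampValue < 0.0f && o < clampValue) o = clampValue;  /* lower bound */
    return o;
}
```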
For fixed-point depth buffers, fragment depth values are always limited to the range [0,1] by clamping after depth bias addition is performed. Fragment depth values are clamped even when the depth buffer uses a floating-point representation.