o3de / o3de-azslc

Amazon Shader Language (AZSL) Compiler

License: Other
o3de-azslc's Introduction

O3DE (Open 3D Engine)

O3DE (Open 3D Engine) is an open-source, real-time, multi-platform 3D engine that enables developers and content creators to build AAA games, cinema-quality 3D worlds, and high-fidelity simulations without any fees or commercial obligations.

Contribute

For information about contributing to Open 3D Engine, visit https://o3de.org/docs/contributing/.

Download and Install

This repository uses Git LFS for storing large binary files.

Verify you have Git LFS installed by running the following command to print the version number.

git lfs --version 

If Git LFS is not installed, download and run the installer from: https://git-lfs.github.com/.

Install Git LFS hooks

git lfs install

Clone the repository

git clone https://github.com/o3de/o3de.git

Building the Engine

Build requirements and redistributables

For the latest details and system requirements, refer to System Requirements in the documentation.

Windows

Optional

  • Wwise audio SDK
    • For the latest version requirements and setup instructions, refer to the Wwise Audio Engine Gem reference in the documentation.

Quick start engine setup

To set up a project-centric source engine, complete the following steps. For other build options, refer to Setting up O3DE from GitHub in the documentation.

  1. Create a writable folder to cache downloadable third-party packages. You can also use this to store other redistributable SDKs.

  2. Install the following redistributables:

    • Visual Studio and VC++ redistributable can be installed to any location.
    • CMake can be installed to any location, as long as it's available in the system path.
  3. Configure the engine source into a solution using this command line, replacing <your build path>, <your source path>, and <3rdParty package path> with the paths you've created:

    cmake -B <your build path> -S <your source path> -G "Visual Studio 16" -DLY_3RDPARTY_PATH=<3rdParty package path>
    

    Example:

    cmake -B C:\o3de\build\windows -S C:\o3de -G "Visual Studio 16" -DLY_3RDPARTY_PATH=C:\o3de-packages
    

    Note: Do not use trailing slashes for the <3rdParty package path>.

  4. Alternatively, you can do this through the CMake GUI:

    1. Start cmake-gui.exe.
    2. Select the local path of the repo under "Where is the source code".
    3. Select a path where to build binaries under "Where to build the binaries".
    4. Click Add Entry and add a cache entry for the <3rdParty package path> folder you created, using the following values:
      1. Name: LY_3RDPARTY_PATH
      2. Type: STRING
      3. Value: <3rdParty package path>
    5. Click Configure.
    6. Wait for the key values to populate. Update or add any additional fields that are needed for your project.
    7. Click Generate.
  5. Register the engine with this command:

    scripts\o3de.bat register --this-engine
    
  6. The configuration of the solution is complete. You are now ready to create a project and build the engine.

For more details on the steps above, refer to Setting up O3DE from GitHub in the documentation.

Setting up new projects and building the engine

  1. From the O3DE repo folder, set up a new project using the o3de create-project command.

    scripts\o3de.bat create-project --project-path <your new project path>
    
  2. Configure a solution for your project.

    cmake -B <your project build path> -S <your new project source path> -G "Visual Studio 16"
    

    Example:

    cmake -B C:\my-project\build\windows -S C:\my-project -G "Visual Studio 16"
    

    Note: Do not use trailing slashes for the <3rdParty cache path>.

  3. Build the project, Asset Processor, and Editor to binaries by running this command inside your project:

    cmake --build <your project build path> --target <New Project Name>.GameLauncher Editor --config profile -- /m
    

    Note: Your project name used in the build target is the same as the directory name of your project.

This will take some time to compile. Binaries will be available in the project build path you've specified, under bin/profile.

For a complete tutorial on project configuration, see Creating Projects Using the Command Line Interface in the documentation.

Code Contributors

This project exists thanks to all the people who contribute. [Contribute].

License

For terms please see the LICENSE*.TXT files at the root of this distribution.

o3de-azslc's People

Contributors

amzn-alexpete, galibzon, invertednormal, jeremyong, jeremyong-az, kh-huawei, martinwinter-huawei, rgba16f, santorac, siliconvoodoo


o3de-azslc's Issues

Default setup ends up with 20/356 failed tests

FINISHED. Total = 356
PASS = 333 /356
TODO = 3 /356
Missing EC = 0 /356
FAIL = 20 /356

I suspect a pyyaml problem. Some detection should happen to propose a pip install if that's the case.

Command Line Namespace Option Consumes AZSL File Path

Run a command like "azslc.exe --namespace dx MyShader.azsl". It will report an error "input file could not be opened". If you step through in a debugger, you'll see that the namespace option thinks there are two namespaces: "dx" and "MyShader.azsl".

I suggest we change the data type for namespace to be a single string, instead of a list of strings, so it will stop when it hits a space. Then split the string on the "," character to support multiple namespaces.

Generated files (from ANTLR) should not be checked in

Using CMake commands such as add_custom_command in conjunction with add_custom_target, it should be relatively straightforward to emit the generated grammar file as part of the build, with the token/bnf files as dependencies. This way, they would not need to be checked in, removing a possible source of error where the source files go out of sync with the generated files. More importantly, this change would make grammar changes easier to review.

Refactor Ast pointers out of the IR

[Migrated from JIRA ATOM-559]
During the previous refactoring of December 11th (unification of the database of symbols), I judged that introducing pointers to the original AST nodes that served the construction of the intermediate representation (IR) would help to reduce the code needed in the IR.

For example, today in VarInfo, because we have the AstDeclNode* we can have a query function HasStorageFlag(Flag flag) that goes fishing directly in the AST node for the tokens declaring the qualifiers. This way, we don't have redundancy of information, keeping the IR structure leaner.

The second advantage of this design is access to the original source location, and thus a better ability to quote the actual user code, both for error messages and for code emission.

The third advantage is that, in case the IR is incomplete, the components (clients) that use the IR (code emission) can reconstruct information that wasn't prepared in the semantic pass.

It also makes feeding data to future back-end clients a bit more painless.

BUT.

The problem we see arising from this decision today is that we have a tight coupling between AST and IR.

The IR is incomplete without the pointers to some AST nodes, and the pointers being set (not null) is a program invariant. (For the sake of "if-less programming", client code doesn't have to contort around optionally set pointers. If-less programming ensures canonical treatment, no special cases, and neatly unbloated code; the objective metric measuring that is called cyclomatic complexity.)

This tight coupling is undesirable today because of code constructs that we want to emulate so that they behave like another canonical construct. The problem is that we can't, since no syntax has been instantiated to support the virtual construct.

Example:

ShaderResourceGroupSemantic stuff
{
    FrequencyId = 1;
};

This is the syntax we require; but internally we want to treat this as:

ShaderResourceGroupSemantic stuff
{
    static const intrinsicattribute int FrequencyId = 1;
};

That way, FrequencyId can be stored in the IR as a VarInfo: no need to create a new Kind, a new semantic validation, or a new IR subkind; no need to extend the variant of possible subkinds; and no need to go into every client of the IR to add support for the new subkind.

Basically, by doing this virtual syntax alteration, we get all features down the chain to work out of the box, without increasing the source-line count of the compiler.

But today, because of this tight coupling, it is impossible to virtualize IR construction from procedural code. In other words, we can't have "generated IR" without an original parsed syntax. And that is a big problem.

I am to blame for this original decision, so I should clean up this coupling.

That means augmenting the IR data model to store any information that is today retrieved by clients doing their own custom AST analysis.

And removing all AST pointers from the IR, while storing the information important for error reporting and code emission that is currently accessed through these pointers (often the line number).

We emit a lot of code today in a dumb way: just iterating through the tokens, pointing to string pieces that exist in the original source stream.

Decoupling thus involves being able to do code emission at a much finer-grained level, like reconstructing all language expressions, statements and structural constructs (if, while, ...) from IR data, and not from tokens.

This is unfortunately a fairly big amount of work.

Validate shader build pipeline with DXC 1.6.2106+. Validate that the data layout produced by '-fvk-use-dx-layout' matches dx12 rules.

In response to github issue #8, AZSLc has a feature that checks for potential data alignment errors that occur due to unexpected results with '-fvk-use-dx-layout'. DXC 1.6.2106+ fixes '-fvk-use-dx-layout'.

This ticket is about validating that '-fvk-use-dx-layout' is fixed with DXC 1.6.2106+; in that case, the changes to AZSLc should be reverted. Also, the whole shader build pipeline for O3DE should be revalidated. If everything works, then the shader build pipeline should also be upgraded to DXC 1.6.2106+.

class static members

[Migrated from JIRA ATOM-2671]
I noticed that we have no support for static variables and static functions in classes.

Throw Exception When Data Layout Will Encounter Alignment Differences With DXC '-fvk-use-dx-layout'

AZSLc is used to transpile AZSL code into HLSL code; during this process AZSLc can generate reflection data along with size and offset information for all structs, classes and SRGs. AZSLc follows DX12 rules to calculate the size and offset of the reflected data.

The idea is that eventually when compiling the HLSL code with DXC, the DXC-calculated sizes and offsets match those calculated by AZSLc. This is true when compiling HLSL code to DX12 byte code.

Natively, the layout rules of SPIRV (Vulkan) are different from DX12, hence we request DXC to use DX12 layout rules when compiling for SPIRV with the option '-fvk-use-dx-layout'. The problem is that there are some cases where the expected layouts don't match, causing crashes or unexpected behavior at runtime.

The idea is to add a feature that checks for those error cases, treats them as errors, and provides a message with a data padding solution. Additionally, provide a command line option '--no-alignment-validation' to skip checking for these kinds of errors.

The most common error cases are:

  1. A 'float' or 'float2' variable preceded by matrices or structs that end in matrices of type:
    float2x2, float3x2, float4x2
    In such cases the 'float' or 'float2' should be pre-padded by a 'float3' variable (see the sketch after this list).

  2. A 'float' variable preceded by a matrix or struct that ends in a matrix of type:
    float2x3, float3x3, float4x3
    In such cases the 'float' should be pre-padded by a 'float2' variable.
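
For illustration, a minimal hedged sketch of error case 1 (the struct and member names are invented for this example):

struct BadLayout
{
    float3x2 m_mat;  // ends in a matrix of one of the listed types
    float2   m_uv;   // AZSLc would flag a potential DX12 vs Vulkan offset mismatch here
};

// Suggested fix: pre-pad with a float3 so both layout rule sets agree on the offset of m_uv.
struct FixedLayout
{
    float3x2 m_mat;
    float3   m_pad;  // explicit pre-padding
    float2   m_uv;
};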

When AZSLc finds cases in the code that will have layout differences between dx12 and vulkan, it will print a message like this:

tests\Emission\AsError\matRC_padding.azsl(,)
: IR error #131: Detected potential alignment issues related with DXC
flag '-fvk-use-dx-layout'.
Alternatively you can use the option '--no-alignment-validation' to void this error and compile as is:

A 'float3' variable should be added before the variable 'm_f3' in
'struct /A/SD2/m_f3' at Line number 21 of
'D:\o3de-azslc\tests\Emission\AsError\matRC_padding.azsl'
A 'float3' variable should be added before the variable 'm_f5' in
'struct /A/SD2/m_f5' at Line number 23 of
'D:\o3de-azslc\tests\Emission\AsError\matRC_padding.azsl'

NOTE: Allegedly the most recent versions of DXC, in particular v1.6.2106+, fix the issues when using '-fvk-use-dx-layout'.
This needs to be validated. See: microsoft/DirectXShaderCompiler#3945

generate_mcpp.bat script seems to have stale paths

@echo off
set DEVPATH=%1
set MCPP=%DEVPATH%\Gems\Atom\Asset\Shader\External\mcpp\2.7.2-az.1\lib\win_x64\mcpp.exe
set AZSLC=..\..\..\bin\win_x64\Release\azslc.exe
set DXC=%DEVPATH%\Gems\Atom\Asset\Shader\External\DirectXShaderCompiler\2020.08.07\bin\win_x64\Release\dxc.exe

set AZSL=%2
set PREPROCESSED=%AZSL%.mcpp
set HLSL=%PREPROCESSED%.hlsl

rem %MCPP% %AZSL% > %PREPROCESSED%

%AZSLC% %PREPROCESSED% -o %HLSL%

rem %DXC% -help
rem %DXC% -T cs_6_2 %HLSL%
rem @echo on
rem %DXC% -T cs_6_2 main.azsl.mcpp.hlsl2.hlsl

If I understand correctly, the changes that occurred for the o3de release mean that this test script can no longer run from where it stands today.
It is in o3de-azslc\tests\Advanced\RespectEmitLine
I suspect that we need to introduce an environment variable to be able to locate mcpp.exe, which appears to be in o3de-packages\packages\mcpp-2.7.2_az.2-rev1-windows\mcpp\lib
(a python test script could also make the 2.7.2_.... version part unnecessary with a regex like /mcpp-.*/)

Obscure Error Message For Non ASCII Characters

If you have non-ASCII characters in an AZSL source file, the compiler will report an obscure error message with no line number information: "bad conversion"

Try adding the following comment to an existing .azsl file (note that Notepad++ or other text editors might convert the character to ASCII automatically; try pasting this in Visual Studio, that should do the trick):

// This file can’t compile
//              ^ because of this character


[Info][Patch] Current O3DE is incompatible with current azslc

The --use-spaces change is a breaking change for current O3DE, as --use-spaces is no longer a valid azslc option.
To be able to use the current version of azslc with the current state of development of o3de, small changes are necessary.
The following patch (for o3de) introduces these fixes:

diff --git a/Gems/Atom/Asset/Shader/Code/Source/Editor/ShaderAssetBuilder.cpp b/Gems/Atom/Asset/Shader/Code/Source/Editor/ShaderAssetBuilder.cpp
index 38cd7af751..7663c90558 100644
--- a/Gems/Atom/Asset/Shader/Code/Source/Editor/ShaderAssetBuilder.cpp
+++ b/Gems/Atom/Asset/Shader/Code/Source/Editor/ShaderAssetBuilder.cpp
@@ -448,8 +448,8 @@ namespace AZ
                 // since the register Id of the resource will not change even if the pipeline layout changes.
                 // We can pass in a default ShaderCompilerArguments because all we care about is whether the shaderPlatformInterface
                 // appends the "--use-spaces" flag.
-                const auto& azslcArguments = buildArgsManager.GetCurrentArguments().m_azslcArguments;
-                const bool platformUsesRegisterSpaces = RHI::ShaderBuildArguments::HasArgument(azslcArguments, "--use-spaces");
+                // const auto& azslcArguments = buildArgsManager.GetCurrentArguments().m_azslcArguments;
+                const bool platformUsesRegisterSpaces = true; // RHI::ShaderBuildArguments::HasArgument(azslcArguments, "--use-spaces");
 
                 uint32_t supervariantIndex = 0;
                 for (const auto& supervariantInfo : supervariantList)
diff --git a/Gems/Atom/Asset/Shader/Code/Source/Editor/ShaderVariantAssetBuilder.cpp b/Gems/Atom/Asset/Shader/Code/Source/Editor/ShaderVariantAssetBuilder.cpp
index 41ef6dc7af..0d0b520f26 100644
--- a/Gems/Atom/Asset/Shader/Code/Source/Editor/ShaderVariantAssetBuilder.cpp
+++ b/Gems/Atom/Asset/Shader/Code/Source/Editor/ShaderVariantAssetBuilder.cpp
@@ -866,9 +866,10 @@ namespace AZ
                     RHI::Ptr<RHI::PipelineLayoutDescriptor> pipelineLayoutDescriptor;
                     if (shaderPlatformInterface->VariantCompilationRequiresSrgLayoutData())
                     {
-                        const auto& azslcArguments = buildArgsManager.GetCurrentArguments().m_azslcArguments;
-                        const bool platformUsesRegisterSpaces = RHI::ShaderBuildArguments::HasArgument(azslcArguments, "--use-spaces");
-                    
+                        // const auto& azslcArguments = buildArgsManager.GetCurrentArguments().m_azslcArguments;
+                        const bool platformUsesRegisterSpaces =
+                            true; //= RHI::ShaderBuildArguments::HasArgument(azslcArguments, "--use-spaces");
+
                         RPI::ShaderResourceGroupLayoutList srgLayoutList;
                         RootConstantData rootConstantData;
                         if (!LoadSrgLayoutListFromShaderAssetBuilder(
diff --git a/Gems/Atom/Asset/Shader/Config/shader_build_options.json b/Gems/Atom/Asset/Shader/Config/shader_build_options.json
index df69bd17fd..f14ae106ee 100644
--- a/Gems/Atom/Asset/Shader/Config/shader_build_options.json
+++ b/Gems/Atom/Asset/Shader/Config/shader_build_options.json
@@ -5,7 +5,8 @@
               , "-+" // C++ mode
         ],
         "azslc": ["--full" // Always generate the *.json files with SRG and reflection info.
-                , "--use-spaces"
+				// ,
+				// "--use-spaces"
                 , "--Zpr" // Row major matrices.
                 , "--W1" // Warning Level 1
                 , "--strip-unused-srgs" // Remove unreferenced SRGs.

option range not reconstructed

While investigating o3de/o3de#13625
I stumbled onto the possibility that options with min != 0 have never really worked.

This is my basis for the claim:
Take this input program:

ShaderResourceGroupSemantic slot1
{
    FrequencyId = 1;
    ShaderVariantFallback = 128;
};

ShaderResourceGroup SRG : slot1
{
};

[[range(10,11)]]
option int o_stuff = 10;

option enum E { v = 10, v2 } o_s2;

float4 MainPS(float2 uv : TEXCOORD0) : SV_Target0
{
    return o_stuff;
}

The getter functions look like this:

#if defined(o_stuff_OPTION_DEF)
    static const int o_stuff = o_stuff_OPTION_DEF ;
#else
    static const int o_stuff = GetShaderVariantKey_o_stuff();
#endif

#if defined(o_s2_OPTION_DEF)
    static const ::E o_s2 = o_s2_OPTION_DEF ;
#else
    static const ::E o_s2 = GetShaderVariantKey_o_s2();
#endif

// ... code

int GetShaderVariantKey_o_stuff()
{
    uint shaderKey = (::SRG_SRGConstantBuffer.SRG_m_SHADER_VARIANT_KEY_NAME_[0].x >> 0) & 1;
    return (int) shaderKey;
}

::E GetShaderVariantKey_o_s2()
{
    uint shaderKey = (::SRG_SRGConstantBuffer.SRG_m_SHADER_VARIANT_KEY_NAME_[0].x >> 1) & 1;
    return (::E) shaderKey;
}

I think we can mathematically prove that the values coming out of GetShaderVariantKey_o_stuff are 0 or 1,
because the key is AND'd with 1.
On the C++ counterpart of the option management system, we see a lot of this:
EncodeBits(group.GetShaderVariantKey(), valueIndex.GetIndex() - m_minValue.GetIndex());

This hints that the min is NOT encoded in the key bits, which is logical since we are doing compression.
But if we do so, we need to add the min back at reconstruction; otherwise the contract about the range is not respected, and we destroy the values by flooring them back to a 0-based range.
I believe this is a bug, even though I have not attempted an empirical (observed) reproduction yet.
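
To make the suspected fix concrete, here is a hedged sketch of what a corrected getter emission could look like for the [[range(10,11)]] option above, assuming the key bits indeed store value - min:

int GetShaderVariantKey_o_stuff()
{
    uint shaderKey = (::SRG_SRGConstantBuffer.SRG_m_SHADER_VARIANT_KEY_NAME_[0].x >> 0) & 1;
    return (int) shaderKey + 10;   // add back the declared range minimum at reconstruction
}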

Function redeclaration internal error

[Migrated from JIRA ATOM-6265]
AZSL is supposed to tolerate redundant redeclarations of functions with the same signature. They emit a warning, because redeclaration is useless and could also potentially lose default parameter values.

But instead, the validator in the function MergeDefaultParameters seems to stop the compilation in all situations.

Tested to happen in all versions from 1.5 to now.


ShaderResourceGroupStruct

[Migrated from JIRA ATOM-14465]
Description:

Update AZSLc to support a ShaderResourceGroupStruct concept to replace the various "COMMON_SRG_INPUTS_" macros.

Details:

This is like a struct, but includes support for stripping out Textures from the constant buffer. We might also want to look into tighter packing, as structs force everything to 16-byte boundaries.

Acceptance Criteria:

  • Replace all of the "COMMON_SRG_INPUTS_" macros with a struct of some kind.
  • All shader inputs like "m_layer1_m_baseColorFactor" become "m_layer1.m_baseColorFactor".

Additional Information:

Shader Option Priority

The shader variant tree is formed in such a way that the highest priority options are nearest to the root of the tree. That way, if a requested shader variant is not available and a fallback is selected instead, there is a high chance that the fallback variant will still have baked the most impactful shader options.

Currently, the shader option priority order is simply based on the order in which the options are declared in the shader source code. But since the shader options are often declared in a variety of .azsli files and #included together, it is impractical to arrange their priority order in this way. We need some way to control the shader option priority order independently of the declaration order.

I suggest the following approach (but I'm open to other ideas): Use a "priority" attribute to assign each option a priority value. If no priority value is provided, the default will be to repeat the previous priority value. The lowest priority number is the highest priority. For example, 10 is a higher priority than 100, and will appear closer to the root of the shader variant tree. If two or more options have the same priority value, they will be sorted in the order of declaration.

Example syntax:

[priority(1000)]
option bool o_enable_IBL = true;
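
To make the proposed rules concrete, here is a hypothetical set of declarations and the resulting priority order (the option names are invented for this sketch):

[priority(10)]
option bool o_shadows = true;      // priority 10: closest to the root of the variant tree

option bool o_specular = true;     // no attribute: repeats the previous priority (10); tie broken by declaration order

[priority(100)]
option bool o_debug_tint = false;  // priority 100: lower priority, farther from the root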

The command line interface for azslc does not need to change; the existing "--options" flag will still output the metadata for the available shader options. The only difference is that the order in which the options appear will match the indicated priority order.

The actual priority values do not need to appear in the returned reflection metadata, but I suggest it's a good idea to do so, as the values could come in handy in the future.

Update AZSLc (and O3DE) To Support HLSL 2021 Features

O3DE already uses the latest release of DXC: 1.6.2112. This includes preview support for the new HLSL 2021 feature set (see https://devblogs.microsoft.com/directx/announcing-hlsl-2021/) but AZSLc does not support these new features. I haven't tested this, but I suspect that at least some of the new features will fail to be parsed by AZSLc and will need upgrades to be passed through to the HLSL output.

We should update AZSLc's grammar to support these new features, and cut a new release. To demonstrate the new features, we should look for some ways to improve existing shader code in O3DE, maybe by templatizing some utility functions.

Note that there is a command line option for DXC to enable these features: "-HV 2021". We will need to update O3DE's configuration to pass this command line option when running DXC. See Gems\Atom\Asset\Shader\Config\shader_build_options.json
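
As a reference point, here is a minimal, hypothetical snippet using one of the HLSL 2021 features (function templates) that AZSLc's grammar would need to accept and pass through to the HLSL output (DXC compiles it with -HV 2021):

template<typename T>
T Square(T v)
{
    return v * v;
}

float4 MainPS(float2 uv : TEXCOORD0) : SV_Target0
{
    return float4(Square(uv), 0, 1);
}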

Shader Authoring in VSCode

[Migrated from JIRA ATOM-5434]
Description:

Provide plugins for VSCode to improve shader authoring experience.

Details:

Acceptance Criteria:

  • Provide AZSL syntax highlighting
  • Provide auto-completion
  • Link to the shader in Asset Processor
  • Link to the shader in the Shader Variant Management Console
  • Provide a dedicated shader compilation result interface where we have more control over how the results are displayed. (rather than AP's event log)
  • Users can click a link or button to open a failed AZSL file and jump to the problematic line.
  • Stretch: Provide an interactive graphical view of the shader compilation pipeline that shows AZSLc, DXC, Spirv-Cross, etc. in a flow chart. This can show where a shader compilation failure occurred and other debug information.
  • others?...

Additional Information:

A prototype by @jeremyong-az
https://www.jeremyong.com/graphics/parsers/hlsl/azsl/2022/01/02/azsl-intellisense-prototype/

Compiler error when using an #include "..." on the first line of an .azsl file

AZSLc returns an error when the first line of an .azsl file contains an #include "...". It works fine if the include begins on the second line, or if it uses the #include <...> form.

To quickly reproduce, move the first #include "Skin_Common.azsli" from Skin.azsl to the first line (above the copyright notice).

`uint64_t` literal `ULL` handled incorrectly

Using the unsigned long long literal ULL currently does not compile.
The following code:

uint64_t test_variable = 1ULL << shift_amount;

will result in the following error message:

syntax error #1: missing ';' at 'L'

Using ULL directly in hlsl code works without an issue using dxc.

Support kind sensitive symbol lookup

[Migrated from JIRA ATOM-542]
Symbol lookup is of great importance. We need to resolve to symbols that are not surprising to the users of the language. 

With the new DXC, since it is clang underneath, the rules of C++ will apply for symbol lookup. And these rules are crazy complicated.

take a look:

[https://en.cppreference.com/w/cpp/language/qualified_lookup]

Particularly of interest is: 
lookup of A to the left of :: ignores the variable
This is what I call "kind sensitive". The context changes the "bitmask" of activated symbols that can be resolved. There is no such mechanism in AZSLc at this moment.

Release schedule

Is there a specific release schedule or how is a new release triggered?
Our team currently has to patch azslc within o3de, as some of the newer features are necessary for our work, but this makes both a fresh setup and keeping development up to date a bit cumbersome.

Would it be possible to release a current version such that o3de can update as well?

Emitted HLSL contains many unneeded new lines

// HLSL emission by AZSL Compiler 1.7.35 Win64
#line 14 "D:/o3de-atom-sampleviewer/user/AssetProcessorTemp/JobTemp-KiBTnM/StandardPBR_ForwardPass.azsl.dx12.prepend"


















static const float4 s_AzslDebugColor = float4 ( 16.0 / 255.0 , 124.0 / 255.0 , 16.0 / 255.0 , 1 ) ;
#line 11 "D:/o3de/Gems/Atom/Feature/Common/Assets/ShaderLib/Atom/Features/SrgSemantics.azsli"
#line 13 "D:/o3de/Gems/Atom/Feature/Common/Assets/ShaderResourceGroups/Decals/ViewSrg.azsli"














For example ^.

The new lines here corresponded to stripped comments, but instead of emitting new lines, we should just be adjusting the first argument of those #line directives.
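
A hedged sketch of what the start of that emission could look like with the proposed fix (the adjusted line number is illustrative):

// HLSL emission by AZSL Compiler 1.7.35 Win64
#line 32 "D:/o3de-atom-sampleviewer/user/AssetProcessorTemp/JobTemp-KiBTnM/StandardPBR_ForwardPass.azsl.dx12.prepend"
static const float4 s_AzslDebugColor = float4 ( 16.0 / 255.0 , 124.0 / 255.0 , 16.0 / 255.0 , 1 ) ;
#line 11 "D:/o3de/Gems/Atom/Feature/Common/Assets/ShaderLib/Atom/Features/SrgSemantics.azsli"
#line 13 "D:/o3de/Gems/Atom/Feature/Common/Assets/ShaderResourceGroups/Decals/ViewSrg.azsli"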

ShaderOptionStruct

[Migrated from JIRA ATOM-14466]
Description:

Update AZSLc to support a ShaderOptionStruct concept to replace the various "COMMON_OPTIONS_" macros.

Details:

This is like a namespace for shader options. All options across all structs will get flattened into a single options layout.

Acceptance Criteria:

  • Replace all of the "COMMON_OPTIONS_" macros with a struct of some kind.

Additional Information:

RFC - unconstrained language evolution

Motivation

With the announcement of HLSL 2021 (https://devblogs.microsoft.com/directx/announcing-hlsl-2021/), templates, operator overloading and bitfields have been introduced. We observe that it would be a substantial cost, and time inertia, to follow such impactful language evolutions in AZSLc. The question is: is there a way to remove the logic-heavy part of azslc (semantic analysis) so that azsl is transparently hlsl? That way, future evolutions of HLSL as a language would naturally become immediately available, ideally simply through a package release of DXC.

Suggestion

One option that seems to me the least effort would be to split the authoring process into two parts: the AZSL part that holds the resources, and the HLSL part that holds the code.

Concept prototype idea 1

If from that input:

ShaderResourceGroupSemantic slot1
{
    FrequencyId = 1;
};

ShaderResourceGroup SRG : slot1
{
    struct CB
    {
        float4 color;
    };

    ConstantBuffer<CB> m_uniforms;
};

We get that output:

struct SRG_CB
{
    float4 color;
};
ConstantBuffer <::SRG_CB> SRG_m_uniforms : register(b0, space0);

Then we can save it to inputs.hlsl and extend it with a follow-up file:

#include "inputs.hlsl"  // auto generated from azsl

float4 MainPS( float2 uv : TEXCOORD0) :SV_Target0
{
   // edit here
   return SRG_m_uniforms.color;  // your resource names have mutated, refer to inputs.hlsl to identify their flattened names
} 

We note that the resource variables have changed names because of the mutations undergone in the process of SRG-erasure (transpilation from AZSL to HLSL). So it requires programmers to be aware of the mutation scheme, and to consult inputs.hlsl to know what they have to work with.

Advantages

Disadvantages

  • Poor look-and-feel for the user, since the discoverability of "secret variables" is not clear from the original azsl source.
  • Loss of code mutators (Zpc Zpr matrix qualifiers, --no-ms or --cb-body mode).
  • The 2-step authoring has effects on the Asset Processor build steps. There is one build of the .azsl and another build for the .hlsl, which includes the generated part and the user-authored parts.

Evolution idea 1.1

The problem is that the mutation can be platform specific, and can be azslc version dependent. Also, it can be unpredictable because of name collision avoidance. e.g. SRG::m_uniforms may become SRG_m_uniforms or SRG_m_uniforms1.
In other words, there is no specification guarantee on the renaming scheme.
To ease that issue, we can imagine an __asm__-like block scheme, with what were historically called "clobber" declarations to make the link between the host language and the DSL.

Example of what it could look like:

ShaderResourceGroupSemantic slot1
{
    FrequencyId = 1;
    ShaderVariantFallback = 128;
};

option bool reflections = false;

ShaderResourceGroup SRG : slot1
{
    struct CB
    {
        float3 sceneBounds;
    };

    ConstantBuffer<CB> uniforms;
    
    float4 iblAvg;
    float4 ambient;
    
    Texture2DMS<float4, 8> fresnel;

    enum Composite { Spec, Diff };
    
    Composite Get(bool forceOff) { return !forceOff && reflections ? Spec : Diff; }
    CB Get() { return uniforms; }
    float4 Get(Composite c) { return c == Spec ? iblAvg : ambient; }
};

struct PSInput
{
    float4 position : SV_Position;
    float4 color    : COLOR0;
};

typealias CB = SRG::CB;   // this location is stable so can be referred to

__hlsl__
@{
   // declarative zone where lookup happens once from the global scope and gets cached into an alias, that becomes available for the HLSL block.
    using Get = SRG::Get;  // alias the overloadset
    using Spec = SRG::Composite::Spec;  // enumerators mutate
    using fresnel = SRG::fresnel;    // variables also mutate

    // from here, code is like a comment for AZSLc
    template< typename UV_t >
    float4 PScoreT(PSInput input, UV_t uv, int si)
    {
        CB cb = Get();
        float4 spec = fresnel.sample[si](uv);  // we lose ability to mutate --no-ms
        if (position.xyz < cb.sceneBounds)  // field names don't mutate AFAIR
            return Get(Get(false));
        else
            return 0;
    }

    // templates can't be entry points in HLSL. declare a concrete version
    float4 PSMainF2(PSInput i, float2 f : TEXCOORD0, int si : SV_SampleIndex) : SV_Target0
    { return PScoreT(i, f, si); }

}@  // we need a "raw string literal"-way of ending the block

Bear with me that the program is nonsensical. But the point is to illustrate what we lose and what we win.
We win language involutivity, but we lose perfect integration with the azsl-declared resources. They need to be bridged in some way (somewhat akin to lambda capture), so that accesses to the mutated symbols in the HLSL block can bind to their intended symbols.

Advantages

  • No more magic names as in idea 1; the links become explicit.
  • Possibility of preserving an integrated asset build (no 2 steps with the auto-generated include).

Disadvantages

  • Does not open the possibility of a codebase diet (symbol lookup must still work for the using directives).
  • Not the best UX, because of the need to identify symbols used in the __hlsl__ block that are external to it, and to repeat a short declaration for each.
  • Like for idea 1, loss of code mutators (matrix qualifiers, --no-ms or --cb-body mode).

Concept idea 2

Strongly reduce the invasiveness of AZSL-specific syntax constructs, tending instead toward a decorated HLSL.
The compiler would still need to exist to do reflection and resource register assignment in each platform's way. It would also still generate option and rootconstant variable getters, and would still need to accept non-HLSL blocks such as static samplers with in-situ state declarations, or SRG frequencies and the option fallback key.
But the names would be expected to be stable since no flattening or scope mutation would happen. Client-site usage (later in the code) would remain naturally compatible with the declaration.

As per @santorac's proposal:

ShaderResourceGroupSemantic slot1
{
    FrequencyId = 1;
};

[ShaderResourcesGroup(slot1)]
namespace
{
    struct CB
    {
        float4 color;
    };
    ConstantBuffer<CB> uniforms;
}

Using an annotation, AZSLc2 would have to recognize that attribute to register resources instead of the ShaderResourceGroup block of today.

Advantages

  • Opens the possibility of a codebase diet
  • On paper, it renders azsl files syntactically compatible with shader explorers like godbolt or tim jones playground, OpenGPU analyzers, etc.

Disadvantages

  • Like for idea 1 and 1.1, loss of code mutators (matrix qualifiers, --no-ms or --cb-body mode).
  • Necessity of one large sweeping intervention in the current shaders to adapt them, though we can imagine shipping both azslc versions until a potential deprecation at an undefined date.

Prototype

I (@siliconvoodoo) am forking the main repository to try this evolution here: https://github.com/SiliconStudio/o3de-azslc-evo

Findings

I see 3 pathways of implementation to the target:

[diagram: the three implementation pathways]

Further

We can also decide to delete the ShaderResourceGroupSemantic syntax and integrate it into attributes as well:

[[azsl::ResourceGroupSemantic]]
namespace slot1
{
    static const int slot = 1;
    static const int frequencyId = 128;
};

[[azsl::ResourceGroup(slot1)]]
namespace SRG
{
    struct Data { float4 f; };
    ConstantBuffer<Data> glob : register(b0);
}

We'll note that those attributes are still oddly not compatible with DXC. Even with -HV 2021:

error: an attribute list cannot appear here

But it's reasonable as long as AZSLc2 swallows those attributes.

Desirable diet features

seenat

Refer to https://github.com/o3de/o3de-azslc/wiki/Features#seenats
This is a necessity for mathematically infallible symbol renaming and migration (the migration from SRG scopes to the global scope, and of some typealiases/structs from function scopes to the outer scope, which was a bonus of azsl).

Maintaining this is the most costly part because of its dependency on reliable lookup. Lookup depends on semantic contexts and requires an understanding of scopes, type deduction, inheritance, function overloads, and overrides.
The introduction of templates is hindered by the weight of updating all these mechanisms.

impacts:

srg-constant reference mutations

in the original azsl source:

// ...
ShaderResourceGroup S : slot1
{
    float3 sunDir;
    
    float3 Get() { return sunDir; }
};

float4 psmain() : SV_Target0
{
    return float4(S::sunDir, 1);
}

accesses to sunDir get mutated to their actual materialization in a generated constant buffer, as such:

struct S_SRGConstantsStruct
{
    float3 S_sunDir;
};

ConstantBuffer<::S_SRGConstantsStruct> S_SRGConstantBuffer : register(b0, space0);

float3 S_Get()
{
    return ::S_SRGConstantBuffer.S_sunDir ;
} 

float4 psmain() :SV_Target0
{
    return float4 ( ::S_SRGConstantBuffer.S_sunDir , 1 ) ;
}

Finding the points of mutation requires the seenat system.

One way to do away with that problem is to adopt the option strategy, which is to declare a static variable that is fetched from a function call. Refer to the wiki Features paragraph for an illustration.

rootconstant mutations

azsl example source:

rootconstant bool fog;

static const float3 fogClr = float3(0.5, 0.5, 0.5);
float4 psmain(float3 clr : COLOR0, float d : DEPTH) : SV_Target0
{
    return float4(fog ? lerp(clr, fogClr, pow(1.8, d)) : clr, 1);
}

results in mutated references to:

bool GetShaderRootConst_Root_Constants_fog();

static const bool _g_Root_Constants_fog = GetShaderRootConst_Root_Constants_fog();

static const float3 fogClr = float3 ( 0.5 , 0.5 , 0.5 ) ;
float4 psmain( float3 clr :COLOR0,  float d :DEPTH) :SV_Target0
{
    return float4 ( _g_Root_Constants_fog ? lerp ( clr , fogClr , pow ( 1.8 , d ) ) : clr , 1 ) ;
} 

struct Root_Constants
{
    bool fog;
};
ConstantBuffer<::Root_Constants> rootconstantsCB : register(b0, space0);
bool GetShaderRootConst_Root_Constants_fog()
{
    return ::rootconstantsCB.fog;
}

Suggested solution: the variable is mutated at its definition site into a self-initializing static, so we could imagine that the name needn't change. The current renaming looks like a conservative choice, but it doesn't seem necessary; the original symbol name could probably be preserved. That would be consistent with the behavior for options, and would lift us from the need to iterate over the references.
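
A hedged sketch of that suggestion applied to the example above; the getter name is kept from the current emission, and the key change is that the static keeps the original name, so the reference site needs no rewriting:

bool GetShaderRootConst_Root_Constants_fog();

static const bool fog = GetShaderRootConst_Root_Constants_fog(); // original symbol name preserved

static const float3 fogClr = float3(0.5, 0.5, 0.5);
float4 psmain(float3 clr : COLOR0, float d : DEPTH) : SV_Target0
{
    return float4(fog ? lerp(clr, fogClr, pow(1.8, d)) : clr, 1); // reference left untouched
}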

--no-ms, --strip-unused-srg, --cb-body, --bindingdep

Mentioned in later paragraphs.

Packing

The feature that reflects "constantByteOffset", "constantByteSize", "typeDimensions" (see the documentation).
It would be desirable to delete all the alignment computation code that supports the RHI's buffer-as-bytes understanding of where (at what offset) variables actually sit in the CB. This code is heavy, costs many tests, has cost long investigations reverse engineering DXC, and is multiplied by the combination of options for vulkan/dx/glsl rules.

Dependencies: the pack computer relies on the type system, because it accesses the type class (user defined or fundamental, matrix or vector), the typeinfo (array or matrix dimensions), and sizeof. The type system can't work without the symbol lookup system, because types can be combined (UDT members, inheritance...) and typedefed.

Alternative: rely on a sort of reflective DXC API?

Constant folding

Dropping it would save us only a small amount of code, but it is always a nice addition to the diet. Unfortunately, at this point it's important for array dimension reflection of SRG resources, and also for the [[pad_to(N)]] feature, option ranges, the thread count reflector (for metal), and static sampler reflection.

Difficult features

--no-ms

Any route will at least diminish the robustness of the current approach: it is no longer possible to check that X in X.Load() is a symbol referring to a Texture2DMS type, since that relies on the lookup facility and the typeof facility.

Yellow route: totally unsupported. We won't have enough grammar power to work at the AST level anymore.

Alternatives:

  • dxil/spirv evolution for flag-configurable texture resources?
  • a separate, human-assisted tool that works with regexes (or tree-sitter?) to generate the supervariant version after an edit of the main file, if that supervariant is present in the .shader?

--cb-body

This CLI behavior switch activates a mode where constant buffer generation takes a different form, as such:
input:

ShaderResourceGroup SRG : slot1
{
    float4 color;
    struct CB { bool fog; };
    ConstantBuffer<CB> uniforms;
};

float4 MainPS(float2 uv : TEXCOORD0) : SV_Target0
{
    return SRG::color + SRG::uniforms.fog;
}

output:

struct SRG_CB { bool fog; };

ConstantBuffer SRG_CBContainer : register(b0)
{
    CBVArrayView uniforms;
    float4 SRG_color;
};
static const RegularBuffer<::SRG_CB> SRG_uniforms = RegularBuffer<::SRG_CB> (uniforms);

float4 MainPS( float2 uv :TEXCOORD0) :SV_Target0
{
    return ::SRG_color + ::SRG_uniforms[0] . fog ;
}

We note that, again, in MainPS the references to external resources are mutated to different symbols: SRG::color becomes ::SRG_color and, more complicated, the access to uniforms has to be enriched with a subscript access.

Solution: it seems like we could once again go with the option strategy: declare a static accessor variable whose initializer calls a generated getter function that fetches the corresponding member in the generated ConstantBuffer.
As a matter of fact, since these srg-constant variables become immediately visible to the outer scope, they may even be declared as-is, with the same original name. It will cause declaration-vs-access order problems though, since their declaration sites will all be migrated together into one location. We already have this problem for srg-constants; the static variable with an initializer seems like a simple enough counter-strategy for that.

The [0] subscript can maybe be solved in the exact same way: let the references refer to a generated static variable initialized with a fetcher function, whose body contains the [0] subscript. This way we free the reference sites across the program from needing awareness of this specificity, as sketched below.
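
A hedged sketch of what that could look like for the example above; the fetcher and variable names are hypothetical, and the point is only that the [0] subscript lives inside the generated fetcher while reference sites keep a plain symbol:

::SRG_CB SRG_FetchUniforms()                      // hypothetical generated fetcher
{
    return ::SRG_uniforms[0];                     // the [0] subscript is confined here
}
static const ::SRG_CB SRG_uniformsValue = SRG_FetchUniforms();

float4 MainPS(float2 uv : TEXCOORD0) : SV_Target0
{
    return ::SRG_color + SRG_uniformsValue.fog;   // reference site needs no subscript awareness
}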

--bindingdep

This system reflects the "participantsConstants" (in JSON) by "dependentFunctions" (entry points); documentation is in the Features wiki page.
It relies on the reference tracker to iterate over the appearances of external resources throughout the program; it is the same core system as --strip-unused-srgs.
The code for the feature can be found in the repository.

We can either forgo that facility, if we re-evaluate its necessity on sensitive platforms like vulkan or metal, or we will need to find an alternative using DXC's internal API, maybe by analyzing the remaining resources post-optimization. Though I seem to recall optimization was not a factor; on the contrary, we needed to know the variables that should be there by contract, irrespective of potential dead-variable optimizations.
Maybe leverage clang (or the clang inside DXC) if push comes to shove.

--strip-unused-srgs

Reference doc: https://github.com/o3de/o3de-azslc/wiki/Features#strip-unused-srgs
This relies on the homonymous visitor system to visit the seenats of each resource inside an SRG.
It's the same problem as --bindingdep, since it relies on the same system.

Any route will rid us of the ability to iterate over seenats, since the point of this evolution RFC is to remove the complexity involved in the seenat system.

Alternatives: Drop that feature? That seems undesirable in raytracing contexts. Maybe we can hack an artificial resource-toucher to force DXC to not optimize out resources (or propose a flag)? Or, if we do the clang explorer for --bindingdep, it will be factorized for this feature too.

Finalize AZSLc Changes To Support Bindless

Work is being done on a branch to add general support for bindless resources in the Atom renderer (see o3de/o3de#8410). This required some updates to AZSLc to support an unlimited number of unbounded arrays (see #42). Before the O3DE changes can be merged, we need to finish the corresponding AZSLc changes and cut a new package of AZSLc.

  1. As @moudgils continues testing the changes in O3DE, make any additional changes to AZSLc that might be necessary, on the same branch as #42
  2. Once the remaining stability issues are fixed (probably stuff that @moudgils needs to address on the O3DE side), then merge #42 into development.
  3. Cut a new package of AZSLc for all platforms. (see https://github.com/o3de/o3de-azslc/wiki/Releasing-A-New-Version-Of-AZSLc)
  4. Someone (either the owner of this ticket or @moudgils ) will update o3de/o3de#8410 to pull in the new AZSLc package.
  5. @moudgils will merge o3de/o3de#8410 to development and this ticket can be closed.

attribute before SRG gets mixed up inside a comment

source:

ShaderResourceGroupSemantic ExampleBinding 
{
    FrequencyId = 0; 
    ShaderVariantFallback = 128;
};

[[verbatim("#include \"header.azsli\"")]]

ShaderResourceGroup ExampleSRG : ExampleBinding
{
    float4 exampleColor;
};

output:

// HLSL emission by AZSL Compiler 1.8.11 Win64
#line 7 "C:/Users/VIVIEN~1.ODD/AppData/Local/Temp/tmp-24300NAm7hfF3AICj"
#line 15 "C:/Users/VIVIEN~1.ODD/AppData/Local/Temp/tmp-24300NAm7hfF3AICj"
/* Generated code from #include "header.azsli"
 ShaderResourceGroup ExampleSRG*/
struct ExampleSRG_SRGConstantsStruct
{
    float4 ExampleSRG_exampleColor;
    uint4 ExampleSRG_m_SHADER_VARIANT_KEY_NAME_[1];
};

ConstantBuffer<::ExampleSRG_SRGConstantsStruct> ExampleSRG_SRGConstantBuffer : register(b0, space0);

DXC invocation tests lost

I can't find any test that calls the BuildDXC or buildDXCCompute python utilities. Therefore the Windows platform folder must have been lost.

Per-Target Blend States

[Migrated from JIRA ATOM-14657]
Description:

Currently, the blend state configuration in a .shader file applies to all render targets. We need a way to specify blend states on a per-target basis.

Details:

Acceptance Criteria:

Additional Information:
