Comments (4)
> Pytorch/xla is exclusively for TPU

It isn't, we use it for GPUs as well.
from xla.
LMHLO doesn't exist anymore, I believe.

Technically, round-tripping back to MHLO should work. Did you try using the available translate tool to convert the intermediate HLO result back to MHLO?
@GleasonK should be able to provide more details.
Hello!
Frameworks/compilers interested in an MLIR interface for XLA should use StableHLO instead of MHLO, which should be possible as long as the HLO passes/optimizations you rely on are not hardware-specific. As an added benefit, this provides an easy entrypoint from JAX/TF/PT -- see the tutorials for details.
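To make the JAX entrypoint concrete, here is a minimal sketch (assuming a recent JAX release, where `jax.jit(...).lower(...)` returns a `Lowered` object whose `compiler_ir` method accepts `dialect="stablehlo"`):

```python
import jax
import jax.numpy as jnp

def f(x):
    # A trivial computation to lower; any jittable function works.
    return jnp.tanh(x) * 2.0

# Lower the jitted function for a concrete input shape without executing it.
lowered = jax.jit(f).lower(jnp.ones((4,), dtype=jnp.float32))

# compiler_ir(dialect="stablehlo") returns the MLIR module in the StableHLO dialect.
stablehlo_module = lowered.compiler_ir(dialect="stablehlo")
print(stablehlo_module)
```

The printed module is textual StableHLO, which can then be fed into a custom compiler pipeline.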
For exporting HLO to MHLO/StableHLO: PyTorch/XLA's export to StableHLO goes via HLO->MHLO->StableHLO, so it can serve as a good code reference for how to do this programmatically: stablehlo_helper.cc
As for APIs to "access intermediate results", I'm not sure what you have in mind. A custom pipeline can be built which runs the HLO passes of interest and then exports to MHLO/StableHLO (see the APIs above). Alternatively, you could dump the HLO at a specific point and use xla-translate (example) to convert from HLO to MHLO, and mlir-hlo-opt (example) if you want to go on to StableHLO via the available tooling.
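Put together, that tooling route might look like the following sketch. The flag and pass names (`--hlo-text-to-mlir-hlo`, `--hlo-legalize-to-stablehlo`) are assumptions based on the current xla-translate / MLIR-HLO tooling and may differ in your build:

```shell
# 1. Dump HLO from your pipeline (e.g. via XLA_FLAGS=--xla_dump_to=...)
#    to obtain a textual HLO module, here called module.hlo.

# 2. Convert the HLO text to the MHLO dialect (flag name assumed).
xla-translate --hlo-text-to-mlir-hlo module.hlo -o module.mhlo.mlir

# 3. Legalize MHLO to StableHLO (pass name assumed).
mlir-hlo-opt --hlo-legalize-to-stablehlo module.mhlo.mlir -o module.stablehlo.mlir
```

Both binaries are built from the XLA / MLIR-HLO source trees rather than installed from a package manager, so paths will depend on your build setup.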
@GleasonK Thank you! Your answer was indeed insightful. I will proceed with the StableHLO converted by PyTorch/XLA. I do have a few more questions, however:
- PyTorch/XLA is exclusively for TPU, but the hardware I intend to target with my compiler is not a TPU. Will there be any optimization or compatibility issues if I use the StableHLO generated by PyTorch/XLA?
- To my understanding, the operations of MHLO and StableHLO do not correspond on a 1:1 basis. Can we still leverage all the optimizations offered by the XLA compiler if we use StableHLO instead of MHLO?