author      Dan Zheng <danielzheng@google.com>   2019-04-03 17:50:41 -0700
committer   Mehdi Amini <joker.eph@gmail.com>    2019-04-03 19:22:14 -0700
commit      0737776200a18d56eaa75c443c58cd05c0e90ce8 (patch)
tree        e2441ff5bc59d370cffc59a2eacf71bbd9196e21 /mlir/README.md
parent      7ce55ff7c2ed743c7396e7f4681f11bd4feeb9ac (diff)
download    llvm-0737776200a18d56eaa75c443c58cd05c0e90ce8.tar.gz
[README] Minor edits.
Minor stylistic edits. Use standard "TensorFlow" spelling.

--

PiperOrigin-RevId: 241842272
Diffstat (limited to 'mlir/README.md')
-rw-r--r--  mlir/README.md  44
1 file changed, 23 insertions, 21 deletions
diff --git a/mlir/README.md b/mlir/README.md
index 00ff2d2705da..98c2eb8a1620 100644
--- a/mlir/README.md
+++ b/mlir/README.md
@@ -7,23 +7,24 @@ include the application of HPC techniques, along with integration of search
algorithms like reinforcement learning. This project aims to reduce the cost to
bring up new hardware, and improve usability for existing TensorFlow users.
-Note that this repository contains the core of the MLIR framework, the
-Tensorflow compilers we are building on top of MLIR will be part of the
-main Tensorflow repository soon.
+Note that this repository contains the core of the MLIR framework. The
+TensorFlow compilers we are building on top of MLIR will be part of the
+main TensorFlow repository soon.
## More resources
For more information on MLIR, please see:
* [The MLIR draft specification](g3doc/LangRef.md), which describes the IR
- itself,
+ itself.
* [The MLIR rationale document](g3doc/Rationale.md), covering motivation
- behind some decisions,
-* previous external [talks](#talks),
+ behind some decisions.
+* Previous external [talks](#talks).
-or join the [MLIR mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/mlir).
-Please be mindful of the [TensorFlow Code of Conduct](https://github.com/tensorflow/tensorflow/blob/master/CODE_OF_CONDUCT.md)
-that pledges to foster an open and welcoming environment.
+Join the [MLIR mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/mlir)
+to hear about announcements and discussions.
+Please be mindful of the [TensorFlow Code of Conduct](https://github.com/tensorflow/tensorflow/blob/master/CODE_OF_CONDUCT.md),
+which pledges to foster an open and welcoming environment.
## What is MLIR for?
@@ -46,7 +47,7 @@ requirements in a unified infrastructure. For example, this includes:
architectures.
* Ability to represent target-specific operations, e.g. the MXU on TPUs.
-MLIR is a common IR which also supports hardware specific operations. Thus,
+MLIR is a common IR that also supports hardware specific operations. Thus,
any investment into the infrastructure surrounding MLIR (e.g. the compiler
passes that work on it) should yield good returns; many targets can use that
infrastructure and will benefit from it.
@@ -55,16 +56,16 @@ MLIR is a powerful representation, but it also has non-goals. We do not try to
support low level machine code generation algorithms (like register allocation
and instruction scheduling). They are a better fit for lower level optimizers
(such as LLVM). Also, we do not intend MLIR to be a source language that
-end-users would themselves write kernels in (analogous to CUDA C++). While we'd
-love to see a kernel language happen someday, that will be an independent
+end-users would themselves write kernels in (analogous to CUDA C++). While we
+would love to see a kernel language happen someday, that will be an independent
project that compiles down to MLIR.
## Compiler Infrastructure {#compiler-infrastructure}
-We benefitted from the experience gained building HLO, LLVM and SIL when
-building MLIR. We will directly adopt existing best practices, e.g. writing and
-maintaining an IR spec, building an IR verifier, providing the ability to dump
-and parse MLIR files to text, writing extensive unit tests with the
+We benefitted from experience gained from building other IRs (HLO, LLVM and SIL)
+when building MLIR. We will directly adopt existing best practices, e.g. writing
+and maintaining an IR spec, building an IR verifier, providing the ability to
+dump and parse MLIR files to text, writing extensive unit tests with the
[FileCheck](https://llvm.org/docs/CommandGuide/FileCheck.html) tool, and
building the infrastructure as a set of modular libraries that can be combined
in new ways. We plan to use the infrastructure developed by the XLA team for
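
The hunk above cites the [FileCheck](https://llvm.org/docs/CommandGuide/FileCheck.html) tool for writing extensive unit tests. As a minimal sketch of how such a test is usually driven, a test file carries both the input IR and `CHECK` comments describing the expected output, and the test runner pipes the compiler's output into FileCheck. The tool name `mlir-opt`, the `-canonicalize` flag, and the file path below are illustrative assumptions, not text from this commit.

```
# Hypothetical FileCheck-style test invocation (tool name, flag, and path are assumptions).
# The same file provides the input IR and, via its CHECK comments, the expected output.
mlir-opt test/example.mlir -canonicalize | FileCheck test/example.mlir
```

In an LLVM-style lit test suite, this command line would normally live inside the test file itself as a `// RUN:` comment.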
@@ -74,11 +75,11 @@ Other lessons have been incorporated and integrated into the design in subtle
ways. For example, LLVM has non-obvious design mistakes that prevent a
multithreaded compiler from working on multiple functions in an LLVM module at
the same time. MLIR solves these problems by having per-function constant pools
-and by making references explicit with function_ref.
+and by making references explicit with `function_ref`.
# Getting started with MLIR
-MLIR has been tested on Linux and MacOS, with a recent clang or with gcc 7.
+MLIR has been tested on Linux and macOS, with recent versions of clang and with gcc 7.
```
git clone https://github.com/llvm/llvm-project.git
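
The hunk above shows only the first step of the getting-started instructions (the clone); the remaining build steps fall outside this diff's context. As a hedged sketch of the LLVM-style configure and build that typically follows, the CMake flags and the `check-mlir` target below are assumptions drawn from common LLVM practice, not instructions taken from this commit:

```
# Sketch of a typical CMake/Ninja build of MLIR inside llvm-project (flags are assumptions).
mkdir llvm-project/build && cd llvm-project/build
cmake -G Ninja ../llvm -DLLVM_ENABLE_PROJECTS=mlir -DLLVM_TARGETS_TO_BUILD=host
cmake --build . --target check-mlir   # builds MLIR and runs its test suite
```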
@@ -96,7 +97,8 @@ building a compiler for a Toy language.
# MLIR talks {#talks}
-* "[MLIR Primer: A Compiler Infrastructure for the End of Moore’s Law](https://drive.google.com/file/d/1hUeAJXcAXwz82RXA5VtO5ZoH8cVQhrOK/view?usp=sharing)",
- Chris Lattner & Jacques Pienaar, Google at
+* "[MLIR Primer: A Compiler Infrastructure for the End of Moore’s
+ Law](https://drive.google.com/file/d/1hUeAJXcAXwz82RXA5VtO5ZoH8cVQhrOK/view?usp=sharing)"
+ * Chris Lattner & Jacques Pienaar, Google at
[Compilers for Machine Learning](https://www.c4ml.org/) workshop at
- [CGO 2019](http://cgo.org/cgo2019/).
+ [CGO 2019](http://cgo.org/cgo2019/)