Remove transformPreLowering() (pytorch#2365)
jfix71 authored Feb 9, 2019
1 parent 9f8b68d commit 8929a8c
Showing 4 changed files with 20 additions and 41 deletions.
38 changes: 14 additions & 24 deletions docs/Backends.md
@@ -44,19 +44,6 @@ are two pure virtual functions all backends must implement:

Additionally, there are virtual functions that backends can override:

-- `virtual bool transformPreLowering(Function *F, CompilationMode mode) const;`
-
-  - Allow the backend to transform the `Function *F` before [node
-    lowering](https://github.com/pytorch/glow/blob/master/docs/IR.md#node-lowering)
-    occurs, given some `CompilationMode mode`. For example, a backend may prefer
-    to replace a ConvolutionNode followed by a ReluNode with a
-    [backend-specific](https://github.com/pytorch/glow/blob/master/docs/NewBackendSpecificNode.md)
-    fused ConvReluNode. This should be done prior to node lowering, as otherwise
-    the ReluNode will already be lowered to a MaxNode and may be transformed by
-    other optimization passes. Returns true if the Function was modified at
-    all. See [below](#backend-specific-nodes-and-instructions-transformations)
-    for more information.
-
- `virtual bool transformPostLowering(Function *F, CompilationMode mode) const;`

- Allow the backend to transform the `Function *F` after [node
@@ -73,9 +60,13 @@ Additionally, there are virtual functions that backends can override:
- `virtual bool shouldLower(const Node *N) const;`

- Allow the backend to prevent lowering for some `Node *N`. For example, if a
-    backend supports executing a FullyConnected operator, it would want to
-    prevent lowering for it and provide a backend-specific Instruction for the
-    FullyConnectedNode to be
+    backend wants to fuse a `ReluNode` into a `ConvNode` to create some
+    backend-specific node `ConvReluNode`, then it may prevent lowering for
+    `ReluNode`. Then during `transformPostLowering()` it can look for patterns
+    of `ConvNode` followed by `ReluNode` to swap out for `ConvReluNode`. Another
+    example is if a backend supports executing a FullyConnected operator, it
+    would want to prevent lowering for it and provide a backend-specific
+    Instruction for the FullyConnectedNode to be
[IRGen'd](https://github.com/pytorch/glow/blob/master/docs/IR.md#low-level-ir)
into. Note that IRGen for a Node can be specified via the
[ClassGen](https://github.com/pytorch/glow/blob/master/docs/ClassGen.md)
@@ -98,7 +89,7 @@ Additionally, there are virtual functions that backends can override:

- `virtual bool generateInst(Node *N, IRGenVisitor &irgen) const;`

-    - Allow the backend to custom lower from Node to Instruction IR.
+  - Allow the backend to custom lower from Node to Instruction IR.
Returns true if lowering is performed, false otherwise.

### `CompiledFunction` Abstract Class
@@ -113,8 +104,8 @@ responsible for copying inputs to the device from all input
[Placeholders](https://github.com/pytorch/glow/blob/master/docs/IR.md#placeholders),
executing the function, and copying outputs back from the device to output
Placeholders. The `CompiledFunction` contains a [RuntimeBundle](#runtimebundle-helper-class)
-which contains the symbol information and mappings of inputs and outputs. Thus after the
-function returns, all Placeholders for the outputs of the function should have had
+which contains the symbol information and mappings of inputs and outputs. Thus after the
+function returns, all Placeholders for the outputs of the function should have had
their backing tensor updated.

### `RuntimeBundle` Helper Class
@@ -138,11 +129,10 @@ entire Splat tensor.
### Backend-Specific Transformation

Backends have the opportunity to perform their own analysis and transformations
-before or after lowering depending on their requirements. This is exposed via
-`transformPreLowering()` and `transformPostLowering()` hooks, during which a
-backend can transform the graph however it desires. For example, the backend
-could use `transformPostLowering()` to search the graph looking for the above
-`CPUMaxSplat` pattern.
+after lowering. This is exposed via the `transformPostLowering()` hook, during
+which a backend can transform the graph however it desires. For example, the
+backend could use `transformPostLowering()` to search the graph looking for the
+above `CPUMaxSplat` pattern.

#### Backend-Specific Nodes and Instructions

4 changes: 3 additions & 1 deletion docs/NewBackendSpecificNode.md
@@ -33,7 +33,9 @@ Here are mainly three steps to add a new backend-specific node in Glow:

1. Add a backend-specific Node `FusedAB` to `XSpecificNodes.h`, and a corresponding backend-specific Instruction `FusedAB` to `XSpecificInstrs.h`. Note that the `FusedABInst` needs to be marked with `autoIRGen()` so that the node is automatically IRGen'd to the instruction, as we currently do not support backend-specific IRGen.

-2. Add logic to `XBackend::transformPreLowering()` or `XBackend::transformPostLowering()` (or both) depending on when you want to do the transformation. This logic would look for the pattern of nodes you want to fuse (`A` with a single use by `B`), and replaces all uses of the result of B with the new backend-specific `FusedABNode`.
+2. Add logic to `XBackend::transformPostLowering()` that looks for the pattern
+of nodes you want to fuse (`A` with a single use by `B`), and replaces all uses
+of the result of B with the new backend-specific `FusedABNode`.

3. Have your backend `X` implement `FusedABInst`. For example, for the OpenCL backend, this would mean adding a case to enqueue a kernel for the `FusedABInst` to `OpenCLFunction::execute()`, and then adding the corresponding kernel in `kernels.cl`.

12 changes: 3 additions & 9 deletions include/glow/Backends/Backend.h
@@ -57,20 +57,14 @@ class Backend {
GLOW_UNREACHABLE("Saving a bundle is not supported by the backend");
}

-  /// @name Backend transform methods for different phases.
-  /// These methods are called by the compiler before code generation and gives
-  /// the backend an opportunity to transform the graph before IRGen. The
-  /// backend may insert target specific nodes. The backend is responsible for
+  /// Used by the compiler during graph optimization and before code generation,
+  /// giving the backend an opportunity to transform the graph before IRGen. The
+  /// backend may insert backend-specific nodes. The backend is responsible for
/// cleaning up after itself.
/// \returns True if the graph was modified.
-  ///@{
-  virtual bool transformPreLowering(Function *F, CompilationMode mode) const {
-    return false;
-  }
virtual bool transformPostLowering(Function *F, CompilationMode mode) const {
return false;
}
-  /// @}

/// \returns true if backend supports given kind of operation with
/// the given \p elementTy element type.
7 changes: 0 additions & 7 deletions lib/Backends/Backend.cpp
@@ -27,13 +27,6 @@ void Backend::optimizeFunction(CompilationMode mode, Function *F) {
// Optimize the graph.
::glow::optimize(F, mode);

-  // Allow the backend to transform the graph prior to lowering.
-  if (transformPreLowering(F, mode)) {
-    // Optimize the graph again after the backend transformation.
-    // In particular, DCE is very likely to be useful.
-    ::glow::optimize(F, mode);
-  }
-
// Lower the graph into a sequence of low-level linear algebra operations.
::glow::lower(F, *this);

