llvm-project/llvm/utils/TableGen/DecoderTableEmitter.h
Sergei Barannikov 60bdf09654 [TableGen][DecoderEmitter] Rework table construction/emission (#155889)
### Current state

We have the FilterChooser class, which can be thought of as a **tree of
encodings**. Tree nodes are instances of FilterChooser itself and come
in two types:

* A node containing a single encoding that has *constant* bits in the
specified bit range, a.k.a. a singleton node.
* A node containing only child nodes, where each child represents a set
of encodings that have the same *constant* bits in the specified bit
range.

Either of these nodes can have an additional child, which represents a
set of encodings that have some *unknown* bits in the same bit range.
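The two node types, plus the optional "unknown bits" child, can be sketched as a small C++ data structure. This is a hypothetical simplification; names such as `EncodingNode` and `VariableChild` are illustrative and do not match the real FilterChooser internals:

```cpp
#include <cstdint>
#include <map>
#include <memory>

struct EncodingNode {
  // Singleton node: index of the single encoding it decodes; -1 for
  // interior (filter) nodes.
  int EncodingIdx = -1;
  // Filter node: children keyed by the constant value of the filtered
  // bit range.
  std::map<uint64_t, std::unique_ptr<EncodingNode>> Children;
  // Optional extra child for encodings with *unknown* bits in the same
  // bit range.
  std::unique_ptr<EncodingNode> VariableChild;
};

// Build a tiny tree: a filter node with a singleton child for the
// constant value 0b101, plus a variable child.
EncodingNode buildSample() {
  EncodingNode Root;
  auto Leaf = std::make_unique<EncodingNode>();
  Leaf->EncodingIdx = 7;
  Root.Children[0b101] = std::move(Leaf);
  Root.VariableChild = std::make_unique<EncodingNode>();
  return Root;
}
```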

As can be seen, the **data structure is very high level**.

The encoding tree represented by FilterChooser is then converted into a
finite-state machine (FSM), represented as a **byte array**. The
translation is straightforward: for each node of the tree we emit a
sequence of opcodes that check encoding bits and predicates for each
encoding. For a singleton node we also emit a terminal "decode" opcode.

The translation is done in one go, and this has negative consequences:

* We miss optimization opportunities.
* We have to use "fixups" when encoding transitions in the FSM since we
don't know the size of the data we want to jump over in advance. We have
to emit the data first and then fix up the location of the jump. This
means the fixup size has to be large enough to encode the longest jump,
so **most of the transitions are encoded inefficiently**.
* Finally, when converting the FSM into human-readable form, we have to
**decode the byte array we've just emitted**. This is also done in one
go, so we **can't do any pretty printing**.

### This PR

We introduce an intermediary data structure, the decoder tree, which can
be thought of as the **AST of the decoder program**.
This data structure is **low level** and as such allows for optimization
and analysis.
It resolves all the issues listed above. We can now:
* Emit more optimal opcode sequences.
* Compute the size of the data to be emitted in advance, avoiding
fixups.
* Do pretty printing.
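Knowing sizes in advance lets transitions be emitted as variable-width ULEB128 values, so a short jump costs one byte instead of the worst-case width. A minimal sketch, with illustrative helper names rather than the emitter's actual API:

```cpp
#include <cstdint>
#include <vector>

// Number of bytes a value occupies in ULEB128 form. With sizes computed
// up front, this can be queried before anything is emitted.
unsigned ulebSize(uint64_t Val) {
  unsigned Size = 0;
  do {
    Val >>= 7;
    ++Size;
  } while (Val != 0);
  return Size;
}

// Emit a value in ULEB128: 7 payload bits per byte, high bit set on all
// bytes except the last.
void emitULEB128(std::vector<uint8_t> &Out, uint64_t Val) {
  do {
    uint8_t Byte = Val & 0x7F;
    Val >>= 7;
    if (Val != 0)
      Byte |= 0x80;
    Out.push_back(Byte);
  } while (Val != 0);
}
```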

Serialization is done by a new class, DecoderTableEmitter, which
converts the AST into an FSM in **textual form**, streamed directly into
the output file.

### Results
* The new approach immediately resulted in 12% total table size savings
across all in-tree targets, without implementing any optimizations on
the AST. Many tables see a ~20% size reduction.
* The generated file is much more readable.
* The implementation is arguably simpler and more straightforward (the
diff is only +150~200 lines, which feels rather small for the benefits
the change gives).
2025-09-20 01:58:53 +00:00


//===----------------------------------------------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef LLVM_UTILS_TABLEGEN_DECODERTABLEEMITTER_H
#define LLVM_UTILS_TABLEGEN_DECODERTABLEEMITTER_H

#include "DecoderTree.h"
#include "llvm/Support/FormattedStream.h"

namespace llvm {

struct DecoderTableInfo {
  bool HasCheckPredicate = false;
  bool HasSoftFail = false;
};

class DecoderTableEmitter {
  DecoderTableInfo &TableInfo;
  formatted_raw_ostream OS;

  /// The number of positions occupied by the index in the output. Used to
  /// right-align indices and left-align the text that follows them.
  unsigned IndexWidth;

  /// The current position in the output stream. After the table is emitted,
  /// this is its size.
  unsigned CurrentIndex;

  /// The index of the first byte of the table row. Used as a label in the
  /// comment following the row.
  unsigned LineStartIndex;

public:
  DecoderTableEmitter(DecoderTableInfo &TableInfo, raw_ostream &OS)
      : TableInfo(TableInfo), OS(OS) {}

  void emitTable(StringRef TableName, unsigned BitWidth,
                 const DecoderTreeNode *Root);

private:
  unsigned computeNodeSize(const DecoderTreeNode *Node) const;
  unsigned computeTableSize(const DecoderTreeNode *Root,
                            unsigned BitWidth) const;

  void emitStartLine();
  void emitOpcode(StringRef Name);
  void emitByte(uint8_t Val);
  void emitUInt8(unsigned Val);
  void emitULEB128(uint64_t Val);
  raw_ostream &emitComment(indent Indent);

  void emitCheckAnyNode(const CheckAnyNode *N, indent Indent);
  void emitCheckAllNode(const CheckAllNode *N, indent Indent);
  void emitSwitchFieldNode(const SwitchFieldNode *N, indent Indent);
  void emitCheckFieldNode(const CheckFieldNode *N, indent Indent);
  void emitCheckPredicateNode(const CheckPredicateNode *N, indent Indent);
  void emitSoftFailNode(const SoftFailNode *N, indent Indent);
  void emitDecodeNode(const DecodeNode *N, indent Indent);
  void emitNode(const DecoderTreeNode *N, indent Indent);
};

} // namespace llvm

#endif // LLVM_UTILS_TABLEGEN_DECODERTABLEEMITTER_H