This is a fix for the `BufferizableOpInterface` implementation for
`ml_program.global_store`.
`bufferizesToMemoryRead` currently returns false in the
`GlobalStoreOpInterface` external model, but it should return true:
`ml_program.global_store` must read its tensor operand to know what
value to write to the global.
This manifested as a bug where `one-shot-bufferize` produced MLIR that
copies uninitialized data into the global variable instead of the value
intended to be stored.
For the following MLIR:
```
module {
  ml_program.global private mutable @"state_tensor"(dense<0.0> : tensor<4x75xf32>) : tensor<4x75xf32>
  func.func @main() -> tensor<4x75xf32> {
    %c0 = arith.constant 0 : index
    %cst_val = arith.constant 1.0 : f32
    %initial_state = ml_program.global_load @"state_tensor" : tensor<4x75xf32>
    %val = tensor.extract %initial_state[%c0, %c0] : tensor<4x75xf32>
    %next_val = arith.addf %val, %cst_val : f32
    %updated_tensor = tensor.insert %next_val into %initial_state[%c0, %c0] : tensor<4x75xf32>
    ml_program.global_store @"state_tensor" = %updated_tensor : tensor<4x75xf32>
    return %updated_tensor : tensor<4x75xf32>
  }
}
```
`one-shot-bufferize` produces the following incorrect MLIR:
```
module {
  memref.global "private" @state_tensor : memref<4x75xf32> = dense<0.000000e+00>
  func.func @main() -> tensor<4x75xf32> {
    %c0 = arith.constant 0 : index
    %cst = arith.constant 1.000000e+00 : f32
    %0 = memref.get_global @state_tensor : memref<4x75xf32>
    %1 = memref.load %0[%c0, %c0] : memref<4x75xf32>
    %2 = arith.addf %1, %cst : f32
    %alloc = memref.alloc() {alignment = 64 : i64} : memref<4x75xf32>
    memref.copy %0, %alloc : memref<4x75xf32> to memref<4x75xf32>
    memref.store %2, %alloc[%c0, %c0] : memref<4x75xf32>
    %3 = bufferization.to_tensor %alloc : memref<4x75xf32> to tensor<4x75xf32>
    %alloc_0 = memref.alloc() {alignment = 64 : i64} : memref<4x75xf32>
    %4 = memref.get_global @state_tensor : memref<4x75xf32>
    memref.copy %alloc_0, %4 : memref<4x75xf32> to memref<4x75xf32>
    return %3 : tensor<4x75xf32>
  }
}
```
Note that the `memref.copy` at the end copies the uninitialized
`%alloc_0` buffer into the global variable.
With the change, `one-shot-bufferize` instead produces:
```
module {
  memref.global "private" @state_tensor : memref<4x75xf32> = dense<0.000000e+00>
  func.func @main() -> tensor<4x75xf32> {
    %c0 = arith.constant 0 : index
    %cst = arith.constant 1.000000e+00 : f32
    %0 = memref.get_global @state_tensor : memref<4x75xf32>
    %1 = memref.load %0[%c0, %c0] : memref<4x75xf32>
    %2 = arith.addf %1, %cst : f32
    %alloc = memref.alloc() {alignment = 64 : i64} : memref<4x75xf32>
    memref.copy %0, %alloc : memref<4x75xf32> to memref<4x75xf32>
    memref.store %2, %alloc[%c0, %c0] : memref<4x75xf32>
    %3 = bufferization.to_tensor %alloc : memref<4x75xf32> to tensor<4x75xf32>
    %alloc_0 = memref.alloc() {alignment = 64 : i64} : memref<4x75xf32>
    memref.copy %alloc, %alloc_0 : memref<4x75xf32> to memref<4x75xf32>
    %4 = memref.get_global @state_tensor : memref<4x75xf32>
    memref.copy %alloc_0, %4 : memref<4x75xf32> to memref<4x75xf32>
    return %3 : tensor<4x75xf32>
  }
}
```
The relevant data is now copied into `%alloc_0` before it is stored to
the global.
Co-authored-by: Nathan Malimban <nmalimba@ah-nmalimba-l.dhcp.mathworks.com>