Add QNN EP HTP shared memory allocator #23136
base: main
Conversation
// - QNN context handle is still valid. This should be true as long as QNN contexts are not freed from
// anywhere other than the destructor.
This should be true as long as QNN contexts are not freed from anywhere other than the destructor.
It seems kind of brittle to depend on this.
Wouldn't we catch it during development if someone changed the code to free the context somewhere else?
The concern is a race between this clean-up function locking weak_context_mem_handle_manager (thus keeping it alive) and the QNN context handle getting freed.
I'm thinking it may be possible to manage the QNN context handle (as well as the context mem handles) in some object and have a weak_ptr to that instead.
@@ -1098,6 +1099,38 @@ TEST_F(QnnHTPBackendTests, EPOffloadsGraphIOQuantDequant) {
  }
}

TEST_F(QnnHTPBackendTests, UseHtpSharedMemoryAllocatorForInputs) {
#if !defined(__ANDROID__) && !defined(_WIN32)
The QC device for Windows is Arm64-based, so you can check defined(__aarch64__) || defined(_M_ARM64).
This code is within an ifdef that checks for those macros:
#if defined(__aarch64__) || defined(_M_ARM64) || defined(__linux__)
@@ -1098,6 +1099,38 @@ TEST_F(QnnHTPBackendTests, EPOffloadsGraphIOQuantDequant) {
  }
}

TEST_F(QnnHTPBackendTests, UseHtpSharedMemoryAllocatorForInputs) {
We should also have some code to demonstrate how this feature gets used from user code.
Here are some IOBinding examples for other EPs:
#if defined(USE_CUDA) || defined(USE_TENSORRT)
struct AllocationRecord {
  SharedMemoryInfo shared_memory_info;
  InlinedVector<AllocationCleanUpFn, 1> clean_up_fns;
Do we expect more than one cleanup func?
It's not unexpected. E.g., if the same shared memory is used from more than one QNN context, there will be a separate cleanup function per QNN context.
marker.fill('\0');
allocator_ptr = nullptr;
Should we limit doing the fill to a debug build? Not sure how many allocations QNN makes and whether there's any meaningful perf cost.
I'm a little hesitant to remove it as it did catch some issues during my testing. Maybe we can do that later if it is measured to have a significant performance cost? It is only overwriting the 8 marker bytes.
namespace {

struct AllocationHeader {
Would be great to add a comment describing the overall setup and how it uses this header.
                               htp_arch,
                               soc_model,
                               enable_htp_weight_sharing);
static const std::string QNN_HTP_SHARED_MEMORY_ALLOCATOR_ENABLED = "enable_htp_shared_memory_allocator";
Should this be more user visible?
It's documented in onnxruntime_c_api.h. I can also document it in the gh-pages branch after this PR.
Note: moved the SharedContext class from qnn_execution_provider.h to its own file.
// Note: creation should be done via Create()
QnnBackendManager(const QnnBackendManagerConfig& config, PrivateConstructorTag)
    : backend_path_(config.backend_path),
Should this be private if it's not meant to be called directly?
Ideally it would be private, but then std::make_shared wouldn't be able to access it.
auto backend_manager = weak_backend_manager.lock();
if (!backend_manager) {
  return;
}

auto context_mem_handle_manager = weak_context_mem_handle_manager.lock();
if (!context_mem_handle_manager) {
  return;
}
Should we log something if either of these are false or is that expected?
The weak_ptrs wouldn't be able to be locked in the case where the backend manager (i.e., the QNN EP) is destroyed before the allocation is freed. Currently it should be fine for the allocator to outlive the EP, so this case is not too unexpected.
Status QnnContextMemHandleManager::GetOrRegister(void* shared_memory_address, const Qnn_Tensor_t& qnn_tensor,
                                                 Qnn_MemHandle_t& qnn_mem_handle, bool& did_register) {
  const auto qnn_tensor_rank = GetQnnTensorRank(qnn_tensor);
  auto* const qnn_tensor_dims = GetQnnTensorDims(qnn_tensor);
Do all QNN tensors have fixed shapes that are guaranteed to be known?
I'm not certain. @HectorSVC or @adrianlizarraga can you comment on this?
Qnn_MemDescriptor_t mem_descriptor{};
mem_descriptor.memShape.dimSize = qnn_tensor_dims;
mem_descriptor.memShape.numDim = qnn_tensor_rank;
mem_descriptor.memShape.shapeConfig = nullptr;
Out of interest when might shapeConfig be used and what for?
@@ -63,6 +65,12 @@ size_t GetElementSizeByType(ONNXTensorElementDataType elem_type) {
  return pos->second;
}

size_t GetQnnTensorDataSize(gsl::span<const uint32_t> shape, Qnn_DataType_t element_type) {
  ORT_ENFORCE(!shape.empty(), "Empty shape not allowed.");  // TODO can we just treat empty shape as a scalar?
Can we treat it as a scalar (IIRC we do that in the CoreML EP) or could/should some other place make that adjustment?
Could we potentially get here from a tensor that has an unknown shape (e.g. downstream of a dynamic reshape)? Not sure if those get rejected earlier on in the QNN EP processing.
Description

Adds QNN EP HTP shared memory allocator.

The HTP shared memory allocator (`HtpSharedMemoryAllocator`) calls the rpcmem shared library (libcdsprpc.so/dll) to allocate and free memory that can be shared between HTP and CPU.

The allocator can be enabled by setting the QNN EP option `enable_htp_shared_memory_allocator` to `1`. `QNNExecutionProvider::CreatePreferredAllocators()` will then return an instance of `HtpSharedMemoryAllocator`.

For each QNN context, we also need to register and unregister memory handles in order to use the HTP shared memory. This memory handle management is added to `QnnBackendManager`, which also manages the QNN context handles.

For more information about using HTP shared memory with QNN, see: https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/htp_shared_buffer_tutorial.html#shared-buffer-tutorial
Limitations:
Motivation and Context
Improve performance by using HTP shared memory to avoid overhead from copying data between CPU and NPU.