[Bug] Segmentation fault when properties exceed ~100kb #223
🙈 I made a mistake in moving this issue. I first thought it belonged in the apache/pulsar repository and then transferred it to pulsar-client-cpp. I'm sorry about that. It is probably pulsar-client-cpp related in any case.

Ok, thanks! I also think it has to do with the core C++ implementation. But as a Python user I wasn't sure.

Yeah, it's related to the C++ core, and it's a known issue. It might be easy to fix; I will take some time for it when I'm free.
I just double-checked the issue and it looks like a bug with protobuf:

I tried the Python client on macOS and it does not have this error, but I encountered the same issue on Ubuntu. However, I tried the C++ client 3.5.1 on Ubuntu and this error does not happen:

```cpp
#include <pulsar/Client.h>

using namespace pulsar;

int main() {
    const std::string topic = "test-topic";
    Client client("pulsar://host.docker.internal:6650");
    Producer producer;
    client.createProducer(topic, producer);
    MessageBuilder::StringMap properties;
    for (int i = 0; i < 3000; i++) {
        properties["key" + std::to_string(i)] = "{\"foo\": \"bar\"}";
    }
    auto msg = MessageBuilder().setProperties(properties).setContent("test-message").build();
    producer.send(msg);
    client.close();
}
```

Increasing the […] It might be an issue from the Python client side. So let me move it to […]
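For intuition on why a loop like the one above triggers the problem: a rough size estimate puts the serialized properties for 3000 such entries above the 64KB shared buffer discussed later in this thread. The `kOverheadPerEntry` constant below is an assumption approximating protobuf tag/length framing, not the exact wire format:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Rough estimate of the serialized size of the properties map built in the
// repro above. kOverheadPerEntry approximates protobuf tag/length bytes per
// key-value pair; the exact wire size differs, so treat this as an
// order-of-magnitude check, not the real encoding.
std::size_t estimatePropertiesSize(int numProperties) {
    const std::string value = "{\"foo\": \"bar\"}";  // 14 bytes, as in the repro
    const std::size_t kOverheadPerEntry = 7;         // assumed framing bytes
    std::size_t total = 0;
    for (int i = 0; i < numProperties; i++) {
        total += ("key" + std::to_string(i)).size() + value.size() + kOverheadPerEntry;
    }
    return total;
}
```

With 3000 entries this comes to roughly 80KB, comfortably past a 64KB buffer, which lines up with the ~100kb threshold reported in the issue.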
Oh, I realized where the issue is: there is a bug in the C++ client when batching is disabled. However, the Python client disables batching by default. You can manually enable batching to work around the issue, while I push a fix to the C++ client. I also pushed a PR (#224) to enable batching by default.
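As a sketch of the workaround above, batching can be enabled through `ProducerConfiguration` before creating the producer. This is a configuration fragment, not a runnable test (it needs a live broker); the service URL and topic are placeholders:

```cpp
#include <pulsar/Client.h>

using namespace pulsar;

int main() {
    Client client("pulsar://localhost:6650");  // placeholder service URL

    // Explicitly enable batching so sends take the batched code path,
    // sidestepping the overflow in the non-batched path.
    ProducerConfiguration conf;
    conf.setBatchingEnabled(true);

    Producer producer;
    client.createProducer("test-topic", conf, producer);
    // ... build and send messages as before ...
    client.close();
}
```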
…ze exceeds 64KB (#443)

See apache/pulsar-client-python#223

### Motivation

Currently a shared buffer is used to store serialized message metadata for each send request. However, its capacity is only 64KB; when the metadata size exceeds 64KB, buffer overflow could happen.

### Modifications

When the metadata size is too large, allocate a new buffer instead of using the shared buffer. Add `testLargeProperties` to cover it.
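The fix described in that commit message can be sketched as follows. This is an illustrative model of the "fall back to a fresh allocation when the shared buffer is too small" pattern; all names here are hypothetical, not the actual pulsar-client-cpp code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <memory>
#include <vector>

// Hypothetical sketch of the fix: copy into the fixed shared buffer when the
// metadata fits, otherwise allocate a dedicated buffer instead of overflowing.
constexpr std::size_t kSharedBufferSize = 64 * 1024;

struct SerializedMetadata {
    const char* data;
    std::size_t size;
    bool usedSharedBuffer;
    std::unique_ptr<char[]> owned;  // non-null only when a new buffer was allocated
};

SerializedMetadata serialize(const std::vector<char>& metadata, char* sharedBuffer) {
    SerializedMetadata result{nullptr, metadata.size(), false, nullptr};
    if (metadata.size() <= kSharedBufferSize) {
        // Fast path: reuse the preallocated shared buffer.
        std::memcpy(sharedBuffer, metadata.data(), metadata.size());
        result.data = sharedBuffer;
        result.usedSharedBuffer = true;
    } else {
        // Slow path: the metadata would overflow, so allocate its own buffer.
        result.owned = std::make_unique<char[]>(metadata.size());
        std::memcpy(result.owned.get(), metadata.data(), metadata.size());
        result.data = result.owned.get();
    }
    return result;
}
```

The design point is that the overflow is avoided without giving up the shared-buffer fast path for the common case of small metadata.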
I confirm that by enabling batching, the bug does not appear. Thanks for the workaround suggestion!
(cherry picked from commit 8f269e8)
Hi everyone, I'm running an event-driven app and I use Apache Pulsar as the backbone. I use message properties to exchange metadata between the services.

I noticed that when the properties exceed ~100kb the client gives

```
Segmentation fault (core dumped)
```

(sometimes `corrupted double-linked list` instead). I was able to simulate the behaviour in a notebook:

Unfortunately I don't have details in the stacktrace:

Pulsar client version is 3.5.0.