implement publishing crashid to pub/sub queue
Categories
(Socorro :: Antenna, task, P2)
Tracking
(Not tracked)
People
(Reporter: willkg, Assigned: willkg)
References
Details
Attachments
(2 files)
Currently, Antenna saves crash data to AWS S3, then AWS S3 generates an event that triggers Pigeon to add the crash id to the RabbitMQ socorro.normal processing queue.
That pipeline involves several moving parts. When we originally designed it, Pigeon also throttled incoming crashes and populated the socorro.submitter queue. Pigeon no longer does either of those things.
This bug covers:
- set up local dev environment bits for pubsub
- implementing a pubsub publishing component in Antenna
- wrapping the component in a feature flag, which will default to off for now
- writing tests
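The save-then-publish ordering described above can be sketched roughly as follows. This is an illustrative sketch, not Antenna's actual code: the names `CrashPublishBase`, `InMemoryCrashPublish`, `handle_crash`, and `publish_crash` are all made up for the example.

```python
# Sketch of a crash-publish component: the crash is saved to storage
# first, and only then is the crash id published to a queue for
# processing. All class and function names here are illustrative.

class CrashPublishBase:
    """Interface for publishing a crash id for processing."""

    def publish_crash(self, crash_id):
        raise NotImplementedError


class InMemoryCrashPublish(CrashPublishBase):
    """Test double that records published crash ids in a list."""

    def __init__(self):
        self.published = []

    def publish_crash(self, crash_id):
        self.published.append(crash_id)


def handle_crash(crash_id, crash_data, storage, publisher):
    """Save the crash first, then publish the crash id.

    Mirrors the ordering in the description: storage write happens
    before the processing queue hears about the crash.
    """
    storage[crash_id] = crash_data
    publisher.publish_crash(crash_id)
```

A real Pub/Sub-backed implementation would subclass the same interface and call the Google Cloud Pub/Sub client in `publish_crash`; the in-memory double keeps the sketch runnable and makes tests straightforward.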
Comment 2•6 years ago
I've got this working in a local dev environment. I'm using the Pub/Sub emulator, which works nicely. I wrote a helper script for setting up and manipulating the Pub/Sub emulator environment.
I still need to explore failure scenarios and make sure they're covered appropriately, test it with a real Pub/Sub project/topic, and add unit tests and a systemtest.
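One failure scenario worth covering is a transient publish error. A hand-rolled retry wrapper might look like the sketch below; this is hypothetical (the real Pub/Sub client library has its own retry settings, which a production component would more likely rely on), and `publish_with_retry` is an invented name.

```python
import time


def publish_with_retry(publish_fn, crash_id, max_attempts=3, base_delay=0.0):
    """Retry a flaky publish call with simple exponential backoff.

    Illustrative only: shows the shape of transient-failure handling,
    not Antenna's actual error handling. `publish_fn` is any callable
    that takes a crash id and may raise on transient errors.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return publish_fn(crash_id)
        except Exception:
            if attempt == max_attempts:
                # Out of attempts: let the error propagate so the
                # caller can decide what to do with the crash id.
                raise
            # Back off before the next attempt: base_delay, 2x, 4x, ...
            time.sleep(base_delay * (2 ** (attempt - 1)))
```

The important design question the sketch surfaces: when all attempts fail, something still has to decide whether the crash id is lost or re-queued later.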
Comment 3•6 years ago
Comment 4•6 years ago
We're not doing a feature flag. Instead, we're just defaulting to a NoOpCrashPublish class. When we want to publish to Pub/Sub, we'll switch it.
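The no-op-default approach can be sketched like this. The `NoOpCrashPublish` name comes from the comment above; everything else (`build_publisher`, the registry dict) is invented for illustration, and the real component selection would go through Antenna's configuration machinery.

```python
class NoOpCrashPublish:
    """Default publisher that does nothing.

    Because it satisfies the same interface as a real publisher,
    calling code can invoke publish_crash unconditionally with no
    feature-flag checks scattered around.
    """

    def publish_crash(self, crash_id):
        pass  # intentionally a no-op


def build_publisher(class_name):
    """Pick the publisher class by name, as configuration might.

    Illustrative sketch only; a real setup would load the class from
    config rather than a hardcoded registry.
    """
    registry = {
        "NoOpCrashPublish": NoOpCrashPublish,
        # "PubSubCrashPublish": ...,  # swapped in when ready to publish
    }
    return registry[class_name]()
```

Swapping publishers then becomes a one-line config change rather than a flag check, which is the advantage of defaulting to a no-op class over a feature flag.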
Comment 5•6 years ago
Comment 6•6 years ago
Comment 7•6 years ago
We pushed this to prod just now. Marking as FIXED.