path: root/kafka/consumer/fetcher.py
Commit log, most recent first. Each entry: message (#PR) [Author, date, files changed, lines -removed/+added]
* Only try to update sensors fetch lag if the unpacked list contains elements (#2158) [Keith So, 2020-11-05, 1 file, -2/+3]
  Previously, if the `unpacked` list was empty, the call to `unpacked[-1]` would throw an `IndexError: list index out of range`.
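The bug described above is the classic empty-sequence indexing trap. A minimal sketch of the guard (the `fetch_lag` helper and plain-integer offsets are illustrative, not the fetcher's actual API):

```python
def fetch_lag(unpacked, highwater):
    """Return consumer lag for the last unpacked record, or None.

    Indexing unpacked[-1] on an empty list raises
    IndexError: list index out of range, so check emptiness first.
    """
    if not unpacked:
        return None
    return highwater - unpacked[-1]
```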
* Fix typo (#2096) [KimDongMin, 2020-09-07, 1 file, -1/+1]
* Set length of header value to 0 if None [kvfi, 2020-03-02, 1 file, -1/+3]
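Kafka record headers allow a null value, so computing a size with `len()` directly can fail. A hedged sketch of the fix pattern (the helper name is illustrative):

```python
def header_value_length(value):
    # A header value may legitimately be None (null); len(None) raises
    # TypeError, so report a length of 0 in that case.
    return len(value) if value is not None else 0
```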
* Optionally return OffsetAndMetadata from consumer.committed(tp) (#1979) [Dana Powers, 2019-12-29, 1 file, -1/+1]
* Fix typos [Carson Ip, 2019-11-08, 1 file, -2/+2]
* Follow up to PR 1782 -- fix tests (#1914) [Dana Powers, 2019-09-30, 1 file, -1/+2]
* Issue #1780: Consumer hangs indefinitely in fetcher._retrieve_offsets() due to topic deletion while rebalancing (#1782) [Commander Dishwasher, 2019-09-30, 1 file, -7/+21]
* Do not use wakeup when sending fetch requests from consumer (#1911) [Dana Powers, 2019-09-29, 1 file, -1/+1]
* Wrap consumer.poll() for KafkaConsumer iteration (#1902) [Dana Powers, 2019-09-28, 1 file, -4/+6]
* 1701: use last offset from fetch v4 if available (#1724) [Keith So, 2019-03-13, 1 file, -0/+19]
* Remove unused `skip_double_compressed_messages` [Jeff Widman, 2019-01-13, 1 file, -8/+0]
  This `skip_double_compressed_messages` flag was added in https://github.com/dpkp/kafka-python/pull/755 in order to fix https://github.com/dpkp/kafka-python/issues/718. However, grep'ing through the code, it looks like it is no longer used anywhere and doesn't do anything, so it is removed.
* Be explicit with tuples for %s formatting [Jeff Widman, 2018-11-18, 1 file, -7/+7]
  Fix #1633.
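The issue behind this entry: with %-formatting, a bare tuple on the right-hand side is unpacked as the full argument list, so formatting a value that happens to be a tuple (such as a TopicPartition namedtuple) needs an explicit one-element tuple. An illustrative sketch:

```python
def describe(tp):
    # "partition: %s" % tp would raise TypeError
    # ("not all arguments converted during string formatting")
    # whenever tp is a tuple; wrapping it keeps it a single argument.
    return "partition: %s" % (tp,)
```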
* Expose record headers in ConsumerRecords [Heikki Nousiainen, 2018-09-27, 1 file, -3/+5]
* Stop using deprecated log.warn() [Jeff Widman, 2018-05-10, 1 file, -3/+3]
* Use local copies in Fetcher._fetchable_partitions to avoid mutation errors (#1400) [Dana Powers, 2018-03-07, 1 file, -3/+6]
* Fix KafkaConsumer compacted offset handling (#1397) [Dana Powers, 2018-02-26, 1 file, -8/+9]
* Avoid consuming duplicate compressed messages from mid-batch (#1367) [Dana Powers, 2018-02-05, 1 file, -2/+11]
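Compressed message sets are decompressed as a whole, so a fetch offset pointing into the middle of a batch returns records that were already delivered; the fix is to drop anything below the requested offset. A simplified sketch with `(offset, value)` pairs standing in for real records:

```python
def skip_consumed(records, fetch_offset):
    # The broker returns the entire compressed batch even when
    # fetch_offset falls mid-batch; filter out records the consumer
    # has already seen so they are not delivered twice.
    return [r for r in records if r[0] >= fetch_offset]
```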
* KAFKA-3949: Avoid race condition when subscription changes during rebalance (#1364) [Dana Powers, 2018-02-02, 1 file, -6/+0]
* Avoid KeyError when filtering fetchable partitions (#1344) [Dana Powers, 2018-01-12, 1 file, -2/+2]
* KAFKA-3888: Use background thread to process consumer heartbeats (#1266) [Dana Powers, 2017-12-21, 1 file, -0/+3]
* Minor Exception cleanup [Jeff Widman, 2017-12-12, 1 file, -2/+2]
* Revert ffc7caef13a120f69788bcdd43ffa01468f575f9 / PR #1239 [Dana Powers, 2017-11-16, 1 file, -7/+2]
  The change caused a regression documented in issue #1290.
* Use correct casing for MB [Jeff Widman, 2017-11-15, 1 file, -1/+1]
  These values refer to Megabytes, not Megabits. Fix #1295.
* Add DefaultRecordBatch implementation, aka V2 message format parser/builder (#1185) [Taras Voinarovskyi, 2017-10-25, 1 file, -7/+23]
  Added bytecode optimization for varint and append/read_msg functions, mostly based on avoiding LOAD_GLOBAL calls.
* Merge pull request #1252 from dpkp/legacy_records_refactor [Taras Voinarovskyi, 2017-10-14, 1 file, -79/+27]
  Refactor MessageSet and Message into LegacyRecordBatch. Merged commits:
  * Fix tests and rebase problems [Taras, 2017-10-12, 1 file, -2/+1]
  * Remove the check for timestamp None in producer, as it's done in RecordBatch anyway; minor abc doc fixes [Taras, 2017-10-12, 1 file, -6/+0]
  * Refactor MessageSet and Message into LegacyRecordBatch to later support v2 message format [Taras, 2017-10-11, 1 file, -72/+27]
* KAFKA-4034: Avoid unnecessary consumer coordinator lookup (#1254) [Dana Powers, 2017-10-11, 1 file, -3/+12]
* More tests (branch KAFKA_3977_defer_fetch_parsing) [Dana Powers, 2017-10-08, 1 file, -0/+5]
* Avoid sys.maxint; not supported on py3 [Dana Powers, 2017-10-08, 1 file, -2/+4]
* KAFKA-3977: Defer fetch parsing for space efficiency, and to raise exceptions to user [Dana Powers, 2017-10-07, 1 file, -261/+230]
* Fix Fetcher.PartitionRecords to handle fetch_offset in the middle of a compressed messageset (#1239) [Dana Powers, 2017-10-05, 1 file, -2/+7]
* Fix grammar [Jeff Widman, 2017-10-04, 1 file, -1/+1]
* Drop unused sleep kwarg to poll (#1177) [Dana Powers, 2017-08-15, 1 file, -2/+1]
* Added unit tests for fetcher's `_reset_offset` and related functions [Taras Voinarovskiy, 2017-08-07, 1 file, -5/+16]
* Added `beginning_offsets` and `end_offsets` APIs and fixed @jeffwidman review issues [Taras Voinarovskiy, 2017-08-07, 1 file, -4/+19]
* Fix test for older brokers [Taras Voinarovskiy, 2017-08-07, 1 file, -1/+1]
* Changed retrieve_offsets to allow fetching multiple offsets at once [Taras Voinarovskiy, 2017-08-07, 1 file, -95/+130]
* Added basic support for offsets_for_times API; still needs to group by nodes and send in parallel [Taras Voinarovskiy, 2017-08-07, 1 file, -18/+76]
* Fixed issue #1033: raise AssertionError when decompression unsupported (#1159) [webber, 2017-08-05, 1 file, -0/+7]
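The fix above fails fast with an AssertionError when the library needed to decompress a batch is missing, instead of a more confusing downstream error. A hedged sketch (the `available` set and helper name are illustrative; the real code probes for the installed codec libraries):

```python
def check_codec(codec, available):
    # Raise a clear AssertionError up front when the required
    # decompression library is not installed, rather than letting
    # the decode path fail with an opaque error later.
    assert codec in available, (
        "%s decompression unsupported: codec library not installed" % codec)
```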
* Added `max_bytes` option and FetchRequest_v3 usage (#962) [Taras Voinarovskyi, 2017-03-06, 1 file, -7/+36]
  Also adds checks for versions above 0.10 based on ApiVersionResponse.
* PEP-8: spacing & removed unused imports (#899) [Jeff Widman, 2017-02-09, 1 file, -8/+8]
* Fix exception raised when auto_offset_reset is set to None in KafkaConsumer (#860) [Alexander Sibiryakov, 2016-12-27, 1 file, -2/+2]
* Add kafka.serializer interfaces (#912) [Dana Powers, 2016-12-19, 1 file, -12/+19]
* Fix fetcher bug when processing offset out of range (#911) [Dana Powers, 2016-12-17, 1 file, -1/+1]
* Revert consumer iterators from max_poll_records (#856) [Dana Powers, 2016-10-24, 1 file, -7/+92]
* Bugfix on max_poll_records: TypeError: object of type NoneType has no len() [Dana Powers, 2016-10-04, 1 file, -1/+1]
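The TypeError in this entry is the same null-guard pattern at the buffer level: before any fetch has completed, the buffered record collection can still be None, and calling `len()` on it raises. An illustrative sketch (the helper and parameter names are hypothetical):

```python
def records_remaining(buffered, max_poll_records):
    # buffered may be None before the first fetch completes;
    # len(None) raises TypeError, so treat None as zero records.
    count = len(buffered) if buffered is not None else 0
    return max(0, max_poll_records - count)
```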
* KAFKA-3007: KafkaConsumer max_poll_records (#831) [Dana Powers, 2016-09-28, 1 file, -135/+91]
* Treat metric_group_prefix as config in KafkaConsumer [Dana Powers, 2016-08-04, 1 file, -3/+3]