[DO NOT REVIEW YET] bump retry count upon task initialization as opposed to failure#1366
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 201c0b76f3
    /// If retry count already exceeds limit, mark as failed (recovery for stale state).
    if (processing_part_entry.retry_count > max_retries)
    {
Use >= when enforcing retry cap
The new stale-state recovery check only trips when retry_count > max_retries, so a part that has already consumed its last allowed retry (e.g. because a node crashed after incrementing retry_count but before handlePartExportFailure could mark it FAILED) will still be re-attempted. That yields an extra export attempt beyond the configured export_merge_tree_partition_max_retries, and can lead to repeated retries if the same crash recurs. Using >= here (and in the analogous scheduler check) would keep behavior consistent with the failure handler's >= limit and avoid exceeding the configured cap in this crash/pending scenario.
Changelog category (leave one):
Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Related to #1278