
chore(facade): parse buffers eagerly and simplify IoLoopV2 drain #7012

Open
romange wants to merge 1 commit into main from V2Parse

Conversation

@romange
Collaborator

@romange romange commented Mar 29, 2026

Summary

  • Parse incoming data eagerly via ParseFromBuffer() right after reading, instead of deferring to the main loop body
  • Simplify IoLoopV2 drain loop assuming ParseFromBuffer fully depletes io_buf_ (enforced by CHECK)
  • Move CheckIoBufCapacity into the drain loop so the buffer can grow between recv calls

Changes

  • src/facade/dragonfly_connection.h — Add ParseFromBuffer() declaration
  • src/facade/dragonfly_connection.cc — Add ParseFromBuffer() (calls ParseMCBatch/ParseRedisBatch). Call it after CommitWrite in the drain loop and after WriteAndCommit in the multishot recv path. Remove redundant ParseFromBuffer/NEED_MORE/CheckIoBufCapacity from main loop body. Add CHECK_EQ(io_buf_.InputLen(), 0u) after eager parse.
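A minimal sketch of the eager-parse shape described above. `MiniConn`, its newline-delimited toy "protocol", and the container types are illustrative stand-ins, not Dragonfly's real `io_buf_`/parser classes; the real `ParseMCBatch`/`ParseRedisBatch` presumably retain partial-command state internally, which is what would let the drain loop assert `CHECK_EQ(io_buf_.InputLen(), 0u)` after every eager parse.

```cpp
#include <cassert>
#include <deque>
#include <string>

// Toy model of "parse eagerly right after the read commits".
struct MiniConn {
  std::string io_buf_;               // stands in for the real io buffer
  std::deque<std::string> parsed_q_; // stands in for the parsed command queue

  // Stands in for io_buf_.CommitWrite(*res) after a successful recv.
  void CommitWrite(const std::string& chunk) { io_buf_ += chunk; }

  // Eagerly consumes every complete command from the buffer, mirroring
  // the invariant the PR enforces (buffer fully depleted after parsing).
  void ParseFromBuffer() {
    std::string::size_type pos;
    while ((pos = io_buf_.find('\n')) != std::string::npos) {
      parsed_q_.push_back(io_buf_.substr(0, pos));
      io_buf_.erase(0, pos + 1);
    }
  }

  std::string::size_type InputLen() const { return io_buf_.size(); }
};
```

Unlike this toy, which only depletes on complete lines, the PR's `CHECK` implies the real parsers leave nothing behind in `io_buf_`.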

🤖 Generated with Claude Code

@augmentcode

augmentcode bot commented Mar 29, 2026

🤖 Augment PR Summary

Summary: This PR moves protocol parsing to happen immediately after socket reads (instead of deferring to the IoLoopV2 loop body).

Changes:

  • Adds Connection::ParseFromBuffer() to dispatch to the appropriate batch parser.
  • Invokes parsing right after reads in DoReadOnRecv and the IoLoopV2 drain loop.
  • Refactors IoLoopV2 to always run execute+reply after awaiting, with an extra parse step when buffered data remains.



@augmentcode augmentcode bot left a comment


Review completed. 1 suggestion posted.


Comment augment review to trigger a new review at any time.

```cpp
  break;
}
io_buf_.CommitWrite(*res);
ParseFromBuffer();
```


ParseMCBatch() uses a do { ... } while (parsed_cmd_q_len_ < 128 && ...), so calling ParseFromBuffer() repeatedly from the drain loop can still enqueue at least one command even when the parsed-queue is already at/over the intended cap. Can you double-check this can’t lead to unbounded growth of parsed_cmd_q_len_ under heavy pipelining before ExecuteBatch() gets a chance to run?

Severity: medium

Other Locations
  • src/facade/dragonfly_connection.cc:2606
  • src/facade/dragonfly_connection.cc:2738
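To make the reviewer's concern concrete, here is a toy model, not the real `ParseMCBatch()`, of a `do { ... } while (q < cap && ...)` whose cap check only runs after the body, so each call can admit one command even when the queue is already at or over the cap:

```cpp
// Illustrative cap; the comment above names 128 for parsed_cmd_q_len_.
constexpr int kCap = 128;

// Toy stand-in for one ParseMCBatch() invocation: the body executes once
// before the q_len < kCap condition is ever tested, so a queue already at
// the cap still grows by one. Repeated eager calls from the drain loop can
// therefore push the queue past the cap before ExecuteBatch() runs.
int ParseBatchOnce(int& q_len, int pending_cmds) {
  int consumed = 0;
  do {
    if (pending_cmds == 0)
      break;
    ++q_len;  // enqueue one parsed command
    --pending_cmds;
    ++consumed;
  } while (q_len < kCap && pending_cmds > 0);
  return consumed;
}
```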



@romange romange changed the title chore(facade): parse buffers eagerly when reading from socket chore(facade): parse buffers eagerly and simplify IoLoopV2 drain Mar 29, 2026
@romange romange force-pushed the IOParse branch 2 times, most recently from 10469fd to b7d20de on March 29, 2026 18:49
Base automatically changed from IOParse to main March 30, 2026 09:08
Refactor the parsed command queue with named predicates:
- HasInFlightCommands(): dispatched commands not yet replied
- HasCommandToExecute(): head is ready with nothing in-flight
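The two predicates might look roughly as follows; the fields here are simplified stand-ins for the connection's actual bookkeeping, sketched only from the commit message's descriptions:

```cpp
// Hypothetical model of the named predicates from the commit message.
struct ParsedQueueState {
  unsigned parsed_q_len = 0;  // commands parsed and waiting in the queue
  unsigned dispatched = 0;    // commands dispatched, reply not yet sent

  // Dispatched commands not yet replied.
  bool HasInFlightCommands() const { return dispatched > 0; }

  // Head of the queue is ready with nothing in-flight.
  bool HasCommandToExecute() const {
    return parsed_q_len > 0 && dispatched == 0;
  }
};
```

Naming the predicates lets the loop read as intent ("is there work to run?") instead of raw counter comparisons.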

Simplify IoLoopV2 recv loop: move socket draining from DoReadOnRecv
into IoLoopV2 directly using a pending_input_ flag, eliminating the
indirect DoReadOnRecv(true) call and duplicated TryRecv logic.

Refactor ReplyBatch to iterate via parsed_head_ directly instead of
aliasing it through a for-loop reference variable with exchange().
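A sketch of that iteration style, with illustrative node and field names rather than Dragonfly's real types: the loop advances `parsed_head_` itself instead of `std::exchange()`-ing it into a loop-local alias.

```cpp
// Hypothetical singly linked command node.
struct Cmd {
  int id;
  Cmd* next = nullptr;
};

struct Conn {
  Cmd* parsed_head_ = nullptr;

  // Reply to each parsed command, advancing the member pointer directly.
  int ReplyBatch() {
    int replied = 0;
    while (parsed_head_) {
      Cmd* cur = parsed_head_;
      parsed_head_ = cur->next;  // advance parsed_head_ itself
      ++replied;                 // (send the reply for *cur here)
      delete cur;
    }
    return replied;
  }
};
```

Keeping the member pointer authoritative means an interruption mid-batch leaves `parsed_head_` pointing at the first un-replied command, with no aliased copy to reconcile.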

Add test_get_many memcached test for multi-key get coverage.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
