
[V1][Metrics] Handle preemptions #13169

Merged: 2 commits into vllm-project:main, Feb 27, 2025

Conversation

@markmc (Contributor) commented Feb 12, 2025

Part of #10582

Add a core engine PREEMPTED event.

Add the num_preemptions_total counter from v0.

Also, account for preemptions in the scheduled and first token timestamps, resulting in:

  << queued timestamp >>
    [ queue interval ]
  << scheduled timestamp (FIRST) >>
    [ prefill interval ]
  << new token timestamp (FIRST) >>
    [ inter-token interval ]
  << new token timestamp >>
    [ inter-token interval ]
      |
      |	(possible preemptions)
      | << scheduled timestamp >>
      | << preempted timestamp >>
      | << scheduled timestamp >>
      | << new token timestamp >>
      | << preempted timestamp >>
      v
  << new token timestamp >>
    [ decode interval (relative to first token time) ]
    [ inference interval (relative to first scheduled time) ]
  << new token timestamp (FINISHED) >>
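
As a sketch of the counter side (the metric and label names here are illustrative assumptions, not necessarily this PR's exact definitions), a v0-style preemption counter maps naturally onto prometheus_client:

```python
# Minimal sketch, not vLLM's actual code: a monotonically increasing
# preemption counter in the style of v0's num_preemptions_total.
# The metric/label names here are illustrative assumptions.
from prometheus_client import Counter

counter_num_preemptions = Counter(
    name="vllm:num_preemptions_total",
    documentation="Cumulative number of request preemptions.",
    labelnames=["model_name"],
)

# Incremented once per core engine PREEMPTED event:
counter_num_preemptions.labels(model_name="my-model").inc()
```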


mergify bot added the v1 label Feb 12, 2025
@markmc (Contributor, Author) commented Feb 12, 2025

The pre-commit failure is a yapf crash that doesn't happen for me locally:

  File "/home/runner/.cache/pre-commit/repom9gt4aao/py_env-python3.12/lib/python3.12/site-packages/yapf_third_party/_ylib2to3/pygram.py", line 39, in <module>
    pattern_grammar = driver.load_grammar(_PATTERN_GRAMMAR_FILE)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/.cache/pre-commit/repom9gt4aao/py_env-python3.12/lib/python3.12/site-packages/yapf_third_party/_ylib2to3/pgen2/driver.py", line 248, in load_grammar
    g.load(gp)
  File "/home/runner/.cache/pre-commit/repom9gt4aao/py_env-python3.12/lib/python3.12/site-packages/yapf_third_party/_ylib2to3/pgen2/grammar.py", line 128, in load
    d = pickle.load(f)
        ^^^^^^^^^^^^^^
EOFError: Ran out of input

@markmc force-pushed the metrics-v1-preemptions branch 2 times, most recently from 28c55d0 to 3876e2c on February 15, 2025
@robertgshaw2-redhat (Collaborator) left a comment

From the POV of a client, "preemptions" create bumps in ITL (inter token latency) aka TPOT (time per output token).

You can see this clearly when considering streaming:

  • The user sends a prompt like Hello my name is Robert and I
  • The model streams back like -> to -> work -> on -> vllm

If the request is preempted after the word to, we evict it and re-run it once enough KV cache memory is available, with the prompt "Hello my name is Robert and I like to" (the already-generated tokens are appended to the prompt when recomputing, which makes recomputation much faster; prefix caching also helps here).

From the POV of the user, this means that the ITL of the word "work" will be elevated (since all the time spent being preempted, waiting, and then recomputing elapses before that word is emitted). The TTFT is not impacted, since we already streamed the word like. As a result, I think we should make the metrics coming out of vLLM match this structure:

  • When a PREEMPTION occurs, it should manifest as a higher inter-token interval for that token
  • When a PREEMPTION occurs, it should manifest as a higher decode interval for that request
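
To make the client-side framing concrete, here is a minimal sketch (plain Python, not vLLM code) of how a preemption stall surfaces as a single elevated ITL sample when latency is measured purely from token emission timestamps:

```python
# Minimal sketch, not vLLM's actual code: inter-token latency (ITL)
# samples as a streaming client would measure them, purely from token
# emission times. A preemption stall between two tokens shows up as one
# elevated ITL sample for the next token; TTFT is unaffected.

def itl_samples(token_timestamps: list[float]) -> list[float]:
    """Intervals between consecutive token emission timestamps."""
    return [t1 - t0 for t0, t1 in zip(token_timestamps, token_timestamps[1:])]

# "like" and "to" arrive normally; the request is then preempted and
# recomputed before "work" is emitted (hypothetical timings, in seconds).
timestamps = [0.00, 0.05, 1.55, 1.60, 1.65]  # like, to, work, on, vllm
print(itl_samples(timestamps))  # ~[0.05, 1.50, 0.05, 0.05] -- "work" is elevated
```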

@robertgshaw2-redhat (Collaborator) commented Feb 20, 2025

So I think that it should look like this:

<< queued timestamp >>  # unique per request and frozen after being set
    [ queue interval ]
<< scheduled timestamp >> # unique per request and frozen after being set
    [ prefill interval ]
<< new token timestamp (FIRST) >> # unique per request and frozen after being set
    [ inter-token interval ]
<< new token timestamp >>
<< preempted timestamp >>
    | request is in the preempted queue
    | request is re-scheduled
    | recompute up to the current token
    [ inter-token interval ]
<< new token timestamp >>
    [ inter-token interval ]
<< new token timestamp >>
<< preempted timestamp >>
    | request is in the preempted queue
    | request is re-scheduled
    | recompute up to the current token
    [ inter-token interval ]
<< new token timestamp >>
    [ decode interval (relative to scheduled) ] # all the time spent in the preempted + recompute state is allocated here
    [ inference interval (relative to scheduled) ]
<< new token timestamp (FINISHED) >>
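
Reading the diagram as arithmetic, here is a minimal sketch (hypothetical names, not vLLM's stats code) of how the intervals fall out of frozen timestamps. Note the decode reference point (scheduled vs. first token) was still being settled at this point; the sketch uses the first token time, matching the final commit message below:

```python
# Minimal sketch, hypothetical names: derive the intervals in the
# diagram from per-request timestamps that are set once and frozen.
# With frozen timestamps, preempted + recompute time is absorbed into
# the inter-token, decode and inference intervals, not into TTFT.

def intervals(queued_ts: float, scheduled_ts: float,
              first_token_ts: float, finished_ts: float) -> dict[str, float]:
    return {
        "queue": scheduled_ts - queued_ts,
        "prefill": first_token_ts - scheduled_ts,
        "decode": finished_ts - first_token_ts,
        "inference": finished_ts - scheduled_ts,
    }

# Client-visible TTFT is then queue + prefill = first_token_ts - queued_ts.
```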

@robertgshaw2-redhat (Collaborator) commented:

One other case we should make sure we cover: what happens if the request is preempted during prefill?

E.g., if we have a prompt of length 100k and get preempted halfway through. In this case, the preemption/recomputation time should be allocated to the prefill interval, and hence to TTFT.

I'm not sure whether this can actually happen, or whether there is some invariant that we always have enough KV cache for the prompt to be processed. Worth asking Woosuk or Cody.

mergify bot commented Feb 25, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @markmc.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify bot added the needs-rebase label Feb 25, 2025
@markmc (Contributor, Author) commented Feb 25, 2025

I drafted some thoughts and found it super difficult to clarify this in words or ASCII art, so let's try this ... does this match your thinking?

[Attached diagrams: vLLM Interval Metrics, Frames 1-3]

@markmc (Contributor, Author) commented Feb 25, 2025

In the "preempted prefill" case, I had imagined the queued interval to be up until the final SCHEDULED event ... nothing useful happened with the request, its waiting to be prioritized for resources? Not a big deal, I guess - most important that TTFT ~= queued_interval + prefill_interval

If we're aligned on the diagrams above, I think the code change is simply to not reset scheduled_ts or first_token_ts once they've been set?
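
A minimal sketch of that change as described (hypothetical names, not the actual diff; the merged commits later refine exactly which intervals absorb the preemption time):

```python
# Minimal sketch, hypothetical names: make the timestamp setters
# set-once, so re-scheduling a preempted request never resets
# scheduled_ts or first_token_ts.

class RequestStats:
    def __init__(self) -> None:
        self.scheduled_ts: float | None = None
        self.first_token_ts: float | None = None

    def on_scheduled(self, now: float) -> None:
        if self.scheduled_ts is None:    # set once, frozen thereafter
            self.scheduled_ts = now

    def on_first_token(self, now: float) -> None:
        if self.first_token_ts is None:  # set once, frozen thereafter
            self.first_token_ts = now
```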

@robertgshaw2-redhat (Collaborator) commented:

I am in alignment with the charts above! Thanks for drawing it out!

Commits

Add a core engine PREEMPTED event.

Add the num_preemptions_total counter from v0.

Also, make preemptions reset the scheduled and first token timestamps
resulting in:

```
  << queued timestamp >>
    [ queue interval ]
      |
      |	(possible preemptions)
      | << scheduled timestamp >>
      | << preempted timestamp >>
      | << scheduled timestamp >>
      | << new token timestamp (FIRST) >>
      | << preempted timestamp >>
      v
  << scheduled timestamp >>
    [ prefill interval ]
  << new token timestamp (FIRST) >>
    [ inter-token interval ]
  << new token timestamp >>
    [ decode interval (relative to most recent first token time) ]
    [ inference interval (relative to most recent scheduled time) ]
  << new token timestamp (FINISHED) >>
```

Signed-off-by: Mark McLoughlin <[email protected]>

Don't include prefill preemption time in the queued interval.

Don't reset first token on preemption - already decoded tokens
are retained and reused.

Signed-off-by: Mark McLoughlin <[email protected]>
@markmc force-pushed the metrics-v1-preemptions branch from 3876e2c to ed0dfd8 on February 26, 2025
mergify bot removed the needs-rebase label Feb 26, 2025
markmc added a commit to markmc/vllm that referenced this pull request Feb 26, 2025
As per the discussion in vllm-project#13169.

Signed-off-by: Mark McLoughlin <[email protected]>

@robertgshaw2-redhat (Collaborator) left a comment

This looks great and is very robust. Thanks!

@robertgshaw2-redhat enabled auto-merge (squash) February 27, 2025
@robertgshaw2-redhat added the ready label (ONLY add when PR is ready to merge/full CI is needed) Feb 27, 2025
vllm-bot merged commit cd711c4 into vllm-project:main Feb 27, 2025
45 of 47 checks passed
johnnynunez pushed a commit to johnnynunez/vllm that referenced this pull request Mar 3, 2025
Akshat-Tripathi pushed a commit to krai/vllm that referenced this pull request Mar 3, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Mar 5, 2025