
stack overflow if router been .boxed for more than 281 times #392

Closed
fluxxu opened this issue Oct 18, 2021 · 5 comments
Labels
C-bug Category: This is a bug.

Comments

@fluxxu

fluxxu commented Oct 18, 2021

Bug Report

Version

0.2.8

Platform

Windows x64 msvc
macOS

Description

I'm migrating a large project from rocket to axum.
I want to keep the route configurations (path, params) close to the route functions, so each module has a function called register_routes that looks like this:

fn register_routes(router: Router<BoxRoute>) -> Router<BoxRoute> {
   // ...
}
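
For example, a trimmed-down version of one module might look like this (just a sketch; the paths and handlers below are made-up placeholders, the real ones are app-specific):

use axum::handler::get;
use axum::routing::{BoxRoute, Router};

// One module's registration: each route is added to the shared router and the
// whole router is re-boxed after every route.
fn register_routes(router: Router<BoxRoute>) -> Router<BoxRoute> {
    router
        .route("/users", get(|| async { "list users" }))
        .boxed()
        .route("/users/:id", get(|| async { "get user" }))
        .boxed()
}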

Then in main I have a function that calls all of the register_routes functions.
This worked well until the number of routes reached a certain amount: axum crashes with

thread 'tokio-runtime-worker' has overflowed its stack
error: process didn't exit successfully: `target\debug\axum-stackoverflow.exe` (exit code: 0xc00000fd, STATUS_STACK_OVERFLOW)

Then I found out that once the number of routes passes 282, axum crashes.
Here is the reproduction:

https://github.com/fluxxu/axum-stack-overflow

@davidpdrsn added the C-bug label on Oct 18, 2021
@davidpdrsn
Member

Uh, that's an interesting bug. I did some digging, and it only seems to happen if you're running the router with hyper. If you call it directly, it doesn't happen:

use axum::body::Body;
use axum::handler::get;
use axum::http::Request;
use axum::routing::{BoxRoute, Router};
use tower::{Service, ServiceExt};

#[tokio::main]
async fn main() {
    let mut app = get_router();

    for uri in ["/get1", "/get281"] {
        let res = app
            .ready()
            .await
            .unwrap()
            .call(Request::builder().uri(uri).body(Body::empty()).unwrap())
            .await
            .unwrap();
        dbg!(&res);
    }
}

fn get_router() -> Router<BoxRoute> {
    let mut router = Router::new().boxed();

    for i in 0..282 {
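        // every iteration adds one more route and then re-boxes the whole
        // router, nesting another BoxRoute layer around it each time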
        router = router
            .route(
                &format!("/get{}", i),
                get(move || async move { format!("i = {}", i) }),
            )
            .boxed();
    }

    router
}

I don't really know how to go about debugging this. @jplatte any ideas?

While I think this is something that should be fixed, I'm also getting the impression that you're boxing every single route. That really isn't recommended. You should instead make logical groups of routes, box those, and combine them with either nest or or.
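
Roughly something like this (just a sketch against the same 0.2 API as the code above; the route groups, paths, and handlers are placeholders):

use axum::handler::get;
use axum::routing::{BoxRoute, Router};

// One boxed sub-router per logical group of routes.
fn user_routes() -> Router<BoxRoute> {
    Router::new()
        .route("/users", get(|| async { "list users" }))
        .route("/users/:id", get(|| async { "get user" }))
        .boxed()
}

fn post_routes() -> Router<BoxRoute> {
    Router::new()
        .route("/posts", get(|| async { "list posts" }))
        .boxed()
}

fn app() -> Router<BoxRoute> {
    // combine the boxed groups with `or` (or mount a group under a prefix
    // with `nest`) so each group is only boxed once
    user_routes().or(post_routes()).boxed()
}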

@fluxxu
Author

fluxxu commented Oct 18, 2021

@davidpdrsn
Thanks for the quick reply.
The reason I box all routes is to work around #200; to be more specific, it's from this comment:
#200 (comment)
I found it's the only way to avoid effectively infinite compile times. I tried boxing only the last route() call in each fn register_routes,
but just 20-ish route() calls can block rustc forever. The last time I tried, rustc used 1 hour and 30 GB of memory before I gave up...
I'll try to create a new router for each fn register_routes.

BTW, if you set RUST_LOG=trace and run the reproduction code, you'll see a lot of lines like this:

Oct 18 14:19:06.887 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
Oct 18 14:19:06.888 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
Oct 18 14:19:06.888 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
Oct 18 14:19:06.889 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
Oct 18 14:19:06.889 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
Oct 18 14:19:06.889 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
Oct 18 14:19:06.890 TRACE hyper::proto::h1::conn: flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
...

It looks like there is a positive correlation between the number of times this log message appears and the number of routes.

@fluxxu changed the title from "stackoverflow if router been .boxed for more than 282 times" to "stack overflow if router been .boxed for more than 282 times" on Oct 18, 2021
@fluxxu changed the title from "stack overflow if router been .boxed for more than 282 times" to "stack overflow if router been .boxed for more than 281 times" on Oct 18, 2021
@davidpdrsn
Member

Have you tried the workaround I mention in #200 (comment)? With that, boxing once should work.

@programatik29
Contributor

programatik29 commented Oct 19, 2021

It will perform really badly (I'd guess ~99% slower than a single route) if you can get it to work.

This and this pull request are probably needed for acceptable performance.

Even then I wouldn't expect much performance (maybe ~70% slower than a single route). This benchmark roughly shows the performance difference between 1 and 26 routes with the matchit-router branch.

@fluxxu
Author

fluxxu commented Oct 19, 2021

> Have you tried the workaround I mention in #200 (comment)? With that, boxing once should work.

Yes, I fixed it with Router<Layered<BoxedService<Body>>> + router.or(...).or(...).
Feel free to close the issue if it's not worth fixing this edge case. I guess nobody on earth will want to box every route once the ongoing PRs get merged 🤣
#393
#385
