
The performance of Envoy when used as a Redis cluster proxy #19629

Closed
hughxia opened this issue Jan 20, 2022 · 15 comments
Labels
question, stale

Comments

@hughxia

hughxia commented Jan 20, 2022

Title: The performance of Envoy when used as a Redis cluster proxy

Description:

When I use Envoy as a Redis cluster proxy in Kubernetes (Envoy pod resources: 4 CPU and 1 Gi memory), the RPS reported by redis-benchmark is 80K~90K. If I point redis-benchmark directly at the same Redis cluster, it reports about 250K.

The worse part is that the throughput does not improve when I increase the Envoy pod's resources.

Is this the limit of Envoy?
How can I improve Envoy's performance as a Redis cluster proxy?

Envoy Version: 1.20.0

This is my Envoy config:

static_resources:
  listeners:
  - name: redis_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 6379
    filter_chains:
    - filters:
      - name: envoy.filters.network.redis_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          stat_prefix: egress_redis
          settings:
            op_timeout: 5s
            enable_redirection: true
            max_buffer_size_before_flush: 1024
            buffer_flush_timeout: 0.003s
          prefix_routes:
            catch_all_route:
              cluster: redis_cluster
          downstream_auth_password:
            inline_string: "xxxxxx" 
  clusters:
  - name: redis_cluster
    cluster_type:
      name: envoy.clusters.redis
    connect_timeout: 10s
    cleanup_interval: 3s
    lb_policy: CLUSTER_PROVIDED
    load_assignment:
      cluster_name: redis_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: xxxxx  
                port_value: 6379
    typed_extension_protocol_options:
      envoy.filters.network.redis_proxy:
        "@type": type.googleapis.com/google.protobuf.Struct
        value:
          auth_password:
            inline_string: "xxxxxx"
admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
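
For reference, the comparison can be reproduced roughly as follows; the hosts, password, and client count below are placeholders, not the exact values used here:

# through the Envoy proxy
redis-benchmark -h <envoy-address> -p 6379 -a xxxxxx -t get,set -n 1000000 -c 50

# directly against the Redis cluster (--cluster requires redis-benchmark from Redis 6+)
redis-benchmark -h <redis-node-address> -p 6379 -a xxxxxx -t get,set -n 1000000 -c 50 --cluster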


@hughxia hughxia added the triage Issue requires triage label Jan 20, 2022
@Junsheng-Wu

I had the same question.

@moderation
Contributor

See a recent thread on Redis performance at #19436 (comment). You need to be particularly careful with concurrency levels in Kubernetes environments.

@lizan lizan added question Questions that are neither investigations, bugs, nor enhancements and removed triage Issue requires triage labels Jan 20, 2022
@hughxia hughxia closed this as completed Jan 21, 2022
@hughxia
Author

hughxia commented Jan 21, 2022

See a recent thread on Redis performance at #19436 (comment). You need to be particularly careful with concurrency levels in Kubernetes environments.

I have set --concurrency to the same value as the CPU resources, but it's not getting better.
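
For reference, setting the worker count at Envoy startup looks roughly like this for a 4-CPU pod (the config path is a placeholder):

envoy -c /etc/envoy/envoy.yaml --concurrency 4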

@hughxia hughxia reopened this Jan 21, 2022
@moderation
Contributor

As per the thread details, you likely don't want to match the CPU core count. You want to set your concurrency level to a much lower number like 1. Redis is single threaded.

@hughxia
Author

hughxia commented Jan 24, 2022

As per the thread details, you likely don't want to match the CPU core count. You want to set your concurrency level to a much lower number like 1. Redis is single threaded.

I tried it, but the result is worse.

At #19436 (comment), the advice is to set the concurrency level based on the CPU resources allocated to Envoy. In my test, this is indeed the most effective setting.

I am confused.

@github-actions

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale stalebot believes this issue/PR has not been touched recently label Feb 23, 2022
@mitchsw

mitchsw commented Mar 1, 2022

As per the thread details, you likely don't want to match the CPU core count. You want to set your concurrency level to a much lower number like 1. Redis is single threaded.

I don't think this is good advice when (a) connecting to a Redis cluster and (b) connecting over a network. More concurrency in Envoy will improve throughput. The threading model of the destination Redis processes isn't really relevant.

@github-actions github-actions bot removed the stale stalebot believes this issue/PR has not been touched recently label Mar 1, 2022
@sunbinnnnn

The same problem happened to me. Any comments?

@sunbinnnnn

I got high performance when I set more clients in redis-benchmark (1000+), the same as connecting directly, but performance is low with fewer clients.
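
A high-client run looks roughly like this (host, password, and other flags are placeholders):

redis-benchmark -h <envoy-address> -p 6379 -a xxxxxx -t get,set -n 1000000 -c 1000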

@mukundv18

mukundv18 commented Mar 30, 2022

Same problem here. I tried the following settings but still observed poor performance.

            max_buffer_size_before_flush: 1024
            buffer_flush_timeout: 0.003s

I passed --concurrency 10 as a parameter when running Envoy.

Let me know if I am missing something. Our config is shown below.

static_resources:
  listeners:
  - name: redis_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 1999
    filter_chains:
    - filters:
      - name: envoy.redis_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          stat_prefix: egress_redis
          downstream_auth_password:
            inline_string: dummy 
          settings:
            op_timeout: 10s
            enable_redirection: True
            enable_hashtagging: False
            read_policy: ANY
          prefix_routes:
            routes:
              - prefix: "" 
                cluster: "redis_cluster"
                request_mirror_policy: [{cluster: "redis_cluster_two", exclude_read_commands: True}]
            catch_all_route:
              cluster: "redis_cluster"
  clusters:
  - name: redis_cluster
    connect_timeout: 10s
    type: static  # static
    load_assignment:
      cluster_name: redis_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: <Cluster-1-IP-Address>
                port_value: <Port>
  - name: redis_cluster_two
    connect_timeout: 10s
    type: static  # static STRICT_DNS
    load_assignment:
      cluster_name: redis_cluster_two
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: <Cluster-2-IP-Address>
                port_value: <Port>
admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
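
A related knob on the benchmark side is pipelining: redis-benchmark only pipelines requests when -P is passed, which can change throughput dramatically through a proxy. A sketch against the listener above (host and values are placeholders):

redis-benchmark -h <envoy-address> -p 1999 -a dummy -t get,set -n 1000000 -c 50 -P 16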
              

@github-actions

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale stalebot believes this issue/PR has not been touched recently label Apr 29, 2022
@github-actions

github-actions bot commented May 6, 2022

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted" or "no stalebot". Thank you for your contributions.

@github-actions github-actions bot closed this as completed May 6, 2022
@yangyu66

yangyu66 commented Apr 6, 2023

same question ^

@ikenchina
Contributor

same problem

1 similar comment
@liudhzhyym

same problem
