Load Balancing

  1. Redirect HTTP to HTTPS at the load balancer

    1,935 votes  ·  96 comments

    We appreciate all of your feedback on how critical the HTTP-to-HTTPS redirect feature is to your GCP apps. Our networking team has further increased the priority of this feature in our backlog based on your feedback and is aiming to deliver an Alpha in Q4 2019.
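
    Until the managed redirect ships, a common workaround is to issue the redirect from the backend itself, using the X-Forwarded-Proto header the HTTP(S) load balancer already sets (see idea 2 below). The following is a minimal plain-WSGI sketch of that workaround, not a GCP feature; all names are illustrative:

      # Hypothetical backend-side workaround: if the load balancer forwarded the
      # request over plain HTTP, answer with a 301 pointing at the HTTPS URL.
      # (Query string handling omitted for brevity.)
      def redirect_to_https(app):
          def wrapped(environ, start_response):
              if environ.get("HTTP_X_FORWARDED_PROTO", "http") == "http":
                  host = environ.get("HTTP_HOST", "example.com")
                  location = "https://" + host + environ.get("PATH_INFO", "/")
                  start_response("301 Moved Permanently", [("Location", location)])
                  return [b""]
              return app(environ, start_response)
          return wrapped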

  2. Respect the "X-Forwarded-Proto" header from Cloudflare / other frontends

    Cloudflare correctly sets the "X-Forwarded-Proto" header to "https" when using its "Flexible" option (https://support.cloudflare.com/hc/en-us/articles/200170986-How-does-Cloudflare-handle-HTTP-Request-headers-). This is the same as if we used the GCP load balancer directly to terminate HTTPS and proxy to our backends over HTTP.

    I need my backend services, which listen on HTTP, to understand that the request came from a client using HTTPS. However, the Google Cloud load balancer is overwriting the X-Forwarded-Proto header to "http", even though the client is actually using HTTPS via our Cloudflare frontend.

    This is the whole point of the "X-Forwarded-Proto" header, so I'm not sure why…
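
    For context, this is roughly what "respecting" the header means on the application side. A plain-WSGI sketch, not tied to any GCP or Cloudflare API:

      # Sketch of a backend that trusts X-Forwarded-Proto from its frontend
      # proxy, so the application sees the scheme the client actually used.
      class ForwardedProtoMiddleware:
          def __init__(self, app):
              self.app = app

          def __call__(self, environ, start_response):
              # WSGI exposes the header as HTTP_X_FORWARDED_PROTO.
              proto = environ.get("HTTP_X_FORWARDED_PROTO", "").lower()
              if proto in ("http", "https"):
                  environ["wsgi.url_scheme"] = proto
              return self.app(environ, start_response)

      def app(environ, start_response):
          # Echo the scheme the application believes the client used.
          start_response("200 OK", [("Content-Type", "text/plain")])
          return [f"client scheme: {environ['wsgi.url_scheme']}\n".encode()]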

    3 votes  ·  0 comments
  3. Load balancer fault reporting is not informative

    The load balancer has a heartbeat (health check) mechanism to monitor the linked instances. In some cases the LB marks an instance as unhealthy but does not provide the reason. From the instance's perspective the heartbeat looks fine, and since port mirroring is not available there is no clue as to why the LB marks the instance as unhealthy.
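
    One partial workaround is to log every probe the instance answers, so there is at least an instance-side record to compare against the load balancer's verdict. A sketch, assuming an HTTP health check against a path such as /healthz (both assumptions, not anything the LB mandates):

      # Hypothetical instance-side logging of health-check probes.
      import logging
      from http.server import BaseHTTPRequestHandler, HTTPServer

      logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

      class HealthHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              status = 200 if self.path == "/healthz" else 404
              self.send_response(status)
              self.end_headers()
              if status == 200:
                  self.wfile.write(b"ok\n")
              logging.info("probe %s from %s -> %d",
                           self.path, self.client_address[0], status)

          def log_message(self, fmt, *args):
              pass  # we log explicitly above instead of the default stderr lines

      if __name__ == "__main__":
          HTTPServer(("", 8080), HealthHandler).serve_forever()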

    1 vote  ·  0 comments
  4. Cloud Armor support for TCP load balancers

    Supporting at least a subset of Cloud Armor policies (e.g. Geo-based Access Control) for TCP load balancers would enable, for example, efficient use of Cloud Armor for nginx-ingress on GKE. Thanks!

    12 votes  ·  0 comments
  5. Cloud CDN and Security Policies

    According to the documentation "If you try to associate a Cloud Armor Security Policy for a backend service and Cloud CDN is enabled, the config will be rejected."
    This is a confusing restriction. If I need to block some unwanted traffic at a load balancer, I would first have to disable CDN. But disabling CDN is not an option, because that would completely overwhelm the backend services with traffic that is expected to be served by the CDN. Especially during an attack, this would be extremely inconvenient.
    Why is this restriction in place? It would make more sense to remove it if possible.

    3 votes  ·  0 comments
  6. Cloud Armor Security Policy Redirect

    Cloud Armor - Security Policy Rules

    It would be very useful to return more informative error messages to end users when they are inadvertently blocked by a security policy.

    Currently the only available options are to return 403, 404 or 502, with no additional message.

    A simple way to remedy this would be an option to redirect the blocked traffic to another URL, i.e. to return an HTTP 307 (Temporary Redirect) with a configurable target URL.

    That way end users could be served a more informative…

    3 votes  ·  0 comments
  7. Load balancer rewrite support

    Please support Apache-style mod_rewrite [1] rules on load balancers.

    For example:
    RewriteRule "^puppy.html" "smalldog.html" [NC]

    1: https://httpd.apache.org/docs/2.4/rewrite/intro.html
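
    For illustration only (this is not a load balancer feature today), the rule above can be approximated at the application layer with a small piece of middleware. The plain-WSGI sketch below mirrors the case-insensitive [NC] match:

      # Rough equivalent of: RewriteRule "^puppy.html" "smalldog.html" [NC]
      # (WSGI paths carry a leading slash, so the pattern includes one.)
      import re

      class RewriteMiddleware:
          def __init__(self, app, rules):
              self.app = app
              self.rules = rules  # list of (compiled pattern, replacement) pairs

          def __call__(self, environ, start_response):
              path = environ.get("PATH_INFO", "")
              for pattern, replacement in self.rules:
                  if pattern.search(path):
                      environ["PATH_INFO"] = pattern.sub(replacement, path)
                      break
              return self.app(environ, start_response)

      rules = [(re.compile(r"^/puppy\.html", re.IGNORECASE), "/smalldog.html")]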

    32 votes  ·  0 comments
  8. Enable GCLB gzip compression

    Right now, GCLB serves gzip-compressed content only if the backend response already arrives compressed. It would be great if GCLB could also compress uncompressed backend responses.
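
    Concretely, the behaviour being asked of the load balancer is the kind of thing the sketch below does at the application layer (plain WSGI, deliberately simplified: it buffers the whole body and collapses duplicate headers):

      # Compress the upstream response on the way out if the client accepts
      # gzip and the response is not already compressed.
      import gzip

      class GzipMiddleware:
          def __init__(self, app):
              self.app = app

          def __call__(self, environ, start_response):
              wants_gzip = "gzip" in environ.get("HTTP_ACCEPT_ENCODING", "")
              captured = {}

              def capture(status, headers, exc_info=None):
                  captured["status"], captured["headers"] = status, headers
                  return lambda data: None

              body = b"".join(self.app(environ, capture))
              headers = dict(captured["headers"])
              if wants_gzip and "Content-Encoding" not in headers:
                  body = gzip.compress(body)
                  headers["Content-Encoding"] = "gzip"
                  headers["Content-Length"] = str(len(body))
              start_response(captured["status"], list(headers.items()))
              return [body]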

    40 votes  ·  0 comments
  9. Add support for App Engine backend

    It is not possible to select App Engine as a backend in the load balancer, which means there is no easy way to provide cross-region failover.
    That basically forces you to use Compute Engine instances instead.

    22 votes  ·  0 comments
  10. A single load balancer forwarding HTTP and HTTPS traffic independently to the same backend

    I would like to create one HTTP(S) load balancer with one backend (instance group), with port 80 being forwarded to port 80 on the backend and port 443 being forwarded to port 443 on the backend.

    At the moment we have to create two separate load balancers with the same external IP to achieve this.

    3 votes  ·  0 comments
  11. A single load balancer forwarding HTTP and HTTPS traffic independently to the same backend

    I would like to create one HTTP(S) load balancer with one backend (instance group), with port 80 being forwarded to port 80 on the backend and port 443 being forwarded to port 443 on the backend.

    At the moment we have to create two separate load balancers with the same external IP to achieve this.

    1 vote  ·  0 comments
  12. 12 votes  ·  0 comments
  13. 44 votes  ·  2 comments
  14. HTTP(S) Load Balancing: Add IP Geolocation to HTTP headers

    Please add IP geolocation headers to requests handled by the HTTP(S) load balancer, similar or identical to the headers provided by App Engine ('X-AppEngine-Country', etc.).
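
    For reference, this is roughly how a backend consumes such a header on App Engine today; a load-balancer equivalent is exactly the feature being requested here and does not currently exist:

      # X-AppEngine-Country is set by App Engine; a WSGI app reads it like this.
      def country_from_request(environ):
          # Unknown or unresolvable countries typically arrive as "ZZ".
          return environ.get("HTTP_X_APPENGINE_COUNTRY", "ZZ")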

    40 votes  ·  4 comments
  15. TCP Proxy Load Balancing should support the following ports: 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1024-65535

    In https://cloud.google.com/compute/docs/load-balancing/tcp-ssl/tcp-proxy

    TCP Proxy Load Balancing advantages:

    Intelligent routing — the load balancer can route requests to backend locations where there is capacity. In contrast, an L3/L4 load balancer must route to regional backends without paying attention to capacity. Use of smarter routing allows provisioning at N+1 or N+2 instead of x*N.
    Security patching — If vulnerabilities arise in the TCP stack, we will apply patches at the load balancer automatically in order to keep your instances safe.
    TCP Proxy Load Balancing supports the following ports: 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 5222

    Now…

    9 votes  ·  2 comments
  16. PCI Compliant HTTPS load balancer option

    Can we please get a checkbox or some other way to configure this, so that we don't need to keep dealing with exceptions and complaints from the scanners when following the https://cloud.google.com/solutions/pci-dss guide? The findings are caused by HTTPS load balancers supporting things like "Medium Strength Cipher Suites" and "TLS Version 1.0", as well as unremovable HTTP proxy headers, all of which are heavily discouraged by PCI DSS.

    Thanks!
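
    For comparison, the policy being asked for is the kind of thing one can currently only enforce by terminating TLS on one's own proxy. A sketch using Python's ssl module (certificate file names are placeholders):

      # Reject TLS 1.0/1.1 and medium-strength cipher suites at the TLS layer.
      import ssl

      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
      ctx.minimum_version = ssl.TLSVersion.TLSv1_2      # no TLS 1.0 / 1.1
      ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")    # modern AEAD suites only
      ctx.load_cert_chain("server.crt", "server.key")   # placeholder paths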

    62 votes  ·  5 comments
  17. Load balancer "URL Map" does not support A/B testing via %traffic with sticky path selection.

    I will have to install NGINX in front of the Google load balancer, which is somewhat redundant; however, it seems required, as the current load balancer is unable to split traffic by either custom rules or sticky path selection.

    Specifically, I'm looking for a way to introduce a "trial" swap of only page "X" (defined by the URL map) that would direct 10% of the traffic to a different server/IP/container cluster while recording this choice in a cookie for consistency. This would allow us to do some basic live testing and A/B testing without impacting all consumers.
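
    The sticky-split logic itself is simple. A sketch of what the load balancer would need to do, with a made-up cookie name and an illustrative 10% fraction:

      # 10% of new visitors go to the trial backend; a cookie keeps them there.
      import random

      TRIAL_FRACTION = 0.10       # illustrative share for the trial cluster
      COOKIE_NAME = "ab_bucket"   # hypothetical cookie name

      def choose_bucket(cookies):
          """Return (bucket, is_new); bucket is 'trial' or 'stable'."""
          if cookies.get(COOKIE_NAME) in ("trial", "stable"):
              return cookies[COOKIE_NAME], False
          bucket = "trial" if random.random() < TRIAL_FRACTION else "stable"
          return bucket, True

      # The proxy would route 'trial' to the new cluster and 'stable' to the
      # existing one, and emit "Set-Cookie: ab_bucket=<bucket>; Path=/" when
      # is_new is True, so the visitor keeps seeing the same side.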

    43 votes  ·  0 comments
  18. TCP Load balance with different frontend and backend ports

    Currently, TCP load balancing requires the frontend and backend listeners to share the same port. It would be useful if the backend port could differ from the frontend port.

    66 votes  ·  3 comments
  19. Improve your Load Balancer Interface. It is confusing. There is no clarity and separation of UI for Network and HTTP(S) Load Balancer

    The interface is confusing: there is no clarity or separation in the UI between Network and HTTP(S) load balancers, and users always get confused when creating a load balancer.

    3 votes  ·  0 comments
  20. HTTPS Load Balancing: Support multiple certificates for a single target proxy.

    A single target proxy can be attached to a URL map containing multiple host rules, thereby allowing it to support multiple hostnames. However, a target HTTPS proxy currently only supports a single certificate. Therefore, supporting multiple hostnames using a single HTTPS global forwarding rule requires that each hostname be specified as a separate SAN within the certificate.

    This is not ideal for a number of reasons: for example, some certificate authorities charge more for a single SAN than for a single-hostname certificate. A certificate with multiple SANs is also larger than necessary, potentially leading to a slower TLS handshake.

    It…
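
    As background on why multiple certificates on one proxy is technically feasible, TLS servers can already select a certificate per requested hostname via SNI. A sketch with Python's ssl module (hostnames and file paths are made up):

      # Pre-build one SSLContext per hostname, then switch on the SNI value.
      import ssl

      certs = {
          "www.example.com": ("example.crt", "example.key"),
          "api.example.org": ("api.crt", "api.key"),
      }
      contexts = {}
      for host, (crt, key) in certs.items():
          ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
          ctx.load_cert_chain(crt, key)
          contexts[host] = ctx

      default_ctx = contexts["www.example.com"]

      def pick_cert(ssl_socket, server_name, initial_ctx):
          # SNI callback: hand the connection the context for the requested host.
          if server_name in contexts:
              ssl_socket.context = contexts[server_name]

      default_ctx.sni_callback = pick_cert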

    56 votes  ·  0 comments