Container Engine

How can we improve Container Engine?


  1. Offer a "Storage as a Service" (STaaS) solution for Kubernetes and GKE

    Right now everyone has to build their own storage service for GKE and Kubernetes, which is not only painful but also runs against the cloud principle that everything is offered as a service.

    It would be awesome if DevOps engineers didn't have to build their own PersistentVolume (PV) solutions to back Kubernetes PersistentVolumeClaims (PVCs).
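
    A minimal sketch of the direction this could take, using Kubernetes' existing StorageClass mechanism with the gce-pd provisioner (names and sizes are illustrative):

    ```yaml
    # Sketch: dynamic provisioning so a PVC is satisfied without hand-built PVs.
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: fast          # illustrative name
    provisioner: kubernetes.io/gce-pd
    parameters:
      type: pd-ssd
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: app-data      # illustrative name
    spec:
      storageClassName: fast
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
    ```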

    84 votes · 5 comments
  2. Support IPv6

    Support IPv6 so that we can have IPv6 addresses for external services in Kubernetes. Ideally all the way down into the container, but external-facing support would be something at least. It's 2017 :) Thanks!

    48 votes · 2 comments
  3. Add a delay flag for upgrading container clusters to support smaller clusters.

    When upgrading a smaller container cluster (3–9 nodes), there should be a --delay=<time-in-seconds> option for the `gcloud container clusters upgrade` command. In smaller clusters, destroying and recreating nodes can kill services if the new nodes aren't starting pods as fast as the old machines are destroyed and recreated, mostly due to docker pulls, which take some time to complete; during the last upgrade, for example, this nearly took my database cluster down. A simple flag adding a delay between node deletion and creation would solve this issue since I can specify a time…
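
    What the proposed invocation might look like (the --delay flag is hypothetical, illustrating the request; it is not an existing gcloud option):

    ```sh
    # Hypothetical: --delay is the flag being requested, not an existing gcloud flag.
    gcloud container clusters upgrade my-cluster \
      --cluster-version=1.7.8 \
      --delay=120   # wait 120s between deleting a node and recreating the next
    ```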

    43 votes · 0 comments
  4. Firewall rules for Kubernetes Master

    It would be useful, and would increase the safety of our cluster, if we could add firewall rules to the master for dashboard and API requests.

    Currently, firewall rules only affect the compute nodes that run containers, not the master.

    40 votes · 1 comment
  5. Scale the number of machines or the machine type in a cluster up or down

    The machine type and number of machines should be editable within a cluster, without deleting and recreating the cluster.

    If a service is given a static IP address, that static IP should be retained after making changes to the hardware in that cluster.
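
    For the node-count half of this, there is already a resize command (changing the machine type in place is the missing part); a sketch with illustrative names:

    ```sh
    # Resizes the number of nodes in an existing cluster without recreating it.
    gcloud container clusters resize my-cluster --size=5 --zone=us-central1-a
    ```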

    40 votes · 2 comments
  6. Make it so that GCEPersistentDisk or another Persistent Volume type is ReadWriteMany

    Coming from Azure, there was AzureFile, which supports ReadWriteMany. It seems ridiculous that I should have to roll my own NFS server to get this capability on GKE.
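
    The roll-your-own workaround mentioned above looks roughly like this: an NFS-backed PersistentVolume, one of the volume types Kubernetes does allow to be ReadWriteMany (server address and size are illustrative):

    ```yaml
    # Sketch: ReadWriteMany via NFS; the NFS server itself still has to be
    # run separately (e.g. as a Deployment inside the cluster).
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-nfs
    spec:
      capacity:
        storage: 50Gi
      accessModes: ["ReadWriteMany"]
      nfs:
        server: 10.0.0.10   # illustrative address of the self-managed NFS server
        path: /exports
    ```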

    36 votes · 1 comment
  7. Add Context Name Flag For `gcloud container clusters get-credentials`

    Right now, when you run `gcloud container clusters get-credentials`, gcloud will produce a user spec, a cluster spec, and a context spec in `~/.kube/config`, all with long generated names. For the user spec and the cluster spec, that's great: the kubernetes user will mostly be reading these but not typing them, so the important criteria should be uniqueness and that the name makes clear where the thing is from. For the context spec, however, long generated names are a very poor choice: these exist so the user can quickly switch between configuration spaces by typing, for example, `kubectl config use-context…
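
    What the requested option might look like (the --context-name flag is hypothetical; it illustrates the proposal, not an existing gcloud flag):

    ```sh
    # Hypothetical: --context-name is the flag being requested here.
    gcloud container clusters get-credentials my-cluster \
      --zone=us-central1-a \
      --context-name=dev   # short, typeable context name instead of a generated one
    ```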

    27 votes · 0 comments
  8. Add Webhooks to pushes to Google Container Registry

    Hi,
    I was wondering if we could get a success webhook, and maybe a failure webhook, for the container registry when we push? That way, to automate rolling updates on Kubernetes, we could use something like CoreOS's Krud or anything else that listens to webhooks to trigger rolling updates. This is particularly valuable if, say, we rolled our own Kubernetes on GCE, which I did.
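
    One possible approximation in the meantime, assuming Container Registry's Cloud Pub/Sub notifications are available for the project: pushes publish to a topic named gcr, which a small consumer can watch to kick off rolling updates:

    ```sh
    # Sketch: consume registry push events via Pub/Sub instead of webhooks.
    # The topic name "gcr" is the registry's default; verify it exists first.
    gcloud pubsub subscriptions create gcr-pushes --topic=gcr
    gcloud pubsub subscriptions pull gcr-pushes --auto-ack
    ```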

    27 votes · 1 comment
  9. Add auto-scaling container clusters

    Container clusters should support auto-scaling, similar to compute clusters.
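
    For reference, a sketch of what enabling this looks like with the cluster autoscaler flags gcloud exposes (cluster name and bounds are illustrative):

    ```sh
    # Sketch: let GKE add/remove nodes between the given bounds based on demand.
    gcloud container clusters create my-cluster \
      --enable-autoscaling --min-nodes=1 --max-nodes=10 \
      --zone=us-central1-a
    ```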

    21 votes · 1 comment
  10. Prevent Kubernetes API Outage During Master Upgrade

    Right now, the Kubernetes API becomes unavailable while a master upgrade is happening. This could become very problematic as infrastructure-level components become Kubernetes-native (i.e. they pull from that API to configure themselves): if one of those components were to fail during the master upgrade and need to be restarted, it would be unable to configure itself. For example, a component that consumes endpoints from the API for service discovery could be in trouble if it were to fail and be restarted during a master restart.

    20 votes · 0 comments
  11. Support changing scopes on GKE cluster without rebuilding

    I would like to be able to change the scopes of a GKE cluster without downtime from the console or command line.

    There doesn't currently appear to be a way to alter the scopes of a GKE cluster without downtime, other than doing a bunch of complex work involving disabling autoscaling, cordoning, draining etc.

    Currently, it is possible to change the scopes of an instance inside a cluster without taking it down. However, if the node is killed, it will be replaced by one that has the main cluster scopes.
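
    The complex workaround alluded to above, sketched with illustrative names (the commands themselves are real):

    ```sh
    # Sketch: migrate workloads to a node pool created with the new scopes.
    gcloud container node-pools create new-scopes-pool \
      --cluster=my-cluster --scopes=storage-rw,logging-write
    # Then, for each node in the old pool:
    kubectl cordon <old-node>
    kubectl drain <old-node> --ignore-daemonsets
    # Finally, delete the old pool:
    gcloud container node-pools delete default-pool --cluster=my-cluster
    ```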

    11 votes · 0 comments
  12. Container Engine should not support only one machine type

    A cluster should support multiple machine types for various purposes (CPU-intensive, memory-intensive, IO-intensive...). I could then use labels to select the machine type that matches the purpose of each pod.
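
    Node pools plus a nodeSelector already express part of this; a sketch pinning a pod to a high-CPU pool (pool name and image are illustrative; the gke-nodepool label is GKE's own):

    ```yaml
    # Sketch: schedule a pod onto a specific node pool via its label.
    apiVersion: v1
    kind: Pod
    metadata:
      name: cpu-heavy-job
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: highcpu-pool   # illustrative pool name
      containers:
      - name: worker
        image: gcr.io/my-project/worker:latest        # illustrative image
    ```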

    10 votes · 1 comment
  13. Allow customisation / reduction of system resource reservations

    Currently GKE is extremely inefficient for small clusters (dev, test, early start-up work, hackathons, etc.). For example, a node with 4GB of RAM only has 3GB marked as "allocatable". In addition, system pods have high resource reservations (e.g. kube-dns takes 260m CPU and ~100MB memory, calico-typha 200m CPU), which no doubt makes a lot of sense in many cases, but not so much for low-load environments. Customisation of these would be great. I've tried editing the resource YAML directly, but the changes get reverted.
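
    For reference, the capacity-versus-allocatable gap can be inspected per node (node name illustrative):

    ```sh
    # Compare a node's total capacity with what the scheduler may actually use.
    kubectl describe node gke-my-cluster-default-pool-abc123 \
      | grep -A 6 -E "^(Capacity|Allocatable)"
    ```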

    9 votes · 1 comment
  14. Container Registry Namespacing

    Currently the Container Registry web UI in the Developer Console only displays images that were pushed if they reside at the top level of the storage tree. The proposal is to allow the web UI to also display images if they are organized in a namespace-like structure.

    That is, a push to {project}/web/{image}, {project}/platform/{image}, or {project}/ops/{image} pushes into the repository properly, but the Developer Console displays it as if there are no images there.
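
    The pushes in question look like this (placeholders kept from the description above):

    ```sh
    # Sketch: a namespaced push succeeds, but the Console UI does not list the image.
    docker tag my-image gcr.io/{project}/web/{image}
    gcloud docker -- push gcr.io/{project}/web/{image}
    ```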

    5 votes · 1 comment
  15. Can we stop Container Engine clusters when not in use and restart them when needed?

    We have created a cluster for development purposes using n1-standard-8 machines. We generally use it in the daytime for development. Is there any way we can stop the cluster during the night or on weekends?
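
    A possible stopgap, assuming losing the running pods off-hours is acceptable: resize the node pool to zero in the evening and back up in the morning:

    ```sh
    # Sketch: node billing stops while the node count is zero; the master remains.
    gcloud container clusters resize my-cluster --size=0 --zone=us-central1-a   # evening
    gcloud container clusters resize my-cluster --size=3 --zone=us-central1-a   # morning
    ```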

    4 votes · 1 comment
  16. This "suggest features" forum should be linked from the Developer Console feedback menu

    This forum should be linked from the Developer Console's feedback menu, as it is in the Compute section.

    4 votes · 0 comments
  17. google-fluentd Logging Agent - Debian 9 (stretch) support

    Please add support for Debian 9 (stretch) to the google-fluentd logging agent.

    At the moment we can't upgrade our k8s pods:
    https://stackoverflow.com/questions/47838777/google-fluentd-error-on-debian-stretch-google-logging-agent

    Thank you

    3 votes · 1 comment
  18. Improve eviction to enable debugging and/or soft eviction thresholds

    Currently my 1.9 cluster evicts pods due to memory pressure and I'm unable to confirm why. Stackdriver Metrics don't seem to capture it (perhaps due to the 60s sampling interval).

    Currently the kubelet is configured for 'hard' eviction:
    --eviction-hard=memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%

    It'd be nice if GKE had a way to debug eviction (e.g. dumping pod memory usage to the logs, writing a memory dump, etc.), or at least allowed soft eviction so that this could be done by some custom tool.
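
    The requested soft eviction would look roughly like this in kubelet flag terms (the flags are real kubelet options; the thresholds are illustrative):

    ```sh
    # Sketch: soft eviction gives pods a grace period before the kubelet acts,
    # leaving a window for debugging or for a custom tool to capture state.
    kubelet \
      --eviction-soft=memory.available<300Mi \
      --eviction-soft-grace-period=memory.available=2m \
      --eviction-max-pod-grace-period=60
    ```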

    3 votes · 0 comments
  19. Automatically reassign the reserved IPs to nodes after the cluster node upgrades

    After each cluster node upgrade, I have to remember to manually reassign the static IPs to the new nodes. It would be great if this happened automatically.
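
    The manual step being automated away looks roughly like this (instance name and address are illustrative):

    ```sh
    # Sketch: swap a node's ephemeral external IP for a reserved static one.
    gcloud compute instances delete-access-config gke-node-1 \
      --access-config-name="external-nat"
    gcloud compute instances add-access-config gke-node-1 \
      --access-config-name="external-nat" --address=203.0.113.10
    ```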

    3 votes · 0 comments
  20. Container engine should accept valid subnet for container address range

    When provisioning Container Engine, entering any valid subnet into the container address range causes the build to immediately fail on a regex failure. It appears that the decorative /14 UI element never makes it into the data, so cluster.cluster_ipv4_cidr fails to include the /14.

    Further, the CLI command suggested by the UI appears not to accept the --container_ipv4_cidr parameter in any form, with or without the /14, and the --container_ipv4_cidr argument itself seems to be invalid.

    The 10/8 subnet collides with the far side of a VPN connection and, as we'd already adopted (fortunately) the upper bounds of the 172 range…
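
    For comparison, the flag that gcloud documents for this is --cluster-ipv4-cidr rather than --container_ipv4_cidr; a sketch using an illustrative range in the 172.16/12 private space mentioned above:

    ```sh
    # Sketch: pass the container address range explicitly at creation time.
    gcloud container clusters create my-cluster --cluster-ipv4-cidr=172.16.0.0/14
    ```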

    2 votes · 0 comments