Container Engine

  1. Support notifications for failed Jobs

    A Kubernetes Job might fail. It would be useful to add a feature that sends a notification when a Job fails.
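    Until such a feature exists, one possible workaround (a sketch; the Job name and webhook URL are placeholders, and `kubectl wait` requires a reasonably recent kubectl) is to block on the Job's Failed condition and fire a notification yourself:

```shell
# Block until the Job reports the Failed condition, then notify.
# "my-job" and the webhook URL are placeholders.
kubectl wait --for=condition=Failed job/my-job --timeout=24h \
  && curl -X POST -d '{"text":"Job my-job failed"}' https://example.com/notify-webhook
```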

    3 votes  ·  0 comments
  2. Allow customisation / reduction of system resource reservations

    Currently GKE is extremely inefficient for small clusters (dev, test, early start-up work, hackathons, etc.). For example, a node with 4 GB of RAM has only 3 GB marked as "allocatable". In addition, system pods have high resource reservations (e.g. kube-dns takes 260 mCPU and ~100 MB of memory, calico-typha 200 mCPU), which no doubt makes sense in many cases, but not so much for low-load environments. Being able to customise these would be great. I've tried editing the resource YAML directly, but the changes get reverted.
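    For anyone wanting to measure this overhead on their own cluster, the gap between capacity and allocatable, and the system pods' requests, can be inspected read-only:

```shell
# Compare each node's raw capacity with what the scheduler may actually use.
kubectl get nodes -o custom-columns='NAME:.metadata.name,CAP_MEM:.status.capacity.memory,ALLOC_MEM:.status.allocatable.memory'

# List the resource requests of the system pods in kube-system.
kubectl -n kube-system describe pods | grep -A 3 Requests
```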

    12 votes  ·  1 comment
  3. Improve eviction to enable debugging and/or soft eviction thresholds

    Currently my 1.9 cluster evicts pods due to memory pressure and I'm unable to confirm why. Stackdriver Metrics don't seem to capture it (perhaps due to the 60s sampling interval).

    Currently the kubelet is configured for 'hard' eviction:
    --eviction-hard=memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%

    It'd be nice if GKE had a way to debug eviction, e.g. dumping pod memory usage to the logs, writing a memory dump, etc., or at least allowed soft eviction so that this could be done by a custom tool.
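    For reference, the kubelet itself does support soft thresholds with grace periods; a self-managed node could be configured along these lines (a sketch with illustrative thresholds, not currently settable on GKE):

```shell
# Soft eviction sketch: pods get a 60s grace period once memory.available
# drops below 300Mi, while the hard threshold remains as a backstop.
kubelet \
  --eviction-soft=memory.available<300Mi \
  --eviction-soft-grace-period=memory.available=60s \
  --eviction-max-pod-grace-period=120 \
  --eviction-hard=memory.available<100Mi,nodefs.available<10%
```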

    4 votes  ·  0 comments
  4. Support changing scopes on GKE cluster without rebuilding

    I would like to be able to change the scopes of a GKE cluster without downtime from the console or command line.

    There doesn't currently appear to be a way to alter the scopes of a GKE cluster without downtime, other than a bunch of complex work involving disabling autoscaling, cordoning, draining, etc.

    Currently, it is possible to change the scopes of an instance inside a cluster without taking it down. However, if the node is killed, it will be replaced by one that has the main cluster scopes.
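    The "complex work" mentioned above can be sketched as follows (cluster, pool, and scope names are placeholders): create a second node pool with the desired scopes, drain the old pool, then delete it.

```shell
# Create a replacement pool with the desired scopes.
gcloud container node-pools create new-pool \
  --cluster=my-cluster \
  --scopes=storage-rw,logging-write,monitoring

# Move workloads off the old pool, then remove it.
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o name); do
  kubectl drain "$node" --ignore-daemonsets --delete-local-data
done
gcloud container node-pools delete default-pool --cluster=my-cluster
```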

    11 votes  ·  0 comments
  5. Make GCEPersistentDisk or another PersistentVolume type support ReadWriteMany

    Coming from Azure there was AzureFile which supports ReadWriteMany. It seems ridiculous that I should have to roll my own NFS server to get this capability on GKE.
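    For context, the access mode in question is requested in the PersistentVolumeClaim; on GKE a claim like the sketch below will not bind against a GCE PD-backed StorageClass, since GCE PD only supports ReadWriteOnce:

```shell
# A PVC requesting shared read-write access from many nodes.
# GCE PD cannot satisfy ReadWriteMany; it needs a file-based backend (e.g. NFS).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
```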

    51 votes  ·  1 comment
  6. Support IPv6

    Support IPv6 in a way that we can have IPv6 addresses for external services in Kubernetes. Maybe all the way down into the container, but external facing would be something at least. It's 2017 :) Thanks!

    59 votes  ·  2 comments
  7. Offer a "Storage as a Service" (STaaS) solution for Kubernetes and GKE

    Right now everyone has to build their own storage service for GKE and Kubernetes, which is not only painful but goes against the cloud principle where everything is served as a service.

    It would be awesome if devops guys wouldn't have to build their own PV solutions for Kubernetes PVCs.

    88 votes  ·  6 comments
  8. Firewall rules for Kubernetes Master

    It would be interesting, and would increase the safety of our cluster, if we could add firewall rules to the master for dashboard and API requests.

    Currently, firewall rules only affect the compute nodes running containers, not the master.
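    A partial mitigation, assuming the feature is available in your GKE version, is restricting master API access to known source ranges via master authorized networks (the CIDR below is a placeholder):

```shell
# Restrict master API access to a single trusted CIDR.
gcloud container clusters update my-cluster \
  --enable-master-authorized-networks \
  --master-authorized-networks=203.0.113.0/24
```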

    41 votes  ·  1 comment
  9. Add Context Name Flag For `gcloud container clusters get-credentials`

    Right now, when you run gcloud container clusters get-credentials, gcloud will produce a user spec, a cluster spec, and a context spec in ~/.kube/config, all with long generated names. For the user spec and the cluster spec, that's great: the kubernetes user will mostly be reading these but not typing them, so the important criteria are uniqueness and that the name makes clear where the thing came from. For the context spec, however, long generated names are a very poor choice: these exist so the user can quickly switch between configuration spaces by typing, for example, kubectl
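    As a workaround, kubectl can rename the generated context to something typeable (a sketch; project, zone, cluster, and the short name are placeholders):

```shell
# Fetch credentials, then give the auto-generated context a short name.
gcloud container clusters get-credentials my-cluster --zone=us-central1-a
kubectl config rename-context \
  gke_my-project_us-central1-a_my-cluster dev
kubectl config use-context dev
```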

    27 votes  ·  0 comments
  10. Add a delay flag for upgrading container clusters to support smaller clusters.

    When upgrading a smaller container cluster (3–9 nodes), there should be a --delay=timeinseconds option for the 'gcloud container clusters upgrade' command. In smaller clusters, destroying and recreating nodes can take down some of my services if the new nodes don't start pods as fast as the old machines are destroyed, mostly due to docker pulls, which take some time to complete; for example, the last upgrade nearly took my database cluster down. A simple flag with a delay between node deletion and creation would solve this issue since I can specify a time…
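    In the meantime, a PodDisruptionBudget (a sketch, assuming the workload carries an app: db label) can at least prevent a node drain from taking down too many replicas at once:

```shell
# Keep at least 2 database replicas available during node drains.
cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: db-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: db
EOF
```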

    46 votes  ·  0 comments
  11. Scale the number of machines or the machine type in a cluster up or down

    The machine type and number of machines should be editable within a cluster, without deleting and recreating the cluster.

    If a service is given a static IP address, that static IP should be retained after making changes to the hardware in that cluster.
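    For the node count at least, resizing in place is already possible (a sketch; cluster, pool, and sizes are placeholders). Changing the machine type still means adding a differently-sized node pool and draining the old one:

```shell
# Grow or shrink the default node pool in place.
gcloud container clusters resize my-cluster --size=5 --zone=us-central1-a

# A machine-type change goes via a new node pool.
gcloud container node-pools create big-pool \
  --cluster=my-cluster --machine-type=n1-standard-8
```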

    40 votes  ·  2 comments
  12. Container Engine should support more than one machine type per cluster

    A cluster should be able to contain multiple machine types for various purposes (CPU-intensive, memory-heavy, I/O-heavy, ...). I could then use labels to schedule each pod onto the machine type that matches its purpose.
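    This is roughly what node pools plus nodeSelector provide (a sketch; cluster, pool, and image names are placeholders): each pool has its own machine type, and GKE labels its nodes with the pool name, which pods can select on.

```shell
# A memory-heavy pool alongside the default pool.
gcloud container node-pools create highmem-pool \
  --cluster=my-cluster --machine-type=n1-highmem-4

# Pin a pod to that pool via the node-pool label GKE applies.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: memory-hungry
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: highmem-pool
  containers:
    - name: app
      image: gcr.io/my-project/app:latest
EOF
```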

    11 votes  ·  1 comment
  13. Add Webhooks to pushes to Google Container Registry

    Hi,
    I was wondering if we could get a success (and maybe a failure) webhook for the container registry when we push? That way, to automate rolling updates on Kubernetes, we can use something like CoreOS's Krud or something else that listens to webhooks to trigger rolling updates. This is particularly valuable if, say, we rolled our own Kubernetes on GCE, which I did.
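    A related mechanism that may cover part of this: Container Registry can publish push events to a Cloud Pub/Sub topic named gcr, which a deployment tool can subscribe to instead of a webhook (a sketch; the project and subscription names are placeholders):

```shell
# Subscribe to image-push notifications from Container Registry.
gcloud pubsub subscriptions create gcr-pushes \
  --topic=projects/my-project/topics/gcr

# Pull events; each message describes the pushed tag/digest.
gcloud pubsub subscriptions pull gcr-pushes --auto-ack
```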

    27 votes  ·  1 comment
  14. Add auto-scaling container clusters

    Container clusters should support auto scaling similar to compute clusters.
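    Cluster autoscaling has since been exposed on GKE; enabling it looks roughly like this (cluster, pool, and limits are placeholders):

```shell
# Let the default pool scale between 1 and 5 nodes based on pending pods.
gcloud container clusters update my-cluster \
  --enable-autoscaling --min-nodes=1 --max-nodes=5 \
  --node-pool=default-pool
```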

    24 votes  ·  1 comment
  15. Monitor the number of nodes in a node pool via Stackdriver

    There does not seem to be a Stackdriver metric for how many nodes a node pool has.
    With autoscaling enabled, this would be an interesting metric to show in dashboards.

    1 vote  ·  0 comments
  16. Automatically reassign the reserved IPs to nodes after the cluster node upgrades

    After each cluster node upgrade, I need to remember to manually reassign the static IPs to the new nodes. It would be great if this happened automatically.
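    Until that exists, the manual step being described can at least be scripted (a sketch; the node name, access-config name, and address are placeholders):

```shell
# Swap a node's ephemeral external IP for a reserved static address.
gcloud compute instances delete-access-config my-node \
  --access-config-name="external-nat"
gcloud compute instances add-access-config my-node \
  --access-config-name="external-nat" --address=203.0.113.10
```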

    3 votes  ·  0 comments
  17. Hybrid Public/Private Clusters

    Given the (now-Beta) support for private clusters, it would be great to be able to support hybrid cluster environments, where some node pools only allocate a private IP, and other node pools can allocate a public IP and be accessible directly.

    So far as I can tell, this is sort of possible today, by editing the instances that GKE creates when managing a node pool. I don't see any reason why simply adding a public IP to an instance should break anything (indeed it doesn't seem to on manual changes), so hopefully this is just as simple as exposing the…

    1 vote  ·  0 comments
  18. google-fluentd Logging Agent - Debian 9 (stretch) support

    Please add support to Debian 9 (stretch) for google-fluentd logging agent.

    At the moment we can't upgrade our k8s pods:
    https://stackoverflow.com/questions/47838777/google-fluentd-error-on-debian-stretch-google-logging-agent

    Thank you

    3 votes  ·  1 comment
  19. Create a way to update scopes on existing container node-pools.

    There is currently no good way to update scopes. There will be times when scopes were originally set incorrectly, or when needs change. Addressing this would be useful.

    1 vote  ·  0 comments
  20. Can we stop Container Engine clusters when not in use and restart them when needed?

    We have created a cluster for development purposes using n1-standard-8 machines. We generally use it during the day for development. Is there any way to stop the cluster at night or on weekends?
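    There is no stop button, but resizing the node pool to zero has the same effect on node billing (a sketch; cluster name, zone, and sizes are placeholders — the master keeps running, and pods are rescheduled when nodes return):

```shell
# "Stop" the cluster overnight by removing all nodes...
gcloud container clusters resize my-cluster --size=0 --zone=us-central1-a

# ...and "start" it again in the morning.
gcloud container clusters resize my-cluster --size=8 --zone=us-central1-a
```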

    4 votes  ·  1 comment