Thanks, we’ll look at the ability to change a VM's network. Please feel free to file a separate feature request for multiple NICs if you'd like (that may prove more popular than this one).
Note: multiple network interfaces is tracked in a separate item. I wanted to note that the feature is now in Beta. Please refer to the GCE Networking docs to get started.
Please provide more detail, see comments.
Hi - "allocate-address" is an AWS command. I think I know what you mean, though. Do you mean "gcloud compute addresses create" and its equivalent API call? Are you asking that the command simply report back the IP address value that was created?
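If that's the case, a sketch of what's possible today (the address name and region below are illustrative):

```shell
# Reserve a static external IP in a region
gcloud compute addresses create my-address --region us-central1

# Read back just the IP value that was allocated
gcloud compute addresses describe my-address \
    --region us-central1 --format='value(address)'
```

The `--format='value(address)'` flag is handy for scripting, since it prints the bare IP with no other output.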
Thanks, please see the comments for further info.
Typically, modern architectures use one of the many packages available that provide service discovery, which is different from DNS. etcd, ZooKeeper, and Consul are all examples you might investigate.
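To give a feel for what that looks like, here's a sketch using Consul's agent HTTP API (it assumes an agent is already running on localhost:8500; the service name, port, and health-check URL are all illustrative):

```shell
# Register a service named "web" with the local Consul agent,
# including a simple HTTP health check
curl -X PUT http://localhost:8500/v1/agent/service/register \
    -d '{"Name": "web", "Port": 8080,
         "Check": {"HTTP": "http://localhost:8080/health", "Interval": "10s"}}'

# Other processes can then discover healthy instances of it:
curl http://localhost:8500/v1/catalog/service/web
```

Consul also exposes registered services over a DNS interface, which gets you much of what instance-level DNS would provide, plus health checking.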
Thanks for the suggestion! This is one that we hear pretty infrequently, usually from users of EC2-Classic networking who have relied on this behavior as a form of service discovery (It is not supported in their newer VPC networking, to my knowledge). We hear this request about once every 6-12 months, and usually from one customer at a time, hence it has not been very high on our backlog. You're the first to mention it in the last 12 months that I'm aware of. :/
I'm not saying we won't consider this, as it has always been on our backlog, but other capabilities like private DNS tend to rank higher. So, if you have friends that want it, have them vote here. :)
Announced today at Cloud Next ’17, PostgreSQL support for Cloud SQL is now in Beta! Thanks to everyone for your enthusiasm and patience!
Response from the Cloud SQL PM team:
Thanks for your feedback! A GA date has not yet been set for Postgres support. Large features, such as replication and automatic failover (HA) will be added to the beta period later this year.
Thanks, glad you love it! I've shared your question with the PM for Cloud SQL.
Sorry Chuck, that just means that I changed the forum for this feature request within UserVoice (in case you were trying to figure out where it went). The CloudSQL product team is well aware of the market interest in this, thanks for adding your voice to the chorus! :)
Hang in there!!! The team is definitely aware of how many customers want this. :)
Thanks, I have shifted this request to the Cloud SQL area.
Good feedback, Mani. In this case, I'm inclined to keep them separate, because it's likely that the cron-like task queue piece could be available on a different schedule than the full slate of GAE-like services. I don't have dates to announce, but this is on our roadmap.
This is in Beta now.
Thanks, Mani. Sorry for the staleness - our UserVoice forums are a pilot project, and not all product teams are onboarded to actively moderate their forums yet. :)
Interesting idea. This might be a bit beyond what would make sense for us to provide as a core platform capability, but it could definitely be done with something like a collectd agent (for example the version Stackdriver uses) and some downstream tech.
Yes, more or less. If you actually want to use it with Stackdriver, you can find out more about installing the agents here. The instructions are quite extensive, but there's a packaged version you can install. Most of the material there is about setting up everything to feed the data out to Stackdriver, from what I can tell. For your purposes, though, you could probably use the collectd fork from Stackdriver, or just regular collectd pointed wherever you want, then read out that information. I'm not arguing that we or Stackdriver couldn't do this and offer it as a feature, by the way, just giving you some resources in case you want to have a DIY version. :)
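For the DIY route, a minimal collectd.conf fragment using the stock network plugin would look something like this (the destination hostname is illustrative; point it at whatever collector you run):

```
# /etc/collectd/collectd.conf (fragment) -- forward metrics to your own collector
LoadPlugin cpu
LoadPlugin memory
LoadPlugin network

<Plugin network>
  # Send to wherever you want to read the data out
  Server "metrics.example.internal" "25826"
</Plugin>
```

25826 is collectd's default network port; anything that speaks the collectd network protocol (another collectd instance, InfluxDB, etc.) can receive it.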
Thanks for your feedback. We have both quick-start and tutorials available (see comments for links).
Hi, A quick-start including SSH is available here:
There are also step-by-step tutorials for doing many things. The "build a todo app" tutorial includes using SSH:
This feature is being tested in a private alpha program. If you would like to be invited to test this feature early, please fill out the form here:
This issue was imported from code.google.com, but it is a duplicate. Merging into the main suggestion.
Note, one possible workaround to this is to use a dynamic DNS service like dyndns. We recognize it would be better if instances got auto-generated DNS names, and we're looking at doing this in the future, but thought I'd mention this as it has worked for many people.
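As a sketch of that workaround: a VM can look up its own external IP from the GCE metadata server and push it to a dynamic DNS provider. The metadata URL below is real; the update endpoint and hostname are illustrative placeholders, so check your provider's docs for the actual format:

```shell
# Fetch this instance's external IP from the GCE metadata server
IP=$(curl -s -H 'Metadata-Flavor: Google' \
  'http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip')

# Push it to your dynamic DNS provider (URL and hostname are illustrative --
# your provider's update endpoint will differ)
curl -s "https://updates.dyndns.example/update?hostname=myvm.example.com&myip=${IP}"
```

Run this from a startup script or cron job and the DNS name will track the instance even across recreation.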
Thanks, we’re looking at this.
I'm not sure this would work as you've described it, as a tag is not unique to a single instance. For example, you could label a handful of instances "staging". In that case, there couldn't be a DNS resource that uniquely identifies any single instance. Note, also, non-DNS entities like disks can be tagged.
While this may not be feasible, we are exploring making the instance name itself an automatic DNS entry.
We expect to allow finer grained controls in the future.
Thanks, we're definitely working on this.
A feature for sharing images between projects is now available in beta. Please refer to https://cloud.google.com/compute/docs/images/sharing-images-across-projects for more information.
Yes, it is possible this could change in the future. It's a restriction that we're looking at very closely. The allAuthenticatedUsers role is obviously very broad, and we want to make sure we have the mechanisms in place to guide users to use it in ways that are safe and conscious choices (not just related to image sharing, but broadly speaking). In this context for example sharing *all* your images in the project with all users is a little bit risky, and we'd prefer to make it so you can choose that level of sharing on a per-image basis. Stay tuned for updates. :)
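For reference, with the beta feature, sharing works by granting IAM on the project that hosts the images. A sketch (project, user, image, and instance names are all illustrative):

```shell
# Grant the compute.imageUser role at the project level -- note the caveat
# above: this shares *all* images in PROJECT_A with that member.
gcloud projects add-iam-policy-binding PROJECT_A \
    --member='user:alice@example.com' \
    --role='roles/compute.imageUser'

# The grantee can then boot instances in their own project from those images:
gcloud compute instances create my-vm \
    --image my-shared-image --image-project PROJECT_A
```

This is exactly the all-or-nothing granularity discussed above, which is why per-image sharing is something we'd like to enable.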
Thanks, we’re considering this use case.
There is a workaround to do this, but it is *not* what anyone would call "easy." We'd like to make it easier, and the feature request is on our backlog. As with any software development team, we unfortunately do not have infinite resources, and we do our best to handle customer requests in the order that gets the most-requested items done first (you can see by vote count here on UserVoice what some of those are, and there are many others too, of course). I hope we will be able to improve the export experience in the future.
We are continuing to consider this request as we look at a number of UI updates during our engineering planning work. Since this request was made, Stackdriver has launched with a more advanced monitoring solution; some customers may find it meets the need here. But we also recognize the convenience of lightweight, at-a-glance graphs directly inside the web console, so we're considering whether we can add this. Thanks!
Thanks for your feedback, we’ll consider an HDD-based instance storage in future product planning.
Please refer to the comments on this entry for more details about how our Local SSD product works and what it costs. I believe you’ll find it superior to the similar offering from AWS (i2), but we don’t currently have an exact equivalent that is HDD-based (d2).
Thanks Roman, this additional feedback is very helpful to us as we consider how to move forward with our product offerings. I'm sharing with our colleagues on the storage team as well, and we'll definitely take your thoughts into consideration in our planning. As far as limits go, we have increased them in the past, so it's possible we would do so in the future. Something like a d2 would be a little bit different, as it involves hardware, but also something we'll keep in mind.
A clarification: our "expensive" internal SSD based instance storage, what we call Local SSD, is not bounded by 240MB/s. The limit you mention is for SSD-based "PD" (durable block storage, similar to EBS). Please refer to  for more information.
You're correct that we do not currently offer a locally attached HDD product for these types of workloads. To my knowledge, most customers are moving toward flash-based ephemeral storage for these workloads, but we'll certainly take your feedback into consideration when planning future hardware offerings.
Finally, I would encourage you to check into the "expensive" statement. An i2.4xlarge with 16 vCPU, 53GB of RAM, and 4x800GB SSD is priced at $3.41 per hour at AWS; meanwhile, a similar n1-standard-16 with 60GB RAM and 8x375GB SSD (about the same total capacity) is only $1.70 per hour on GCP, and that is without applying the up to 30% automatic Sustained Use Discount if you run it most of the month. For this lower price, note that SSD on GCP also offers considerably higher maximum IOPS than AWS. Our pricing calculator can help lay out the costs for you. It is of course true that SSD costs more than HDD in general, but this is a universal truth.
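To make the comparison concrete, here's the back-of-envelope math using the hourly prices quoted above and a 730-hour month (the 30% figure is the maximum Sustained Use Discount, applied here for a full month of runtime):

```shell
#!/bin/sh
# Monthly cost comparison from the hourly prices quoted in this thread
AWS_HOURLY=3.41        # i2.4xlarge on-demand
GCP_HOURLY=1.70        # n1-standard-16 + 8x375GB Local SSD
HOURS_PER_MONTH=730

awk -v a="$AWS_HOURLY" -v g="$GCP_HOURLY" -v h="$HOURS_PER_MONTH" 'BEGIN {
  printf "AWS monthly: $%.2f\n", a * h
  printf "GCP monthly: $%.2f\n", g * h
  # Sustained Use Discount: up to 30% off for a full month of runtime
  printf "GCP monthly w/ 30%% SUD: $%.2f\n", g * h * 0.70
}'
```

With these inputs it prints $2489.30 for AWS, $1241.00 for GCP, and $868.70 after the full discount, i.e. roughly a third of the AWS price.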
Is this consistently reproducible, or does it only happen on one specific instance you have?
Have you had a chance to see if changing the VM size to something larger than an f1-micro makes this problem go away? We believe this is due to insufficient memory.
I'll ask our engineers if we're aware of this. However, please note that the documented minimum RAM configuration for openSUSE is 1GB, which means that an f1-micro is not a suitable choice for running this OS, and we've seen problems in this configuration. It would be worth trying a g1-small to see if the problem disappears in that case.
More info on openSUSE requirements: https://en.opensuse.org/Hardware_requirements
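If you want to try that, resizing is a three-step operation since the machine type can only be changed while the VM is stopped (instance name and zone below are illustrative):

```shell
# Move a VM off f1-micro: stop, change the machine type, restart
gcloud compute instances stop my-instance --zone us-central1-a
gcloud compute instances set-machine-type my-instance \
    --zone us-central1-a --machine-type g1-small
gcloud compute instances start my-instance --zone us-central1-a
```

A g1-small has 1.7GB of RAM, which clears the 1GB documented minimum.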
UPDATE: IPv6 load balancer endpoints are now GA. However, we continue to work on instance-level v6 addressing. Since I believe that at least some votes on this issue are for v6 all the way to the instance, I am leaving this issue open.
Yes, we do believe that. And https://www.google.com/intl/en/ipv6/statistics.html shows that fully 14% of the world is "there yet." We are continuing to push forward with implementing IPv6.
Hi Ryan, thanks for your comment, we're definitely aware that the industry is finally moving towards IPv6, and plan on joining that club in GCP as soon as we can. Our team is also well aware that this feature has now regained #1 status on this forum (because we implemented the other one, ha.)
Implementing it to the LB frontend first is indeed something we're looking at, and it's helpful to know that this would address your use case, at least initially.
Thanks, we’ll consider this.
I haven't investigated yet, but it may be possible to do this with something like Stackdriver.
Hello all, I’m happy to announce that you can now change the service account or access scopes on a stopped VM. This feature is available to all users via a beta command, as documented at https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes
Thanks for your patience while we completed deploying this feature.
Note, we are still planning to add the ability to change scopes on a running VM in a future update (it’s at the very top of our list, we know it is a highly requested feature).
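For anyone looking for the command shape, here's a sketch of the flow on a stopped VM (the instance name, zone, service account, and scopes are illustrative; see the documentation linked above for the full set of flags):

```shell
# Change the service account and access scopes on a stopped VM
gcloud compute instances stop my-instance --zone us-central1-a

gcloud beta compute instances set-service-account my-instance \
    --zone us-central1-a \
    --service-account my-sa@my-project.iam.gserviceaccount.com \
    --scopes cloud-platform

gcloud compute instances start my-instance --zone us-central1-a
```

The stop/start steps are what we'd like to eliminate with the planned running-VM support mentioned above.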
I made the latest update 11 days ago. Sorry if "soon" was interpreted as meaning "hours", that wasn't the idea. Unfortunately our deployment schedules are very fluid, so we can't give an exact date when this new functionality will be available, but we'll post here when it is.
This feature is at the top of our stack due to the strong demand represented in feedback here and elsewhere. It is currently in limited testing, and will be deployed more widely in an upcoming release. As our deployment schedule is very fluid, I unfortunately can't offer an exact date. Please hang in there, just a bit longer.
I understand that the quota limits can be frustrating. Our free trial model and the boundaries it provides are somewhat different from AWS's model. If you would like to "upgrade" your account (which doesn't initially cost anything) and request a quota increase, support can review your request. The request form is accessible from the page inside the Console that shows your current quotas.