Thanks for your feedback… please stay tuned!
Thanks, Felicity. I didn't see your comment, but moved it there anyway. I moved an older, less complete idea into this one. Stay tuned, we might have some news here soon, I know the product team has been exploring this due to customer demand.
Thanks for your suggestion, this is definitely something we will consider adding in the future!
Thanks, Gary, we're considering this for a future enhancement.
Have you seen this one?
Announced today at Cloud Next ’17, Cloud SQL for PostgreSQL is now in Beta! Thanks to everyone for your enthusiasm and patience!
Sorry Chuck, that just means that I changed the forum for this feature request within UserVoice (in case you were trying to figure out where it went). The Cloud SQL product team is well aware of the market interest in this, thanks for adding your voice to the chorus! :)
Hang in there!!! The team is definitely aware of how many customers want this. :)
Thanks, I have shifted this request to the Cloud SQL area.
Interesting idea. This might be a bit beyond what would make sense for us to provide as a core platform capability, but it could definitely be done with something like a collectd agent (for example the version Stackdriver uses) and some downstream tech.
Yes, more or less. If you actually want to use it with Stackdriver, you can find out more about installing the agents here. The instructions are quite extensive, but there's a packaged version you can install. Most of the material there is about setting everything up to feed the data out to Stackdriver, from what I can tell. For your purposes, though, you could probably use the collectd fork from Stackdriver, or just regular collectd pointed wherever you want, and then read out that information. I'm not arguing that we or Stackdriver couldn't do this and offer it as a feature, by the way, just giving you some resources in case you want a DIY version. :)
Thanks for your feedback. We have both quick-start and tutorials available (see comments for links).
Hi, a quick-start including SSH is available here:
There are also step-by-step tutorials for doing many things. The "build a todo app" tutorial includes using SSH:
This feature is being tested in a private alpha program. If you would like to be invited to test this feature early, please fill out the form here:
This issue was imported from code.google.com, but it is a duplicate. Merging into the main suggestion.
Note, one possible workaround to this is to use a dynamic DNS service like dyndns. We recognize it would be better if instances got auto-generated DNS names, and we're looking at doing this in the future, but thought I'd mention this as it has worked for many people.
Currently we do not allow outbound traffic on port 25, but third party mail services like Sendgrid are available to use.
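For anyone looking for a concrete starting point, here is a minimal sketch of relaying mail through SendGrid's SMTP endpoint, which avoids the blocked port 25. The addresses are placeholders, you would supply your own SendGrid API key, and port 2525 is the alternate port SendGrid documents for exactly this situation:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    """Assemble a simple plain-text email."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_sendgrid(msg, api_key, host="smtp.sendgrid.net", port=2525):
    """Relay through SendGrid's SMTP endpoint on a non-blocked port."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()                # upgrade to TLS before authenticating
        server.login("apikey", api_key)  # SendGrid uses the literal user "apikey"
        server.send_message(msg)
```

SendGrid's SMTP relay authenticates with the literal username "apikey" and your API key as the password; other third-party relays work similarly on their own documented ports.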
Hi, sorry but UserVoice is for product suggestions and feedback about what we can make better, but is not staffed for technical support responses. Please visit the gce-discussion Google Group, or post on a forum like StackOverflow or StackExchange where our support team can connect with you to help with any issues you may have. Thanks!
I definitely can appreciate your point of view, and we have heard this feedback from a number of customers (definitely more than the 6 votes this item has...). We're looking into a better approach for this that balances the concerns, but don't have anything to announce quite yet. I'll follow up with you directly to discuss the details of your situation a bit more and see if we can get you un-blocked somehow.
Thanks for the feedback in these comments. I think at this point we understand what users are looking to do, and we're investigating changes that would allow this. The evolution of this limitation is due to what Scott described earlier - templates started as a requirement for MIGs, but they have since evolved into a more general "make a VM like this" function of the platform. I hope we can make improvements here in the near future.
Thanks, we’re looking at this.
I'm not sure this would work as you've described it, as a tag is not unique to a single instance. For example, you could label a handful of instances "staging". In that case, there couldn't be a DNS resource that uniquely identifies any single instance. Note, also, non-DNS entities like disks can be tagged.
While tag-based DNS may not be feasible, we are exploring making the instance name itself an automatic DNS entry.
Hello, thanks for your suggestion. I'm sorry to say that we are only able to support this forum in English at this time, as most of the Product Management team that runs UserVoice is located in the US. Please see comments for more details.
If you are able to post some additional comments on how you believe the proposed feature should work, we would be happy to consider this. We're not completely sure what you're proposing based on the title alone. Thanks!
Translated title to English via Google Translate.
We expect to allow finer grained controls in the future.
Thanks, we're definitely working on this.
A feature for sharing images between projects is now available in beta. Please refer to https://cloud.google.com/compute/docs/images/sharing-images-across-projects for more information.
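As a quick illustration, the flow from those docs boils down to two gcloud steps: grant the image-user role on the project that owns the image, then reference that project when creating a VM. Project, image, and account names below are placeholders:

```shell
# In the project that owns the image: let a user from another project use it.
gcloud projects add-iam-policy-binding image-project \
    --member='user:dev@example.com' \
    --role='roles/compute.imageUser'

# In the consuming project: boot a VM from the shared image.
gcloud compute instances create my-vm \
    --image=my-custom-image \
    --image-project=image-project
```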
Yes, it is possible this could change in the future. It's a restriction that we're looking at very closely. The allAuthenticatedUsers role is obviously very broad, and we want to make sure we have the mechanisms in place to guide users to use it in ways that are safe and conscious choices (not just related to image sharing, but broadly speaking). In this context for example sharing *all* your images in the project with all users is a little bit risky, and we'd prefer to make it so you can choose that level of sharing on a per-image basis. Stay tuned for updates. :)
Thanks, we’re considering this use case.
There is a workaround to do this, but it is *not* what anyone would call "easy." We'd like to make it easier, and the feature request is on our backlog. As with any software development team, we unfortunately do not have infinite resources, and do our best to handle customer requests in the order that gets the most requested items done first (you can see by vote count here on UserVoice what some of those are, and there are many others too, of course). I hope we will be able to improve the export experience in the future.
We are continuing to consider this request as we look at a number of UI updates during our engineering planning work. Since this request was made, Stackdriver has launched with a more advanced monitoring solution, and some customers may find it meets the need here. But we also recognize the convenience of lightweight, at-a-glance graphs directly inside the web console, so we're considering whether we can add this. Thanks!
Thanks for your feedback, we’ll consider an HDD-based instance storage in future product planning.
Please refer to the comments on this entry for more details about how our Local SSD product works and what it costs. I believe you’ll find it superior to the similar offering from AWS (i2), but we don’t currently have an exact equivalent that is HDD-based (d2).
Thanks Roman, this additional feedback is very helpful to us as we consider how to move forward with our product offerings. I'm sharing with our colleagues on the storage team as well, and we'll definitely take your thoughts into consideration in our planning. As far as limits go, we have increased them in the past, so it's possible we would do so in the future. Something like a d2 would be a little bit different, as it involves hardware, but also something we'll keep in mind.
A clarification: our "expensive" internal SSD based instance storage, what we call Local SSD, is not bounded by 240MB/s. The limit you mention is for SSD-based "PD" (durable block storage, similar to EBS). Please refer to  for more information.
You're correct that we do not currently offer a locally attached HDD product for these types of workloads. In our experience, most customers are moving towards flash-based ephemeral storage for these workloads, but we'll certainly take your feedback into consideration when planning future hardware offerings.
Finally, I would encourage you to check into the "expensive" statement. An i2.4xlarge with 16 vCPU, 53GB of RAM, and 4x800GB SSD is priced at $3.41 per hour at AWS, while a similar n1-standard-16 with 60GB RAM and 8x375GB SSD (about the same amount...) is only $1.70 per hour on GCP, and that is before applying up to a 30% automatic Sustained Use Discount if you run it most of the month. At this lower price, note that SSD on GCP also delivers quite a bit higher max IOPS than AWS. Our pricing calculator can help lay out the costs for you. It is of course true that SSD costs more than HDD in general, but that is a universal truth.
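To make the comparison concrete, here is the arithmetic from the paragraph above as a short Python snippet (prices are the illustrative list prices quoted here, not a live feed):

```python
# Hourly list prices quoted above (illustrative; check the pricing calculator).
aws_i2_4xlarge = 3.41       # 16 vCPU, 53 GB RAM, 4x800 GB local SSD
gcp_n1_standard_16 = 1.70   # 16 vCPU, 60 GB RAM, 8x375 GB local SSD

# Sustained Use Discount: up to 30% off when the VM runs most of the month.
gcp_with_sud = gcp_n1_standard_16 * (1 - 0.30)

print(f"GCP list price:    ${gcp_n1_standard_16:.2f}/hr")
print(f"GCP with full SUD: ${gcp_with_sud:.2f}/hr")   # $1.19/hr
print(f"AWS i2.4xlarge:    ${aws_i2_4xlarge:.2f}/hr")
```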
Is this consistently reproducible, or does it only happen on one specific instance you have?
Have you had a chance to see if changing the VM size to something larger than an f1-micro makes this problem go away? We believe this is due to insufficient memory.
I'll ask our engineers if we're aware of this. However, please note that the documented minimum RAM configuration for openSUSE is 1GB, which means that an f1-micro is not a suitable choice for running this OS, and we've seen problems in this configuration. It would be worth trying a g1-small to see whether the problem disappears in that case.
More info on openSUSE requirements: https://en.opensuse.org/Hardware_requirements
Ladies and gentlemen, IPv6 Load Balancer termination is now available in Alpha. Visit our docs page to sign up to participate in the Alpha. We know that some folks will want IPv6 all the way to the guest OS in the VM, and we know some of you will be itching for Beta and GA, but this is an important step forward, and we hope it gives you some long-awaited relief that we are listening and understand this is important for you all. Happy routing! :)
Yes, we do believe that. And https://www.google.com/intl/en/ipv6/statistics.html shows that fully 14% of the world is "there yet." We are continuing to push forward with implementing IPv6.
Hi Ryan, thanks for your comment, we're definitely aware that the industry is finally moving towards IPv6, and plan on joining that club in GCP as soon as we can. Our team is also well aware that this feature has now regained #1 status on this forum (because we implemented the other one, ha.)
Implementing it to the LB frontend first is indeed something we're looking at, and it's helpful to know that this would address your use case, at least initially.
Thanks, we’ll consider this.
I haven't investigated yet, but it may be possible that this could be done with something like Stackdriver.
Thanks, we are considering this request, it does come up from time to time. In the meantime, https://cloud.google.com/solutions/filers-on-compute-engine provides several options that may be useful for setting up a file server on GCE, depending on your needs.
We appreciate the desire in this request for a scalable filer solution. We are evaluating that as a possible future product. In the meantime, I thought I would mention that several of the use cases here are pretty simple as far as data sharing goes, and could be accomplished with something like the GCS Fuse tool. Alternately, as my teammate Scott mentioned earlier, our partner Avere provides a higher-scale, high-performance software package for running a filer in GCP.
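For the simpler sharing cases, mounting a Cloud Storage bucket with GCS Fuse can look roughly like this (the bucket name and mount point are placeholders; see the gcsfuse project for install instructions on your distro):

```shell
# Mount a Cloud Storage bucket as a local filesystem with gcsfuse.
mkdir -p /mnt/shared-data
gcsfuse my-bucket /mnt/shared-data

# ...read and write files under /mnt/shared-data as usual...

# Unmount when finished.
fusermount -u /mnt/shared-data
```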
Thanks for your suggestion. While we evaluate this for possible future offerings, we have many customers that are currently using Avere's products to meet these types of requirements (quite happily).