It would be amazing to be able to directly see memory and CPU consumption in addition to the worker graph. This would make it much easier to correlate and debug different stages, and would also give some insight into which machine types perform best. It is possible to do this now by creating metrics in Stackdriver, but it's very involved when a super simple graph could do the trick, just like in Kubernetes / Google Container Engine. 31 votes
Python Dataflow pipelines fail in the parse_table_reference function when you specify a BigQuery table name with a partition decorator for loading. This is a very important feature if you want to leverage BigQuery table partitioning. 28 votes
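In the meantime, one workaround sketch (the helper name here is hypothetical; the `table$YYYYMMDD` decorator syntax is BigQuery's): strip the partition decorator off the spec before it reaches the parsing code, and re-apply it when constructing the load destination.

```python
import re

def split_partition_decorator(table_spec):
    """Split an optional '$YYYYMMDD' partition decorator off a BigQuery
    table spec such as 'project:dataset.table$20161001'.

    Returns (bare_table_spec, decorator) where decorator is None when
    no partition decorator is present.
    """
    match = re.match(r'^(.+?)\$(\d{8})$', table_spec)
    if match:
        return match.group(1), match.group(2)
    return table_spec, None
```

The bare spec can then be parsed normally, and the decorator re-appended to the resolved table name when issuing the load.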
It would be good to be able to use our own container cluster for running Dataflow workers, since Dataflow already uses Kubernetes for deploying workers. This could even take the user-supplied cluster's current workload into consideration and balance workers between the user-provided cluster and the Dataflow cluster. 14 votes
It would be good to see the cost of the job in the job view. I even thought of writing a Chrome extension for this, since it's pretty trivial given the data already exposed: vCPU-seconds, RAM MB-seconds, PD MB-seconds, etc. 10 votes
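As a sketch of how trivial the calculation is once you have those metrics for a job (the function and its rate parameters are hypothetical; real per-unit rates should come from the published Dataflow price list):

```python
def estimate_job_cost(vcpu_sec, ram_mb_sec, pd_mb_sec,
                      vcpu_rate, ram_mb_rate, pd_mb_rate):
    """Estimate a job's cost as a linear combination of the resource
    metrics the job view already reports. Each *_rate argument is the
    price per unit-second for that resource.
    """
    return (vcpu_sec * vcpu_rate
            + ram_mb_sec * ram_mb_rate
            + pd_mb_sec * pd_mb_rate)
```

A Chrome extension (or a small script against the Dataflow API) would only need to scrape these three counters and multiply.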
Ability to show total worker time, max number of workers, and zone information on the overview page. This should be customizable, similar to what we see on the App Engine versions page. 1 vote
Should be able to assign labels to Dataflow jobs and filter by labels on the overview page. 24 votes
It would be nice if Go were natively supported. 6 votes
Would like a way to create a Read transform that can be scheduled to upload an FTP payload to Google Cloud Storage for further processing. 4 votes
Please update the docs to describe how machine type affects jobs.
If you have a serial pipeline and don't do any native threading in your DoFn, is an n1-standard-8 going to be any faster than an n1-standard-1?
If you have parallel stages and set a max of 50 workers, will you get work done faster on an n1-standard-8 than on an n1-standard-1? I.e., will it use 400 cores for workers instead of 50?
[Please ignore, for this discussion, that an n1-standard-8 has more RAM and may help groupBy.] 1 vote
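For reference, the arithmetic behind the second question, under the assumption (and it is an assumption; the actual scheduling is runner-dependent) that the batch runner schedules one work thread per vCPU:

```python
def total_parallel_threads(num_workers, vcpus_per_worker, threads_per_vcpu=1):
    """Upper bound on concurrently executing bundles, assuming the runner
    schedules threads_per_vcpu work threads per core (runner-dependent).
    """
    return num_workers * vcpus_per_worker * threads_per_vcpu

# 50 workers of n1-standard-8 vs. 50 workers of n1-standard-1:
total_parallel_threads(50, 8)  # 400
total_parallel_threads(50, 1)  # 50
```

So with 50 max workers, the machine type determines whether the cap is 50 or 400 concurrent threads, provided the pipeline has enough parallel work to fill them.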
Currently, when BigQueryIO.Write tries to insert something invalid in streaming mode, the log contains only the call stack, but not the reason for the error (like a wrong format or a wrong field name):
exception: "java.lang.IllegalArgumentException: timeout value is negative
at java.lang.Thread.sleep(Native Method)
job: "2016-10-06_15_14_44-8205894049986167886" 3 votes
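For context, BigQuery's tabledata.insertAll response does carry the reason for each rejected row in its insertErrors field; a minimal sketch (in Python, with a made-up response dict) of flattening that structure into a loggable message:

```python
def summarize_insert_errors(insert_all_response):
    """Flatten the insertErrors structure of a BigQuery tabledata.insertAll
    response into human-readable strings: row index, message, and reason.
    """
    summaries = []
    for row_error in insert_all_response.get('insertErrors', []):
        for err in row_error.get('errors', []):
            summaries.append('row %d: %s (%s)' % (
                row_error['index'],
                err.get('message', ''),
                err.get('reason', '')))
    return summaries

# A made-up example of what the API returns for a bad field name:
example_response = {'insertErrors': [
    {'index': 0,
     'errors': [{'reason': 'invalid',
                 'message': 'no such field: user_idd'}]}]}
```

Surfacing this per-row detail in the worker log, instead of only the call stack, is exactly what the request above asks for.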
Dataflow job logs are separate from Cloud Logging, so you cannot see job logs under Cloud Logging, nor create a Stackdriver alert for failed Dataflow jobs. 21 votes
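If worker logs for your jobs do reach Cloud Logging (they are exported under the dataflow_step resource type in some setups), one partial workaround is a log-based metric feeding a Stackdriver alert. A sketch of such a filter (the exact text to match is an assumption and should be verified against real failure log entries):

```
resource.type="dataflow_step"
severity>=ERROR
textPayload:"Workflow failed"
```

This does not cover job-level status, which is the gap the request describes.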
I’m not sure I understood the suggestion — perhaps the post is incomplete?
If you could elaborate further, I’ll be happy to take a look. Thanks!
How can we schedule Dataflow pipeline code as a job in the cloud, in Java? 26 votes
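One common pattern is to stage the pipeline as a Dataflow template and have a scheduler (an App Engine cron job, for example) fire the launch call. A sketch of assembling the projects.templates.launch REST request (the helper function is hypothetical; the URL shape and field names follow that API):

```python
def build_template_launch_request(project_id, job_name,
                                  gcs_template_path, parameters):
    """Build the URL, query parameters, and JSON body for a Dataflow
    projects.templates.launch REST call. The template must already be
    staged at gcs_template_path.
    """
    url = ('https://dataflow.googleapis.com/v1b3/projects/%s/templates:launch'
           % project_id)
    query = {'gcsPath': gcs_template_path}
    body = {'jobName': job_name, 'parameters': parameters}
    return url, query, body
```

The scheduler then POSTs the body to the URL with the query parameters and standard OAuth credentials; the pipeline code itself stays in Java and is only staged once as a template.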
I'm currently processing log data from multiple days with Cloud Dataflow. According to the configured options, it uses 10 to 100 workers and the throughput-based autoscaling algorithm. At the moment there are still 64 workers active, while only one job is still running at around 1500 elements per second. If you look at the CPU graph of the workers, you can see that almost all of them have been idle for the last 30 minutes. I would prefer more carefree autoscaling, where I know I always get optimal cost effectiveness. 7 votes
We’ve done a few performance optimizations lately that should result in a much improved experience. Could you share a jobID for us to take a look at? (I’m curious to examine the experience you describe).
It would be nice to have a direct link to the logs of the job from the overview page.
At the moment, you have to:
- Click the job
- Wait for it to load
- Click "Worker Logs"
- Wait for it to load
It should be just one click to get the logs :-) 4 votes
Thanks for the feedback!
We are working on improving this experience along these lines. We'll be happy to discuss more details next time we sync.
On the main info screen for a particular job, a tab for execution parameters would be very useful for debugging and quantifying job performance.
Pretty much the whole suite of:
that Dataflow supports as execution parameters would be great to have to the right of "Step", on a tab called "Job". 3 votes
Thanks for the suggestion!
I would like to be able to quickly see the number of jobs that are currently running. Sometimes streaming jobs that have been running for weeks get buried below batch or testing jobs. 7 votes
Thanks Andrea, we’re looking into it…
The current darkjh/scalaflow library is pretty basic, and the DoFn etc. is pretty messy. It would be nice if Scala were natively supported. 8 votes
Thanks for the suggestion, Ankur!