If you still have unanswered questions, submit your question here and we’ll update you when the answer gets added to the page!
2. Flat-rate: Existing commitments will remain active until the commitment’s end date, but:
- a. Monthly & yearly commits: This product will be discontinued, starting July 5, 2023
- b. Flex slots: This product will be discontinued and migrated to BigQuery autoscaling, starting July 5, 2023
It depends on your type of workload. Customers that have a lot of idle slots under flat-rate might see a cost reduction by switching to an Edition with autoscaling. This can be combined with the new Compressed Storage pricing, which reduces costs on the other side of the equation.
Overall, the price impact should be examined from a TCO (total cost of ownership) perspective.
For example, let’s assume you currently have some projects using on-demand pricing, some using flat-rate:
• On-demand costs (for workloads/projects that use it) will go up
• Your spend on commitments may go down because you can commit to a lower baseline of slots and use autoscaling for the rest
• Autoscaling may increase costs vs. Flex slots
• Storage costs will go down if you can switch to Compressed Storage
Overall, this may still mean a customer’s total BigQuery spend is lower than before, or that the increase in spend is smaller than a single-variable comparison would suggest.
A BigQuery commitment gives you a better slot-hour price. For example, a 3-year BigQuery commit unlocks a 40% discount on analysis (compute) costs. Note that storage pricing is not discounted.
You can use our BigQuery Current Usage Analysis Tool to understand the impact of each of the pricing models on your costs. Just create a duplicate Sheet and follow the instructions.
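If you want a rough first pass before opening the tool, the core comparison is pricing your recent jobs two ways: bytes billed at the on-demand rate versus slot time at an Editions rate. Here’s a minimal sketch against INFORMATION_SCHEMA.JOBS_BY_PROJECT, assuming your jobs run in region-us, the post-July-5 US on-demand rate of $6.25/TiB, and the $0.06/slot-hour Enterprise pay-as-you-go rate. It ignores the autoscaler’s rounding behavior (covered below), so treat the Editions figure as a lower bound.

```sql
-- Rough 30-day cost comparison (assumptions: region-us, $6.25/TiB
-- on-demand, $0.06/slot-hour Enterprise pay-as-you-go). Ignores the
-- autoscaler's 100-slot increments and one-minute minimum, so the
-- Editions estimate is a lower bound.
SELECT
  ROUND(SUM(total_bytes_billed) / POW(1024, 4) * 6.25, 2) AS on_demand_usd,
  ROUND(SUM(total_slot_ms) / 1000 / 3600 * 0.06, 2)       AS editions_usd_lower_bound
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_type = 'QUERY'
  AND statement_type != 'SCRIPT'  -- avoid double-counting script parent jobs
```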
We highly recommend doing a POC before switching workloads to Editions.
Long-running workloads with few spikes are best suited for BigQuery Editions. Even better if they’re reading tons of data.
Spiky and/or short-running workloads are a poor fit for Editions and better suited to on-demand pricing.
This is because when you use the BigQuery Autoscaler in Editions, you are billed for the slots allocated to the autoscaler — not the slots you actually use. Additionally, the Autoscaler works in increments of 100 slots, so even if your query requires 50 slots, 100 will be allocated and that’s what you’re billed for.
On top of this, the Autoscaler has a one-minute scale-down period. Meaning, if you have a query that takes 10 seconds to execute, you will still pay for a full minute’s worth of slots.
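To make that rounding concrete, here’s a toy calculation (plain SQL arithmetic, no tables needed) applying the two rules above: slots round up to the next multiple of 100, and billed duration rounds up to at least 60 seconds. The slot counts and durations are made up for illustration.

```sql
-- Toy illustration of autoscaler billing: slots round up to the next
-- multiple of 100, duration rounds up to at least 60 seconds.
-- All numbers are made up for illustration.
WITH jobs AS (
  SELECT 'small query' AS label, 10 AS slots_needed, 8 AS duration_s
  UNION ALL SELECT 'medium query', 150, 45
  UNION ALL SELECT 'long query', 300, 600
)
SELECT
  label,
  CAST(CEIL(slots_needed / 100) * 100 AS INT64) AS slots_billed,
  GREATEST(duration_s, 60)                      AS seconds_billed,
  -- slot-seconds -> slot-hours at the $0.06 Enterprise pay-as-you-go rate
  ROUND(CEIL(slots_needed / 100) * 100
        * GREATEST(duration_s, 60) / 3600 * 0.06, 4) AS est_cost_usd
FROM jobs
```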
It depends on how compressible your data is.
Lots of string data? Good.
Lots of integers? Bad.
We wrote a query that’ll help you compare your options, located in this Github gist.
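The gist has the full version, but the heart of the comparison is reading logical versus physical bytes from INFORMATION_SCHEMA.TABLE_STORAGE and pricing each side. A minimal sketch, assuming region-us and the US list prices ($0.02/$0.01 per GiB logical active/long-term, $0.04/$0.02 per GiB physical):

```sql
-- Per-dataset comparison of logical (uncompressed) vs physical
-- (compressed) storage billing. Assumes region-us and US list prices.
-- Note: the physical byte counts include time travel and fail-safe
-- storage, which you also pay for under physical billing.
SELECT
  table_schema AS dataset,
  ROUND(SUM(active_logical_bytes)     / POW(1024, 3) * 0.02
      + SUM(long_term_logical_bytes)  / POW(1024, 3) * 0.01, 2) AS logical_usd,
  ROUND(SUM(active_physical_bytes)    / POW(1024, 3) * 0.04
      + SUM(long_term_physical_bytes) / POW(1024, 3) * 0.02, 2) AS physical_usd
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE
GROUP BY dataset
ORDER BY logical_usd - physical_usd DESC
```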
There’s no “one-size-fits-all” approach with BigQuery Editions.
Since you can utilize different BigQuery pricing tiers across projects, consider having dedicated projects for different types of workloads.
For example:
- ETL and ELT workloads should be in a separate project from your R&D workloads
- Spiky, short-running workloads should go in a project that utilizes on-demand pricing
You may be considering BigQuery’s new Compressed Storage and wondering whether querying data from compressed datasets will affect your query performance. The answer is no.
You’re still using the same APIs and query mechanism under the hood.
Let’s say you committed to 100 slots with a flat-rate commit. But now your consistent usage has increased and suddenly you need 100 additional slots.
Since July 5th, all flat-rate commits are converted to Enterprise Editions at the price you committed to. You’ll pay the flat-rate price for those first 100 slots, but any new slots you add to your reservation will be charged at the new Enterprise Edition price.
Google Cloud has always stored your data as compressed. But when using Logical Storage you are billed based on uncompressed data. With Compressed Storage you are billed on the actual physical size of data on the disk.
Compressed Storage costs $0.04/GB (active) and $0.02/GB (long-term), double the per-GB price of Logical Storage. But since you’re billed on the compressed size, you’re billed on a significantly smaller amount of data (depending on your compression ratio).
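The billing model is a per-dataset setting. For reference, switching a dataset looks like this (my_dataset is a hypothetical name; remember that time travel and fail-safe bytes count toward what you pay under the physical model):

```sql
-- Switch one dataset (hypothetical name) from logical to physical
-- (compressed) storage billing.
ALTER SCHEMA my_dataset
SET OPTIONS (storage_billing_model = 'PHYSICAL')
```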
Want to know whether you’ll save money by switching to Compressed Storage? Run the query in this Github gist.
On July 5th, all existing Flex slot reservations will be automatically migrated to autoscaling.
Storage was always a bit “too expensive” on BigQuery, and Google addressed this by making Compressed Storage pricing available. It’s an “exclusive” feature in the sense that you need to be on Editions or have purely on-demand workloads (no flat-rate reservations). Beyond that, most workloads will benefit from better storage pricing; however, since the compression ratio depends on how your data is structured, so do the savings. Read our Compressed Storage Overview for a comprehensive look at when to enable it.
No, you cannot. Google has stated that the UI and other tooling actively checks if flat-rate reservations exist and will not allow enabling of Compressed Storage.
You will be billed for the uncompressed data volume, not the compressed one. This means there is no cost advantage to querying compressed data from an on-demand project.
If you are on Editions (as opposed to on-demand), then yes, you will be billed for the first full minute of the query.
No, the long-term storage mechanism will stay the same and still apply to both compressed and uncompressed storage.
Compute is now more expensive. And since dbt (when running on BigQuery) is compute-heavy, dbt workloads will therefore become more expensive as well.
Open Monitoring in the Google Cloud Console and click on Metrics Explorer in the navigation pane. From there, search for slot metrics.
If you’re a DoiT customer you can open a ticket with a BigQuery expert to review this together.
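If you’d rather pull this with SQL, here’s a sketch of the same analysis using INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT (assuming region-us), which reports slot consumption per one-second period:

```sql
-- Approximate average slot usage per hour over the last 7 days
-- (region-us assumed). period_slot_ms is the slot-milliseconds
-- consumed in each one-second period.
SELECT
  TIMESTAMP_TRUNC(period_start, HOUR) AS hour,
  SUM(period_slot_ms) / 1000 / 3600   AS avg_slots
FROM `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
WHERE job_creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY hour
ORDER BY hour
```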
If you want to achieve similar results to Flex slots with the BigQuery autoscaler, set a maximum slot value above your baseline slot amount. You keep your committed baseline price for the duration of your commit, and the autoscaler scales up to that maximum as needed.
For example, if you have a 500-slot annual flat-rate commit, and you notice you need 500 more slots for spiky workloads, you’d set a maximum limit of 1,000 slots (and pay the $0.06/slot-hour Enterprise Edition rate for those slots).
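In reservation DDL, that setup looks roughly like the sketch below. The project and reservation names are hypothetical, and note that autoscale_max_slots is capacity added on top of the baseline, so a 500-slot baseline plus 500 autoscale slots gives the 1,000-slot ceiling from the example. (A baseline of 0 is also valid if you want everything to autoscale.)

```sql
-- Hypothetical reservation: 500 baseline slots (covered by the commit)
-- plus up to 500 autoscaled slots on top, i.e. a 1,000-slot ceiling.
CREATE RESERVATION `admin-project.region-us.spiky-workloads`
OPTIONS (
  edition = 'ENTERPRISE',
  slot_capacity = 500,       -- baseline slots
  autoscale_max_slots = 500  -- autoscaled slots, billed at $0.06/slot-hour
)
```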
Autoscaling charges are based on the number of slots allocated, not the slots you actually use. Autoscaling happens in multiples of 100 slots, so if you have a query that requires only 10 slots, you’ll still pay for 100 autoscaled slots whether you use them all or not.
Smaller, short-running queries are a better fit for the on-demand pricing model, since there you’re charged for the bytes you scan rather than the slots you allocate.
Time travel allows you to go back in time and restore a table you deleted by accident. It can’t be disabled. By default you can go back up to 7 days, but you can shorten the window (down to a minimum of 2 days) if needed.
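The window is configured per dataset, in hours. A minimal sketch with a hypothetical dataset name:

```sql
-- Shorten the time travel window for one dataset (hypothetical name)
-- from the default 168 hours (7 days) to the minimum of 48 hours (2 days).
ALTER SCHEMA my_dataset
SET OPTIONS (max_time_travel_hours = 48)
```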
Fail-safe storage likely works under the hood in a similar manner to time travel, retaining data for an additional 7 days after the time travel window; unlike the time travel window, it can’t be configured or disabled. To restore a table from fail-safe storage, you file a ticket with Google Cloud support and they restore the data for you.
Yes. Set your baseline to the minimum number of slots you use all the time, then set the maximum to your theoretical peak usage (or the maximum amount you’re comfortable paying for).
If you have long periods where no queries are executed, you’ll want to set a baseline of 0 to make sure you’re not paying for slots you don’t need (remember, you’re billed for slots allocated to the autoscaler, not slots you use).
Note: BigQuery’s Autoscaler has a scale-down period of one minute. If you have a query that takes 7-8 seconds, you will be billed for a minute due to this downscale time.
Not all workloads are treated equally with BigQuery Editions. Some jobs will be cheaper (or free in some cases) if you run them on-demand vs. on an Edition.
For instance, BigLake metadata refreshes. Say you have a table in Avro/Parquet; BigQuery refreshes the table schema and loads it back in. That consumes slot-hours but processes zero bytes (meaning it’d be free on-demand).
So in this case, you’d put all BigLake jobs in a separate, on-demand project from other workloads.
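A quick way to spot candidates like this in your own history is to look for jobs that consumed slot time but billed zero bytes. A sketch, assuming region-us:

```sql
-- Jobs from the last 30 days that used slots but billed zero bytes:
-- these would effectively be free under on-demand pricing.
SELECT
  job_id,
  job_type,
  total_slot_ms,
  total_bytes_billed
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND total_slot_ms > 0
  AND IFNULL(total_bytes_billed, 0) = 0
ORDER BY total_slot_ms DESC
LIMIT 100
```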
Placing different workload types in different projects — and using different BigQuery pricing tiers in those projects — is a great way to reduce your BigQuery costs.
For example, if you’re using Looker and see consistent slot usage throughout the day, Editions are a great choice.
But for more sporadic workloads, like data processing jobs, on-demand pricing may be more appropriate.
Still have questions?
If you’re a DoiT customer you can work with a BigQuery expert on your specific use case by opening a ticket in the DoiT Console.
Not a customer? Get in touch with us about working with DoiT on BigQuery and beyond!