
Estimating Cloud Credits?

I am in the process of submitting a request for cloud credits on BioData Catalyst using the following form:

https://biodatacatalyst.nhlbi.nih.gov/resources/cloud-credits


Does anyone have any recommendations to share as I develop my justification?

Thanks!


Here are the steps I took:

1. Run one single-variant WGS analysis and one gene-based WGS analysis on one phenotype on about 9,000 subjects as a test.

2. Find out the cost after it's done on https://console.cloud.google.com/billing (~$15 for the single-variant analysis and ~$80 for the gene-based analysis, about $100 total for one phenotype).

3. Multiply $100 by the number of phenotypes to be tested.

4. Add extra budget for running interactive sessions in Jupyter Notebook and for creating and debugging workflows.
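The steps above boil down to simple arithmetic, which can be sketched as a short script. The per-analysis costs are the example figures from the test run; the phenotype count and interactive-session buffer are hypothetical placeholders you would swap for your own numbers:

```python
# Rough cloud-credit estimate following the steps above.
# Per-phenotype costs are the example figures from the test run;
# n_phenotypes and interactive_buffer are hypothetical placeholders.
single_variant_cost = 15.0   # USD, observed for one single-variant WGS run
gene_based_cost = 80.0       # USD, observed for one gene-based WGS run
cost_per_phenotype = single_variant_cost + gene_based_cost  # ~$95, call it $100

n_phenotypes = 10            # hypothetical number of phenotypes to test
interactive_buffer = 200.0   # hypothetical budget for notebooks and debugging

total = cost_per_phenotype * n_phenotypes + interactive_buffer
print(f"Estimated request: ${total:,.2f}")
```

The point is less the exact numbers than making each line item explicit, so reviewers can see where the total comes from.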



Hi Christopher,


If you are working on an image analysis project using deep learning, these are the steps I would take to estimate costs:

  1. Identify the largest batch size you can use for your training/validation on a V100 GPU.
  2. Choose the right instance type based on the CPU memory you require; keep in mind that instances with more than one GPU cost more.
  3. Estimate the time per epoch by running your training/validation for a few epochs.
  4. Multiply the time per epoch by the number of epochs you plan to train for.
  5. Understand the scalability of your code, and account for the trial-and-error needed at the start to improve your code, refine your neural network, and tune its parameters.

Finally, I would add some extra budget for testing and for Jupyter Notebook sessions.
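A minimal sketch of that GPU-cost arithmetic, assuming hypothetical timings and an hourly rate (check current pricing for the instance you actually choose):

```python
# Back-of-the-envelope GPU training cost, following the steps above.
# All numbers here are hypothetical placeholders.
seconds_per_epoch = 300        # measured by timing a few trial epochs
n_epochs = 100                 # planned training length
gpu_hourly_rate = 2.48         # hypothetical USD/hour for a 1x V100 instance

training_hours = seconds_per_epoch * n_epochs / 3600
training_cost = training_hours * gpu_hourly_rate

# Hypothetical multiplier covering the trial-and-error phase:
# debugging runs, network refinements, and parameter sweeps.
trial_and_error_factor = 3
budget = training_cost * trial_and_error_factor
print(f"One training run: ${training_cost:.2f}; with tuning buffer: ${budget:.2f}")
```

The multiplier is the easiest number to underestimate; a single "final" training run is usually a small fraction of what the exploration phase costs.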


Here are a few steps I took.

  1. Run several smaller batches of increasing size (e.g., 1, 10, 100 samples), then use those numbers to extrapolate the cost of your proposed project, keeping in mind that the larger the sample size, the less accurate the initial estimates become.

  2. Always leave a buffer to make sure you don't get stuck in the middle of a job.

  3. Understand the scalability of your code, and if it has particular choke points, pay extra attention to their potential cost.
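Step 1 above can be sketched as a linear extrapolation from the small test batches. The batch costs here are hypothetical; you would replace them with the numbers from your billing console, and the 25% buffer (step 2) is likewise a placeholder:

```python
# Extrapolate project cost from small test batches (steps 1-2 above).
# Batch costs and project size are hypothetical placeholders.
batch_sizes = [1, 10, 100]
batch_costs = [0.40, 3.50, 33.00]    # hypothetical observed USD per batch

# Fit cost ~= overhead + per_sample * n with an ordinary least-squares line.
n = len(batch_sizes)
mean_x = sum(batch_sizes) / n
mean_y = sum(batch_costs) / n
per_sample = sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(batch_sizes, batch_costs)) \
             / sum((x - mean_x) ** 2 for x in batch_sizes)
overhead = mean_y - per_sample * mean_x

project_samples = 5000               # hypothetical full project size
estimate = overhead + per_sample * project_samples
buffered = estimate * 1.25           # 25% buffer so a job doesn't stall mid-run
print(f"Extrapolated: ${estimate:.2f}; with buffer: ${buffered:.2f}")
```

If a choke point scales worse than linearly (step 3), fit that stage separately rather than folding it into one line.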



I'd add:

1. Run multiple test samples, especially if using pre-emptible instances, because the cost might vary depending on how often they get pre-empted. 

2. Don't forget to allocate funds for interactive analysis and also storage.
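A sketch of both points: average several pre-emptible test runs, since the cost varies with how often instances get pre-empted, then add interactive and storage line items. All figures here are hypothetical, including the per-GB storage rate:

```python
# Budget covering pre-emption variance, interactive analysis, and storage.
# All figures are hypothetical placeholders.
import statistics

preemptible_test_costs = [4.10, 5.80, 4.60, 7.20]  # hypothetical repeated test runs
mean_cost = statistics.mean(preemptible_test_costs)
spread = statistics.stdev(preemptible_test_costs)

n_jobs = 200                              # hypothetical number of production jobs
# Pad each job by one standard deviation to absorb pre-emption restarts.
compute = n_jobs * (mean_cost + spread)

interactive = 150.0                       # hypothetical notebook/interactive budget
storage = 500 * 0.026 * 6                 # hypothetical: 500 GB x $0.026/GB-month x 6 months
total = compute + interactive + storage
print(f"Compute: ${compute:.2f}  Storage: ${storage:.2f}  Total: ${total:.2f}")
```

Running the test sample several times is what makes the spread visible; a single pre-emptible run can land anywhere in that range.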


On Terra, I have also recently been using a notebook derived from the "Workflow Cost Estimator" to estimate the cost of my runs, including per-task cost estimates. I believe it also has a function to explore how different configurations affect the cost.

