Hi there, it's unclear whether Yggdrasil supports GPU or TPU acceleration. It seems that fine-tuning on GPU/TPU might be possible once the model is converted to a JAX function, but it's not clear whether that's intentional/expected.
YDF does not yet support GPU or TPU acceleration for training. Our team has experimented in this direction, but we have not yet found a strong (business) incentive to productionize it. Please let us know if you need this support and we'll be happy to discuss options.
When converted to a JAX function, the model can run on GPU, TPU, or CPU for serving and/or fine-tuning; that behavior is intentional. Note that the non-JAX inference on CPU can be quite fast (~1 microsecond per example) with the right model and configuration. If inference speed is the main concern, it's probably worth considering CPU inference first.
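For reference, here is a minimal sketch of the conversion path discussed above. It assumes the `ydf` Python API and the `model.to_jax_function()` / `encoder` / `predict` names from YDF's JAX interoperability tutorial; exact signatures may vary between releases, so check the docs for your version. The feature names are illustrative.

```python
import pandas as pd
import jax
import ydf  # pip install ydf

# Tiny synthetic dataset with hypothetical feature names.
train_df = pd.DataFrame({
    "f1": [0.1, 0.4, 0.2, 0.9] * 25,
    "f2": [1.0, 0.5, 0.2, 0.7] * 25,
    "label": [0, 1, 0, 1] * 25,
})

# Training itself runs on CPU, as discussed above.
model = ydf.GradientBoostedTreesLearner(label="label").train(train_df)

# Convert the trees to a JAX function. The returned object exposes
# `predict` (a pure JAX function) and `encoder` (which turns raw
# feature dicts into the arrays `predict` expects).
jax_model = model.to_jax_function()
examples = jax_model.encoder({"f1": [0.1, 0.9], "f2": [1.0, 0.7]})

# `predict` is jit-able and runs on whatever backend JAX is
# configured to use: CPU, GPU, or TPU.
predictions = jax.jit(jax_model.predict)(examples)
print(predictions)
```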
We're confident in YDF's CPU inference speed and would prefer to avoid GPU/TPU usage in that scenario for cost reasons.
This use case is focused on time-series-style problems where standard cross-validation isn't viable, so we'd have to use something like scikit-learn's TimeSeriesSplit for training and evaluation across multiple splits, as sketched below.
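A minimal sketch of that split-based workflow, combining scikit-learn's `TimeSeriesSplit` with `ydf`; the dataset, feature names, and accuracy metric are illustrative assumptions, not part of the original discussion:

```python
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import accuracy_score
import ydf  # pip install ydf

# Hypothetical time-ordered dataset (rows sorted by time).
df = pd.DataFrame({
    "f1": range(200),
    "label": [i % 2 for i in range(200)],
})

accuracies = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(df):
    # Each split trains on the past and evaluates on the future,
    # avoiding the leakage that standard k-fold CV would introduce.
    train_df, test_df = df.iloc[train_idx], df.iloc[test_idx]
    model = ydf.GradientBoostedTreesLearner(label="label").train(train_df)
    # For binary classification, predict() returns the probability of
    # the positive class; threshold it to get hard labels.
    predictions = (model.predict(test_df) >= 0.5).astype(int)
    accuracies.append(accuracy_score(test_df["label"], predictions))

print(f"Per-split accuracy: {accuracies}")
```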
Thank you for answering our question; it's much appreciated!