Best practices: Use double precision in BoTorch! #1444
-
Can confirm, a lot of problems and hard-to-debug issues go away when using BoTorch in double precision! To add to that, if you already have a lot of code that does not explicitly set dtype, the easiest way to migrate to double precision is to do the following:
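A minimal sketch of that global migration, assuming `torch.set_default_dtype` is the intended mechanism (it changes the dtype of all newly constructed floating-point tensors):

```python
import torch

# Switch the global default so that floating-point tensors constructed
# without an explicit dtype come out as float64 rather than float32.
torch.set_default_dtype(torch.double)

# Existing code that omits dtype now produces double-precision tensors.
X = torch.rand(10, 2)
print(X.dtype)  # torch.float64
```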
-
Hi @saitcakmak, a question: when I was using float32 I was directed here. I have since switched to double precision, but I still get warnings about numerical instability (negative variances).
-
@saitcakmak Hello! I'm a beginner in BoTorch. The first example in the getting started document https://botorch.org/docs/getting_started directed me to this GitHub page, so I was wondering if you could tell me how the code should be modified for this double precision change. Thanks a lot!
-
The use of single precision arithmetic, i.e., the use of default `torch.float32` tensors, commonly leads to numerical issues when working with Gaussian Processes. For example, when the training inputs (`train_X`) are clustered closely in the search space, the resulting covariance matrix ends up being (numerically) singular, leading to errors while computing the Cholesky factorization. To help address these numerical issues, we strongly recommend using double precision in BoTorch. All you need to do to use double precision is to add `dtype=torch.double` in the tensor constructors.