Hello, and thank you for this great work. My question is: in downstream tasks, does the model take both the patches and their corresponding regions as input? Also, the input size for patch classification is 224x224, which differs from the 256x256 used during pretraining. How do you handle this?

In downstream tasks, only the patches are required (since a placeholder was added during pretraining), so you can use our models in the same way as standard ViTs. Although we crop 256x256 images from WSIs, the input during pretraining is still 224x224, obtained via random crop and resize.
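As a rough sketch of the answer above (not code from this repo; the ViT architecture name and checkpoint path below are placeholders chosen for illustration), the snippet shows how a 256x256 WSI patch can be randomly cropped and resized to 224x224 and then fed to the patch encoder exactly like a standard ViT, with no region input required:

```python
import torch
import timm
from torchvision import transforms
from PIL import Image

# Hypothetical: build a plain ViT backbone and load the released patch-encoder
# weights; replace the model name and checkpoint path with the ones from the repo.
model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=0)
# state_dict = torch.load("pretrained_patch_encoder.pth", map_location="cpu")
# model.load_state_dict(state_dict, strict=False)
model.eval()

# 256x256 WSI patch -> random crop and resize to 224x224, matching pretraining.
transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

patch = Image.new("RGB", (256, 256))   # stand-in for a real 256x256 WSI patch
x = transform(patch).unsqueeze(0)      # shape: (1, 3, 224, 224)

# Only the patch is passed in; downstream use is identical to a normal ViT.
with torch.no_grad():
    feat = model(x)                    # patch-level feature for classification heads, etc.
print(feat.shape)
```

For patch classification you would attach a linear head on top of `feat` (or set `num_classes` accordingly), again just as with any standard ViT backbone.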