KAN is known for its slow training and weak performance on many tasks. We aim to speed up KAN training and improve its performance across a range of tasks using Transformer-like architectures.
Given the surge in interest in KAN, we plan to incorporate it into Transformer-like models and evaluate it on point cloud and vision tasks (e.g., ViT and PointNet++). Stay tuned to see what challenges we run into and how we address them!
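As a rough illustration of what "incorporating KAN into a Transformer-like model" can look like, here is a minimal PyTorch sketch that swaps the MLP sub-block of a standard encoder block for a KAN-style layer. Note the hedges: `FourierKANLayer` and `KANTransformerBlock` are hypothetical names, not classes from this repo, and the layer uses a truncated Fourier basis per edge as a simplified stand-in for the original B-spline KAN formulation.

```python
# A minimal sketch (PyTorch assumed) of replacing the Transformer MLP with a
# KAN-style layer. The KAN layer below uses a learnable Fourier basis per
# input-output edge as a simplified stand-in for the B-spline KAN; all class
# names here are hypothetical and not taken from this repository.
import torch
import torch.nn as nn


class FourierKANLayer(nn.Module):
    """KAN-style layer: each input-output edge gets its own learnable 1-D
    function, parameterized here by a truncated Fourier series."""

    def __init__(self, in_dim, out_dim, num_freqs=4):
        super().__init__()
        self.num_freqs = num_freqs
        # Coefficients for the cos/sin terms on every (out, in, frequency) edge.
        self.coeffs = nn.Parameter(
            torch.randn(2, out_dim, in_dim, num_freqs) / (in_dim * num_freqs) ** 0.5
        )
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x):
        # x: (..., in_dim) -> (..., out_dim)
        k = torch.arange(1, self.num_freqs + 1, device=x.device, dtype=x.dtype)
        angles = x.unsqueeze(-1) * k                    # (..., in_dim, num_freqs)
        cos, sin = torch.cos(angles), torch.sin(angles)
        out = torch.einsum("...if,oif->...o", cos, self.coeffs[0]) \
            + torch.einsum("...if,oif->...o", sin, self.coeffs[1])
        return out + self.bias


class KANTransformerBlock(nn.Module):
    """Pre-norm Transformer encoder block with the usual 2-layer MLP replaced
    by two stacked KAN-style layers."""

    def __init__(self, dim, num_heads, hidden_dim):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.kan = nn.Sequential(
            FourierKANLayer(dim, hidden_dim),
            FourierKANLayer(hidden_dim, dim),
        )

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # self-attention + residual
        x = x + self.kan(self.norm2(x))                     # KAN sub-block + residual
        return x


if __name__ == "__main__":
    block = KANTransformerBlock(dim=64, num_heads=4, hidden_dim=128)
    tokens = torch.randn(2, 16, 64)        # (batch, sequence, dim)
    print(block(tokens).shape)             # torch.Size([2, 16, 64])
```

One reason a basis like this is attractive is that it avoids the grid bookkeeping of B-splines, which is one possible route to faster KAN training; whether such simplifications hold up on vision and point cloud benchmarks is exactly the kind of question we want to explore.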