Optimization of Performance for Large Text Segmentation in the Knowledge Base #10881
Status: Open
Labels: 💪 enhancement (New feature or request)
Self Checks
1. Is this request related to a challenge you're experiencing? Tell me about your story.
In the knowledge base, the "Text Segmentation and Cleaning" step reads the uploaded text and splits it into chunks. When "Save and Process" is clicked, the same split operation runs again. For large texts, this repeated splitting takes a long time and performance is poor. Since splitting large texts is already a significant performance challenge, could CUDA be used to accelerate the processing?
2. Additional context or comments
(Original Chinese, translated:) When using the knowledge base, the "Text Segmentation and Cleaning" step reads the uploaded text and splits it. When "Save and Process" is clicked, the same split operation is repeated; if the text is very large, this repetition takes a very long time and performance is poor. When splitting large texts, the performance cost is considerable — could CUDA be used for accelerated processing?
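One way to address the duplicated work described above (independent of any GPU acceleration) would be to cache the preview's split result, keyed by a hash of the text and the split settings, so that "Save and Process" reuses the chunks instead of splitting again. The sketch below is purely illustrative — `split_text`, `split_with_cache`, and the cache layout are hypothetical names, not Dify's actual API:

```python
# Sketch: avoid re-splitting a large document on "Save and Process" by
# caching the preview's chunks. All names here are illustrative.
import hashlib

_chunk_cache: dict[str, list[str]] = {}

def _cache_key(text: str, chunk_size: int, overlap: int) -> str:
    # Key on content hash + split settings, so a changed document or
    # changed settings triggers a fresh split.
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return f"{digest}:{chunk_size}:{overlap}"

def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Naive fixed-size splitter with overlap (stand-in for the real splitter)."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

def split_with_cache(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    key = _cache_key(text, chunk_size, overlap)
    if key not in _chunk_cache:
        _chunk_cache[key] = split_text(text, chunk_size, overlap)
    return _chunk_cache[key]

# The preview step computes the chunks once...
doc = "lorem ipsum " * 10000
preview_chunks = split_with_cache(doc)
# ...and "Save and Process" reuses them instead of splitting a second time.
final_chunks = split_with_cache(doc)
assert preview_chunks is final_chunks  # same cached list, no repeated split
```

In a real deployment the cache would likely live in Redis or another shared store rather than process memory, but the idea is the same: the expensive split runs once per (document, settings) pair.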
3. Can you help us with this feature?