API
Privacy Policy
Terms of Service
Contact us
Follow us on BlueSky
2026 Soundc LLC | Made by nadermx
Upgrade your account now and get all of the following: