Yoshimasa Niwa 5ca4bbf319 Workaround pad problem on mps
When using `torch.nn.functional.pad` with a tensor whose size is
larger than 2^16 (65536), the output tensor is corrupted.

This patch moves the tensor to the CPU to work around the problem.
It does not have much impact on the speed of the VAE on MPS.
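A minimal sketch of this kind of workaround, not the actual patch; the helper name and the use of `numel()` as the size check are assumptions:

```python
import torch
import torch.nn.functional as F

def pad_on_cpu_if_large(x: torch.Tensor, pad, value: float = 0.0) -> torch.Tensor:
    """Constant-pad `x`, detouring through the CPU on MPS for large tensors."""
    # F.pad on the MPS backend can return corrupted output once the tensor
    # exceeds 2**16 (65536) elements, so pad on the CPU and move the result back.
    if x.device.type == "mps" and x.numel() > 2**16:
        return F.pad(x.cpu(), pad, mode="constant", value=value).to(x.device)
    return F.pad(x, pad, mode="constant", value=value)
```

Since the VAE pads only occasionally relative to its convolution work, the extra device round trip is a small cost in practice.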
2024-11-05 12:46:13 +09:00