(1)
Ouyang, M.; Zhang, F. CUDA-Optimized Inference Engine for Large-Scale Language Models: Design, Kernels, and Latency Improvements. Journal of Theory and Practice in Engineering and Technology 2025, 2, 1-9.