MLLM-UQ is a new framework for uncertainty quantification in numerical prediction built on a multimodal large language model (MLLM), addressing the fact that traditional methods do not fully account for prediction uncertainty. MLLM-UQ introduces three innovations: a Hierarchical Multimodal Alignment and Adaptive Gated Fusion (HMA-AGF) mechanism that dynamically integrates tabular, image, and text data; a multi-granularity uncertainty decomposition framework based on hierarchical Bayesian inference that quantifies epistemic, aleatoric, and distributional uncertainty; and an Uncertainty-Aware Adaptive Loss (UAAL) function that adapts learning to sample-level uncertainty. In the benchmark experiments, MLLM-UQ reduces RMSE by 23.3% and MAE by 26.1%, achieves a Prediction Interval Coverage Probability (PICP) of 96.0%, and decreases Mean Prediction Interval Width (MPIW) by 23.7%, improving both predictive accuracy and reliability.
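The abstract does not give the exact form of UAAL. A common way to realize an uncertainty-aware loss of this kind is the heteroscedastic Gaussian negative log-likelihood, in which the network predicts a per-sample log-variance alongside the point estimate: residuals of samples the model flags as high-uncertainty are down-weighted, while a log-variance term penalizes claiming blanket uncertainty. The sketch below illustrates that idea only; the function name and its specific form are assumptions, not the paper's definition.

```python
import numpy as np

def uncertainty_aware_loss(y_true, y_pred, log_var):
    """Illustrative heteroscedastic Gaussian NLL (not the paper's UAAL).

    Each sample's squared residual is scaled by exp(-log_var), so
    high-uncertainty samples contribute less to the gradient, while
    the +0.5*log_var term discourages inflating uncertainty everywhere.
    """
    precision = np.exp(-log_var)  # inverse predicted variance per sample
    per_sample = 0.5 * precision * (y_true - y_pred) ** 2 + 0.5 * log_var
    return float(np.mean(per_sample))

# A fixed residual of 2.0 under three uncertainty settings:
y, p = np.array([1.0]), np.array([3.0])
neutral = uncertainty_aware_loss(y, p, np.array([0.0]))   # unit variance
humble = uncertainty_aware_loss(y, p, np.array([2.0]))    # admits uncertainty
overconfident = uncertainty_aware_loss(y, p, np.array([-2.0]))  # claims certainty
```

With this form, a large error made while admitting uncertainty (`humble`) costs less than the same error at unit variance (`neutral`), and a confident wrong prediction (`overconfident`) is penalized hardest, which is the adaptive behavior the abstract attributes to UAAL.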