With the worldwide growth of companies, the need for business English translation in cross-cultural communication has grown steadily. In this paper, we examine the impact of cross-cultural factors on business English translation by using the cross-lingual pre-trained language model XLM-R to improve neural machine translation accuracy from the source-language side, from the target-language side, and from both sides simultaneously. To integrate the pre-trained multilingual embeddings of XLM-R into the encoder, the decoder, and both ends of the Transformer model, we first construct three different network variants. We then conduct a series of experiments to evaluate the models' performance on business English translation. In the model comparison experiments, our proposed model achieves a BLEU score of 13.98 on the test set, 0.43 points higher than the baseline model. In this experiment, selecting layers 3 and 6 of the pre-trained model yields the best BLEU scores.
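The integration described above can be illustrated with a minimal sketch: pretrained multilingual embeddings are copied into the source (encoder) embedding table, the target (decoder) embedding table, or both ends of a Transformer. The sketch below uses a small random matrix as a stand-in for XLM-R's actual embedding table, and all dimensions, class names, and the freezing policy are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Illustrative sizes; XLM-R's real table is far larger (~250k x 768).
vocab_size, d_model = 1000, 64

# Stand-in for XLM-R's pretrained multilingual embedding matrix.
pretrained = torch.randn(vocab_size, d_model)

class NMTModel(nn.Module):
    """Transformer NMT model whose embeddings can be initialized
    from a pretrained matrix on the source side, target side, or both."""
    def __init__(self, init_src=False, init_tgt=False):
        super().__init__()
        self.src_emb = nn.Embedding(vocab_size, d_model)
        self.tgt_emb = nn.Embedding(vocab_size, d_model)
        # Copy pretrained weights into the chosen side(s).
        if init_src:
            self.src_emb.weight.data.copy_(pretrained)
        if init_tgt:
            self.tgt_emb.weight.data.copy_(pretrained)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=3, num_decoder_layers=3,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        memory_in = self.src_emb(src_ids)
        tgt_in = self.tgt_emb(tgt_ids)
        return self.out(self.transformer(memory_in, tgt_in))

# "Both ends" variant: pretrained embeddings on encoder and decoder.
model = NMTModel(init_src=True, init_tgt=True)
src = torch.randint(0, vocab_size, (2, 7))   # batch of 2 source sentences
tgt = torch.randint(0, vocab_size, (2, 5))   # batch of 2 target prefixes
logits = model(src, tgt)
print(logits.shape)  # torch.Size([2, 5, 1000])
```

Passing `init_src=True` alone or `init_tgt=True` alone gives the source-side-only and target-side-only variants, which mirrors the three network types compared in the paper.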