To address the difficulty of modeling high-dimensional state spaces and optimizing continuous control policies in robot navigation, this paper proposes a reinforcement learning framework that integrates topological dimensionality reduction, Radon feature extraction, and the deep deterministic policy gradient (DDPG) algorithm. First, a topological dimensionality reduction method constructs a rotational mapping graph (RMG) and a feature network (CN), reducing path planning complexity by 90%. Second, a Radon transform variant extracts 24-dimensional normalized environment feature vectors, compressing the dimensionality of the sensory data. Finally, Ornstein-Uhlenbeck (OU) noise is introduced into the DDPG algorithm to balance exploration and exploitation while learning a continuous speed control policy over the fused state space. Simulation results show that the state-space dimensionality reduction model reduces signal tracking error by 63% and accelerates convergence by 300%. The DDPG navigation policy achieves an average reward of 567 in dynamic obstacle environments, exceeding the benchmark algorithm by 45.7%. Only 6.76 million training samples are required to reach a 100% navigation success rate, less than 6% of the samples needed by the SCF and CPDRL algorithms. Training takes 8.07 h and converges within 61,039 steps, an efficiency improvement of more than 40%. The framework thus provides an efficient solution for real-time autonomous navigation in complex environments.
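The Radon-based feature extraction mentioned above can be illustrated with a minimal sketch: an occupancy grid is projected along a set of angles and each projection is collapsed to a scalar, yielding a fixed-length normalized feature vector. The function name, the binning scheme, and the peak-of-histogram reduction are illustrative assumptions, not the paper's exact variant.

```python
import numpy as np

def radon_features(grid, n_angles=24):
    """Radon-style compression of a 2D occupancy grid into an n_angles-dimensional
    normalized feature vector (illustrative sketch, not the paper's exact variant)."""
    h, w = grid.shape
    ys, xs = np.nonzero(grid)
    # Center coordinates so projections are symmetric about the grid middle.
    xs = xs - (w - 1) / 2.0
    ys = ys - (h - 1) / 2.0
    feats = np.empty(n_angles)
    for k, theta in enumerate(np.linspace(0, np.pi, n_angles, endpoint=False)):
        # Project occupied cells onto the direction at angle theta.
        proj = xs * np.cos(theta) + ys * np.sin(theta)
        # Bin the projections; the histogram peak approximates the strongest ray sum.
        hist, _ = np.histogram(proj, bins=max(h, w))
        feats[k] = hist.max() if hist.size else 0.0
    # Normalize to [0, 1] so the feature vector is invariant to obstacle count scale.
    return feats / feats.max() if feats.max() > 0 else feats

grid = np.zeros((32, 32))
grid[16, :] = 1.0  # a horizontal wall across the grid
f = radon_features(grid)
```

For the horizontal wall, the projection perpendicular to the wall collapses all occupied cells into one bin, so the feature at that angle dominates; this directional sensitivity is what makes such projections a compact environment descriptor.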
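The OU noise used to balance exploration and exploitation in DDPG can be sketched as a discretized Ornstein-Uhlenbeck process added to the actor's action output. The parameter values below (theta, sigma, dt) are common defaults for DDPG, not the settings used in this paper.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration noise
    commonly paired with DDPG for continuous control (illustrative defaults)."""

    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
        self.mu = mu * np.ones(dim)          # long-run mean the process reverts to
        self.theta = theta                   # mean-reversion rate
        self.sigma = sigma                   # diffusion scale
        self.dt = dt                         # discretization step
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Restart from the mean at the beginning of each episode.
        self.x = np.copy(self.mu)

    def sample(self):
        # Euler-Maruyama step: drift toward mu plus scaled Gaussian diffusion.
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.x.shape))
        self.x = self.x + dx
        return self.x

noise = OUNoise(dim=2)
samples = np.array([noise.sample() for _ in range(5000)])
```

Because consecutive samples are correlated, the perturbed actions sweep smoothly through the continuous speed range instead of jittering, which is why OU noise is a standard choice for exploration in velocity-control tasks.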