Purpose

To address the challenge of maintaining a stable contact force when a robot end-effector interacts with an unknown environment, this paper proposes a force control algorithm based on radial basis function (RBF) neural network stiffness prediction and reinforcement learning.

Design/methodology/approach

Building on a traditional force controller, reinforcement learning is used to search for the controller's optimal parameters. To speed up convergence of the reinforcement learning, an RBF neural network is trained to predict the stiffness of the contact environment. The predicted stiffness is then combined with a Gaussian model to bias the probability of parameter selection in the reinforcement learning action-selection step, thereby accelerating convergence of the algorithm.
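The interplay of RBF stiffness prediction and Gaussian-biased parameter selection described above might be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the one-dimensional contact model, the stiffness-to-gain mapping and all numerical values are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- RBF network for stiffness prediction (illustrative 1-D setup) ---
# Input: end-effector displacement x; target: measured contact force F = k*x.
centers = np.linspace(0.0, 1.0, 10)     # RBF centers over the displacement range
width = 0.1                              # shared Gaussian width

def rbf_features(x):
    """Gaussian radial basis activations for a scalar displacement x."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

# Fit output weights by least squares on noisy force samples from an
# environment whose true stiffness k_true is unknown to the controller.
k_true = 800.0                           # N/m (ground truth, hidden)
xs = rng.uniform(0.0, 1.0, 200)
forces = k_true * xs + rng.normal(0.0, 1.0, 200)
Phi = np.array([rbf_features(x) for x in xs])
w, *_ = np.linalg.lstsq(Phi, forces, rcond=None)

# Predicted stiffness: slope of the fitted force-displacement curve.
x0, x1 = 0.2, 0.8
k_pred = (rbf_features(x1) @ w - rbf_features(x0) @ w) / (x1 - x0)

# --- Gaussian-biased parameter selection (simplified RL step) ---
# Candidate proportional gains; selection probability is a Gaussian centred
# on a gain suggested by the predicted stiffness, so candidates consistent
# with k_pred are sampled more often and the search converges faster.
kp_candidates = np.linspace(0.0005, 0.01, 50)
kp_suggested = 1.0 / k_pred              # hypothetical stiffness-to-gain mapping
sigma = 0.002
probs = np.exp(-((kp_candidates - kp_suggested) ** 2) / (2 * sigma ** 2))
probs /= probs.sum()

best_kp, best_err = None, np.inf
f_desired = 10.0                         # N, desired contact force
for _ in range(100):                     # RL-style trials
    kp = rng.choice(kp_candidates, p=probs)
    f = 0.0
    for _ in range(50):                  # simplified closed-loop force update
        f += kp * k_true * (f_desired - f)
    err = abs(f_desired - f)
    if err < best_err:
        best_err, best_kp = err, kp

print(f"predicted stiffness ~ {k_pred:.0f} N/m, "
      f"best kp = {best_kp:.4f}, residual error = {best_err:.3f} N")
```

Biasing the sampling distribution with the stiffness prediction, rather than sampling candidates uniformly, is what shrinks the number of trials the learner needs before low-error gains dominate.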

Findings

The tracking error between the normal contact force and the desired force is consistently maintained within ±0.5 N. Compared with fixed-parameter PD control and a fuzzy iterative algorithm, the proposed method reduces the average absolute force error by 80% and 45%, respectively.

Research limitations/implications

The reinforcement learning for action prediction in this paper focuses only on selecting the proportional gain kp; extending the approach to the derivative gain kd will be considered in future work.

Practical implications

This algorithm can be applied to robotic machining and inspection scenarios.

Originality/value

The proposed algorithm can improve the search speed for robot force control parameters and enhance force control accuracy.
