3D Vision Robot Welding Trajectory Planning Techniques [Code Included]
2026/5/17 2:25:19



(1) Trajectory planning via non-rigid point-cloud registration and offline fusion with a 3D model:

For complex workpieces whose 3D model is known, an intelligent trajectory planning method is proposed that combines non-rigid registration with offline structural feature extraction. A simplified 3D model of the workpiece is built first, and a structural feature point set is constructed from it, containing an edge-feature point cloud and a weld-trajectory point cloud. Before welding, a 3D structured-light sensor scans the workpiece to acquire the actual point cloud. An improved non-rigid Coherent Point Drift (CPD) algorithm then registers the actual point cloud to the model's edge-feature point cloud; a thin-plate-spline regularization term lets the algorithm handle local nonlinear deformation. After registration, the correspondence between model edge points and the actual point cloud is used to map the theoretical weld trajectory on the model into the space of the actual point cloud, yielding the actual weld-seam point cloud. On this basis, a tube surface of radius 3 mm is constructed with the weld trajectory as its axis, and the intersection curve of this tube with the adjacent side surface is computed to produce the final welding trajectory. In tests on an impeller workpiece with complex curved weld seams, the mean deviation between the theoretical trajectory and the actual scanned point cloud was 0.37 mm, falling to 0.08 mm after non-rigid registration correction.
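The trajectory-mapping step can be sketched as follows. Assuming the registration returns the deformed positions of the model edge points, the displacement field they define can be interpolated with a Gaussian kernel (the same kernel family the registration uses) to carry the theoretical trajectory into the scanned-cloud space. Function and parameter names here are illustrative, not taken from the original program:

```python
import numpy as np

def warp_trajectory(model_pts, registered_pts, traj_pts, beta=2.0):
    """Carry theoretical weld-trajectory points into the scanned-cloud space
    by interpolating the per-point displacements (registered - model) with
    normalized Gaussian weights of kernel width beta (illustrative sketch)."""
    disp = registered_pts - model_pts                  # (M, 3) displacements
    d2 = np.sum((traj_pts[:, None, :] - model_pts[None, :, :])**2, axis=2)
    w = np.exp(-d2 / (2 * beta**2))                    # (T, M) Gaussian weights
    w /= w.sum(axis=1, keepdims=True) + 1e-12          # each row sums to ~1
    return traj_pts + w @ disp                         # interpolated warp
```

Because the weights are normalized, a rigid translation of the edge points translates the trajectory by the same amount, while local deformations bend it smoothly.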

(2) Dynamic programming of optimal seam-imaging poses based on a field-of-view model:

A method is proposed to guide the 3D vision sensor in planning its imaging poses automatically, so that the weld region stays in the best part of the camera's field of view. The welding trajectory is discretized into a sequence of seam points, and the camera pose at each point consists of a spatial position and an orientation. With a pinhole camera model, the field-of-view constraint is expressed as a cone-shaped frustum: each seam point must lie in the central region of the frustum, at an angle of less than 25 degrees to the optical axis. A dynamic programming algorithm solves for the optimal imaging sequence: the state is the imaging pose at the current seam point, and the cost function combines three terms: pose-change smoothness, collision avoidance, and a prediction of image sharpness. At each seam point, candidate poses are sampled and scored with the cost function, and the globally optimal path is obtained by recursion. On a simulated C-shaped fillet-weld workpiece, the algorithm planned 18 viewpoints for a 1.2 m seam; rotation changed by no more than 15 degrees and translation by no more than 80 mm between adjacent viewpoints, and every viewpoint satisfied the view constraint without collision.
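The recursion described above is a standard Viterbi-style dynamic program. A minimal sketch, assuming the per-candidate cost (view-cone feasibility and sharpness folded into one unary term) and the pairwise smoothness cost have already been evaluated, with all names illustrative:

```python
import numpy as np

def dp_view_sequence(costs, trans):
    """Pick one candidate pose per seam point minimizing total cost.
    costs: (n, k) unary cost of each candidate at each seam point;
    trans: (k, k) smoothness cost between consecutive candidates.
    Returns the chosen candidate index per point and the total cost."""
    n, k = costs.shape
    dp = np.full((n, k), np.inf)
    back = np.zeros((n, k), dtype=int)
    dp[0] = costs[0]
    for i in range(1, n):
        total = dp[i - 1][:, None] + trans        # (k, k): prev -> current
        back[i] = np.argmin(total, axis=0)        # best predecessor per candidate
        dp[i] = total[back[i], np.arange(k)] + costs[i]
    path = [int(np.argmin(dp[-1]))]               # backtrack from the best end state
    for i in range(n - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return path[::-1], float(dp[-1].min())
```

The recursion visits every (previous, current) candidate pair once, so the cost is O(n·k²) for n seam points and k candidates each.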

(3) Online weld seam detection based on image edge chains and point-cloud edge detection:

When no 3D model of the workpiece is available, a complete online seam detection and trajectory planning pipeline is designed. A high gray-level expectation method first enhances the image contrast of the weld region, and adaptive threshold segmentation extracts candidate edge points. An image edge-chain tracing algorithm is proposed: starting from seed points, it grows along the edge direction to form continuous edge chains, then selects the weld edge by chain length and curvature. Once the seam is located in the 2D image, its edge points are back-projected into 3D using the depth image, forming an initial seam point cloud. Isolated outliers in the cloud are removed with a radius filter. A point-cloud edge detection algorithm then computes, for each point, the local plane-fitting residual and the rate of change of the normal vector, and applies a joint threshold to decide whether the point lies on an edge. Finally, the edge point cloud is fitted with a cubic B-spline to obtain a smooth 3D welding trajectory. In real tests on an S-shaped fillet-weld workpiece, the seam detection success rate was 94.7%, the mean deviation from a manually taught trajectory was 0.51 mm, and the total time from scan to trajectory was about 8.3 s.
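Two of the post-processing steps, the radius outlier filter and the cubic B-spline fit, can be sketched with SciPy. Function names and thresholds below are illustrative, not taken from the original pipeline:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.interpolate import splprep, splev

def radius_filter(points, radius=0.01, min_neighbors=3):
    """Drop isolated points: keep a point only if it has at least
    min_neighbors other points within `radius` (same units as the cloud)."""
    tree = cKDTree(points)
    counts = np.array([len(tree.query_ball_point(p, radius)) - 1
                       for p in points])   # -1 excludes the point itself
    return points[counts >= min_neighbors]

def fit_seam_spline(points, n_samples=100, smooth=1e-4):
    """Fit a cubic B-spline through ordered seam points and resample it
    uniformly in the spline parameter."""
    tck, _ = splprep(points.T, s=smooth, k=3)
    u = np.linspace(0.0, 1.0, n_samples)
    return np.array(splev(u, tck)).T        # (n_samples, 3) smooth trajectory
```

The smoothing factor `s` trades fidelity against noise suppression; the seam points must be ordered along the seam before fitting, which the edge-chain tracing provides.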

The accompanying program below sketches all three components: a simplified non-rigid CPD registration (this listing regularizes with a Gaussian motion-coherence kernel rather than the thin-plate spline described above), viewpoint planning by dynamic programming, and the online seam detector.

```python
import numpy as np
from scipy.spatial import KDTree


class NonRigidCPD:
    """Simplified non-rigid coherent point drift with a Gaussian
    motion-coherence kernel as the regularizer."""

    def __init__(self, beta=2.0, lambda_reg=3.0):
        self.beta = beta              # Gaussian kernel width
        self.lambda_reg = lambda_reg  # smoothness weight

    def gaussian_kernel(self, X, Y):
        # Gram matrix of Gaussian affinities between two point sets
        diff = X[:, None, :] - Y[None, :, :]
        return np.exp(-np.sum(diff**2, axis=2) / (2 * self.beta**2))

    def register(self, X, Y, max_iter=150, tol=1e-4):
        # X: (M, 3) model points; Y: (N, 3) scene points deformed toward X
        N = Y.shape[0]
        G = self.gaussian_kernel(Y, Y)  # coherence kernel over Y
        W = np.zeros((N, 3))            # kernel weights of the deformation
        for _ in range(max_iter):
            Y_hat = Y + G @ W
            d2 = np.sum((Y_hat[:, None, :] - X[None, :, :])**2, axis=2)
            P = np.exp(-d2 / (2 * self.beta**2))          # (N, M) affinities
            P /= np.sum(P, axis=1, keepdims=True) + 1e-6  # soft correspondences
            sigma2 = np.sum(P * d2) / (3 * np.sum(P))     # residual variance
            # pull every scene point toward its soft target P @ X,
            # damped by the coherence prior
            W_new = np.linalg.solve(
                G + self.lambda_reg * sigma2 * np.eye(N), P @ X - Y)
            if np.linalg.norm(W_new - W) < tol:
                W = W_new
                break
            W = W_new
        return Y + G @ W


class OptimalViewPlanner:
    def __init__(self, camera_fov_h=np.radians(45), camera_fov_v=np.radians(35)):
        self.fov_h = camera_fov_h
        self.fov_v = camera_fov_v

    def cost_function(self, pose_i, pose_j, weld_point):
        # pose: [x, y, z, roll, pitch, yaw]; optical axis from pitch/yaw
        pos_i = pose_i[0:3]
        dir_i = np.array([np.sin(pose_i[5]) * np.cos(pose_i[4]),
                          np.sin(pose_i[4]),
                          np.cos(pose_i[5]) * np.cos(pose_i[4])])
        # field-of-view check: weld point must stay near the optical axis
        vec = weld_point - pos_i
        angle = np.arccos(np.dot(dir_i, vec) /
                          (np.linalg.norm(dir_i) * np.linalg.norm(vec) + 1e-6))
        if angle > min(self.fov_h, self.fov_v) / 2:
            return 1e6  # infeasible viewpoint
        # smoothness cost between consecutive poses
        pos_dist = np.linalg.norm(pose_i[0:3] - pose_j[0:3])
        angle_diff = abs(pose_i[4] - pose_j[4]) + abs(pose_i[5] - pose_j[5])
        return pos_dist * 0.2 + angle_diff * 0.5

    def sample_candidates(self, weld_point, n=8, standoff=0.3, ring=0.1):
        # ring of candidate positions above the weld point, each aimed at it
        poses = []
        for ang in np.linspace(0.0, 2 * np.pi, n, endpoint=False):
            pos = weld_point + np.array([ring * np.cos(ang),
                                         ring * np.sin(ang), standoff])
            d = weld_point - pos
            d = d / np.linalg.norm(d)
            pitch = np.arcsin(np.clip(d[1], -1.0, 1.0))
            yaw = np.arctan2(d[0], d[2])
            poses.append(np.concatenate([pos, [0.0, pitch, yaw]]))
        return poses

    def plan(self, weld_points):
        # Viterbi-style DP over sampled candidate poses at each seam point
        cands = [self.sample_candidates(p) for p in weld_points]
        n, k = len(weld_points), len(cands[0])
        dp = np.full((n, k), np.inf)
        back = np.zeros((n, k), dtype=int)
        dp[0] = 0.0
        for i in range(1, n):
            for ci in range(k):
                for cj in range(k):
                    c = dp[i - 1, cj] + self.cost_function(
                        cands[i][ci], cands[i - 1][cj], weld_points[i])
                    if c < dp[i, ci]:
                        dp[i, ci] = c
                        back[i, ci] = cj
        seq = [int(np.argmin(dp[-1]))]
        for i in range(n - 1, 0, -1):
            seq.append(int(back[i, seq[-1]]))
        return [cands[i][c] for i, c in enumerate(reversed(seq))]


class OnlineSeamDetector:
    def image_edge_chain(self, edge_map, min_length=20):
        # group 8-connected edge pixels into chains by flood fill
        h, w = edge_map.shape
        visited = np.zeros((h, w), dtype=bool)
        chains = []
        for y in range(h):
            for x in range(w):
                if edge_map[y, x] > 0 and not visited[y, x]:
                    chain = [(x, y)]
                    stack = [(x, y)]
                    visited[y, x] = True
                    while stack:
                        cx, cy = stack.pop()
                        for dx in (-1, 0, 1):
                            for dy in (-1, 0, 1):
                                nx, ny = cx + dx, cy + dy
                                if (0 <= nx < w and 0 <= ny < h
                                        and edge_map[ny, nx] > 0
                                        and not visited[ny, nx]):
                                    visited[ny, nx] = True
                                    chain.append((nx, ny))
                                    stack.append((nx, ny))
                    if len(chain) > min_length:
                        chains.append(chain)
        return chains

    def pointcloud_edge_detection(self, points, normals, k=20):
        # flag points with a large local plane residual or normal variation
        tree = KDTree(points)
        edge_scores = np.zeros(len(points))
        for i, p in enumerate(points):
            idx = tree.query(p, k + 1)[1][1:]     # k nearest neighbors, self excluded
            centroid = np.mean(points[idx], axis=0)
            resid = np.linalg.norm(p - centroid)  # proxy for plane-fit residual
            if resid > 0.005:
                edge_scores[i] = 1.0
            else:
                edge_scores[i] = np.var(normals[idx], axis=0).mean() * 5
        edge_idx = np.where(edge_scores > np.percentile(edge_scores, 85))[0]
        return points[edge_idx]


# Example usage: trace edge chains in a small synthetic edge map
detector = OnlineSeamDetector()
fake_edges = np.zeros((120, 160), dtype=np.uint8)
fake_edges[60, 20:140] = 1  # a single horizontal seam edge
chains = detector.image_edge_chain(fake_edges)
print(f'Detected {len(chains)} edge chains')
```
