Doctor of Philosophy (PhD)
Electrical and Computer Engineering
David W. Capson
A new strategy for direct visual servoing for robotic position control is described. The approach does not rely on any external position or velocity sensors but sets motor current directly using visual feedback alone. The method is novel in that it can be implemented without specialized vision hardware and is capable of processing visual feedback at frame rates high enough for stable closed-loop position control of practical mechanical systems. A single RS-170 camera has a maximum sampling rate of 60 Hz, which significantly limits visual servoing performance. This limitation is overcome with multiple RS-170 cameras synchronized over a network in round-robin fashion to capture video fields at different instants in time. "Vision nodes", each consisting of a camera and a dedicated computer, continuously process video at field rate to determine robot position. The vision algorithm, which is based on principal component analysis, is demonstrated to be suitable for accurate real-time position determination. Furthermore, the Euclidean distance in eigenspace in the presence of random occlusions is shown to be statistically related to the position measurement error variance. This leads to a novel approach for dealing with occlusions by treating them as "noise" whose variance can be estimated directly from the Euclidean distance in eigenspace. A Kalman filter is then introduced to fuse the feedback from the vision nodes, weighting the position estimates from each camera to produce an improved overall position estimate. The Kalman filter also models the vision transport delays to reduce their effect on the visual feedback. Simulation results illustrate the improvement in dynamic performance as the number of cameras is increased. Further simulations predict robustness to simulated occlusions. An experiment was designed and performed to verify the strategy for direct visual servoing.
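The occlusion-as-noise idea above can be illustrated with a minimal sketch: each camera's eigenspace distance is mapped to an estimated measurement-noise variance, and the per-camera position estimates are combined by inverse-variance weighting, as a Kalman-style measurement update would do. The functions, the linear distance-to-variance map, and all coefficients here are illustrative assumptions, not the dissertation's actual models.

```python
import numpy as np

def estimate_variance(eigen_distance, a=1.0, b=0.01):
    # Hypothetical linear map from Euclidean distance in eigenspace
    # to measurement-noise variance; a and b are illustrative only.
    return a * eigen_distance + b

def fuse_estimates(positions, eigen_distances):
    """Minimum-variance (inverse-variance weighted) fusion of
    per-camera position estimates, mimicking how a Kalman filter's
    measurement update down-weights noisy sensors."""
    variances = np.array([estimate_variance(d) for d in eigen_distances])
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused = float(np.dot(weights, positions))
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused, fused_var

# Camera 2 is partially occluded (large eigenspace distance),
# so its outlying estimate is down-weighted automatically.
positions = np.array([0.50, 0.49, 0.80, 0.51])   # link angle, radians
distances = np.array([0.02, 0.03, 2.00, 0.02])   # eigenspace distances
pos, var = fuse_estimates(positions, distances)
```

Because the fused variance is smaller than any single camera's variance, adding cameras improves the overall estimate, which is consistent with the simulated improvement in dynamic performance as cameras are added.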
A 1-DOF servo drive equipped with a rotating link constituted a simple "planar robot" testbed for demonstrating the distributed vision and control techniques. The testbed was built from "off the shelf" components: a network of four RS-170 cameras and computers connected to a master servo computer over a 100 Mbps Ethernet network. An effective visual sampling rate of 240 Hz was achieved. Several techniques were developed to achieve deterministic communications over Ethernet. The limits on the number of cameras were analyzed, and it was found that the Ethernet network could theoretically support up to 826 cameras; the main limitation was found to be processor time on the master servo computer. Experimental results are shown for the system performing direct visual servoing under various conditions. A direct visual servo employing four cameras exhibits a step rise time of 190 ms, which closely matches the performance obtained with traditional encoder feedback. Additional experimental results demonstrate the servo-hold performance and step responses with and without occlusions. Both full occlusions in a subset of cameras and partial occlusions in all cameras were investigated. The experimental results validate the simulation results and verify that the strategy is capable of stable direct visual servoing and is robust to occlusions. Additional experiments demonstrate performance under varying illumination conditions and various tele-robotic extensions.
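The round-robin synchronization can be sketched as a trigger schedule: with four RS-170 cameras each fielding at 60 Hz but phase-offset by a quarter of a field period, the interleaved stream samples at 4 × 60 = 240 Hz. The schedule function below is a hypothetical illustration of that timing arithmetic, not the network synchronization protocol developed in the thesis.

```python
FIELD_RATE_HZ = 60.0  # RS-170 field rate per camera

def round_robin_schedule(n_cameras, n_samples):
    """Return (camera_index, trigger_time) pairs for cameras fired in
    round-robin order. Each camera is offset by 1/(n_cameras * 60) s,
    so the combined stream samples at n_cameras * 60 Hz while every
    individual camera still runs at its native 60 Hz field rate."""
    period = 1.0 / (n_cameras * FIELD_RATE_HZ)
    return [(k % n_cameras, k * period) for k in range(n_samples)]

schedule = round_robin_schedule(4, 8)
# Interval between consecutive samples across the camera set:
effective_rate = 1.0 / (schedule[1][1] - schedule[0][1])  # ~240 Hz
```

Each camera recurs every fourth slot, i.e. every 1/60 s, so no camera is driven beyond its native field rate while the controller sees feedback four times as often.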
Schuurman, Derek C., "Direct Visual Servoing Using Network-Synchronized Cameras" (2003). Open Access Dissertations and Theses. Paper 1366.