Monday, September 9, 2013 - 16:00

Kyle Schroeder travelled to Aachen, Germany, to present his paper, "Framework for the use of generalized force and torque data in transitional levels of autonomy," at the International Conference on Intelligent Robotics and Applications (ICIRA), held December 6-8. Kyle sent an informal summary of the conference to the rest of our research group, which is available after the break.

From Kyle: December 6-8, I attended the International Conference on Intelligent Robotics and Applications (ICIRA) in Aachen, Germany. The conference was focused on applications of automation. I believe ours was the only radiation/hazardous-environment application. The most common topic, and the most interesting to our group, was vision. The main issues addressed (or remaining open) were object recognition and tracking. A related open issue seemed to be computational resources: when tracking an object, the system must identify when it has lost the object and switch immediately back to a searching/recognition state, and doing both with the same system seemed to be a common difficulty. The Kinect came up only once or twice. I was surprised to hear it mentioned so seldom, but it may be that it is just not as popular there as here yet. In one talk about hand gesture tracking and recognition, the presenter said the Kinect's resolution wasn't high enough to recognize hand features, so it may also be that many of these people were more interested in higher-resolution tasks.
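That track-versus-search switching is often handled as a small state machine: run a cheap local tracker while confidence is high, and drop back to a full (expensive) detection pass the moment the target is lost. Below is a minimal Python sketch of that pattern; `detect_object` and `track_object` are hypothetical stubs standing in for whatever recognizer and tracker a real system would use.

```python
# Minimal search/track state machine: cheap local tracking while the
# target is held, falling back to full-frame detection when it is lost.
# detect_object / track_object are hypothetical stubs, not a real
# recognizer or tracker.

SEARCHING, TRACKING = "searching", "tracking"

def detect_object(frame):
    """Full-frame search; returns a bounding box or None. (Expensive.)"""
    return frame.get("target")  # stub: frames are dicts in this sketch

def track_object(frame, box):
    """Local update of a known box; returns (box, confidence). (Cheap.)"""
    target = frame.get("target")
    return (target, 1.0) if target == box else (None, 0.0)

def process_stream(frames, confidence_threshold=0.5):
    state, box = SEARCHING, None
    for frame in frames:
        if state == SEARCHING:
            box = detect_object(frame)
            if box is not None:
                state = TRACKING
        else:
            box, conf = track_object(frame, box)
            if box is None or conf < confidence_threshold:
                # Lost the target: drop straight back to full detection.
                state, box = SEARCHING, None
        yield state, box

if __name__ == "__main__":
    frames = [{"target": None}, {"target": (10, 10, 32, 32)},
              {"target": (10, 10, 32, 32)}, {"target": None}]
    for state, box in process_stream(frames):
        print(state, box)
```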

Presentations/papers I thought might be interesting to our group:

There was a good presentation on an application of skin recognition. [Khanal, B. and D. Sidibe. "Efficient Skin Detection Under Severe Illumination Changes and Shadows."] The presented method was able to reliably detect skin tones under different lighting conditions by mapping the pixels to another space (the Log-Chromaticity Color Space). In this space, all the pixels of the same color under different lighting conditions were aligned along parallel lines. Each color could then be identified by its distance from the origin as measured along a line perpendicular to the parallel color lines. Skin tones were identified by a particular color range. It worked impressively well even with pictures where the subject was partially lit by direct sunlight and partially in heavy shade, e.g. a hat-shaded face. It also worked at least as well as, or better than, many other algorithms for subjects of different races. I suppose since the canister recognition problem is related to reflectivity and not color, this probably won't help. Perhaps it could be used to identify hazard (radiation, contamination, etc.) markings in all lighting conditions by their Log-Chromaticity Color Space values?
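As I understood the approach, each pixel is mapped to 2-D log-chromaticity coordinates (log ratios of the color channels); under a simple illumination model, one surface color under varying lighting then falls along parallel lines, so projecting onto the direction perpendicular to those lines gives an illumination-invariant value that can be thresholded. Here is a rough NumPy sketch of that idea; the illumination direction and the skin interval below are made-up placeholders, not values from the paper.

```python
import numpy as np

def log_chromaticity(rgb):
    """Map RGB pixels to 2-D log-chromaticity: (log R/G, log B/G)."""
    rgb = np.clip(rgb.astype(np.float64), 1.0, 255.0)  # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([np.log(r / g), np.log(b / g)], axis=-1)

def illumination_invariant(chroma, light_dir):
    """Project onto the direction perpendicular to the illumination
    direction; pixels of one surface color under different lighting
    collapse to (roughly) one value."""
    d = np.asarray(light_dir, dtype=np.float64)
    d /= np.linalg.norm(d)
    perp = np.array([-d[1], d[0]])
    return chroma @ perp

def skin_mask(rgb, light_dir=(1.0, -0.5), skin_range=(-0.1, 0.4)):
    """Threshold the invariant value. light_dir and skin_range are
    hypothetical placeholders; a real system would calibrate both."""
    inv = illumination_invariant(log_chromaticity(rgb), light_dir)
    lo, hi = skin_range
    return (inv >= lo) & (inv <= hi)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
    print(skin_mask(img))
```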

There was one talk on a socially acceptable way for a robot to approach a human using fast marching. Since we don't work with mobile robots, this isn't directly applicable to our group, but I think the fast marching technique might be useful. The presenter set up potential fields based on rules for approaching humans. They looked similar to what I remember from Brian's dose-minimization path planning experiments. I didn't understand exactly how the presenter did it, but he used this "fast marching" approach to ensure that there could be no dead zones or local minima in which to get stuck. (He presented this as a common weakness of potential fields.) Regarding the dose minimization, I seem to remember some issue getting over the large potential gradient that existed at the boundary projecting behind the shield. This fast marching method may be worth examining as a way to overcome that difficulty. [Kessler, J., C. Schroeter, and H.-M. Gross. "Approaching a Person in a Socially Acceptable Manner Using a Fast Marching Planner"]
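For reference, fast marching solves an Eikonal equation |grad T| = 1/F outward from the goal, producing an arrival-time field T with no local minima; steepest descent on T therefore always reaches the goal, which is exactly the property the presenter was after. Below is a compact first-order grid implementation in Python (my own sketch, not the paper's code):

```python
import heapq
import numpy as np

def fast_marching(speed, goal):
    """First-order fast marching on a 2-D grid (unit spacing).
    Solves |grad T| = 1/speed outward from `goal`; the resulting
    arrival-time field T has no local minima, so following -grad T
    from anywhere reaches the goal. A sketch, not the paper's code."""
    T = np.full(speed.shape, np.inf)
    T[goal] = 0.0
    frozen = np.zeros(speed.shape, dtype=bool)
    heap = [(0.0, goal)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < speed.shape[0] and 0 <= nj < speed.shape[1]):
                continue
            if frozen[ni, nj] or speed[ni, nj] <= 0:
                continue
            # Smallest known neighbor value along each grid axis.
            tx = min(T[ni - 1, nj] if ni > 0 else np.inf,
                     T[ni + 1, nj] if ni + 1 < speed.shape[0] else np.inf)
            ty = min(T[ni, nj - 1] if nj > 0 else np.inf,
                     T[ni, nj + 1] if nj + 1 < speed.shape[1] else np.inf)
            a, b = sorted((tx, ty))
            h = 1.0 / speed[ni, nj]
            # Standard quadratic update; fall back to the one-sided
            # update when the wavefront arrives along a single axis.
            if b - a < h and b < np.inf:
                t_new = (a + b + np.sqrt(2 * h * h - (b - a) ** 2)) / 2
            else:
                t_new = a + h
            if t_new < T[ni, nj]:
                T[ni, nj] = t_new
                heapq.heappush(heap, (t_new, (ni, nj)))
    return T

if __name__ == "__main__":
    F = np.ones((5, 5))
    F[2, 1:4] = 0.0  # impassable wall (zero speed)
    print(np.round(fast_marching(F, (0, 0)), 2))
```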

There were also several talks on applications specific to manufacturing. A few professors at the university in Aachen that hosted the conference (Rheinisch-Westfälische Technische Hochschule -- RWTH) were interested in the issues associated with airplane manufacturing tasks. A few papers dealt with the handling of very large materials (such as airplane fuselages). Another paper may be more immediately interesting to Cheryl -- "Re-grasping: Improving Capability for Multi-Arm-Robot-System by Dynamic Reconfiguration" [Corves, B., T. Mannheim, and M. Riedel].

One paper focused on path planning for flexible objects. (The robot was folding boxes.) The paper presented a method for determining the fold order and the subsequent path planning. I'm not sure, but it may be applicable as a bridge between Josh's thesis work and Cheryl's grasp-configuration work: planning the order of tasks, identifying their paths, and determining grasp approaches. [Liu, H. and H. Lin. "Non-rigid Object Trajectory Generation for Autonomous Robot Handling"]

The presentation of the paper “A vision system for the unfolding of highly non-rigid objects on a table by one manipulator” [Triantafyllou, D. and N. A. Aspragathos] seemed quite interesting and may be applicable to identifying other objects and their orientations/configurations in the glovebox.

The first keynote, that of Dr. Dillmann, was good, but I was still very jetlagged, so I can't remember it as well as the other two. The keynote talk by Dennis Hong was, as always, energizing and interesting. It was not entirely applicable to the current state of our program; his work is mostly related to mobile robotics. He spoke about his group's entries in the RoboCup robotic soccer competitions and many of his other mobile robotics projects. He has done a lot with bio-inspired robots. One of the coolest was his car for the blind. His team developed a car that a blind person could drive -- not an automated vehicle in which a blind person could sit, but a car that provided useful feedback allowing a blind person to actually drive around a track.

The keynote by Bradley Nelson was very interesting but not immediately related to our work either. His work is in micro- and nano-robotics, specifically for medical applications. He showed videos of tiny robots his group has developed for injection into eyeballs, which are then steered and controlled by magnets from outside the eye. The robots were very interesting and impressive; the needles in the eyeballs were also very gross.

There were several more interesting vision papers in the "Intelligent Visual Systems" (Wed 1:30-3:00pm and Wed 3:30-5:00pm) and "Image-Processing Applications" (Thurs 1:30-3:00pm) sessions. (See the attached conference program.)