Robotics Research
The paper I submitted with Rahul, Illah, and Aroon on our Visual
Odometry algorithm was accepted to ICRA 2005 and I look forward to
presenting it there in April. ICRA is a top-tier robotics conference.
Babu Pillai and I also submitted a paper to ICRA, on the Robot
Photographer, and it was accepted as well!
Meanwhile I have shifted my focus substantially towards the Dynamic
Physical Rendering Project. Two colleagues, Babu Pillai (Intel
Research) and Seth Goldstein (CMU), and I recently submitted a paper
on an entirely new mode of power delivery for densely connected
modular robot ensembles such as those envisioned in DPR. Our work
demonstrates several strong theoretical and practical results that we
find very exciting, including establishing high-current-capable
power distribution networks across multi-thousand-robot ensembles
within a few dozen timesteps. (A timestep might be from 100 nsec to 1
msec depending on the implementation technology, so in any case the
total time described is a small fraction of a second.)
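To make the timestep claim concrete, here is a toy simulation, entirely
my own construction for this page rather than the algorithm from our
paper: modules are cells in a grid, and each powered module energizes
its neighbors on the next synchronous timestep, so formation time grows
with the ensemble's radius rather than with its total size.

```python
# Toy model: synchronous flood-fill of power across a module grid.
# The grid size and the central feed point are illustrative assumptions.
N = 50                               # 50 x 50 = 2,500-module ensemble
source = (N // 2, N // 2)            # hypothetical central power feed

powered = {source}
frontier = {source}
timestep = 0
while len(powered) < N * N:
    timestep += 1
    nxt = set()
    for (x, y) in frontier:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < N and 0 <= ny < N and (nx, ny) not in powered:
                nxt.add((nx, ny))
    powered |= nxt
    frontier = nxt

print(f"{N * N} modules energized in {timestep} timesteps")   # prints 50
```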
Our work over the past few months on Visual Odometry has been quite
successful and very well received within the community. Certainly others
have built visual odometry systems before, but ours is the first that many
people have been able to see, touch, and experience. It works well enough
that several non-computer-vision researchers have expressed interest in
using the code on their
own robots, and we look forward to helping with that as we prepare the
code for a general open source release this fall. There are several
posters and other materials discussing the visual odometry work on my
professional website at Intel.
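For readers outside computer vision, the sketch below shows the
bare-bones two-frame computation at the heart of a visual odometry
system: track features from one frame to the next, then recover the
relative camera pose. This is a minimal illustration using off-the-shelf
OpenCV calls, not code from our system; the camera intrinsics and image
filenames are placeholder assumptions.

```python
# Minimal two-frame visual odometry sketch (illustrative, not our code).
import cv2
import numpy as np

# Assumed pinhole intrinsics for the illustration (focal length, center).
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])

img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Track corner features from frame 0 into frame 1.
p0 = cv2.goodFeaturesToTrack(img0, maxCorners=500,
                             qualityLevel=0.01, minDistance=8)
p1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None)
p0, p1 = p0[status.ravel() == 1], p1[status.ravel() == 1]

# Recover relative rotation and (unit-scale) translation direction.
E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
print("rotation:\n", R, "\ntranslation direction:", t.ravel())
```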
A colleague and I had a fruitful collaboration this
summer thinking about how to wisely manage finite storage capacity
given an infinite-capacity multimedia data source such as a camera or
microphone and several client applications using the storage
subsystem. The resulting algorithm, which we have termed
"Multi-Fidelity Storage" works suprisingly well and is already being
used in multimedia storage and retrieval work here at the lab. We
were able to show a particularly interesting video recording demo at
the recent Open House on Aug. 22. There are more details, including
posters from the demo, on my Intel website.
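The core idea is easy to sketch: when the store fills, the oldest
recordings are degraded to cheaper, lower-fidelity representations
instead of being deleted outright, so clients retain some version of the
entire history. The toy model below is my illustration of that policy
under invented assumptions (the fidelity ladder and Segment class are
made up for the example), not the lab's actual storage interface.

```python
# Toy multi-fidelity store: degrade old data before discarding it.
from collections import deque
from dataclasses import dataclass

FIDELITY_SIZES = [100, 25, 5]      # bytes/sec at full, medium, thumbnail

@dataclass
class Segment:
    start: float                   # recording start time (seconds)
    seconds: float                 # duration
    fidelity: int = 0              # index into FIDELITY_SIZES

    @property
    def size(self):
        return self.seconds * FIDELITY_SIZES[self.fidelity]

class MultiFidelityStore:
    def __init__(self, capacity):
        self.capacity = capacity
        self.segments = deque()    # oldest segment on the left

    def used(self):
        return sum(s.size for s in self.segments)

    def record(self, seg):
        self.segments.append(seg)
        self._make_room()

    def _make_room(self):
        # Degrade from the oldest segment forward; drop a segment only
        # once it is already at the lowest fidelity.
        while self.used() > self.capacity:
            for s in self.segments:
                if s.fidelity < len(FIDELITY_SIZES) - 1:
                    s.fidelity += 1
                    break
            else:
                self.segments.popleft()

store = MultiFidelityStore(capacity=20_000)
for minute in range(20):
    store.record(Segment(start=minute * 60.0, seconds=60.0))
# Oldest minutes are now thumbnails; the newest remain at full fidelity.
```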
This month I also helped write a workshop paper on VideoFerret, a
video search system presently in development at IRP.
VideoFerret uses Multi-Fidelity Storage (see above) and
Diamond as part of its underlying architecture.
Illah, Rahul, and I are busy preparing the visual odometry demo we
will be presenting at the AAAI conference in San Jose next month. And
our paper on evaluating visual odometry systems has been accepted to
IROS 2004 and we look forward to presenting that in the fall!
I'm also working with a CMU student,
Aroon Pahwa, who has developed
C++ client code to control the PER robots we use for most of our
robotics research around the lab.
At present I'm working with my collaborators,
Illah Nourbakhsh and
Rahul Sukthankar, to
crisply define our research goals for the coming year. More information
will appear here soon!
During the open house at the
Pittsburgh Lablet this month I demonstrated several basic robot
motion-control primitives implemented using computer vision optical
flow techniques. Optical flow is a subfield of computer vision which
seeks to determine camera motion ("ego motion") or viewed-object
motion from a video sequence. For these demos I mounted a webcam on a
simple robot and used the camera as the only sensory input to the
robot's control system while illustrating robust course correction,
accurate (distance) traversal, and reliable detection of precipices.
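Each of those behaviors falls out of simple statistics on the optical
flow field. The sketch below is a reconstruction of the general
approach, not the demo code; it uses OpenCV's Farneback flow estimator,
and the steering rule and precipice threshold are illustrative
assumptions that would need tuning on a real robot.

```python
# Vision-only motion cues from dense optical flow (illustrative sketch).
import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # the webcam is the only sensor
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
distance = 0.0                       # accumulated forward-motion proxy

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev = gray
    h, w = gray.shape
    mag = np.linalg.norm(flow, axis=2)   # per-pixel flow magnitude

    # Course correction: during straight forward motion the flow field is
    # roughly left/right symmetric; an imbalance signals drift to one side.
    steer = mag[:, w // 2:].mean() - mag[:, :w // 2].mean()

    # Distance traversal: integrate mean flow magnitude as a crude proxy
    # for ground covered (would need per-robot calibration to real units).
    distance += mag.mean()

    # Precipice detection: the floor just ahead (bottom rows) normally
    # shows strong flow; if it suddenly vanishes, stop the robot.
    if mag[int(0.8 * h):, :].mean() < 0.1:   # hypothetical threshold
        print("possible precipice ahead -- stopping")
        break

cap.release()
```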
Optical flow is a well-explored field, and the work I showed did not
break new theoretical ground. It was really just a warm-up to future
research and a statement about what I regard as a very important point
about sensor economics in the near future: The increasing
computational capacity of embedded processors and rapidly falling
prices of CCD and CMOS imagers will soon allow computer-vision-based
sensing to displace other types of sensing on cost and performance
grounds. In this case I used an unremarkable $50 webcam atop a $300
robot connected to a $2000 laptop (which could have been significantly
cheaper) to perform several tasks that would have challenged vastly
more expensive research robots equipped with encoders, sonar, and
laser rangefinders (at a cost of $500 to $5,000 for the sensors
alone). Vision-based sensors won't be a panacea, but they will offer
an unprecedented jump in quality and quantity of information (vs. sonar
or infrared) at bargain-basement prices.
You can get more information about this work, including video clips of
two of the demos, from my Intel website.
I've accepted a position on the research staff of the Carnegie Mellon
University Robotics Institute, Pittsburgh, PA. There I will be
working closely with RI faculty and staff and researchers at the Intel
Pittsburgh Lablet to meld sensor networks and autonomous robotics.