Garbage collecting persistent data structures
Summer 2007
I spent a summer at Harvey Mudd in a National Science Foundation REU, learning about persistent data structures (data structures that maintain a version history of updates) and how they can be garbage collected effectively.
Most garbage collectors work on the principle that only elements that are no longer pointed to are garbage; everything else is live and must be kept. For certain persistent data structures, like those in the seminal 1989 paper by Driscoll et al., this approach is especially wasteful. To make their asymptotic performance guarantees, these structures keep pointers between nearly every pair of elements, which guarantees that a conventional garbage collector will retain the entire structure until it can be discarded wholesale.
With a proper interface abstraction, the data structure can ensure that the end user never has to care about its internal workings and can throw away pointers to versions that are no longer interesting. If the data structure is then allowed to direct its own cleanup, the theoretical space savings inherent in its design can become actual space savings.
Since a garbage collector already takes time on the order of the size of the data structure to examine it, the holy grail is an algorithm that takes an equivalent amount of time and yet collects all of the logical garbage. We have not completely solved that problem, but along the way we found many other algorithms that exhibit time/completeness tradeoffs with regard to the amount of garbage found.
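The core idea of persistence, and why reachability-based collectors struggle with it, can be sketched with a structurally shared stack. This is an illustrative Python sketch of my own (the names `Node`, `push`, and `pop` are not from the paper, and it is much simpler than the split-node structure we studied): every update returns a new version that shares nodes with older versions, and it is exactly this sharing that keeps "logically dead" versions reachable.

```python
# A minimal sketch of a fully persistent stack via structural sharing.
# Illustrative only -- not the Driscoll et al. node-splitting construction.

class Node:
    __slots__ = ("value", "next")

    def __init__(self, value, next=None):
        self.value = value
        self.next = next  # older version this node was pushed onto

EMPTY = None  # the empty (initial) version

def push(version, value):
    """Return a new version with value on top; the old version is untouched."""
    return Node(value, version)

def pop(version):
    """Return (top value, previous version) without mutating anything."""
    return version.value, version.next

v1 = push(EMPTY, 1)
v2 = push(v1, 2)    # v2's node points at v1's node: structure is shared
top, v3 = pop(v2)   # top == 2, and v3 is literally v1 again

# Because v2 points into v1, a reachability-based collector cannot free
# v1's node while any newer version is still held, even if the program
# will never ask for v1 again.
```

Here the sharing is benign, but in structures that interleave version pointers throughout (as the split-node structure does for its asymptotic bounds), the same effect keeps essentially every node alive until the last version is dropped.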
Below is a draft of a paper Phil Miller and I wrote on the subject, along with a set of presentation notes on this topic and on garbage collection in general. The presentation was given by Zvi Effron, Phil Miller, and me.
Garbage Collection presentation
Garbage Collecting the Split Node Persistent Data Structure paper
Modeling operator/automation authority
Summer 2006
Over the course of a summer LARSS internship at the NASA Langley Research Center, I investigated how a formally verified architecture for airplane electronics could be extended to include the pilot in its fault-tolerant specification, so that if the pilot became unconscious or otherwise incapacitated, the autopilot could automatically take over. Below are my research paper and poster on our initial work. The problem is far too broad to be solved in a single summer by a few researchers, so research continues in the formal methods group at NASA Langley.
Modeling Operator/Automation Authority poster
Modeling Operator/Automation Authority tech report (figures available upon request)
Cognitive and ecological techniques compared for weapons of mass destruction scenarios
Summer/Fall 2005
Over the summer and fall of 2005 in Vanderbilt's Human-Machine Teaming Laboratory, I learned enough about robotics and human factors to conduct a human factors study identifying potential target areas for robotic applications from a high-level perspective. Below are my research paper and a poster on these starting efforts. The work has since progressed to developing human-robot interfaces based on these techniques.
Cognitive and ecological modeling poster
Cognitive and ecological modeling tech report (figures available upon request)
Working at Flying Tiger Development in La Habra, CA
Summer 2003/2004
I designed, tested, and implemented many games for a wide variety of cell phones in J2ME and BREW, taking several games designed for Japanese handsets and getting them working within the relatively tight constraints of American phones. The most significant title I worked on was the Java port of the Pirates of the Caribbean cell phone game, for which I was the sole developer for a time. Some pictures of the other titles I worked on:
Air Raid : Bomber : Bowling : Cball : Klondike : Nuts&Milk : Pong : Tank