Saturday, August 6, 2011

A good open problem?

What makes an open problem a good one to work on? What are the criteria? Let me take an example and try to argue for it. I think that the 2010 paper by Chlamtac, Krauthgamer, and Raghavendra points to a good research direction. That paper is only a first step, with much room left for improvement. Here are some questions raised by that paper.

- (Improve) Is there hope that, by vastly increasing the number of levels of lifting in the LP (a sketch of what "levels of lifting" means appears after this list), the approximation ratio can be reduced to a constant independent of the treewidth, so that the treewidth affects only the running time?
- (Simplify) Now that there is one proof, extracting its key ideas, is it possible to design a direct, possibly more efficient argument that would not rely on lift-and-project?
- Is there a version that would work in the framework of branch decomposition instead of treewidth?
- (Generalize) The authors present a table of related results and techniques. The diversity of algorithmic techniques is surprising. Is there a way to unify them and provide a single general-purpose algorithm, based on Sherali-Adams lift-and-project, that would simultaneously solve many of the cases listed in the table? As a warm-up question, how does the algorithm perform in the (previously solved) case of uniform demands?
- (Same inputs, other problems) Can a similar approach be tried for other non-local, non-trivial problems on graphs of bounded treewidth, for example Steiner forest (recently PTAS'ed in bounded-treewidth graphs by Bateni, Hajiaghayi, and Marx) or bandwidth problems?
- (Same problem, other inputs) Many planar graph algorithms work by a reduction to graphs of bounded treewidth. Can the result be extended to planar graphs?
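
As background for the first and fourth items, here is a generic, textbook-style sketch of Sherali-Adams lifting; it is not tied to the specific relaxation used by Chlamtac et al., just a reminder of what "level of lifting" refers to. Starting from a 0-1 linear program over a polytope $P = \{x \in [0,1]^n : Ax \ge b\}$, the level-$t$ Sherali-Adams lift multiplies each constraint $a^\top x - b \ge 0$ (and each bound $0 \le x_i \le 1$) by every polynomial
\[
\prod_{i \in I} x_i \prod_{j \in J} (1 - x_j), \qquad I \cap J = \emptyset, \quad |I \cup J| \le t,
\]
expands the products under the rule $x_i^2 = x_i$, and replaces each surviving monomial $\prod_{i \in S} x_i$ by a fresh variable $y_S$ (with $y_\emptyset = 1$). The result is a linear program with $n^{O(t)}$ variables and constraints; its projection to the singleton variables $(y_{\{1\}}, \dots, y_{\{n\}})$ contains the integer hull of $P$, tightens as $t$ increases, and equals the integer hull at $t = n$. "Increasing the number of levels of lifting" means increasing $t$: the LP gets larger (but stays polynomial for constant $t$), while enforcing stronger local consistency and hence, one hopes, a better approximation ratio.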

Here is why I think it's a good research topic. First, network cuts and connectivity problems are the bread and butter of approximation algorithms. That is where much of the action is and where a lot of the significant progress happens, and it seems to be an inspiration for exciting research. Second, related to the first, it's an area where many people are interested in whatever new result comes up, and where it's relatively easier than in other subfields to get papers accepted. Third, this particular problem has not been looked at by that many people, so it is possible that improving on the Chlamtac et al. result is within reach. Fourth, there is a range of questions that come to mind when looking at that paper, some hard, some easier, so it ought to be possible to work on this in a mixed breadth-first-search/depth-first-search manner so as to avoid being fruitlessly stuck with a smooth wall blocking all progress and all ideas: go in depth in the most promising direction until one gets stuck, then branch off to another, less hopeless direction, etc. Fifth, for people doing algorithms, it requires only a modest investment: read in depth a mere couple of background papers in addition to the Chlamtac et al. paper. Sixth, if worst comes to worst and you do not get any new result out of your endeavour, at least you will have learnt about lift-and-project and about treewidth. Learning new tools and techniques is a net plus even if no research result comes out of that failed project. (http://terrytao.wordpress.com/career-advice/learn-the-power-of-other-mathematicians-tools/).
