## Thursday, March 15, 2012

### Email and dirty dishes

The ever-repeated quandary: is time better spent dealing with email or actually doing work? When email exceeds my bandwidth, a depressing accumulation mounts in my inbox, making the start of each day an ordeal. When I give priority to email, the day is spent on small routine tasks and there is no sense of doing anything of actual value. It is a recurring problem, one that will not go away and that only gets worse with delay! It's like dirty dishes piling up in the sink.

### It's all about notation

When designing an algorithm, one critical step is often to find a good representation for the problem's input and output, one that helps us think about it clearly.

When writing a proof, one critical step is often to find the right notation. Once you have decided which objects deserve a notation of their own, that choice will direct (or mislead) the proof.

## Tuesday, March 6, 2012

### The moral of today's lecture

Before designing an algorithm, find a good visual representation for the input and output: it helps you think about the problem.

Case in point: the Viterbi algorithm. The input includes so much data that it is messy to think about, until the right graph is drawn (with colors, too).
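To make the trellis picture concrete, here is a minimal sketch of the Viterbi recurrence in Python (a toy HMM of my own invention; the function name and dictionary layout are illustrative, not the lecture's example):

```python
# Viterbi on the trellis graph: one node per (time, state), edges weighted
# by transition * emission probabilities; the answer is the heaviest path.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # prob[s] = probability of the best path ending in state s so far;
    # each entry of back maps a state to its predecessor on that path.
    prob = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    back = []
    for o in obs[1:]:
        prev, prob, pointers = prob, {}, {}
        for s in states:
            best = max(states, key=lambda r: prev[r] * trans_p[r][s])
            prob[s] = prev[best] * trans_p[best][s] * emit_p[s][o]
            pointers[s] = best
        back.append(pointers)
    # Recover the best state sequence by walking the back-pointers in reverse.
    last = max(states, key=prob.get)
    path = [last]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path)), prob[last]
```

Drawing this table of `prob` values as a colored graph, one column per observation, is exactly the representation that makes the dynamic program obvious.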

When the input is a permutation: represent it as a collection of points (i, sigma(i)), and hope that the output of the problem can be interpreted visually.
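A classic instance of this trick (my own illustration, not necessarily the lecture's): in the point representation, a longest increasing subsequence of the permutation is exactly a longest chain of points going up and to the right. A minimal dynamic-programming sketch:

```python
# Longest increasing subsequence of a permutation sigma, given as a list.
# Viewed as points (i, sigma[i]), this is the longest up-and-right chain.
def longest_increasing_subsequence(sigma):
    n = len(sigma)
    # best[i] = length of the longest increasing subsequence ending at i.
    best = [1] * n
    for i in range(n):
        for j in range(i):
            if sigma[j] < sigma[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)
```

The O(n^2) recurrence is easy to read off the picture: extend any chain whose last point lies below and to the left.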

## Monday, March 5, 2012

### James Q. Wilson on liberal education

James Q. Wilson, whose existence I learned of only upon his death, authored a thought-provoking essay about liberalism and liberal education. He writes:

*A liberal education is at its best when it strikes this balance: when it makes one aware that principles must ultimately be justified by something more than mere utility, that liberty is as worth preserving when it is attacked by a group one admires as it is when assaulted by a group one detests, and that the bonds of civility upon which the maintenance of society depends are more fragile than we often admit.*

This reminded me of the unsightly celebrations in the US a few years ago, when Saddam Hussein was executed. Sure, many people are against the death penalty... except for loathsome criminals like Saddam Hussein!

Currently, it brings to mind the preparation of the presidential campaign in France, where putative candidates must obtain the "signature" of 500 mayors before they can officially declare their candidacy (there are about 35,000 mayors in the country, and each mayor can give their signature to exactly one candidate). One current question is whether the candidate of the extreme right party (the party of all prejudices) will get her 500 signatures. There may be some lobbying to pressure mayors not to give her their signature, even though it appears that she would get the votes of roughly 20 percent of the population in the upcoming election. What is more important: to preserve the liberty of presenting candidates for all significant political parties, or to prevent the spread of ideas that one detests? That is close to the dilemma presented by Wilson.

## Sunday, March 4, 2012

### Lecturing on meta-topics

One primary goal of the Algorithms course that I am currently teaching is to teach problem-solving. That is normally done by example, going through a diverse array of algorithmic problems that come up in computer science. However, my way of teaching that has changed a little bit in recent years, with a shift to the meta-level.

Many of the methods I use to attack problems come without thinking. Through taking courses, reading papers, and collaborating with other researchers, I have absorbed certain ways of going about solving problems, and now I reproduce them in the hope that students will absorb them in turn. However, one of my former PhD students, Warren Schudy, systematically tried to bring out the meta-principle behind our discussions. He would be very explicit about it, reflecting aloud: "So, to get unstuck on this question, you are suggesting trying out some small concrete examples; that's your guiding principle" - and I would think: "Yes, of course he's right, that's what I'm doing!" Eventually I started to see the value of making explicit the principles that guide how we try to solve problems.

That adds a new element to my teaching: in class, I now try to not only solve the problem of the day, but also add a parenthetical comment to be more explicit about the path we took to problem-solving, the "guiding principle". It does not come naturally, since it can feel like pontificating, but it seems to have some value. Perhaps it helps demystify the "creative" steps and show that much can be achieved simply by systematically following a natural path of exploration - natural, once you have acquired enough experience to know what you're doing. I do not always remember to do it, but, following Warren's example, I try, at the end of each piece of algorithmic design or analysis, to reflect back on how we achieved it, and on the meta-tools that led us to the solution.

## Saturday, March 3, 2012

### The New York Review of Books on absolute versus relative error

We usually evaluate approximation algorithms by the ratio of the output value to the unknown optimal value. Although it is almost a dogma at this point that this is the most reliable way to evaluate an algorithm, the last issue of the New York Review of Books gives an outstanding counterexample.

The article debunks a disinformation piece published by 16 scientists in the Wall Street Journal on January 27; the misleading WSJ article was an exercise in global warming skepticism. William Nordhaus establishes the fallacy of their arguments one by one. The point that caught my attention was the one about economic analysis. Nordhaus writes:

*The sixteen scientists argue, citing my research, that economics does not support policies to slow climate change in the next half-century.*

He then moves on to offer the following, which can be interpreted as a beautifully clear argument against making a dogma out of approximation ratios:

*The authors cite the “benefit-to-cost ratio” to support their argument. Elementary cost-benefit and business economics teach that this is an incorrect criterion for selecting investments or policies. The appropriate criterion for decisions in this context is net benefits (that is, the difference between, and not the ratio of, benefits and costs).*

*This point can be seen in a simple example, which would apply in the case of investments to slow climate change. Suppose we were thinking about two policies. Policy A has a small investment in abatement of CO2 emissions. It costs relatively little (say $1 billion) but has substantial benefits (say $10 billion), for a net benefit of $9 billion. Now compare this with a very effective and larger investment, Policy B. This second investment costs more (say $10 billion) but has substantial benefits (say $50 billion), for a net benefit of $40 billion. B is preferable because it has higher net benefits ($40 billion for B as compared with $9 for A), but A has a higher benefit-cost ratio (a ratio of 10 for A as compared with 5 for B). This example shows why we should, in designing the most effective policies, look at benefits minus costs, not benefits divided by costs.*
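Nordhaus's arithmetic is easy to check mechanically; here is a minimal sketch (the policy numbers are his, the variable names are mine):

```python
# Nordhaus's two hypothetical policies: (cost, benefit) in billions of dollars.
policies = {"A": (1, 10), "B": (10, 50)}

# Net benefit (benefit minus cost) versus benefit-cost ratio (benefit over cost).
net = {p: b - c for p, (c, b) in policies.items()}
ratio = {p: b / c for p, (c, b) in policies.items()}

best_by_net = max(net, key=net.get)       # B: 50 - 10 = 40 beats 10 - 1 = 9
best_by_ratio = max(ratio, key=ratio.get) # A: 10 / 1 = 10 beats 50 / 10 = 5
```

The two criteria pick different policies, which is precisely the point.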
