Ken Binmore reviews Axelrod’s 1997 book The Complexity of Cooperation (the follow-up to his The Evolution of Cooperation). He is surprisingly harsh and speaks of the ‘tit-for-tat bubble’, the persistence of which ‘is a mystery to game theorists. Why do science writers continue to use TIT-FOR-TAT as the paradigm for human co-operation?’
Slogans are easy, but often false or non-distinguishing (e.g., when a slogan and its opposite are both true); on the other hand, truths may be too complex to be useful for guiding our behavior …
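For readers unfamiliar with the strategy at the center of the ‘bubble’: TIT-FOR-TAT in the iterated prisoner’s dilemma simply cooperates first and then mirrors the opponent’s previous move. A minimal sketch (the payoff values are Axelrod’s standard ones; the function names and the always-defect opponent are illustrative, not from Binmore’s review):

```python
# Standard iterated-prisoner's-dilemma payoffs: (row player, column player).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then copy the opponent's last move.
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other and return total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates: (9, 14)
```

The sketch also shows why the slogan is so seductive: against itself TFT yields full cooperation, yet against a defector it loses only the first round. Binmore’s point is that this simplicity does not survive contact with noise, richer strategy spaces, and the full machinery of game theory.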
There is a similarly critical 1999 reappraisal of multi-agent systems research by Nwana and Ndumu. A central message is this:
A new field is only defined by its problems, not its methods/techniques. We argue strongly that MAS has to some degree been falling into the trap that has befell AI – that of deluding itself that its methods and techniques (e.g. cooperation, rationality theories, agent languages, conceptual and theoretical foundations, multi-agent planning, negotiation) are the real important issues. No! They are not! It is the problems that they are meant to solve as in air-traffic control or electronic commerce that are foremost important.
They quote Donald Schon, who wrote in 1983 that
there is a high, hard ground where practitioners can make effective use of research-based theory and technique, and there is a swampy lowland where situations are confusing “messes” incapable of technical solution. The difficulty is that the problems of the high ground, however great their technical interest, are often relatively unimportant to clients or to the larger society, while in the swamp are the problems of greatest human concern. Shall the practitioner stay on the high, hard ground where he can practice rigorously, as he understands rigor, but where he is constrained to deal with problems of relatively little social importance? Or shall he descend to the swamp where he can engage the most important and challenging problems if he is willing to forsake technical rigor?
Interestingly, Binmore and Nwana/Ndumu are both critical of some received wisdom, but from very different perspectives: Binmore focuses on theoretical truth (where Schon’s rigor lives), Nwana/Ndumu on practical problem solving (Schon’s swamp).
One more quote from Nwana/Ndumu about MAS & real problems:
Too many academic MAS researchers with a few notable exceptions do not really “own” (and hence do not appreciate) real MAS problems. Solution merchants looking for problems, drawn in by the hype of the domain? Furthermore, from the viewpoint of practitioners, some of the issues addressed by academics are rather pedestrian, again because of a failure of understanding and/or facing up to the real problems required of multi-agent system designers. We have in mind issues such as agent rationality arguments, logics, formalisation of belief, desire and intentions, etc. – for their own sake – which buys us nothing, and looks dangerously half-baked, hollow and impractical, particularly in the absence of a real problem. We say this with some trepidation because academic researchers require the freedom to do research for its own sake – and they should not be shackled by real world concerns. However, our point here is a rather very important one of premature formalisation. Stuart Russell argues in his 1997 AI journal article against what he calls “premature mathematization” in AI. He writes: “There is always a danger, …, that … can lead to “premature mathemization”, a condition characterised by increasing technical results that have increasingly little to do with the original problem” (Russell, 1997).