Call for Papers: ‘Artisanship in Culture and Enterprise’

Conference on Voluntary Governance
Artisanship in Culture and Enterprise
November 5–7, 2020
Arizona State University | Tempe, AZ

Info: https://csel.asu.edu/VGC

Ideological and political polarization increasingly inhibits creative and consensus-building responses to social problems. Yet, as Vincent Ostrom observed, “human relationships are integrally bound together through artisanship and the artifactual character of human creations that are constitutive of cultures, societies, and civilizations as aggregate patterns of order.” The future of self-governance depends upon a citizenry who refuse to allow government “to be the sole agent and the only arbiter of [their] happiness… to spare them all the care of thinking and all the trouble of living” (Tocqueville). In a free society, robust citizenship requires more than voting and policy advocacy; it also requires active engagement in crafting community together.

The 2020 Research Conference on Voluntary Governance will convene scholars interested in exploring how people in ordinary life across a society (and between societies) actually come to coordinate their activities to enable the achievement of individual and collective goals. We seek papers and presentations that:

- Explicate and develop the concept of artisanship as a framework both for creative problem-solving and for transcending widening ideological gaps.
- Explicate and develop, through theoretical analysis and/or case studies, the meaning of a “science of culture” and the interplay between common knowledge, shared communities of understanding, patterns of accountability, and mutual trust in the constitution of viable social orders.
- Explore how the self-organizing and self-governing capabilities of citizens in democratic political orders originate and are cultivated and sustained.
- Explore the cultural foundations of creative civilizations.
- Present case studies of historical and contemporary social enterprises that successfully offer creative and consensus-building responses to social problems, especially in the areas of education, criminal justice, and welfare-to-work.


Lucas’s ‘overdue’ Nobel prize…?

The other day I had a brief but thought-provoking discussion with a colleague of mine about a paper written by Julia Kiraly, former vice president of the National Bank of Hungary. Prof. Kiraly is a leading economic policymaker in my home country, though she also has a profound knowledge of modern macroeconomics, admittedly far more profound than economic policymakers normally have.

The title of the paper, in loose translation, is ‘The end of macroeconomics, or an overdue Nobel prize’ (you can find the original Hungarian version here). Its overarching purpose is to provide a comprehensive overview of Robert E. Lucas’s contribution to business cycle theory and modelling, with his equilibrium approach and the island models in focus, and to assess these achievements in the context of the overall evolution of economics. It was published in 1998, a few years after Lucas’s Nobel prize, which is a circumstance we ought to keep in mind.

As far as the overview is concerned, I have nothing to object to. The author offers a fair and balanced summary, free of the common qualms about the alleged implausibility of the equilibrium approach. This in itself is an achievement. She does her best to explain what it means to approach the problem of large-scale macroeconomic fluctuations as ‘equilibrium’ phenomena, where business cycles are not failures of optimization but its consequences.

At the same time, I am unconvinced that Lucas’s Nobel prize was ‘overdue’ in any sense of the word. In the author’s phrasing, ‘overdue’ refers to the alleged fact that Lucas received his prize in a day and age when he was past his peak. This is admittedly true, though it by no means implies that the prize was awarded too late. To see why, it pays to have a look at some facts and interpretations.

Lucas received his prize in 1995, by which time he had started abandoning his earlier adherence to the idea of monetary business cycles. That year belongs to an era characterized by a diverse set of interests, ranging from ‘monetary infected’ real business cycle models (as in his ‘Models of business cycles’ from 1987) through growth theory (see his collection ‘Lectures on economic growth’, published in 2002) to an intense interest in the industrial revolution (see his essay ‘The industrial revolution–Past and future’ from 2004). His most important papers date from the 1970s: I would mention his island models from 1972, 1973 and 1975, and his ‘Econometric policy evaluation’ paper together with the related work ‘After Keynesian macroeconomics’, co-authored with Tom Sargent. How these parts of Lucas’s career relate to one another, and how one stems from another, is a distinct problem.

Beyond question, throughout the 1970s Lucas was the ‘strong man’ of macroeconomics. But note that Milton Friedman received his Nobel prize in 1976, in a year that belonged to the ‘Lucas era’, so to speak. De Vroey in his recent ‘History’ divides Friedman’s lifework into three distinct (albeit occasionally overlapping) periods (p. 67), of which the second, ranging from 1948 to the early 1970s, is labelled the ‘most creative years’. Judged by this fact, Friedman’s prize was also overdue in this sense: in 1976 he was admittedly past his peak. It is enough to realize that the way he presented the Phillips curve in his Nobel lecture bears a close resemblance to the way Lucas addresses the problem. There Friedman tells a story which sounds exactly like Lucas’s specific signal extraction problem.
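To make the parallel concrete, here is a minimal sketch of the signal extraction problem as it shows up in textbook renderings of the island models; the notation is mine, not Lucas’s or Kiraly’s. A producer on island $i$ observes only the local (log) price

$$p_i = p + z_i, \qquad p \sim N(\bar{p}, \sigma_p^2), \quad z_i \sim N(0, \sigma_z^2),$$

where $p$ is the unobserved aggregate price level and $z_i$ an island-specific relative shock. The best estimate of the relative component is

$$E[z_i \mid p_i] = \theta\,(p_i - \bar{p}), \qquad \theta = \frac{\sigma_z^2}{\sigma_z^2 + \sigma_p^2},$$

so producers respond to a local price change only to the extent that they attribute it to relative rather than aggregate disturbances. Friedman’s Nobel-lecture story of workers confusing nominal and real wage changes runs along the same lines.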

The setting fundamentally changed for Lucas in the early 1980s when Edward C. Prescott (and others) came out with the idea of real business cycles. It was so radical a suggestion that it pushed everything aside, including, as he recalls those years, Lucas himself. Walking along the path Lucas had paved, it was the RBC approach that dominated theoretical and empirical macroeconomics in the 1990s: RBC theorists figured out how to do economics in the way Lucas had suggested in the 1970s. It is one thing to say that traditional Keynesian models miss the target by assuming the stability of economic behaviour; it is another thing to say that macroeconomics should pay attention to the dynamic character of behavioural rules, captured by the allegedly stable parameters of equations; and it is yet a third thing to render economics ready to apply these principles. And we should also bear in mind that Edward Prescott, too, received his Nobel prize some 20 years after his most productive and powerful years…


Metaekonomia II.

Polish authors are particularly active in methodology, setting a very positive example for the whole profession. A recent outcome of their labours is ‘Metaekonomia II.’, the second volume of the series. This volume is dedicated to the investigation of some methodological problems of macroeconomics, be they recent or age-old, chronic or acute.

I was an invited contributor to the project (the introductory chapter, entitled ‘Major methodological problems in contemporary macroeconomics’ and translated into Polish, is my paper), the only non-Polish author besides Daniel M. Hausman. Working as part of this illustrious team was a great pleasure and honour. I can hardly wait to receive my copy.

And, of course, thank you guys for this kind invitation.

How Friedman’s F53 fits into his oeuvre

This is a difficult question, as the answer requires one both to properly understand Friedman’s stance in F53 (what he thought about methodology) and to reconstruct the actual methodology underlying his applied economics (what kind of methodology he actually followed). It is not about reading an unnatural harmony into his oeuvre: it is a live option that in F53 he suggested a methodology he did not actually follow.

These days I time and again find myself in the delightful position of thinking a lot about Friedman’s methodology, as one of my students is writing her BA thesis on Friedman’s monetary economics and the Fed’s Friedmanian experiment. My student is not frightened of digging deep into methodology, so during our long conversations we try to approach the problem from different directions. I do love these chats. She is very talented and ambitious, with outstanding learning and, yes, reading abilities. I am especially grateful to her for keeping the topic alive: she has already recognized that a critique of Friedman is best mounted on methodological grounds. The other day we talked about how important or ‘representative’ Friedman’s positivist methodology is in terms of his lifework.

It is well known that F53 can be read in various ways. Even though it is the instrumentalist reading that dominates the literature, Uskali Mäki’s efforts to provide an alternative, realist rendition underline the fact that F53 is far from a homogeneous text. Mäki was unable to persuade the profession of the plausibility of his realist reading, though his point was effective enough to show that F53 is, to say the least, quite enigmatic. In other words, there are parts of the text that can easily be read in a realist way, whilst there are others that invalidate this reading, and these conflicting parts can hardly be reconciled.

So what if we cut the text into pieces to create a body that can rightly be considered realist? As a matter of fact, this was Mäki’s strategy: to ‘reconstruct’ F53 as a realist manifesto. However, we can do so even without believing that such a strategy leads to a proper and coherent interpretation. Cutting the text into parts can help us reveal the inner conflicts in the text.

Mäki is right to think that some parts (or even the vast majority) of the text can be reconciled with a realist position, whilst there are parts that cannot. At bottom, Friedman went no further than saying that theoretical models are descriptively unrealistic and that descriptive performance is a wrong basis for theory choice. This is true, all the more so as science from the time of Galileo (at least) has followed this principle. It is beside the point now, but I do think this was the basic message that made F53 so popular, acceptable and easily digestible. However, there is a problem Friedman disregards: there may be other requirements theoretical assumptions are to meet. There are realist assumptions that are descriptively false, whilst some other descriptively false assumptions have nothing to do with reality. His point is shown in his example of utility-maximizing leaves, which is supposed to be an as-if assumption: we know that leaves are not rational maximizers, whilst their ‘behaviour’ at a phenomenal level might well be described with this assumption.
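To see what the as-if construction amounts to, here is a minimal formal rendering of the leaves example; the notation is mine, not Friedman’s. Each leaf $i$ is treated as if it chose its position $x_i$ to solve

$$\max_{x_i} \; S(x_i \mid x_{-i}),$$

where $S$ is the sunlight a leaf receives given the positions of the other leaves; the observed density of foliage is then read as the outcome of this fictitious optimization, even though no leaf optimizes anything. The assumption saves the phenomena at the level of observed density while attributing to the entities a property they plainly lack.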

We could say that Friedman missed the point when talking about rational leaves; I bet this is what Mäki had in mind. So it may be possible that by throwing out this example we can get a homogeneous text for F53. This option is something to consider, though at this point Friedman’s oeuvre cuts in. Is it certain that such purely unrealistic and fictitious assumptions are unprecedented in his works? I don’t think so; suffice it to refer to his Phillips curve papers (his presidential address and his Nobel lecture), where he cooked up some confusing assumptions only to arrive at the outcomes he had set in advance.

So should we leave his parable of the rational leaves out of the reconstruction? I don’t think so. In my view, such an omission would be justified only if Friedman had never applied such cooked-up assumptions, but he did! Thus assumptions having nothing to do with reality beyond the phenomenal level seem to be effective parts of his theorizing habits. At the same time, far be it from me to say that Friedman was a causal instrumentalist, in other words, that he never thought in causal terms. For instance, he really did conceive of the money supply as a (proximate) cause of large-scale fluctuations.

So what about Friedman’s methodology and his F53? I wouldn’t say he was an ardent antirealist or instrumentalist, but he had no problem using antirealist assumptions. With no better labels at hand, this is the attitude of ‘anything goes’…


Some personal reflections on the Lucas papers

Having completed the work leading to ‘The Friedman-Lucas transition in macroeconomics‘, I found myself in the delightful position of being able to make some personal remarks on the Lucas papers. The whole archival material takes up some 13,000 pages, so my having covered about 5,000 pages does not put me in a position to speak about the Lucas papers as such; still, this is a sample extensive enough for drawing some interesting conclusions.

The first thing to mention is Lucas’s special sense of humour. It is a recurrent pattern in his notes that in a first attempt he does not cut down on his open and vitriolic sarcasm, and it is only on a second reading that he confronts his heightened rhetorical profile. On such occasions he rubs out complete sentences whilst trying to keep the intended meanings, though in more modest and reserved forms. Of course, it is always the corrected versions that appear in the published texts, so these corrections provide an insider’s view of Lucas’s personal character. I love every one of them.

Second, he loves taking notes. The whole Lucas archive is well organized, so it is easy to identify the publication for which a given note was taken. In such notes he oftentimes goes into problems which are not covered in the final published work, so these notes are preliminary studies Lucas used to clarify some ideas or to dwell upon problems that, as he thought, were only of secondary importance. For instance, the first drafts of his paper ‘Adaptive behavior and economic theory‘ are available, but we can hardly find a single sentence in the drafts that turns up in the published study. These notes are often of a methodological character, an interesting detail.

And third, Lucas’s high intellectual power is obviously manifest in the papers. He was cognizant of the importance of his achievements; as he puts it in one paper, he was the one who reinvented macroeconomics after Keynes’s and Friedman’s missteps. I love this tone of voice, admittedly. But at the same time, he is honest and self-critical. To mention one story, the conference at which RBC theory showed up pushed him off the pitch, and this is the way he remembers those developments.

Why microfoundations?

In last week’s post on Friedman’s methodological position I mentioned as a misstep the way Friedman neglected the real properties of economic agents and other entities. I know it is a contentious issue, but still, I identified Friedman’s methodological position in F53 as a principled neglect of the need for real entity properties or, in other words, of the need to form true propositions about the properties of the entities involved. In this post I try to shed some light on why Friedman’s mistake can serve as a strong argument in favour of the microfoundations project.

To find the answers we need to leave economics for philosophy. Agents and other entities form a structure, so macro stems from micro in one way or another. This is the hotly debated problem of supervenience and reducibility, on which Brian Epstein and Kevin Hoover have recently published some critiques. Epstein, in his ‘Why macroeconomics does not supervene on microeconomics‘, argues that neither supervenience nor reducibility holds, so the micro cannot determine the macro; thus something else is needed to drive what the macro looks like. Hoover’s opinion is more modest, as he admits supervenience: any change in the macro is conceived to stem from an accompanying change in the micro, though some ontologically distinct macro-level entities are said to exist. These entities, such as macroeconomic magnitudes like real GDP or the rate of inflation, cannot exist without the micro and, more specifically, without economic agents, whilst they belong to a genuinely macroeconomic level. As Hoover argues in his ‘Microfoundations and the ontology of macroeconomics‘ and ‘Reductionism in economics‘ (and some other writings), macroeconomics has ontological roots in the micro, though there is something in the macro that should be accounted for on its own terms. Microfoundations thus form only an insufficient ground for the macro.
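To fix terms, here is the standard supervenience schema I have in mind when using the word; the formulation is the usual philosophical one, not Epstein’s or Hoover’s specific wording:

$$\text{the macro supervenes on the micro} \iff \forall w_1, w_2 : \; \mathrm{Micro}(w_1) = \mathrm{Micro}(w_2) \;\Rightarrow\; \mathrm{Macro}(w_1) = \mathrm{Macro}(w_2),$$

that is, there can be no difference in macro-level facts without some difference in micro-level facts. Reducibility is the stronger claim that every macro-level property can be identified with, or defined in terms of, some configuration of micro-level properties; supervenience without reducibility is exactly the position that leaves room for ontologically distinct macro entities.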

The point is that the macro has some anchor in the micro, so we ought to trace everything back as far as possible to the micro: to agents and other microeconomic entities. And this is where structuralist philosophies come into play. In the microfoundations project agents are conceived as the basic units of economies, so they are the ultimate building blocks that make up societies. If Friedman is right, then causally understanding macroeconomic events has nothing to do with the real properties of agents: we can understand why leaves have a given density over a tree by assuming that leaves are rational utility maximizers. By contrast, some structuralist philosophies, especially the semirealist version, call attention to the fact that the way things are structured and the way causality works in structures crucially depend on the properties of entities. On this account we cannot understand the density of leaves on a tree in causal terms whilst neglecting the real properties of leaves.

Entities have intrinsic properties, and it is these properties that determine how entities connect to one another, what they can do to their fellow entities, and what happens as a result of such exposures. Structures are formed by these properties, so entities, objects or agents connect only indirectly, that is, through their properties. Furthermore, causality works along the structures, so ‘fake’ entity properties can form only ‘fake’ structures, which are of no help when it comes to understanding the world in causal terms. We can build structures on non-existent entity properties at any time, and some causes admittedly work in such structures, but these assumed causes are not the same as those that work in reality. This is why Friedman missed the target and why he could not but be an instrumentalist: entity properties do matter.


How and why Friedman was wrong – a methodological argument

Identifying Friedman’s methodological position is not easy, for a number of reasons. He was rather hostile to making methodology: in a letter he regarded an interest in methodology as a mental disease, something that should be excluded from the scope of theoretical considerations. As he put it in an interview, one should ‘do’ methodology instead of ‘making’ methodology: your methodology is revealed by your scientific practice, not by the lip service you pay to how economics ought to be done. To make matters worse, he devoted an enigmatic and hard-to-interpret paper, his famous F53, to methodology, in which he uses a lot of flexible expressions that hide his real methodological stance. In addition, he wrote a number of papers on various theoretical issues in which he dwelled upon methodology as a topic of secondary importance, so if we want to delineate what Friedman’s methodological stance was, we need to browse a lot of occasionally contradictory papers and other works. In this post I will make an attempt to identify his methodological position by highlighting some mistakes in his thinking.

It is an oft-made point that Friedman objected to the use of causalist terminology; Hammond’s interview with Friedman, mentioned above, is a good source to read in this respect. He was convinced that causal thinking inevitably implies an infinite regress: if we talk about a cause of an event, this highlighted cause has further causes, each of these causes has further causes, and so on ad infinitum. For this reason he insisted on using the term ‘proximate cause’ and other related expressions. He was partly right: causes may have causes, but this is by no means an excuse for refraining from causal thinking. Newton realized that it is universal gravitation that holds the universe together, so the orbit of Jupiter is well explained by gravity, even though to this end we do not need to know anything about the causes of gravity. We can stop halfway along any causal chain.

Another prominent point in his thinking is the emphasis upon poor descriptive performance. In this respect he is admittedly right: causally adequate representations do not need to mirror reality. This is an idea that modern representation theory aptly highlights; suffice it to refer to Chakravartty‘s, Contessa‘s or Suárez‘s recent works. In this post you can find some more details and considerations. It is a commonplace in modern philosophy of science that we can have causally adequate models, and that we can be causal realists even with highly abstract and idealized theories and models. Thus today it is a commonly shared standard that science is supposed to apply descriptively weak models.

However, there is a fly in Friedman’s anti-descriptivist ointment. From the fact that science does not need ‘lifelike’ or realistic theories he infers that reality plays no role in theorizing. In my reading this is the very point he tries to make in F53. Even if he was interested in causal thinking to some extent (suffice it to highlight the causal role he attributed to money in triggering large-scale fluctuations), he conceived causal thinking as achievable through concepts and assumptions having nothing to do with reality. Do you remember his example in F53 where he suggests a model of the density of leaves on a tree in which leaves are supposed to be rational utility maximizers, or to act as if they were rational utility maximizers? I think this is the point: in his thinking it is possible to achieve causal understanding even when our theoretical entities are totally alien to reality. Note that this is not about abstraction and proper realist idealization. Abstraction always starts from reality: it is reality some aspects or details of which we disregard when theorizing, and the remaining parts, the parts we highlight in theory, always come from reality. And even though idealization means assuming something non-existent, via properly outlined representational codes it is possible to stay in touch with reality. For instance, Lucas with his highly formalized island models accentuated a real aspect of real market economies, namely that agents act on local markets and have information deficiencies regarding the supra-market level. This is idealization, but realist idealization.
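As an illustration of how this realist idealization cashes out at the aggregate level, the island setup is usually summarized in the textbook ‘surprise’ supply curve (the notation is the standard textbook one, not Lucas’s own):

$$y_t = y_t^n + \beta\,\big(p_t - E[p_t \mid I_{t-1}]\big), \qquad \beta > 0,$$

where output deviates from its natural level $y_t^n$ only in response to unanticipated movements in the price level. The islands themselves are descriptively false, yet they are the vehicle for a real feature of market economies, namely agents’ limited information about aggregate conditions; this is exactly the contrast with the leaves, whose assumed rationality corresponds to no property they actually have.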

This is, I believe, the severest mistake Friedman made: to infer from descriptive inaccuracy that reality may be neglected. I will get back to this question in a post soon.
