
Methods Used in Economic Research: An Empirical Study of Trends and Levels

The methods used in economic research are analyzed on a sample of all 3,415 regular research papers published in 10 general interest journals every 5th year from 1997 to 2017. The papers are classified into three main groups by method: theory, experiments, and empirics. The theory and empirics groups are almost equally large. Most empirical papers use the classical method, which derives an operational model from theory and runs regressions. The number of papers published increases by 3.3% p.a. Two trends are highly significant: The fraction of theoretical papers has fallen by 26 pp (percentage points), while the fraction of papers using the classical method has increased by 15 pp. Economic theory predicts that such papers exaggerate, and the papers that have been analyzed by meta-analysis confirm the prediction. It is discussed whether other methods have smaller problems.

1 Introduction

This paper studies the pattern in research methods in economics using a sample of 3,415 regular papers published in the years 1997, 2002, 2007, 2012, and 2017 in 10 journals. The analysis builds on the beliefs that truth exists, but that it is difficult to find, and that all the methods listed in the next paragraph have problems, as discussed in Sections 2 and 4. By this I do not imply that all – or even most – papers have these problems, but we rarely know how serious they are when we read a paper. A key aspect of the problem is that a “perfect” study is very demanding and requires far too much space to report, especially if the paper looks for usable results. Thus, each paper is just one look at an aspect of the problem analyzed. Only when many studies using different methods reach a joint finding can we trust that it is true.

Section 2 discusses the classification of papers by method into three main categories: (M1) Theory, with three subgroups: (M1.1) economic theory, (M1.2) statistical methods, and (M1.3) surveys. (M2) Experiments, with two subgroups: (M2.1) lab experiments and (M2.2) natural experiments. (M3) Empirics, with three subgroups: (M3.1) descriptive, (M3.2) classical empirics, and (M3.3) newer empirics. More than 90% of the papers are easy to classify, but a stochastic element enters in the classification of the rest. Thus, the study has some – hopefully random – measurement errors.

Section 3 discusses the sample of journals chosen. The choice has been limited by the following main criteria: They should be good journals below the top ten A-journals, i.e., my article covers B-journals, which are the journals where most research economists publish. They should be general interest journals, and they should be so different that it is likely that patterns that generalize across these journals apply to more (most?) journals. The Appendix gives some crude counts of researchers, departments, and journals. It assesses that there are about 150 B-level journals, but less than half meet the criteria, so I have selected about 15% of the possible ones. This is the most problematic element in the study. If the reader accepts my choice, the paper tells an interesting story about economic research.

All B-level journals try hard to have a serious refereeing process. If our selection is representative, the 150 journals have increased the annual number of papers published from about 7,500 in 1997 to about 14,000 papers in 2017, giving about 200,000 papers for the period. Thus, the B-level dominates our science. Our sample is about 6% for the years covered, but less than 2% of all papers published in B-journals in the period. However, it is a larger fraction of the papers in general interest journals.

It is impossible for anyone to read more than a small fraction of this flood of papers. Consequently, researchers compete for space in journals and for attention from the readers, as measured in the form of citations. It should be uncontroversial that papers that hold a clear message are easier to publish and get more citations. Thus, an element of sales promotion may enter papers in the form of exaggeration, which is a joint problem for all eight methods. This is in accordance with economic theory that predicts that rational researchers report exaggerated results; see Paldam (2016, 2018). For empirical papers, meta-methods exist to summarize the results from many papers, notably papers using regressions. Section 4.4 reports that meta-studies find that exaggeration is common.

The empirical literature surveying the use of research methods is quite small; I have found only two articles: Hamermesh (2013) covers 748 articles, published in six years a decade apart, in three A-journals, using a slightly different classification of methods, [1] while my study covers B-journals. Angrist, Azoulay, Ellison, Hill, and Lu (2017) use a machine-learning classification of 134,000 papers in 80 journals to look at the three main methods. My study subdivides the three categories into eight. The machine-learning algorithm is only sketched, so the paper is difficult to replicate, but it is surely a major effort. A key result in both articles is the strong decrease of theory in economic publications. This finding is confirmed, and it is shown that the corresponding increase in empirical articles is concentrated on the classical method.

I have tried to explain what I have done, so that everything is easy to replicate, in full or for one journal or one year. The coding of each article is available at least for the next five years. I should add that I have been in economic research for half a century. Some of the assessments in the paper will reflect my observations/experience during this period (indicated as my assessments). This especially applies to the judgements expressed in Section 4.

2 The eight categories

Table 1 reports that the annual number of papers in the ten journals has increased 1.9 times, or by 3.3% per year. The Appendix gives the full counts per category, journal, and year. By looking at data over two decades, I study how economic research develops. The increase in the production of papers is caused by two factors: the increase in the number of researchers, and the increasing importance of publications for the careers of researchers.
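As a check on these two numbers (my arithmetic, not from the paper): a growth rate of 3.3% per year compounded over the 20 years from 1997 to 2017 gives a factor of 1.033^20 ≈ 1.9, so the two figures are consistent.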

Table 1: The 3,415 papers

2.1 (M1) Theory: subgroups (M1.1) to (M1.3)

Table 2 lists the groups and main numbers discussed in the rest of the paper. Section 2.1 discusses (M1) theory. Section 2.2 covers (M2) experimental methods, while Section 2.3 looks at (M3) empirical methods using statistical inference from data.

Table 2: The 3,415 papers – fractions in percent

Table 3: The change of the fractions from 1997 to 2017 in percentage points

Note: Section 3.4 tests if the pattern observed in Table 3 is statistically significant. The Appendix reports the full data.

2.1.1 (M1.1) Economic theory

These are papers where the main content is the development of a theoretical model. The ideal theory paper presents a (simple) new model that recasts the way we look at something important. Such papers are rare and obtain large numbers of citations. Most theoretical papers present variants of known models and obtain few citations.

In a few papers, the analysis is verbal, but more than 95% rely on mathematics, though the technical level differs. Theory papers may start with a descriptive introduction giving the stylized fact the model explains, but the bulk of the paper is the formal analysis, building a model and deriving proofs of some propositions from the model. How the model works is often demonstrated by a set of simulations, including a calibration made to look realistic. However, the calibrations differ greatly by the efforts made to reach realism. Often, the simulations are in lieu of an analytical solution or just an illustration suggesting the magnitudes of the results reached.

Theoretical papers suffer from the problem known as T-hacking, [2] where an able author, by a careful selection of assumptions, can tailor the theory to give the desired results. Thus, the proofs made from the model may represent the ability and preferences of the researcher rather than the properties of the economy.

2.1.2 (M1.2) Statistical method

Papers reporting new estimators and tests are published in a handful of specialized journals in econometrics and mathematical statistics – such journals are not included. In our general interest journals, some papers compare estimators on actual data sets. If the demonstration of a methodological improvement is the main feature of the paper, it belongs to (M1.2), but if the economic interpretation is the main point of the paper, it belongs to (M3.2) or (M3.3). [3]

Some papers, including a special issue of Empirical Economics (vol. 53–1), deal with forecasting models. Such models normally have a weak relation to economic theory. They are sometimes justified precisely because of their eclectic nature. They are classified as either (M1.2) or (M3.1), depending upon the focus. It appears that different methods work better on different data sets, and perhaps a trade-off exists between the user-friendliness of the model and the improvement reached.

2.1.3 (M1.3) Surveys

When the literature in a certain field becomes substantial, it normally presents a motley picture with an amazing variation, especially when different schools exist in the field. Thus, a survey is needed, and our sample contains 68 survey articles. They are of two types, where the second type is still rare:

2.1.3.1 (M1.3.1) Assessed surveys

Here, the author reads the papers and assesses what the most reliable results are. Such assessments require judgement that is often quite difficult to distinguish from priors, even for the author of the survey.

2.1.3.2 (M1.3.2) Meta-studies

They are quantitative surveys of estimates of parameters claimed to be the same. Over the two decades from 1997 to 2017, about 500 meta-studies have been made in economics. Our sample includes five, which is 0.15%. [4] Meta-analysis has two levels: The basic level collects and codes the estimates and studies their distribution. This is a rather objective exercise where results seem to replicate rather well. [5] The second level analyzes the variation between the results. This is less objective. The papers analyzed by meta-studies are empirical studies using method (M3.2), though a few use estimates from (M3.1) and (M3.3).
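The two levels can be illustrated with a minimal sketch (my illustration, not from the paper; the estimates and standard errors below are hypothetical). The basic level pools the coded estimates, here with a precision-weighted (fixed-effect) average; the second level would then try to explain the variation across estimates, e.g. by meta-regression.

```python
# Minimal fixed-effect meta-analysis sketch (hypothetical numbers, not data from the paper).
import math

# Coded estimates of the "same" parameter, one per primary study, with their standard errors.
estimates = [0.42, 0.15, 0.58, 0.31, 0.07]
std_errors = [0.10, 0.08, 0.20, 0.12, 0.05]

# Basic level: precision-weighted (fixed-effect) average of the estimates.
weights = [1 / se**2 for se in std_errors]
pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"pooled estimate = {pooled:.3f} (se = {pooled_se:.3f})")

# Second level (sketched only): regress the estimates on study characteristics,
# e.g. sample size, publication year, or estimator used (meta-regression).
```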

2.2 (M2) Experimental methods: subgroups (M2.1) and (M2.2)

Experiments are of three distinct types. The last two take place in real life; they are rare, so they are lumped together.

2.2.1 (M2.1) Lab experiments

In 1997, 1.9% of the papers in the sample used this method; by 2017 the share had expanded to 9.7%. It is a technique that is much easier to apply to micro- than to macroeconomics, so it has spread unequally in the 10 journals, and many experiments are reported in a couple of special journals that are not included in our sample.

Most of these experiments take place in a laboratory, where the subjects communicate with a computer, giving a controlled, but artificial, environment. [6] A number of subjects are told a (more or less abstract) story and paid to react in one of a number of possible ways. A great deal of ingenuity has gone into the construction of such experiments and into the methods used to analyze the results. Lab experiments do allow studies of behavior that are hard to analyze in any other way, and they frequently show sides of human behavior that are difficult to rationalize by economic theory. It appears that such a demonstration is a strong argument for the publication of a study.

However, everything is artificial – even the payment. In some cases, the stories told are so elaborate and abstract that framing must be a substantial risk; [7] see Levitt and List (2007) for a lucid summary, and Bergh and Wichardt (2018) for a striking example. In addition, experiments cost money, which limits the number of subjects. It is also worth pointing to the difference between expressive and real behavior. It is typically much cheaper for the subject to “express” nice behavior in a lab than to be nice in the real world.

(M2.2) Event studies are studies of real world experiments. They are of two types:

(M2.2.1) Field experiments analyze cases where some people get a certain treatment and others do not. The “gold standard” for such experiments is double blind random sampling, where everything (but the result!) is preannounced; see Christensen and Miguel (2018). Experiments with humans require permission from the relevant authorities, and the experiment takes time too. In the process, things may happen that compromise the strict rules of the standard. [8] Controlled experiments are expensive, as they require a team of researchers. Our sample of papers contains no study that fulfills the gold standard requirements, but there are a few less stringent studies of real life experiments.

(M2.2.2) Natural experiments take advantage of a discontinuity in the environment, i.e., the period before and after an (unpredicted) change of a law, an earthquake, etc. Methods have been developed to find the effect of the discontinuity. Often, such studies look like (M3.2) classical studies with many controls that may or may not belong. Thus, the problems discussed under (M3.2) will also apply.
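The idea behind such discontinuity designs can be shown with a stylized sketch (my illustration; the data are made up): compare the outcome in a window just before and just after an unpredicted change, here a reform taking effect at a known date.

```python
# Stylized natural-experiment sketch (made-up data): mean outcome before vs. after a discontinuity.
import statistics

# Monthly outcome series; the (unpredicted) reform takes effect at index 6.
outcome = [2.1, 2.0, 2.3, 2.2, 2.1, 2.2,   # before the reform
           2.8, 2.9, 2.7, 3.0, 2.8, 2.9]   # after the reform
cutoff = 6

before, after = outcome[:cutoff], outcome[cutoff:]
effect = statistics.mean(after) - statistics.mean(before)
print(f"estimated effect of the reform: {effect:.2f}")
# In practice such studies add many controls, which is where the (M3.2) problems reappear.
```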

2.3 (M3) Empirical methods: subgroups (M3.1) to (M3.3)

The remaining methods are studies making inference from “real” data, which are data samples where the researcher chooses the sample, but has no control over the data generating process.

(M3.1) Descriptive studies are inductive. The researcher describes the data aiming at finding structures that tell a story, which can be interpreted. The findings may call for a formal test. If one clean test follows from the description, [9] the paper is classified under (M3.1). If a more elaborate regression analysis is used, it is classified as (M3.2). Descriptive studies often contain a great deal of theory.

Some descriptive studies present a new data set developed by the author to analyze a debated issue. In these cases, it is often possible to make a clean test, so to the extent that biases sneak in, they are hidden in the details of the assessments made when the data are compiled.

(M3.2) Classical empirics has three steps: It starts with a theory, which is developed into an operational model. Then it presents the data set, and finally it runs regressions.

The significance levels of the t-ratios on the coefficients estimated assume that the regression is the first meeting of the estimation model and the data. We all know that this is rarely the case; see also point (m1) in Section 4.4. In practice, the classical method is often just a presentation technique. The great virtue of the method is that it can be applied to real problems outside academia. The relevance comes with a price: The method is quite flexible, as many choices have to be made, and they often give different results. Preferences and interests, as discussed in Sections 4.3 and 4.4 below, notably as point (m2), may affect these choices.

(M3.3) Newer empirics . Partly as a reaction to the problems of (M3.2), the last 3–4 decades have seen a whole set of newer empirical techniques. [10] They include different types of VARs, Bayesian techniques, causality/co-integration tests, Kalman Filters, hazard functions, etc. I have found 162 (or 4.7%) papers where these techniques are the main ones used. The fraction was highest in 1997. Since then it has varied, but with no trend.

I think that the main reason for the lack of success for the new empirics is that it is quite bulky to report a careful set of co-integration tests or VARs, and they often show results that are far from useful in the sense that they are unclear and difficult to interpret. With some introduction and discussion, there is not much space left in the article. Therefore, we are dealing with a cookbook that makes for rather dull dishes, which are difficult to sell in the market.

Note the contrast between (M3.2) and (M3.3): (M3.2) makes it possible to write papers that are too good, while (M3.3) often makes them too dull. This helps explain why (M3.2) is getting (even) more popular and why (M3.3) has had little success, but then, it is arguable that it is more dangerous to act on exaggerated results than on results that are weak.

3 The 10 journals

The 10 journals chosen are: (J1) Can [Canadian Journal of Economics], (J2) Emp [Empirical Economics], (J3) EER [European Economic Review], (J4) EJPE [European Journal of Political Economy], (J5) JEBO [Journal of Economic Behavior & Organization], (J6) Inter [Journal of International Economics], (J7) Macro [Journal of Macroeconomics], (J8) Kyklos, (J9) PuCh [Public Choice], and (J10) SJE [Scandinavian Journal of Economics].

Section 3.1 discusses the choice of journals, while Section 3.2 considers how journals deal with the pressure for publication. Section 3.3 shows the marked difference in publication profile of the journals, and Section 3.4 tests if the trends in methods are significant.

3.1 The selection of journals

(i) They should be general interest journals – methodological journals are excluded. By general interest, I mean that they bring papers where an executive summary may interest policymakers and people in general. (ii) They should be journals in English (the Canadian Journal includes one paper in French), which are open to researchers from all countries, so that the majority of the authors are from outside the country of the journal. [11] (iii) They should be sufficiently different so that it is likely that patterns that apply to these journals tell a believable story about economic research. Note that (i) and (iii) require some compromises, as is evident in the choice of (J2), (J6), (J7), and (J8) (Table 4).

Table 4: The 10 journals covered

Note. Growth is the average annual growth from 1997 to 2017 in the number of papers published.

Methodological journals are excluded, as they are not interesting to outsiders. However, new methods are developed to be used in general interest journals. From studies of citations, we know that useful methodological papers are highly cited. If they remain unused, we presume that it is because they are useless, though, of course, there may be a long lag.

The choice of journals may contain some subjectivity, but I think that they are sufficiently diverse so that patterns that generalize across these journals will also generalize across a broader range of good journals.

The papers included are the regular research articles. Consequently, I exclude short notes to other papers and book reviews, [12] except for a few article-long discussions of controversial books.

3.2 Creating space in journals

As mentioned in the introduction, the annual production of research papers in economics has now reached about 1,000 papers in top journals, and about 14,000 papers in the group of good journals. [13] The production has grown by 3.3% per year, and thus it has doubled over the last twenty years. The hard-working researcher will read fewer than 100 papers a year. I know of no signs that this number is increasing. Thus, the upward trend in publication must be due to the large increase in the importance of publications for the careers of researchers, which has greatly increased the production of papers. There has also been a large increase in the number of researchers, but as citations are increasingly skewed toward the top journals (see Heckman & Moktan, 2018), this has not increased the demand for papers correspondingly. The pressures from the supply side have caused journals to look for ways to create space.

Book reviews have dropped to less than a third of their former number. Perhaps this also indicates that economists read fewer books than they used to. Journals have increasingly come to use smaller fonts and larger pages, allowing more words per page. The journals from North-Holland Elsevier have managed to cram almost two old pages into one new one. [14] This makes it easier to publish papers, while they become harder to read.

Many journals have changed their numbering system for the annual issues, making it less transparent how much they publish. Only three – Canadian Economic Journal, Kyklos, and Scandinavian Journal of Economics – have kept the schedule of publishing one volume of four issues per year. It gives about 40 papers per year. Public Choice has a (fairly) consistent system with four volumes of two double issues per year – this gives about 100 papers. The remaining journals have changed their numbering system and increased the number of papers published per year – often dramatically.

Thus, I assess that the wave of publications is due to the increased supply of papers and not to the demand for reading material. Consequently, the study confirms and updates the observation by Temple (1918, p. 242): “… as the world gets older the more people are inclined to write but the less they are inclined to read.”

3.3 How different are the journals?

The appendix reports the counts for each year and journal of the research methods. From these counts, a set of χ²-scores is calculated for the three main groups of methods – they are reported in Table 5. It gives the χ²-test comparing the profile of each journal to that of the other nine journals, taken as the theoretical distribution.

Table 5: The methodological profile of the journals – χ²-scores for main groups

Note: The χ²-scores are calculated relative to all other journals. The sign (+) or (−) indicates if the journal has relatively too many or too few papers in the category. The P-values for the χ²(3)-test always reject that the journal has the same methodological profile as the other nine journals.

The test rejects that the distribution is the same as the average for any of the journals. The closest to the average are the EJPE and Public Choice. The two most deviating scores are for the most micro-oriented journal, JEBO, which brings many experimental papers, and, of course, Empirical Economics, which brings many empirical papers.
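The comparison behind these scores can be sketched as follows (my illustration; the counts below are hypothetical, not the Appendix data): the expected distribution for one journal is the pooled profile of the other nine journals, scaled to that journal's total, and a χ²-score is computed over the three main method groups.

```python
# Sketch of the journal-profile chi-square comparison (hypothetical counts).
from scipy.stats import chisquare

# Papers by main method (theory, experiments, empirics).
journal = [120, 30, 150]       # observed counts for the journal under test
others = [1500, 300, 1300]     # pooled counts for the other nine journals

# Expected counts: the other journals' profile scaled to this journal's total.
total = sum(journal)
expected = [total * c / sum(others) for c in others]

chi2, p = chisquare(journal, f_exp=expected)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")  # a small p rejects that the profiles are the same
```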

3.4 Trends in the use of the methods

Table 3 already gave an impression of the main trends in the methods preferred by economists. I now test if these impressions are statistically significant. The tests have to be tailored to disregard three differences between the journals: their methodological profiles, the number of papers they publish, and the trend in the number. Table 6 reports a set of distribution free tests, which overcome these differences. The tests are done on the shares of each research method for each journal. As the data cover five years, it gives 10 pairs of years to compare. [15] The three trend-scores in the []-brackets count how often the shares go up, down, or stay the same in the 10 cases. This is the count done for a Kendall rank correlation comparing the five shares with a positive trend (such as 1, 2, 3, 4, and 5).

Table 6: Trend-scores and tests for the eight subgroups of methods across the 10 journals

Note: The three trend-scores in each [I1, I2, I3]-bracket are a Kendall-count over all 10 combinations of years. I1 counts how often the share goes up, I2 counts when the share goes down, and I3 counts the number of ties. Most ties occur when there are no observations in either year. Thus, I1 + I2 + I3 = 10. The tests are two-sided binomial tests disregarding the zeroes. The test results in bold are significant at the 5% level.

The first set of trend-scores, for (M1.1) and (J1), is [1, 9, 0]. It means that 1 of the 10 share-pairs increases, while nine decrease and there are no ties. The two-sided binomial test gives 2%, so this is unlikely to happen by chance. Nine of the ten journals in the (M1.1)-column have a majority of falling shares. The important point is that the counts in one column can be added – as is done in the all-row; this gives a powerful trend test that disregards differences between journals and the number of papers published (see Table A1).
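The count and the binomial test can be sketched as follows (my illustration; the shares below are hypothetical):

```python
# Sketch of the trend-score count and the two-sided binomial test (hypothetical shares).
from itertools import combinations
from math import comb

# Share of one method in one journal for the five sample years 1997, 2002, 2007, 2012, 2017.
shares = [0.60, 0.55, 0.48, 0.40, 0.34]

# Kendall-style count over all 10 pairs of years: is the later share up, down, or tied?
up = down = ties = 0
for (i, a), (j, b) in combinations(enumerate(shares), 2):
    if b > a:
        up += 1
    elif b < a:
        down += 1
    else:
        ties += 1
print(f"trend-scores: [{up}, {down}, {ties}]")  # the bracket sums to 10

# Two-sided binomial test disregarding the ties (p = 0.5 under "no trend").
n = up + down
k = min(up, down)
p_value = min(1.0, 2 * sum(comb(n, i) * 0.5**n for i in range(k + 1)))
print(f"two-sided binomial p = {p_value:.3f}")  # a [1, 9, 0] bracket gives about 0.02, as in the text
```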

Four of the trend-tests are significant: the fall in theoretical papers, the rise in classical papers, and the rises in the shares of statistical method and event study papers. It is surprising that there is no trend in the number of experimental studies, but see Table A2 (in the Appendix).

4 An attempt to interpret the pattern found

The development in the methods pursued by researchers in economics is a reaction to the demand and supply forces on the market for economic papers. As already argued, it seems that a key factor is the increasing production of papers.

The shares add to 100, so the decline of one method means that the others rise. Section 4.1 looks at the biggest change – the reduction in theory papers. Section 4.2 discusses the rise in two new categories. Section 4.3 considers the large increase in the classical method, while Section 4.4 looks at what we know about that method from meta-analysis.

4.1 The decline of theory: economics suffers from theory fatigue [16]

The papers in economic theory have dropped from 59.5 to 33.6% – this is the largest change for any of the eight subgroups. [17] It is highly significant in the trend test. I attribute this drop to theory fatigue.

As mentioned in Section 2.1, the ideal theory paper presents a (simple) new model that recasts the way we look at something important. However, most theory papers are less exciting: They start from the standard model and argue that a well-known conclusion reached from the model hinges upon a debatable assumption – if it changes, so does the conclusion. Such papers are useful. From a literature on one main model, the profession learns its strengths and weaknesses. It appears that no generally accepted method exists to summarize this knowledge in a systematic way, though many thoughtful summaries have appeared.

I think that there is a deeper problem explaining theory fatigue. It is that many theoretical papers are quite unconvincing. Granted that the calculations are done right, believability hinges on the realism of the assumptions at the start and of the results presented at the end. In order for a model to convince, it should (at least) demonstrate the realism of either the assumptions or the outcome. [18] If both ends appear to hang in the air, it becomes a game giving little new knowledge about the world, however skillfully played.

The theory fatigue has caused a demand for simulations demonstrating that the models can mimic something in the world. Kydland and Prescott pioneered calibration methods (see their 1991 paper). Calibrations may be carefully done, but they often appear to be a numerical solution of a model that is too complex to allow an analytical solution.

4.2 Two examples of waves: one that is still rising and another that is fizzling out

When a new method of gaining insights into the economy first appears, it is surrounded by doubts, but it also promises a high marginal productivity of knowledge. Gradually the doubts subside, and many researchers enter the field. After some time this causes the marginal productivity of the method to fall, and it becomes less interesting. The eight methods include two newer ones: lab experiments and newer stats. [19]

It is not surprising that papers with lab experiments are increasing, though it did take a long time: The seminal paper presenting the technique was Smith (1962), but only a handful of papers are from the 1960s. Charles Plott organized the first experimental lab 10 years later – this created a new standard for experiments, but required an investment in a lab and some staff. Labs became more common in the 1990s as PCs got cheaper and software was developed to handle experiments, but only 1.9% of the papers in the 10 journals reported lab experiments in 1997. This has now increased to 9.7%, so the wave is still rising. The trend in experiments is concentrated in a few journals, so the trend test in Table 6 is insignificant, but it is significant in the Appendix Table A2, where it is done on the sum of articles irrespective of the journal.

In addition to the rising share of lab experiment papers in some journals, the journal Experimental Economics was started in 1998, where it published 281 pages in three issues. In 2017, it had reached 1,006 pages in four issues, [20] which is an annual increase of 6.5%.

Compared with the success of experimental economics, the motley category of newer empirics has had a more modest success, as the fractions of papers in the 5 years are 5.8%, 5.2%, 3.5%, 5.4%, and 4.2%, with no trend. Newer stats also require investment, but mainly in human capital. [21] Some of the papers using the classical methodology contain a table with Dickey-Fuller tests or some eigenvalues of the data matrix, but they are normally peripheral to the analysis. A couple of papers use Kalman filters, and a dozen papers use Bayesian VARs. However, it is clear that the newer empirics have made little headway into our sample of general interest journals.

4.3 The steady rise of the classical method: flexibility rewarded

The typical classical paper provides estimates of a key effect that decision-makers outside academia want to know. This makes the paper policy relevant right from the start, and in many cases, it is possible to write a one page executive summary to the said decision-makers.

The three-step convention (see Section 2.3) is often followed rather loosely. The estimation model is nearly always much simpler than the theory. Thus, while the model can be derived from a theory, the reverse does not apply. Sometimes, the model seems to follow straight from common sense, and if the link from the theory to the model is thin, it raises the question: Is the theory really necessary? In such cases, it is hard to be convinced that the tests “confirm” the theory, but then, of course, tests only say that the data do not reject the theory.

The classical method is often only a presentation device. Think of a researcher who has reached a nice publishable result through a long and tortuous path, including some failed attempts to find such results. It is not possible to describe that path within the severely limited space of an article. In addition, such a presentation would be rather dull to read, and none of us likes to talk about wasted efforts that in hindsight seem a bit silly. Here, the classical method becomes a convenient presentation device.

The biggest source of variation in the results is the choice of control/modifier variables. All datasets presumably contain some general and some special information, where the latter depends on the circumstances prevailing when the data were compiled. The regression should be controlled for these circumstances in order to reach the general result. Such ceteris paribus controls are not part of the theory, so many possible controls may be added. The ones chosen for publication often appear to be the ones delivering the “right” results by the priors of the researcher. The justification for their inclusion is often thin, and if two-stage regressions are used, the first stage instruments often have an even thinner justification.
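The point can be illustrated with a small simulation (my sketch, not from the paper; the data are generated by the code): the estimated key coefficient moves when a control for the circumstances is added or left out, even though the data-generating process is the same.

```python
# Sketch of how the choice of controls moves the key estimate (simulated data).
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                      # a circumstance that also drives the outcome
x = 0.8 * z + rng.normal(size=n)            # the variable of interest, correlated with z
y = 0.5 * x + 1.0 * z + rng.normal(size=n)  # true effect of x on y is 0.5

def ols_beta(y, regressors):
    """OLS coefficients of y on the given regressors plus a constant."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("x only:      beta_x =", round(ols_beta(y, [x])[1], 2))     # biased upward
print("x + control: beta_x =", round(ols_beta(y, [x, z])[1], 2))  # close to the true 0.5
```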

Thus, the classical method is rather malleable to the preferences and interests of researchers and sponsors. This means that some papers using the classical technique are not what they pretend, as already pointed out by Leamer (1983); see also Paldam (2018) for new references and theory. The fact that data mining is tempting suggests that it is often possible to reach smashing results, making the paper nice to read. This may be precisely why it is cited.

Many papers using the classical method throw in some bits of exotic statistical technique to demonstrate the robustness of the result and the ability of the researcher. This presumably helps to generate credibility.

4.4 Knowledge about classical papers reached from meta-studies

Individual studies using the classical method often look better than they are, and thus they are more uncertain than they appear, but we may think of the value of convergence for large Ns (numbers of observations) as the truth. The exaggeration is largest in the beginning of a new literature, but gradually it becomes smaller. Thus, the classical method does generate truth when the effect searched for has been studied from many sides. The word research does mean that the search has to be repeated! It is highly risky to trust a few papers only.

Meta-analysis has found other results, such as: results in top journals do not stand out; it is necessary to look at many journals, as many papers on the same effect are needed; and little of the large variation between results is due to the choice of estimators.

A similar development should occur also for experimental economics. Experiments fall in families: A large number cover prisoner’s dilemma games, but there are also many studies of dictator games, auction games, etc. Surveys summarizing what we have learned about these games seem highly needed. Assessed summaries of old experiments are common, notably in introductions to papers reporting new ones. It should be possible to extract the knowledge reached by sets of related lab experiments in a quantitative way, by some sort of meta-technique, but this has barely started. The first pioneering meta-studies of lab experiments do find the usual wide variation of results from seemingly closely related experiments. [25] A recent large-scale replicability study by Camerer et al. (2018) finds that published experiments in the high-quality journals Nature and Science exaggerate by a factor of two, just like regression studies using the classical method.

5 Conclusion

The study presents evidence that over the last 20 years economic research has moved away from theory towards empirical work using the classical method.

From the eighties onward, there has been a steady stream of papers pointing out that the classical method suffers from excess flexibility. It does deliver relevant results, but they tend to be too good. [26] While, increasingly, we know the size of the problems of the classical method, systematic knowledge about the problems of the other methods is weaker. It is possible that the problems are smaller, but we do not know.

Therefore, it is clear that obtaining solid knowledge about the size of an important effect requires a great number of papers analyzing many aspects of the effect and a careful quantitative survey. It is a well-known principle in the harder sciences that results need repeated independent replication to be truly trustworthy. In economics, this is only accepted in principle.

The classical method of empirical research is gradually winning, and this is a fine development: It does give answers to important policy questions. These answers are highly variable and often exaggerated, but through the efforts of many competing researchers, solid knowledge will gradually emerge.

Home page: http://www.martin.paldam.dk

Acknowledgments

The paper has been presented at the 2018 MAER-Net Colloquium in Melbourne, the Kiel Aarhus workshop in 2018, and at the European Public Choice 2019 Meeting in Jerusalem. I am grateful for all comments, especially from Chris Doucouliagos, Eelke de Jong, and Bob Reed. In addition, I thank the referees for constructive advice.

Conflict of interest: Author states no conflict of interest.

Appendix: Two tables and some assessments of the size of the profession

The text needs some numbers to assess the representativeness of the results reached. These numbers just need to be orders of magnitude. I use the standard three-level classification in A, B, and C of researchers, departments, and journals. The connections between the three categories are dynamic and rely on complex sorting mechanisms. In an international setting, it matters that researchers have preferences for countries, notably their own. The relation between the three categories has a stochastic element.

The World of Learning organization reports on 36,000 universities, colleges, and other institutes of tertiary education and research. Many of these institutions are mainly engaged in undergraduate teaching, and some are quite modest. If half of these institutions have a program in economics, with a staff of at least five, the total stock of academic economists is 100,000, of which most are at the C-level.

The A-level of about 500 tenured researchers working at the top ten universities (mainly) publishes in the top 10 journals that bring less than 1,000 papers per year; [27] see Heckman and Moktan (2018). They (mainly) cite each other, but they greatly influence other researchers. [28] The B-level consists of about 15,000–20,000 researchers who work at 400–500 research universities, with graduate programs and ambitions to publish. They (mainly) publish in the next level of about 150 journals. [29] In addition, there are at least another 1,000 institutions that strive to move up in the hierarchy.

Table A1: The counts for each of the 10 journals

Table A2: Counts, shares, and changes for all ten journals for subgroups

Note: The trend-scores are calculated as in Table 6. Compared to the results in Table 6, the results are similar, but the power is less than before. However, note that the results in Column (M2.1) dealing with experiments are stronger in Table A2. This has to do with the way missing observations are treated in the test.

Angrist, J., Azoulay, P., Ellison, G., Hill, R., & Lu, S. F. (2017). Economic research evolves: Fields and styles. American Economic Review (Papers & Proceedings), 107, 293–297. 10.1257/aer.p20171117

Bergh, A., & Wichardt, P. C. (2018). Mine, ours or yours? Unintended framing effects in dictator games (IFN Working Paper No. 1205). Research Institute of Industrial Economics, Stockholm; München: CESifo. 10.2139/ssrn.3208589

Brodeur, A., Cook, N., & Heyes, A. (2020). Methods matter: p-Hacking and publication bias in causal analysis in economics. American Economic Review, 110(11), 3634–3660. 10.1257/aer.20190687

Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T.-H., Huber, J., Johannesson, M., … Wu, H. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2, 637–644. 10.1038/s41562-018-0399-z

Card, D., & DellaVigna, S. (2013). Nine facts about top journals in economics. Journal of Economic Literature, 51, 144–161. 10.3386/w18665

Christensen, G., & Miguel, E. (2018). Transparency, reproducibility, and the credibility of economics research. Journal of Economic Literature, 56, 920–980. 10.3386/w22989

Doucouliagos, H., Paldam, M., & Stanley, T. D. (2018). Skating on thin evidence: Implications for public policy. European Journal of Political Economy, 54, 16–25. 10.1016/j.ejpoleco.2018.03.004

Engel, C. (2011). Dictator games: A meta study. Experimental Economics, 14, 583–610. 10.1007/s10683-011-9283-7

Fiala, L., & Suetens, S. (2017). Transparency and cooperation in repeated dilemma games: A meta study. Experimental Economics, 20, 755–771. 10.1007/s10683-017-9517-4

Friedman, M. (1953). Essays in positive economics. Chicago: University of Chicago Press.

Hamermesh, D. (2013). Six decades of top economics publishing: Who and how? Journal of Economic Literature, 51, 162–172. 10.3386/w18635

Heckman, J. J., & Moktan, S. (2018). Publishing and promotion in economics: The tyranny of the top five. Journal of Economic Literature, 51, 419–470. 10.3386/w25093

Ioannidis, J. P. A., Stanley, T. D., & Doucouliagos, H. (2017). The power of bias in economics research. Economic Journal, 127, F236–F265. 10.1111/ecoj.12461

Johansen, S., & Juselius, K. (1990). Maximum likelihood estimation and inference on cointegration – with application to the demand for money. Oxford Bulletin of Economics and Statistics, 52, 169–210. 10.1111/j.1468-0084.1990.mp52002003.x

Justman, M. (2018). Randomized controlled trials informing public policy: Lessons from the project STAR and class size reduction. European Journal of Political Economy, 54, 167–174. 10.1016/j.ejpoleco.2018.04.005

Kydland, F., & Prescott, E. C. (1991). The econometrics of the general equilibrium approach to business cycles. Scandinavian Journal of Economics, 93, 161–178. 10.2307/3440324

Leamer, E. E. (1983). Let’s take the con out of econometrics. American Economic Review, 73, 31–43.

Levitt, S. D., & List, J. A. (2007). On the generalizability of lab behaviour to the field. Canadian Journal of Economics, 40, 347–370. 10.1111/j.1365-2966.2007.00412.x

Paldam, M. (2015). Meta-analysis in a nutshell: Techniques and general findings. Economics: The Open-Access, Open-Assessment E-Journal, 9, 1–4. 10.5018/economics-ejournal.ja.2015-11

Paldam, M. (2016). Simulating an empirical paper by the rational economist. Empirical Economics, 50, 1383–1407. 10.1007/s00181-015-0971-6

Paldam, M. (2018). A model of the representative economist, as researcher and policy advisor. European Journal of Political Economy, 54, 6–15. 10.1016/j.ejpoleco.2018.03.005

Smith, V. (1962). An experimental study of competitive market behavior. Journal of Political Economy, 70, 111–137. 10.1017/CBO9780511528354.003

Stanley, T. D., & Doucouliagos, H. (2012). Meta-regression analysis in economics and business. Abingdon: Routledge. 10.4324/9780203111710

Temple, C. L. (1918). Native races and their rulers; sketches and studies of official life and administrative problems in Nigeria. Cape Town: Argus.

© 2021 Martin Paldam, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.


What is Empirical Research? Definition, Methods, Examples

Appinio Research · 09.02.2024 · 35min read


Ever wondered how we gather the facts, unveil hidden truths, and make informed decisions in a world filled with questions? Empirical research holds the key.

In this guide, we'll delve deep into the art and science of empirical research, unraveling its methods, mysteries, and manifold applications. From defining the core principles to mastering data analysis and reporting findings, we're here to equip you with the knowledge and tools to navigate the empirical landscape.

What is Empirical Research?

Empirical research is the cornerstone of scientific inquiry, providing a systematic and structured approach to investigating the world around us. It is the process of gathering and analyzing empirical or observable data to test hypotheses, answer research questions, or gain insights into various phenomena. This form of research relies on evidence derived from direct observation or experimentation, allowing researchers to draw conclusions based on real-world data rather than purely theoretical or speculative reasoning.

Characteristics of Empirical Research

Empirical research is characterized by several key features:

  • Observation and Measurement : It involves the systematic observation or measurement of variables, events, or behaviors.
  • Data Collection : Researchers collect data through various methods, such as surveys, experiments, observations, or interviews.
  • Testable Hypotheses : Empirical research often starts with testable hypotheses that are evaluated using collected data.
  • Quantitative or Qualitative Data : Data can be quantitative (numerical) or qualitative (non-numerical), depending on the research design.
  • Statistical Analysis : Quantitative data often undergo statistical analysis to determine patterns, relationships, or significance.
  • Objectivity and Replicability : Empirical research strives for objectivity, minimizing researcher bias. It should be replicable, allowing other researchers to conduct the same study to verify results.
  • Conclusions and Generalizations : Empirical research generates findings based on data and aims to make generalizations about larger populations or phenomena.

Importance of Empirical Research

Empirical research plays a pivotal role in advancing knowledge across various disciplines. Its importance extends to academia, industry, and society as a whole. Here are several reasons why empirical research is essential:

  • Evidence-Based Knowledge : Empirical research provides a solid foundation of evidence-based knowledge. It enables us to test hypotheses, confirm or refute theories, and build a robust understanding of the world.
  • Scientific Progress : In the scientific community, empirical research fuels progress by expanding the boundaries of existing knowledge. It contributes to the development of theories and the formulation of new research questions.
  • Problem Solving : Empirical research is instrumental in addressing real-world problems and challenges. It offers insights and data-driven solutions to complex issues in fields like healthcare, economics, and environmental science.
  • Informed Decision-Making : In policymaking, business, and healthcare, empirical research informs decision-makers by providing data-driven insights. It guides strategies, investments, and policies for optimal outcomes.
  • Quality Assurance : Empirical research is essential for quality assurance and validation in various industries, including pharmaceuticals, manufacturing, and technology. It ensures that products and processes meet established standards.
  • Continuous Improvement : Businesses and organizations use empirical research to evaluate performance, customer satisfaction, and product effectiveness. This data-driven approach fosters continuous improvement and innovation.
  • Human Advancement : Empirical research in fields like medicine and psychology contributes to the betterment of human health and well-being. It leads to medical breakthroughs, improved therapies, and enhanced psychological interventions.
  • Critical Thinking and Problem Solving : Engaging in empirical research fosters critical thinking skills, problem-solving abilities, and a deep appreciation for evidence-based decision-making.

Empirical research empowers us to explore, understand, and improve the world around us. It forms the bedrock of scientific inquiry and drives progress in countless domains, shaping our understanding of both the natural and social sciences.

How to Conduct Empirical Research?

So, you've decided to dive into the world of empirical research. Let's begin by exploring the crucial steps involved in getting started with your research project.

1. Select a Research Topic

Selecting the right research topic is the cornerstone of a successful empirical study. It's essential to choose a topic that not only piques your interest but also aligns with your research goals and objectives. Here's how to go about it:

  • Identify Your Interests : Start by reflecting on your passions and interests. What topics fascinate you the most? Your enthusiasm will be your driving force throughout the research process.
  • Brainstorm Ideas : Engage in brainstorming sessions to generate potential research topics. Consider the questions you've always wanted to answer or the issues that intrigue you.
  • Relevance and Significance : Assess the relevance and significance of your chosen topic. Does it contribute to existing knowledge? Is it a pressing issue in your field of study or the broader community?
  • Feasibility : Evaluate the feasibility of your research topic. Do you have access to the necessary resources, data, and participants (if applicable)?

2. Formulate Research Questions

Once you've narrowed down your research topic, the next step is to formulate clear and precise research questions. These questions will guide your entire research process and shape your study's direction. To create effective research questions:

  • Specificity : Ensure that your research questions are specific and focused. Vague or overly broad questions can lead to inconclusive results.
  • Relevance : Your research questions should directly relate to your chosen topic. They should address gaps in knowledge or contribute to solving a particular problem.
  • Testability : Ensure that your questions are testable through empirical methods. You should be able to gather data and analyze it to answer these questions.
  • Avoid Bias : Craft your questions in a way that avoids leading or biased language. Maintain neutrality to uphold the integrity of your research.

3. Review Existing Literature

Before you embark on your empirical research journey, it's essential to immerse yourself in the existing body of literature related to your chosen topic. This step, often referred to as a literature review, serves several purposes:

  • Contextualization : Understand the historical context and current state of research in your field. What have previous studies found, and what questions remain unanswered?
  • Identifying Gaps : Identify gaps or areas where existing research falls short. These gaps will help you formulate meaningful research questions and hypotheses.
  • Theory Development : If your study is theoretical, consider how existing theories apply to your topic. If it's empirical, understand how previous studies have approached data collection and analysis.
  • Methodological Insights : Learn from the methodologies employed in previous research. What methods were successful, and what challenges did researchers face?

4. Define Variables

Variables are fundamental components of empirical research. They are the factors or characteristics that can change or be manipulated during your study. Properly defining and categorizing variables is crucial for the clarity and validity of your research. Here's what you need to know:

  • Independent Variables : These are the variables that you, as the researcher, manipulate or control. They are the "cause" in cause-and-effect relationships.
  • Dependent Variables : Dependent variables are the outcomes or responses that you measure or observe. They are the "effect" influenced by changes in independent variables.
  • Operational Definitions : To ensure consistency and clarity, provide operational definitions for your variables. Specify how you will measure or manipulate each variable.
  • Control Variables : In some studies, controlling for other variables that may influence your dependent variable is essential. These are known as control variables.
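As a small illustration (a hypothetical study, not part of this guide), operational definitions can be written down explicitly, for example as one record per observation:

```python
# Hypothetical operationalization of variables for a study on study time and exam performance.
from dataclasses import dataclass

@dataclass
class Observation:
    hours_studied: float  # independent variable: self-reported weekly study hours
    exam_score: float     # dependent variable: score on a standardized 0-100 exam
    sleep_hours: float    # control variable: average nightly sleep, held constant in the analysis

sample = [
    Observation(hours_studied=10, exam_score=72, sleep_hours=7.5),
    Observation(hours_studied=15, exam_score=81, sleep_hours=7.0),
]
```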

Understanding these foundational aspects of empirical research will set a solid foundation for the rest of your journey. Now that you've grasped the essentials of getting started, let's delve deeper into the intricacies of research design.

Empirical Research Design

Now that you've selected your research topic, formulated research questions, and defined your variables, it's time to delve into the heart of your empirical research journey – research design. This pivotal step determines how you will collect data and what methods you'll employ to answer your research questions. Let's explore the various facets of research design in detail.

Types of Empirical Research

Empirical research can take on several forms, each with its own unique approach and methodologies. Understanding the different types of empirical research will help you choose the most suitable design for your study. Here are some common types:

  • Experimental Research : In this type, researchers manipulate one or more independent variables to observe their impact on dependent variables. It's highly controlled and often conducted in a laboratory setting.
  • Observational Research : Observational research involves the systematic observation of subjects or phenomena without intervention. Researchers are passive observers, documenting behaviors, events, or patterns.
  • Survey Research : Surveys are used to collect data through structured questionnaires or interviews. This method is efficient for gathering information from a large number of participants.
  • Case Study Research : Case studies focus on in-depth exploration of one or a few cases. Researchers gather detailed information through various sources such as interviews, documents, and observations.
  • Qualitative Research : Qualitative research aims to understand behaviors, experiences, and opinions in depth. It often involves open-ended questions, interviews, and thematic analysis.
  • Quantitative Research : Quantitative research collects numerical data and relies on statistical analysis to draw conclusions. It involves structured questionnaires, experiments, and surveys.

Your choice of research type should align with your research questions and objectives. Experimental research, for example, is ideal for testing cause-and-effect relationships, while qualitative research is more suitable for exploring complex phenomena.

Experimental Design

Experimental research is a systematic approach to studying causal relationships. It's characterized by the manipulation of one or more independent variables while controlling for other factors. Here are some key aspects of experimental design:

  • Control and Experimental Groups : Participants are randomly assigned to either a control group or an experimental group. The independent variable is manipulated for the experimental group but not for the control group.
  • Randomization : Randomization is crucial to eliminate bias in group assignment. It ensures that each participant has an equal chance of being in either group.
  • Hypothesis Testing : Experimental research often involves hypothesis testing. Researchers formulate hypotheses about the expected effects of the independent variable and use statistical analysis to test these hypotheses.
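A minimal sketch of these three elements (an illustration with simulated data, not a full protocol): random assignment to control and experimental groups, followed by a simple test of the hypothesized effect.

```python
# Minimal experimental-design sketch: randomization and a two-sample t-test (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
participants = np.arange(100)

# Randomization: every participant has an equal chance of ending up in either group.
shuffled = rng.permutation(participants)
control_ids, treatment_ids = shuffled[:50], shuffled[50:]

# Simulated outcomes; the manipulation shifts the treatment group's mean by 0.5.
control_outcome = rng.normal(loc=0.0, scale=1.0, size=control_ids.size)
treatment_outcome = rng.normal(loc=0.5, scale=1.0, size=treatment_ids.size)

# Hypothesis test: does the independent variable affect the dependent variable?
t_stat, p_value = stats.ttest_ind(treatment_outcome, control_outcome)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```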

Observational Design

Observational research entails careful and systematic observation of subjects or phenomena. It's advantageous when you want to understand natural behaviors or events. Key aspects of observational design include:

  • Participant Observation : Researchers immerse themselves in the environment they are studying. They become part of the group being observed, allowing for a deep understanding of behaviors.
  • Non-Participant Observation : In non-participant observation, researchers remain separate from the subjects. They observe and document behaviors without direct involvement.
  • Data Collection Methods : Observational research can involve various data collection methods, such as field notes, video recordings, photographs, or coding of observed behaviors.

Survey Design

Surveys are a popular choice for collecting data from a large number of participants. Effective survey design is essential to ensure the validity and reliability of your data. Consider the following:

  • Questionnaire Design : Create clear and concise questions that are easy for participants to understand. Avoid leading or biased questions.
  • Sampling Methods : Decide on the appropriate sampling method for your study, whether it's random, stratified, or convenience sampling.
  • Data Collection Tools : Choose the right tools for data collection, whether it's paper surveys, online questionnaires, or face-to-face interviews.

Case Study Design

Case studies are an in-depth exploration of one or a few cases to gain a deep understanding of a particular phenomenon. Key aspects of case study design include:

  • Single Case vs. Multiple Case Studies : Decide whether you'll focus on a single case or multiple cases. Single case studies are intensive and allow for detailed examination, while multiple case studies provide comparative insights.
  • Data Collection Methods : Gather data through interviews, observations, document analysis, or a combination of these methods.

Qualitative vs. Quantitative Research

In empirical research, you'll often encounter the distinction between qualitative and quantitative research . Here's a closer look at these two approaches:

  • Qualitative Research : Qualitative research seeks an in-depth understanding of human behavior, experiences, and perspectives. It involves open-ended questions, interviews, and the analysis of textual or narrative data. Qualitative research is exploratory and often used when the research question is complex and requires a nuanced understanding.
  • Quantitative Research : Quantitative research collects numerical data and employs statistical analysis to draw conclusions. It involves structured questionnaires, experiments, and surveys. Quantitative research is ideal for testing hypotheses and establishing cause-and-effect relationships.

Understanding the various research design options is crucial in determining the most appropriate approach for your study. Your choice should align with your research questions, objectives, and the nature of the phenomenon you're investigating.

Data Collection for Empirical Research

Now that you've established your research design, it's time to roll up your sleeves and collect the data that will fuel your empirical research. Effective data collection is essential for obtaining accurate and reliable results.

Sampling Methods

Sampling methods are critical in empirical research, as they determine the subset of individuals or elements from your target population that you will study. Here are some standard sampling methods:

  • Random Sampling : Random sampling ensures that every member of the population has an equal chance of being selected. It minimizes bias and is often used in quantitative research.
  • Stratified Sampling : Stratified sampling involves dividing the population into subgroups or strata based on specific characteristics (e.g., age, gender, location). Samples are then randomly selected from each stratum, ensuring representation of all subgroups.
  • Convenience Sampling : Convenience sampling involves selecting participants who are readily available or easily accessible. While it's convenient, it may introduce bias and limit the generalizability of results.
  • Snowball Sampling : Snowball sampling is instrumental when studying hard-to-reach or hidden populations. One participant leads you to another, creating a "snowball" effect. This method is common in qualitative research.
  • Purposive Sampling : In purposive sampling, researchers deliberately select participants who meet specific criteria relevant to their research questions. It's often used in qualitative studies to gather in-depth information.

The choice of sampling method depends on the nature of your research, available resources, and the degree of precision required. It's crucial to carefully consider your sampling strategy to ensure that your sample accurately represents your target population.
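As a rough illustration of how these choices translate into practice, the following Python sketch draws a simple random sample and a stratified sample from a hypothetical population frame; the column names and proportions are assumptions made for the example only.

```python
import pandas as pd

# Hypothetical sampling frame with a stratification variable
population = pd.DataFrame({
    "respondent_id": range(1, 1001),
    "region": ["north", "south", "east", "west"] * 250,
})

# Simple random sample of 100 respondents
random_sample = population.sample(n=100, random_state=1)

# Stratified sample: draw 10% from each region
stratified_sample = (
    population.groupby("region", group_keys=False)
    .apply(lambda stratum: stratum.sample(frac=0.10, random_state=1))
)

print(len(random_sample), len(stratified_sample))
```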

Data Collection Instruments

Data collection instruments are the tools you use to gather information from your participants or sources. These instruments should be designed to capture the data you need accurately. Here are some popular data collection instruments:

  • Questionnaires : Questionnaires consist of structured questions with predefined response options. When designing questionnaires, consider the clarity of questions, the order of questions, and the response format (e.g., Likert scale , multiple-choice).
  • Interviews : Interviews involve direct communication between the researcher and participants. They can be structured (with predetermined questions) or unstructured (open-ended). Effective interviews require active listening and probing for deeper insights.
  • Observations : Observations entail systematically and objectively recording behaviors, events, or phenomena. Researchers must establish clear criteria for what to observe, how to record observations, and when to observe.
  • Surveys : Surveys are a common data collection instrument for quantitative research. They can be administered through various means, including online surveys, paper surveys, and telephone surveys.
  • Documents and Archives : In some cases, data may be collected from existing documents, records, or archives. Ensure that the sources are reliable, relevant, and properly documented.

To streamline your process and gather insights with precision and efficiency, consider leveraging innovative tools like Appinio . With Appinio's intuitive platform, you can harness the power of real-time consumer data to inform your research decisions effectively. Whether you're conducting surveys, interviews, or observations, Appinio empowers you to define your target audience, collect data from diverse demographics, and analyze results seamlessly.

By incorporating Appinio into your data collection toolkit, you can unlock a world of possibilities and elevate the impact of your empirical research. Ready to revolutionize your approach to data collection?


Data Collection Procedures

Data collection procedures outline the step-by-step process for gathering data. These procedures should be meticulously planned and executed to maintain the integrity of your research.

  • Training : If you have a research team, ensure that they are trained in data collection methods and protocols. Consistency in data collection is crucial.
  • Pilot Testing : Before launching your data collection, conduct a pilot test with a small group to identify any potential problems with your instruments or procedures. Make necessary adjustments based on feedback.
  • Data Recording : Establish a systematic method for recording data. This may include timestamps, codes, or identifiers for each data point.
  • Data Security : Safeguard the confidentiality and security of collected data. Ensure that only authorized individuals have access to the data.
  • Data Storage : Properly organize and store your data in a secure location, whether in physical or digital form. Back up data to prevent loss.

Ethical Considerations

Ethical considerations are paramount in empirical research, as they ensure the well-being and rights of participants are protected.

  • Informed Consent : Obtain informed consent from participants, providing clear information about the research purpose, procedures, risks, and their right to withdraw at any time.
  • Privacy and Confidentiality : Protect the privacy and confidentiality of participants. Ensure that data is anonymized and sensitive information is kept confidential.
  • Beneficence : Ensure that your research benefits participants and society while minimizing harm. Consider the potential risks and benefits of your study.
  • Honesty and Integrity : Conduct research with honesty and integrity. Report findings accurately and transparently, even if they are not what you expected.
  • Respect for Participants : Treat participants with respect, dignity, and sensitivity to cultural differences. Avoid any form of coercion or manipulation.
  • Institutional Review Board (IRB) : If required, seek approval from an IRB or ethics committee before conducting your research, particularly when working with human participants.

Adhering to ethical guidelines is not only essential for the ethical conduct of research but also crucial for the credibility and validity of your study. Ethical research practices build trust between researchers and participants and contribute to the advancement of knowledge with integrity.

With a solid understanding of data collection, including sampling methods, instruments, procedures, and ethical considerations, you are now well-equipped to gather the data needed to answer your research questions.

Empirical Research Data Analysis

Now comes the exciting phase of data analysis, where the raw data you've diligently collected starts to yield insights and answers to your research questions. We will explore the various aspects of data analysis, from preparing your data to drawing meaningful conclusions through statistics and visualization.

Data Preparation

Data preparation is the crucial first step in data analysis. It involves cleaning, organizing, and transforming your raw data into a format that is ready for analysis. Effective data preparation ensures the accuracy and reliability of your results.

  • Data Cleaning : Identify and rectify errors, missing values, and inconsistencies in your dataset. This may involve correcting typos, removing outliers, and imputing missing data.
  • Data Coding : Assign numerical values or codes to categorical variables to make them suitable for statistical analysis. For example, converting "Yes" and "No" to 1 and 0.
  • Data Transformation : Transform variables as needed to meet the assumptions of the statistical tests you plan to use. Common transformations include logarithmic or square root transformations.
  • Data Integration : If your data comes from multiple sources, integrate it into a unified dataset, ensuring that variables match and align.
  • Data Documentation : Maintain clear documentation of all data preparation steps, as well as the rationale behind each decision. This transparency is essential for replicability.

Effective data preparation lays the foundation for accurate and meaningful analysis. It allows you to trust the results that will follow in the subsequent stages.
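A minimal sketch of these steps in Python (using pandas) is shown below; the dataset, column names, and imputation choices are hypothetical and only meant to illustrate cleaning, coding, and transformation.

```python
import numpy as np
import pandas as pd

# Hypothetical raw survey data with typical quality problems
raw = pd.DataFrame({
    "age": [25, 32, None, 41, 29],
    "purchased": ["Yes", "No", "Yes", "yes", "No"],
    "income": [30000, 52000, 45000, 1200000, 38000],  # the last value looks like an outlier
})

clean = raw.copy()

# Data cleaning: impute missing age with the median, flag extreme incomes
clean["age"] = clean["age"].fillna(clean["age"].median())
clean["income_outlier"] = clean["income"] > clean["income"].quantile(0.99)

# Data coding: convert "Yes"/"No" answers to 1/0
clean["purchased"] = clean["purchased"].str.lower().map({"yes": 1, "no": 0})

# Data transformation: log-transform the skewed income variable
clean["log_income"] = np.log(clean["income"])

print(clean)
```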

Descriptive Statistics

Descriptive statistics help you summarize and make sense of your data by providing a clear overview of its key characteristics. These statistics are essential for understanding the central tendencies, variability, and distribution of your variables. Descriptive statistics include:

  • Measures of Central Tendency : These include the mean (average), median (middle value), and mode (most frequent value). They help you understand the typical or central value of your data.
  • Measures of Dispersion : Measures like the range, variance, and standard deviation provide insights into the spread or variability of your data points.
  • Frequency Distributions : Creating frequency distributions or histograms allows you to visualize the distribution of your data across different values or categories.

Descriptive statistics provide the initial insights needed to understand your data's basic characteristics, which can inform further analysis.
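The sketch below shows how these summary measures can be computed in Python for a small, hypothetical set of satisfaction ratings; the numbers are invented for illustration.

```python
import pandas as pd

# Hypothetical variable: satisfaction ratings on a 1-10 scale
ratings = pd.Series([7, 8, 6, 9, 7, 5, 8, 7, 10, 6])

print("Mean:", ratings.mean())
print("Median:", ratings.median())
print("Mode:", ratings.mode().tolist())
print("Range:", ratings.max() - ratings.min())
print("Variance:", ratings.var())
print("Std. deviation:", ratings.std())

# Frequency distribution: counts per rating value
print(ratings.value_counts().sort_index())
```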

Inferential Statistics

Inferential statistics take your analysis to the next level by allowing you to make inferences or predictions about a larger population based on your sample data. These methods help you test hypotheses and draw meaningful conclusions. Key concepts in inferential statistics include:

  • Hypothesis Testing : Hypothesis tests (e.g., t-tests, chi-squared tests) help you determine whether observed differences or associations in your data are statistically significant or occurred by chance.
  • Confidence Intervals : Confidence intervals provide a range within which population parameters (e.g., population mean) are likely to fall based on your sample data.
  • Regression Analysis : Regression models (linear, logistic, etc.) help you explore relationships between variables and make predictions.
  • Analysis of Variance (ANOVA) : ANOVA tests are used to compare means between multiple groups, allowing you to assess whether differences are statistically significant.

Inferential statistics are powerful tools for drawing conclusions from your data and assessing the generalizability of your findings to the broader population.
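As a rough sketch of these ideas, the Python example below computes a confidence interval, runs a simple regression, and performs a t-test on simulated data; the variables (study hours and exam scores) and the assumed relationship between them are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical sample: weekly study hours and exam scores for 50 students
hours = rng.uniform(0, 20, size=50)
scores = 55 + 1.8 * hours + rng.normal(0, 8, size=50)  # assumed relationship

# 95% confidence interval for the mean exam score
mean_score = scores.mean()
ci_low, ci_high = stats.t.interval(0.95, df=len(scores) - 1,
                                   loc=mean_score, scale=stats.sem(scores))
print(f"Mean score: {mean_score:.1f}, 95% CI: ({ci_low:.1f}, {ci_high:.1f})")

# Simple linear regression: do study hours predict exam scores?
result = stats.linregress(hours, scores)
print(f"slope = {result.slope:.2f}, p = {result.pvalue:.4f}, R^2 = {result.rvalue**2:.2f}")

# Two-sample t-test comparing two hypothetical subgroups
group_a, group_b = scores[:25], scores[25:]
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```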

Qualitative Data Analysis

Qualitative data analysis is employed when working with non-numerical data, such as text, interviews, or open-ended survey responses. It focuses on understanding the underlying themes, patterns, and meanings within qualitative data. Qualitative analysis techniques include:

  • Thematic Analysis : Identifying and analyzing recurring themes or patterns within textual data.
  • Content Analysis : Categorizing and coding qualitative data to extract meaningful insights.
  • Grounded Theory : Developing theories or frameworks based on emergent themes from the data.
  • Narrative Analysis : Examining the structure and content of narratives to uncover meaning.

Qualitative data analysis provides a rich and nuanced understanding of complex phenomena and human experiences.
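Qualitative analysis is interpretive work, usually supported by dedicated QDA software rather than fully automated, but the counting step of content analysis can be sketched in a few lines of Python. The responses and the keyword-to-theme coding frame below are hypothetical and only illustrate how coded themes might be tallied.

```python
from collections import Counter

# Hypothetical open-ended survey responses
responses = [
    "The delivery was fast but the packaging was damaged",
    "Great price and fast delivery",
    "Customer support never answered my emails",
    "Support was helpful, though delivery took a week",
]

# A simple coding frame mapping keywords to themes (illustrative only)
coding_frame = {
    "delivery": "logistics",
    "packaging": "logistics",
    "price": "value",
    "support": "service",
}

theme_counts = Counter()
for response in responses:
    for keyword, theme in coding_frame.items():
        if keyword in response.lower():
            theme_counts[theme] += 1

print(theme_counts)  # Counter({'logistics': 4, 'service': 2, 'value': 1})
```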

Data Visualization

Data visualization is the art of representing data graphically to make complex information more understandable and accessible. Effective data visualization can reveal patterns, trends, and outliers in your data. Common types of data visualization include:

  • Bar Charts and Histograms : Used to display the distribution of categorical or discrete data.
  • Line Charts : Ideal for showing trends and changes in data over time.
  • Scatter Plots : Visualize relationships and correlations between two variables.
  • Pie Charts : Display the composition of a whole in terms of its parts.
  • Heatmaps : Depict patterns and relationships in multidimensional data through color-coding.
  • Box Plots : Provide a summary of the data distribution, including outliers.
  • Interactive Dashboards : Create dynamic visualizations that allow users to explore data interactively.

Data visualization not only enhances your understanding of the data but also serves as a powerful communication tool to convey your findings to others.
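The short matplotlib sketch below produces two of these chart types (a histogram and a scatter plot) from simulated data; the variables, advertising spend and sales, are hypothetical.

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical data: advertising spend vs. monthly sales for 40 stores
ad_spend = rng.uniform(1, 10, size=40)
sales = 20 + 3 * ad_spend + rng.normal(0, 4, size=40)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram: distribution of monthly sales
ax1.hist(sales, bins=10, edgecolor="black")
ax1.set_title("Distribution of monthly sales")
ax1.set_xlabel("Sales")
ax1.set_ylabel("Frequency")

# Scatter plot: relationship between ad spend and sales
ax2.scatter(ad_spend, sales)
ax2.set_title("Ad spend vs. sales")
ax2.set_xlabel("Advertising spend")
ax2.set_ylabel("Sales")

plt.tight_layout()
plt.show()
```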

As you embark on the data analysis phase of your empirical research, remember that the specific methods and techniques you choose will depend on your research questions, data type, and objectives. Effective data analysis transforms raw data into valuable insights, bringing you closer to the answers you seek.

How to Report Empirical Research Results?

At this stage, you get to share your empirical research findings with the world. Effective reporting and presentation of your results are crucial for communicating your research's impact and insights.

1. Write the Research Paper

Writing a research paper is the culmination of your empirical research journey. It's where you synthesize your findings, provide context, and contribute to the body of knowledge in your field.

  • Title and Abstract : Craft a clear and concise title that reflects your research's essence. The abstract should provide a brief summary of your research objectives, methods, findings, and implications.
  • Introduction : In the introduction, introduce your research topic, state your research questions or hypotheses, and explain the significance of your study. Provide context by discussing relevant literature.
  • Methods : Describe your research design, data collection methods, and sampling procedures. Be precise and transparent, allowing readers to understand how you conducted your study.
  • Results : Present your findings in a clear and organized manner. Use tables, graphs, and statistical analyses to support your results. Avoid interpreting your findings in this section; focus on the presentation of raw data.
  • Discussion : Interpret your findings and discuss their implications. Relate your results to your research questions and the existing literature. Address any limitations of your study and suggest avenues for future research.
  • Conclusion : Summarize the key points of your research and its significance. Restate your main findings and their implications.
  • References : Cite all sources used in your research following a specific citation style (e.g., APA, MLA, Chicago). Ensure accuracy and consistency in your citations.
  • Appendices : Include any supplementary material, such as questionnaires, data coding sheets, or additional analyses, in the appendices.

Writing a research paper is a skill that improves with practice. Ensure clarity, coherence, and conciseness in your writing to make your research accessible to a broader audience.

2. Create Visuals and Tables

Visuals and tables are powerful tools for presenting complex data in an accessible and understandable manner.

  • Clarity : Ensure that your visuals and tables are clear and easy to interpret. Use descriptive titles and labels.
  • Consistency : Maintain consistency in formatting, such as font size and style, across all visuals and tables.
  • Appropriateness : Choose the most suitable visual representation for your data. Bar charts, line graphs, and scatter plots work well for different types of data.
  • Simplicity : Avoid clutter and unnecessary details. Focus on conveying the main points.
  • Accessibility : Make sure your visuals and tables are accessible to a broad audience, including those with visual impairments.
  • Captions : Include informative captions that explain the significance of each visual or table.

Compelling visuals and tables enhance the reader's understanding of your research and can be the key to conveying complex information efficiently.

3. Interpret Findings

Interpreting your findings is where you bridge the gap between data and meaning. It's your opportunity to provide context, discuss implications, and offer insights. When interpreting your findings:

  • Relate to Research Questions : Discuss how your findings directly address your research questions or hypotheses.
  • Compare with Literature : Analyze how your results align with or deviate from previous research in your field. What insights can you draw from these comparisons?
  • Discuss Limitations : Be transparent about the limitations of your study. Address any constraints, biases, or potential sources of error.
  • Practical Implications : Explore the real-world implications of your findings. How can they be applied or inform decision-making?
  • Future Research Directions : Suggest areas for future research based on the gaps or unanswered questions that emerged from your study.

Interpreting findings goes beyond simply presenting data; it's about weaving a narrative that helps readers grasp the significance of your research in the broader context.

With your research paper written, structured, and enriched with visuals, and your findings expertly interpreted, you are now prepared to communicate your research effectively. Sharing your insights and contributing to the body of knowledge in your field is a significant accomplishment in empirical research.

Examples of Empirical Research

To solidify your understanding of empirical research, let's delve into some real-world examples across different fields. These examples will illustrate how empirical research is applied to gather data, analyze findings, and draw conclusions.

Social Sciences

In the realm of social sciences, consider a sociological study exploring the impact of socioeconomic status on educational attainment. Researchers gather data from a diverse group of individuals, including their family backgrounds, income levels, and academic achievements.

Through statistical analysis, they can identify correlations and trends, revealing whether individuals from lower socioeconomic backgrounds are less likely to attain higher levels of education. This empirical research helps shed light on societal inequalities and informs policymakers on potential interventions to address disparities in educational access.

Environmental Science

Environmental scientists often employ empirical research to assess the effects of environmental changes. For instance, researchers studying the impact of climate change on wildlife might collect data on animal populations, weather patterns, and habitat conditions over an extended period.

By analyzing this empirical data, they can identify correlations between climate fluctuations and changes in wildlife behavior, migration patterns, or population sizes. This empirical research is crucial for understanding the ecological consequences of climate change and informing conservation efforts.

Business and Economics

In the business world, empirical research is essential for making data-driven decisions. Consider a market research study conducted by a business seeking to launch a new product. They collect data through surveys, focus groups, and consumer behavior analysis.

By examining this empirical data, the company can gauge consumer preferences, demand, and potential market size. Empirical research in business helps guide product development, pricing strategies, and marketing campaigns, increasing the likelihood of a successful product launch.

Psychology

Psychological studies frequently rely on empirical research to understand human behavior and cognition. For instance, a psychologist interested in examining the impact of stress on memory might design an experiment. Participants are exposed to stress-inducing situations, and their memory performance is assessed through various tasks.

By analyzing the data collected, the psychologist can determine whether stress has a significant effect on memory recall. This empirical research contributes to our understanding of the complex interplay between psychological factors and cognitive processes.

These examples highlight the versatility and applicability of empirical research across diverse fields. Whether in medicine, social sciences, environmental science, business, or psychology, empirical research serves as a fundamental tool for gaining insights, testing hypotheses, and driving advancements in knowledge and practice.

Conclusion for Empirical Research

Empirical research is a powerful tool for gaining insights, testing hypotheses, and making informed decisions. By following the steps outlined in this guide, you've learned how to select research topics, collect data, analyze findings, and effectively communicate your research to the world. Remember, empirical research is a journey of discovery, and each step you take brings you closer to a deeper understanding of the world around you. Whether you're a scientist, a student, or someone curious about the process, the principles of empirical research empower you to explore, learn, and contribute to the ever-expanding realm of knowledge.

How to Collect Data for Empirical Research?

Introducing Appinio , the real-time market research platform revolutionizing how companies gather consumer insights for their empirical research endeavors. With Appinio, you can conduct your own market research in minutes, gaining valuable data to fuel your data-driven decisions.

Appinio is more than just a market research platform; it's a catalyst for transforming the way you approach empirical research, making it exciting, intuitive, and seamlessly integrated into your decision-making process.

Here's why Appinio is the go-to solution for empirical research:

  • From Questions to Insights in Minutes : With Appinio's streamlined process, you can go from formulating your research questions to obtaining actionable insights in a matter of minutes, saving you time and effort.
  • Intuitive Platform for Everyone : No need for a PhD in research; Appinio's platform is designed to be intuitive and user-friendly, ensuring that anyone can navigate and utilize it effectively.
  • Rapid Response Times : With an average field time of under 23 minutes for 1,000 respondents, Appinio delivers rapid results, allowing you to gather data swiftly and efficiently.
  • Global Reach with Targeted Precision : With access to over 90 countries and the ability to define target groups based on 1200+ characteristics, Appinio empowers you to reach your desired audience with precision and ease.



Is there really an empirical turn in economics?

By Beatrice Cherrier

Sep 29, 2016 | History

The idea that economics has recently gone through an empirical turn – that it went from theory to data – is all over the place. Economists have been trumpeting this transformation on their blogs in the last few years, more rarely qualifying it. It is now showing up in economists' publications: not only in articles by those who claim to have been the architects of a "credibility revolution," but by others as well. This narrative is also finding its way into historical works, for instance Avner Offer and Gabriel Söderberg's new book on the history of the Nobel Prize in economics.

The structure of the argument is often the same. The chart below, taken from a paper by Dan Hamermesh, is used to argue that there has been a growth in empirical research and a decline of theory in economics in the past decades.

[Figure: chart from Hamermesh's paper showing the rising share of empirical papers in top economics journals]

A short explanation is then offered: this transformation has been enabled by computerization + new, more abundant and better data. On this basis, the "death of theory" or at least a "paradigm shift" is announced. The causality is straight, maybe a little too straight. My goal, here, is not to deny that a sea change in the way data are produced and treated by economists has been taking place. It is to argue that the transformation has been oversimplified and mischaracterized.

What is this transformation we are talking about? An applied rather than an empirical turn

What does Hamermesh's figure show exactly? Not that there is more empirical work in economics today than 30 or 50 years ago. It indicates that there is more empirical work in top-5 journals. That is, that empirical work has become more prestigious. Of course, the quantity of empirical work has probably boomed as well, if only because the amount of time needed to input data and run a regression has gone from a week in the 1950s, to 8 hours in the 1970s, to a blink today. But more precise measurement of the scope of the increase, and an identification of the key period(s) in which the trend accelerated, are necessary. And this requires supplementing the bibliometric work done on databases like Econlit, WoS or Jstor with hand-scraped data on other types of economic writings. For economics has in fact nurtured a strong tradition of empirical work since at least the beginning of the twentieth century, much of it published outside the major academic journals. Whole fields – agricultural economics, business cycles (remember Mitchell?), labor economics, national accounting, then input-output analysis from the 1930s to the 1960s, then cost-benefit policy evaluation from the 1960s onward, and finance all along – have been built not only upon new tools and theories, but also upon large projects aimed at gathering, recording and making sense of data. But many of these databases and associated research were either proprietary (when funded by finance firms, commercial businesses, insurance companies, the military or governmental agencies), or published in reports, books and outlets such as central bank journals and the Brookings Papers.

Neither does Hamermesh's chart show that theory is dying. Making such a case requires tracing the growth of the number of empirical papers where a theoretical framework is altogether absent. Hamermesh and Jeff Biddle have recently done so. They highlight an interesting trend. In the 1970s, all microeconomic empirical papers exhibited a theoretical framework, but there has since been a limited but significant resurgence of a-theoretical work in the 2000s. It has not yet matched the proportion of a-theoretical papers published in the 1950s, however. If there is an emancipation of empirical work under way, economics is not there yet. All in all, it seems that 1) theory dominating empirical work has been the exception in the history of economics rather than the rule; 2) there is currently a re-equilibration of theoretical and empirical work – but an emancipation of the latter? Not yet. And 3) what is dying, rather, is the exclusively theoretical paper. This characterization is closer to how Dani Rodrik describes the transformation of the discipline in Economic Rules:

“these days, it is virtually impossible to publish in top journals [in some fields] without including some serious empirical analysis […] The standards of the profession now require much greater attention to the quality of data, to causal inference from evidence and a variety of statistical pitfalls. All in all, this empirical turn has been good for the profession.”

The death of theory-only papers was, in fact, pronounced decades ago, when John Pencavel revised the JEL codes in 1988 and decided to remove the "theory" category. He replaced it with micro, macro and tools categories, explaining that "good research in economics is a blend of theory and empirical work." And indeed, another term that has gained wide currency in the past decades is "applied theory." The term denotes that theoretical models are conceived in relation to specific issues (public, labor, urban, environmental), sometimes in relation to specific empirical validation techniques. It seems, thus, that economics has not really gone "from theory to data," but has rather experienced a profound redefinition of the relationship of theoretical to empirical work. And there is yet another aspect of this transformation. Back in the 1980s, JEL classifiers did not merely merge theory with empirical categories. They also added policy work, on the assumption that most of what economists produced was ultimately policy-oriented. This is why the transformation of economics in the last decades is better characterized as an "applied turn" rather than an "empirical turn."

"Applied" has indeed become a recurring word in John Bates Clark medal citations. "Roland Fryer is an influential applied microeconomist," the 2015 citation begins. Matt Gentzkow (2014) is a leader "in the new generation of microeconomists applying economic methods." Raj Chetty (2013) is "arguably the best applied microeconomist of his generation." Amy Finkelstein (2012) is "one of the most accomplished applied microeconomists of her generation." And so forth. The citations of all laureates since Susan Athey ("an applied theorist") have made extensive use of the "applied" wording, and have made clear that the medal was awarded not merely for "empirical" work, new identification strategies or the like, but for path-breaking combinations of new theoretical insights with new empirical methodologies, with the aim of shedding light on policy issues. Even the 2016 citation for Yuliy Sannikov, the most theoretical medal in a long time, emphasizes that his work "had substantial impact on applied theory."

Is this transformation really new?

The timing of the transformation is difficult to grasp. Some writers argue that the empirical turn began after the war, others see a watershed in the early 1980s, with the introduction of the PC. Others mention the mid-1990s, with the rise of quasi-experimental techniques and the seeds of the "credibility revolution," or the 2010s, with the boom in administrative and real-time recorded microeconomic business data, the interbreeding of econometrics with machine learning, and economists deserting academia to work at Google, Amazon and other IT firms.

Dating when economics became an applied science might be a meaningless task. The pre-war period is characterized by a lack of pecking order between theory and applied work. The rise of a theoretical "core" in economics between the 1940s and the 1970s is undeniable, but it encountered fierce resistance in the profession. Well-known artifacts of this tension include the Measurement without Theory controversy or Oskar Morgenstern's attempt to make empirical work a compulsory criterion for being nominated as a fellow of the Econometric Society. And the development of new empirical techniques in micro (panel data, lab experiments, field experiments) has been slow yet constant.

Again, a useful proxy to track the transformation in the prestige hierarchy of the discipline is the John Bates Clark medal, as it signals what economists currently see as the most promising research agenda. The first seven John Bates Clark medalists were perceived as theoretically oriented enough for part of the profession to request the establishment of a second medal, named after Mitchell, to reward empirical, field and policy-oriented work. Roger Backhouse and I have shown that such contributions have been increasingly singled out from the mid-1960s onward. Citations for Zvi Griliches (1965), Marc Nerlove (1969), Dale Jorgenson (1971), Franklin Fisher (1973) and Martin Feldstein (1977) all emphasized contributions to empirical economics, and reinterpreted their work as "applied." Feldstein, Fisher and, later, Jerry Hausman (1985) were each viewed as an "applied econometrician." It was the mid-1990s medals – Lawrence Summers, David Card, Kevin Murphy – that emphasized empirical work more clearly. Summers' citation notes a "remarkable resurgence of empirical economics over the past decade [which has] restored the primacy of actual economies over abstract models in much of economic thinking." And as mentioned already, the last eight medals systematically emphasize efforts to weave together new theoretical insights, empirical techniques and policy thinking.

All in all, not only is "applied" a more appropriate label than "empirical," but "turn" might be too strong a term for a transformation that seems made up of overlapping stages of various intensities and qualities. But what caused this re-equilibration and possible emancipation of applied work in the last decades? As befits economic stories, there are supply and demand factors.

Supply side explanations: new techniques, new data, computerization

A first explanation for the applied turn in economics is the rise of new and diverse techniques to confront models with data. Amidst a serious confidence crisis, the new macroeconometric techniques (VARs, Bayesian estimation, calibration) developed in the 1970s were spread alongside the new models they were supposed to estimate (by Sims, Kydland, Prescott, Sargent and others). The development of laboratory experiments contributed to the redefinition of the relationship between theory and data in microeconomics. As Svorenčík (p. 15) puts it:

“By creating data that were specifically produced to satisfy conditions set by theory in controlled environments that were capable of being reproduced and repeated, [experimentalists] sought […] to turn experimental data into a trustworthy partner of economic theory. This was in no sense a surrender of data to the needs of theory. The goal was to elevate data from their denigrated position, acquired in postwar economics, and put them on the same footing as theory.”

The rise of quasi-experimental techniques, including natural and randomized controlled experiments, was also aimed at achieving a re-equilibration with (some would say emancipation from) theory. Whether it actually enabled economists to reclaim inductive methods is fiercely debated. Other techniques blurred the demarcation between theory and applied work by constructing real-world economic objects rather than studying them. That was the case of mechanism design. Blurred frontiers also resulted from the growing reliance upon simulations such as agent-based modeling, in which algorithms stand for theories or applications, both or neither.

A second, related explanation is the "data revolution." Though the recent explosion of real-time, large-scale, multi-variable digital databases is mind-boggling and has the allure of a revolution, the availability of economic data has also evolved constantly since the Second World War. There is a large literature on the making of public statistics, such as national accounting or the cost of living indexes produced by the BLS, and new microeconomic surveys were started in the 1960s (the Panel Study of Income Dynamics) and the 1970s (the National Longitudinal Survey). Additionally, administrative databases were increasingly opened for research. The availability of tax data, for instance, transformed public economics. In his 1964 AEA presidential address, George Stigler was thus claiming:

The age of quantification is now full upon us. We are armed with a bulging arsenal of techniques of quantitative analysis, and of a power – as compared to untrained common sense – comparable to the displacement of archers by cannon […] The desire to measure economic phenomena is now in the ascendent […] It is a scientific revolution of the very first magnitude.

A decade later, technological advances allowed a redefinition of the information architecture of financial markets, and asset prices, as well as a range of business data (credit card information, etc.), could be recorded in real time. The development of digital markets eventually generated new large databases on a wide range of microeconomic variables.

Rather than a revolution in the 80s, 90s or 2010s, the history of economics therefore seems one of constant adjustment to new types of data. The historical record belies Liran Einav and Jonathan Levin's statement that "even 15 or 20 years ago, interesting and unstudied data sets were a scarce resource." In a 1970 book on the state of economics edited by Nancy Ruggles, Dale Jorgenson explained that

“the database for econometric research is expanding much more rapidly than econometric research itself. National accounts and interindustry transactions data are now available for a large number of countries. Survey data on all aspects of economic behavior are gradually becoming incorporated into regular economic reporting. Censuses of economic activity are becoming more frequent and more detailed. Financial data related to securities market are increasing in reliability, scope and availability.”

And Guy Orcutt, the architect of economic simulation, explained that the current issue was that "the enormous body of data to work with" was "inappropriate" for scientific use because the economist was not controlling data collection. In a very different qualitative and quantitative situation, they were making the same statements and issuing the same complaints as economists do today.

This dramatic improvement in data collection and storage has been enabled by the improvement in computer technology. Usually seen as the single most important factor behind the applied turn, the computer has affected much more than just economic data. It has enabled the implementation of validation techniques economists could only dream of in previous decades. But as with economic data, the end of history has been repeatedly pronounced: in the 1940s, Wassily Leontief predicted that the ENIAC could soon tell how much public work was needed to cure a depression. In the 1960s, econometrician Daniel Suits wrote that the IBM 1920 enabled the estimation of models of "indefinite size." In the 1970s, two RAND researchers explained that computers had provided a "bridge" between "formal theory" and "databases." And in the late 1980s, Jerome Friedman claimed that statisticians could substitute computer power for unverifiable assumptions. If a revolution is under way, then, it is the fifth in fifty years.

But the problem is not only with replacing "revolutions" with more continuous processes. The computer argument seems more deeply flawed. First, because the two most computer-intensive techniques of the 1970s, Computable General Equilibrium and large-scale Keynesian macroeconometrics, were marginalized at the very moment they were finally getting the equipment needed to run their models quickly and with fewer restrictions. Fights erupted as to the proper way to solve CGE models (estimation or calibration), Marianne Johnson and Charles Ballard explain, and the models were seen as esoteric "black boxes." Macroeconometric models were swept away from academia by the Lucas critique, and found refuge in the forecasting business. And what has become the most fashionable approach to empirical work three decades later, randomized controlled trials, merely requires the kind of means and variances calculations that could have been performed on 1970s machines. Better computers are therefore neither sufficient nor necessary for an empirical approach to become dominant.

Also flawed is the idea that the computer was meant to stimulate empirical work, and thus weaken theory. In physics, evolutionary biology or linguistics, computers transformed theory as well as empirical work. This did not happen in economics, in spite of attempts to disseminate automated theorem proving, numerical methods and simulations. One explanation is that economists stuck with concepts of proof which required that results be analytically derived rather than approximated. With changes in epistemology, the computer could even have made any demarcation between theory and applied work irrelevant. Finally, while hardware improvements are largely exogenous to the discipline, software development is endogenous. The way computing affected economists' practices was therefore dependent on economists' scientific strategies.

The better integration of theoretical and empirical work oriented toward policy prescription and evaluation which characterized the "applied turn" was not merely driven by debates internal to the profession, however. It also largely came in response to the new demands and pressures public and private clients were placing on economists.

Demand side explanations: new patrons, new policy regimes, new business demands

An easy explanation for the rise of applied economics is the troubled context of the 1970s: the social agitation, the urban crisis, the ghettos, the pollution, the congestion, the stagflation, the energy crisis and looming environmental crisis, and the Civil Rights movement resulted in the rise of radicalism and neoconservatism alike, in students' protests, and in demands for more relevance in scientific education. Both economic education and research were brought to bear on real-world issues. But this raises the same "why is this time different?" kind of objection mentioned earlier. What about the Great Depression, World War II and the Cold War? These eras were pervaded by a similar sense of emergency, a similar demand for relevance. What was different, then, was the way economists' patrons and clients conceived the appropriate cures for social diseases.

Patrons of economic research have changed. In the 1940s and 1950s, the military and the Ford Foundation were the largest patrons. Both wanted applied research of the quantitative, formalized and interdisciplinary kind. Economists were, however, eager to secure distinct streams of money. They felt that being lumped together with other social sciences was a problem. Suspicion toward the social sciences was as high among politicians (isn't there a systematic affinity with socialism?) as among natural scientists (social sciences can't produce laws). Accordingly, when the NSF was established in 1950, it had no social science division. By the 1980s, however, it had become a key player in the funding of economic research. As it rose to dominance, the NSF imposed policy benefits, later christened "broader impact," as a criterion whereby research projects would be selected. This orientation was embodied in the Research Applied to National Needs office, which, in the wake of its creation in the 1970s, supported research on social indicators, data and evaluation methods for welfare programs. The requirement that the social benefits of economics research be emphasized in applications was furthered by Ronald Reagan's threat to slash the NSF budget for social sciences by 75% in 1981, and the tension between pure research, applied research and policy benefits has since remained a permanent feature of its economics program.

As exemplified by the change in the NSF's strategy, science patrons' demands have sometimes been subordinated to changes in policy regimes. From Lyndon Johnson's administration to Reagan's, governments pressured scientists to produce applied knowledge to help with the design and – this was new – the evaluation of economic policies. The policy orientation of the applied turn was already apparent in Stigler's 1964 AEA presidential address, characteristically titled The Economist and the State: "our expanding theoretical and empirical studies will inevitably and irresistibly enter into the subject of public policy, and we shall develop a body of knowledge essential to intelligent policy formulation," he wrote. The Ford Foundation's interest in having the efficiency of some welfare policies and regulations evaluated fostered the generalization of cost-benefit analysis. It also pushed Heather Ross, then an MIT graduate student, to undertake with Princeton's William Baumol and Albert Rees a large randomized social experiment to test the negative income tax in New Jersey and Pennsylvania. The motivation for undertaking experiments throughout the 70s and 80s was not so much to emulate medical science as to allow the evaluation of policies on pilot projects. As described by Elisabeth Berman, Reagan's deregulatory movement furthered this "economicization" of public policy. The quest to emulate markets created a favorable reception for the mix of mechanism design and lab experiments economists had developed in the late 1970s. Some of its achievements – the FCC auctions or the kidney matching algorithm – have become the flagships of a science capable of yielding better living. These skills have equally been in demand by private firms, especially as the rise of digital markets involved the design of pricing mechanisms and the study of regulatory issues. Because it is as much the product of external pressures as of epistemological and technical evolutions, it is not always easy to disentangle rhetoric from genuine historical trend in the "applied turn."

Note: this post relies on the research Roger Backhouse and I have been carrying out over the past three years. It is an attempt to unpack my interpretation of the data we have accumulated, as we write up a summary of our findings. Though it is heavily influenced by Roger's thinking, it reflects only my own views.


The case for economics — by the numbers


In recent years, criticism has been levelled at economics for being insular and unconcerned about real-world problems. But a new study led by MIT scholars finds the field increasingly overlaps with the work of other disciplines, and, in a related development, has become more empirical and data-driven, while producing less work of pure theory.

The study examines 140,000 economics papers published over a 45-year span, from 1970 to 2015, tallying the “extramural” citations that economics papers received in 16 other academic fields — ranging from other social sciences such as sociology to medicine and public health. In seven of those fields, economics is the social science most likely to be cited, and it is virtually tied for first in citations in another two disciplines.

In psychology journals, for instance, citations of economics papers have more than doubled since 2000. Public health papers now cite economics work twice as often as they did 10 years ago, and citations of economics research in fields from operations research to computer science have risen sharply as well.

While citations of economics papers in the field of finance have risen slightly in the last two decades, that rate of growth is no higher than it is in many other fields, and the overall interaction between economics and finance has not changed much. That suggests economics has not been unusually oriented toward finance issues — as some critics have claimed since the banking-sector crash of 2007-2008. And the study’s authors contend that as economics becomes more empirical, it is less dogmatic.

“If you ask me, economics has never been better,” says Josh Angrist, an MIT economist who led the study. “It’s never been more useful. It’s never been more scientific and more evidence-based.”

Indeed, the proportion of economics papers based on empirical work — as opposed to theory or methodology — cited in top journals within the field has risen by roughly 20 percentage points since 1990.

The paper, “Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship,” appears in this month’s issue of the Journal of Economic Literature .

The co-authors are Angrist, who is the Ford Professor of Economics in MIT's Department of Economics; Pierre Azoulay, the International Programs Professor of Management at the MIT Sloan School of Management; Glenn Ellison, the Gregory K. Palm Professor of Economics and associate head of the Department of Economics; Ryan Hill, a doctoral candidate in MIT's Department of Economics; and Susan Feng Lu, an associate professor of management in Purdue University's Krannert School of Management.

Taking critics seriously

As Angrist acknowledges, one impetus for the study was the wave of criticism the economics profession has faced over the last decade, after the banking crisis and the “Great Recession” of 2008-2009, which included the finance-sector crash of 2008. The paper’s title alludes to the film “Inside Job” — whose thesis holds that, as Angrist puts it, “economics scholarship as an academic enterprise was captured somehow by finance, and that academic economists should therefore be blamed for the Great Recession.”

To conduct the study, the researchers used the Web of Science, a comprehensive bibliographic database, to examine citations between 1970 and 2015. The scholars developed machine-learning techniques to classify economics papers into subfields (such as macroeconomics or industrial organization) and by research "style" — meaning whether papers are primarily concerned with economic theory, empirical analysis, or econometric methods.

“We did a lot of fine-tuning of that,” says Hill, noting that for a study of this size, a machine-learning approach is a necessity.

The study also details the relationship between economics and four additional social science disciplines: anthropology, political science, psychology, and sociology. Among these, political science has overtaken sociology as the discipline most engaged with economics. Psychology papers now cite economics research about as often as they cite works of sociology.

The new intellectual connectivity between economics and psychology appears to be a product of the growth of behavioral economics, which examines the irrational, short-sighted financial decision-making of individuals — a different paradigm than the assumptions about rational decision-making found in neoclassical economics. During the study’s entire time period, one of the economics papers cited most often by other disciplines is the classic article “Prospect Theory: An Analysis of Decision under Risk,” by behavioral economists Daniel Kahneman and Amos Tversky.

Beyond the social sciences, other academic disciplines for which the researchers studied the influence of economics include four classic business fields — accounting, finance, management, and marketing — as well as computer science, mathematics, medicine, operations research, physics, public health, and statistics.

The researchers believe these “extramural” citations of economics are a good indicator of economics’ scientific value and relevance.

“Economics is getting more citations from computer science and sociology, political science, and psychology, but we also see fields like public health and medicine starting to cite economics papers,” Angrist says. “The empirical share of the economics publication output is growing. That’s a fairly marked change. But even more dramatic is the proportion of citations that flow to empirical work.”

Ellison emphasizes that because other disciplines are citing empirical economics more often, it shows that the growth of empirical research in economics is not just a self-reinforcing change, in which scholars chase trendy ideas. Instead, he notes, economists are producing broadly useful empirical research.  

“Political scientists would feel totally free to ignore what economists were writing if what economists were writing today wasn’t of interest to them,” Ellison says. “But we’ve had this big shift in what we do, and other disciplines are showing their interest.”

It may also be that the empirical methods used in economics now more closely match those in other disciplines as well.

“What’s new is that economics is producing more accessible empirical work,” Hill says. “Our methods are becoming more similar … through randomized controlled trials, lab experiments, and other experimental approaches.”

But as the scholars note, there are exceptions to the general pattern in which greater empiricism in economics corresponds to greater interest from other fields. Computer science and operations research papers, which increasingly cite economists’ research, are mostly interested in the theory side of economics. And the growing overlap between psychology and economics involves a mix of theory and data-driven work.

In a big country

Angrist says he hopes the paper will help journalists and the general public appreciate how varied economics research is.

“To talk about economics is sort of like talking about [the United States of] America,” Angrist says. “America is a big, diverse country, and economics scholarship is a big, diverse enterprise, with many fields.”

He adds: “I think economics is incredibly eclectic.”

Ellison emphasizes this point as well, observing that the sheer breadth of the discipline gives economics the ability to have an impact in so many other fields.  

“It really seems to be the diversity of economics that makes it do well in influencing other fields,” Ellison says. “Operations research, computer science, and psychology are paying a lot of attention to economic theory. Sociologists are paying a lot of attention to labor economics, marketing and management are paying attention to industrial organization, statisticians are paying attention to econometrics, and the public health people are paying attention to health economics. Just about everything in economics is influential somewhere.”

For his part, Angrist notes that he is a biased observer: He is a dedicated empiricist and a leading practitioner of research that uses quasi-experimental methods. His studies leverage circumstances in which, say, policy changes or random assignments in civic life allow researchers to study two otherwise similar groups of people separated by one thing, such as access to health care.

Angrist was also a graduate-school advisor of Esther Duflo PhD ’99, who won the Nobel Prize in economics last fall, along with MIT’s Abhijit Banerjee — and Duflo thanked Angrist at their Nobel press conference, citing his methodological influence on her work. Duflo and Banerjee, as co-founders of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL), are advocates of using field experiments in economics, which is still another way of producing empirical results with policy implications.

“More and more of our empirical work is worth paying attention to, and people do increasingly pay attention to it,” Angrist says. “At the same time, economists are much less inward-looking than they used to be.”

Paper: “Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship”
Hypothesis and Theory Article

Is Economics an Empirical Science? If Not, Can It Become One?

  • 1 Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, NY, USA
  • 2 De Vinci Finance Lab, École Supérieure d'Ingénieurs Léonard de Vinci, Paris, France

Today's mainstream economics, embodied in Dynamic Stochastic General Equilibrium (DSGE) models, cannot be considered an empirical science in the modern sense of the term: it is not based on empirical data, is not descriptive of the real-world economy, and has little forecasting power. In this paper, I begin with a review of the weaknesses of neoclassical economic theory and argue for a truly scientific theory based on data, the sine qua non of bringing economics into the realm of an empirical science. But I suggest that, before embarking on this endeavor, we first need to analyze the epistemological problems of economics to understand what research questions we can reasonably ask our theory to address. I then discuss new approaches which hold the promise of bringing economics closer to being an empirical science. Among the approaches discussed are the study of economies as complex systems, econometrics and econophysics, artificial economics made up of multiple interacting agents, as well as attempts being made inside present mainstream theory to more closely align the theory with the real world.

Introduction

In this paper we analyze the status of economics as a science: Can neoclassical economics be considered an empirical science, even if only one in the making? Are there other approaches that might better bring economics into the realm of empirical science, with the objective of allowing us to forecast (at least probabilistically) economic and market phenomena?

While our discussion is centered on economic theory, our considerations can be extended to finance. Indeed, mainstream finance theory shares the basic framework of mainstream economic theory, which was developed in the 1960s and 1970s in what is called the “rational expectations revolution.” The starting point was the so-called Lucas critique. University of Chicago professor Robert Lucas observed that changes in government policy are made ineffective by the fact that economic agents anticipate these changes and modify their behavior. He therefore advocated giving a micro foundation to macroeconomics—that is, explaining macroeconomics in terms of the behavior of individual agents. Lucas was awarded the 1995 Nobel Prize in Economics for his work.

The result of the Lucas critique was a tendency, among those working within the framework of mainstream economic theory, to develop macroeconomic models based on a multitude of agents characterized by rational expectations, optimization and equilibrium 1 . Following common practice we will refer to this economic theory as neoclassical economics or mainstream economics. Mainstream finance theory adopted the same basic principles as general equilibrium economics.

Since the 2007–2009 financial crisis and the ensuing economic downturn—neither foreseen (not even probabilistically) by neoclassical economic theory—the theory has come under increasing criticism. Many observe that mainstream economics provides little knowledge from which we can make reliable forecasts—the objective of science in the modern sense of the term (for more on this, see Fabozzi et al. [ 1 ]).

Before discussing these questions, it is useful to identify the appropriate epistemological framework(s) for economic theory. That is, we need to understand what questions we can ask our theory to address and what types of answers we might expect. If economics is to become an empirical science, we cannot accept terms such as volatility, inflation, growth, recession, consumer confidence, and so on without carefully defining them: the epistemology of economics has to be clarified.

We will subsequently discuss why we argue that neoclassical economic and finance theory is not an empirical science as presently formulated—nor can it become one. We will then discuss new ideas that offer the possibility of bringing economic and finance theory closer to being empirical sciences—in particular, economics (and finance) based on the analysis of financial time series (e.g., econometrics) and on the theory of complexity. These new ideas might be referred to collectively as “scientific economics.”

We suggest that the epistemological framework of economics is not that of physics but that of complex systems. After all, economies are hierarchical complex systems made up of human agents—complex systems in themselves—and aggregations of human agents. We will argue that giving a micro foundation to macroeconomics is a project with intrinsic limitations, typical of complex systems. These limitations constrain the types of questions we can ask. Note that the notion of economies as complex systems is not really new. Adam Smith's notion of the “invisible hand” is an emerging property of complex markets. Even mainstream economics represents economic systems as complex systems made up of a large number of agents—but it makes unreasonable simplifying assumptions.

For this and other reasons, economics is a science very different from the science of physics. It might be that a true understanding of economics will require a new synthesis of the physical and social sciences given that economies are complex systems made up of human individuals and must therefore take into account human values, a feature that does not appear in other complex systems. This possibility will be explored in the fourth part of our discussion, but let's first start with a review of the epistemological foundations of modern sciences and how our economic theory eventually fits in.

The Epistemological Foundations of Economics

The hallmark of modern science is its empirical character: modern science is based on theories that model or explain empirical observations. No a priori factual knowledge is assumed; a priori knowledge is confined to logic and mathematics and, perhaps, some very general principles related to the meaning of scientific laws and terms and how they are linked to observations. There are philosophical and scientific issues associated with this principle. For example, in his Two Dogmas of Empiricism , van Orman Quine [ 2 ] challenged the separation between logical analytic truth and factual truth. We will herein limit our discussion to the key issues with a bearing on economics. They are:

a. What is the nature of observations?

b. How can empirical theories be validated or refuted?

c. What is the nature of our knowledge of complex systems?

d. What scientific knowledge can we have of mental processes and of systems that depend on human mental processes?

Let's now discuss each of these issues.

What is the Nature of Observations?

The physical sciences have adopted the principles of operationalism as put forward by Bridgman [ 3 ], recipient of the 1946 Nobel Prize in Physics, in his book The Logic of Modern Physics . Operationalism holds that the meaning of scientific concepts is rooted in the operations needed to measure physical quantities 2 . Operationalism rejects the idea that there are quantities defined a priori that we can measure with different (possibly approximate) methods. It argues that the meaning of a scientific concept is in how we observe (or measure) it.

Operationalism has been criticized on the basis that science, in particular physics, uses abstract terms such as “mass” or “force,” that are not directly linked to a measurement process. See, for example, Hempel [ 4 ]. This criticism does not invalidate operationalism but requires that operationalism as an epistemological principle be interpreted globally . The meaning of a physical concept is not given by a single measurement process but by the entire theory and by the set of all observations. This point of view has been argued by many philosophers and scientists, including Feyerabend [ 5 ], Kuhn [ 6 ], and van Orman Quine [ 2 ].

But how do we define “observations”? In physics, where theories have been validated to a high degree of precision, we accept as observations quantities obtained through complex, theory-dependent measurement processes. For example, we observe temperature through the elongation of a column of mercury because the relationship between the length of the column of mercury and temperature is well-established and coherent with other observations such as the change of electrical resistance of a conductor. Temperature is an abstract term that enters in many indirect observations, all coherent.

Contrast the above to economic and finance theory, where there is a classical distinction between observables and hidden variables. The price of a stock is an observable, while the market state of low or high volatility is a hidden variable. There is a plethora of methods to measure volatility, including the ARCH/GARCH family of models, stochastic volatility models, and implied volatility. All these methods are conceptually different and yield different measurements. Volatility would be a well-defined concept if it were a theoretical term that is part of a global theory of economics or finance. But in economics and finance, the different models to measure volatility use different concepts of volatility.
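To make the point concrete, here is a minimal sketch, using only NumPy and simulated returns, of two common but conceptually different volatility "measurements": a rolling historical standard deviation and an exponentially weighted (RiskMetrics-style) estimate. The window length, decay parameter, and simulated data are illustrative assumptions, not taken from the text; the point is that the two procedures generally produce different numbers for the same hidden variable.

```python
import numpy as np

def rolling_volatility(returns, window=21):
    """Historical volatility: standard deviation over a rolling window."""
    return np.array([returns[i - window:i].std(ddof=1)
                     for i in range(window, len(returns) + 1)])

def ewma_volatility(returns, lam=0.94):
    """RiskMetrics-style EWMA volatility: an exponentially weighted variance."""
    var = returns[0] ** 2
    out = []
    for r in returns:
        var = lam * var + (1 - lam) * r ** 2
        out.append(np.sqrt(var))
    return np.array(out)

# Simulated fat-tailed daily returns stand in for real data.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=1000) * 0.01

vol_hist = rolling_volatility(returns)
vol_ewma = ewma_volatility(returns)
# Two "measurements" of the same latent quantity typically differ.
print(vol_hist[-1], vol_ewma[-1])
```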

There is no global theory that effectively links all true observations to all hidden variables. Instead, we have many individual empirical statements with only local links through specific models. This is a significant difference with respect to physical theories; it weakens the empirical content of economic and finance theory.

Note that the epistemology of economics is not (presently) based on a unified theory with abstract terms and observations. It is, as mentioned, based on many individual facts. Critics remark that mainstream economics is a deductive theory, not based on facts. This is true, but what would be required is a deductive theory based on facts . Collections of individual facts, for example financial time series, have, as mentioned, weak empirical content.

How Can Empirical Theories Be Validated or Refuted?

Another fundamental issue goes back to the eighteenth century, when the philosopher-economist David Hume outlined the philosophical principles of Empiricism. Here is the issue: No finite series of observations can justify the statement of a general law valid in all places and at all times, that is, scientific laws cannot be validated in any conclusive way. The problem of the validation of empirical laws has been widely debated; the prevailing view today is that scientific laws must be considered hypotheses validated by past data but susceptible of being invalidated by new observations. That is, scientific laws are hypotheses that explain (or model) known data and observations but there is no guarantee that new observations will not refute these laws. The attention has therefore shifted from the problem of validation to the problem of rejection.

That scientific theories cannot be validated but only refuted is the key argument in Karl Popper's influential Conjectures and Refutations: The Growth of Scientific Knowledge . Popper [ 7 ] argued that scientific laws are conjectures that cannot be validated but can be refuted. Refutations, however, are not a straightforward matter: Confronted with new empirical data, theories can, to some extent, be stretched and modified to accommodate the new data.

The issue of validation and refutation is particularly critical in economics given the paucity of data. Financial models are validated with a low level of precision in comparison to physical laws. Consider, for example, the distribution of returns. It is known that returns at time horizons from minutes to weeks are not normally distributed but have tails fatter than those of a normal distribution. However, the exact form of returns distributions is not known. Current models propose a range from inverse power laws with a variety of exponents to stretched exponentials, but there is no consensus.

Economic and financial models—all probabilistic models—are validated or refuted with standard statistical procedures. This leaves much uncertainty given that the choice among the models and the parameterization of different models are subject to uncertainty. And in most cases there is no global theory.

Economic and financial models do not have descriptive power. This point was made by Friedman [ 8 ], University of Chicago economist and recipient of the 1976 Nobel Prize in Economics. Friedman argued that economic models are like those in physics, that is to say, mathematical tools to connect observations. There is no intrinsic rationality in economic models. We must resist the temptation to think that there are a priori truths in economic reasoning.

In summary, economic theories are models that link observations without any pretense of being descriptive. Their validation and eventual rejection are performed with standard statistical methods. But the level of uncertainty is great. As famously observed by Black [ 9 ] in his article “Noise,” “Noise makes it very difficult to test either practical or academic theories about the way that financial or economic markets work. We are forced to act largely in the dark.” That is to say, there is little evidence that allows us to choose between different economic and financial models.

What is the Nature of Our Knowledge of Complex Systems?

The theory of complex systems has as its objective to explain the behavior of systems made up of many interacting parts. In our Introduction, we suggested that the theory of complexity might be relevant to the analysis of economies and financial time series. The key theoretical questions are the following. First, can we give a micro foundation to economics, that is, can aggregate behavior be deduced from the behavior of individual components? Second, how do economies develop, grow, and transform themselves? And third, using the theory of complex systems, what type of economic theory can we hope to develop?

The principle of scientific reductionism holds that the behavior of any physical system can be reduced to basic physical laws. In other words, reductionism states that we can logically describe the behavior of any physical system in terms of its basic physical laws. For example, the interaction of complex molecules (such as molecules of drugs and target molecules with which they are supposed to interact) can in principle be described by quantum mechanical theories. The actual computation might be impossibly long in practice but, in theory, the computation should be possible.

Does reductionism hold for very complex physical systems? Can any property of a complex system be mathematically described in terms of basic physical laws? Philip Warren Anderson, co-recipient of the 1977 Nobel Prize in Physics, conjectured in his article “More is different” [ 10 ] that complex systems might exhibit properties that cannot be explained in terms of microscopic laws. This does not mean that physical laws are violated in complex systems; rather it means that in complex systems there are aggregate properties that cannot be deduced with a finite chain of logical deductions from basic laws. This impossibility is one of the many results on the limits of computability and the limits of logical deductions that were discovered after the celebrated theorem of Goedel on the incompleteness of formal logical systems.

Some rigorous results can be obtained for simple systems. For example, Gu et al. [ 11 ] demonstrated that an infinite Ising lattice exhibits properties that cannot be computed in any finite time from the basic laws governing the behavior of the lattice. This result is obtained from well-known results of the theory of computability. In simple terms, even if basic physical laws are valid for any component of a complex system, in some cases the chains of deduction for modeling the behavior of aggregate quantities become infinite. Therefore, no finite computation can be performed.

Reductionism in economic theory is the belief that we can give a micro foundation to macroeconomics, that is, that we can explain aggregate economic behavior in terms of the behavior of single agents. As mentioned, this was the project of economic theory following the Lucas critique. As we will see in Section What Is the Cognitive Value of Neoclassical Economics, Neoclassical Finance? this project produced an idealized concept of economies, far from reality.

We do not know how economic agents behave, nor do we know if and how their behavior can be aggregated to result in macroeconomic behavior. It might well be that the behavior of each agent cannot be computed and that the behavior of the aggregates cannot be computed in terms of individuals. While the behavior of agents has been analyzed in some experimental setting, we are far from having arrived at a true understanding of agent behavior.

This is why a careful analysis of the epistemology of economics is called for. If our objective is to arrive at a science of economics, we should ask only those questions that we can reasonably answer, and refrain from asking questions and formulating theories for which there is no possible empirical evidence or theoretical explanation.

In other words, unless we make unrealistic simplifications, giving a micro foundation to macroeconomics might prove to be an impossible task. Neoclassical economics makes such unrealistic simplifications. A better approximation to a realistic description of economics might be provided by agent-based systems, which we will discuss later. But agent-based systems are themselves complex systems: they do not describe economic reality mathematically; rather, they simulate it. A truly scientific view of economics should not be dogmatic, nor should it assume that we can write an aggregate model based on micro-behavior.

Given that it might not be possible to describe the behavior of complex systems in terms of the laws of their components, the next relevant question is: So how can we describe complex systems? Do complex systems obey deterministic laws dependent on the individual structure of each system, which might be discovered independently from basic laws (be they deterministic or probabilistic)? Or do complex systems obey statistical laws? Or is the behavior of complex systems simply unpredictable?

It is likely that there is no general answer to these questions. A truly complex system admits many different possible descriptions in function of the aggregate variables under consideration. It is likely that some aggregate properties can be subject to study while others are impossible to describe. In addition, the types of description might vary greatly. Consider the emission of human verbal signals (i.e., speech). Speech might indeed have near deterministic properties in terms of the formation rules, grammar, and syntax. If we move to the semantic level, speech has different laws in function of cultures and domains of interest which we might partially describe. But modeling the daily verbal emissions of an individual likely remains beyond any mathematical and computational capability. Only broad statistics can be computed.

If we turn to economics, when we aggregate output in terms of prices, we see that the growth of the aggregate output is subject to constraints, such as the availability of money, that make the quantitative growth of economies at least partially forecastable. Once more, it is important to understand for what questions we might reasonably obtain an answer.

There are additional fundamental questions regarding complex systems. For example, as mentioned above: Can global properties spontaneously emerge in complex systems and, if so, how? There are well-known examples of simple self-organizing systems such as unsupervised neural networks. But how can we explain the self-organizing properties of systems such as economic systems or financial markets? Can it be explained in terms of fundamental laws plus some added noise? While simple self-organizing behavior has been found in simulated systems, explaining the emergence and successive evolution of very complex systems remains unresolved.

Self-organization is subject to the same considerations made above regarding the description of complex systems. It is quite possible that the self-organization of highly complex systems cannot be described in terms of the basic laws or rules of behavior of the various components. In some systems, there might be no finite chain of logical deductions able to explain self-organization. This is a mathematical problem, unrelated to the failure or insufficiency of basic laws. Explaining self-organization thus becomes a scientific problem in its own right.

Self-organization is a key concept in economics. Economies and markets are self-organizing systems whose complexity has increased over thousands of years. Can we explain this process of self-organization? Can we capture the process that makes economies and markets change their own structure, adopt new models of interaction?

No clear answer is yet available. Chaitin [ 12 ] introduced the notion of metabiology and has suggested that it is possible to provide a mathematical justification for Darwinian evolution. Arguably one might be able to develop a “metaeconomics” and provide some clue as to how economies or markets develop 3 .

Clearly there are numerous epistemological questions related to the self-organization of complex systems. Presently these questions remain unanswered at the level of scientific laws. Historians, philosophers, and social scientists have proposed many explanations of the development of human societies. Perhaps the most influential has been Hegel's Dialectic, which is the conceptual foundation of Marxism. But these explanations are not scientific in the modern sense of the term.

It is not obvious that complex systems can be handled with quantitative laws. Laws, if they exist, might be of a more general logical nature (e.g., logical laws). Consider the rules of language—a genuine characteristic of the complex system that is the human being: there is nothing intrinsically quantitative. Nor are DNA structures intrinsically quantitative. So with economic and market organization: they are not intrinsically quantitative.

What Scientific Knowledge Can We Have of Our Mental Processes and of Systems That Depend on Them?

We have now come to the last question of importance to our discussion of the epistemological foundations of modern science: What is the place of mental experience in modern science? Can we model the process through which humans make decisions? Or is human behavior essentially unpredictable? The above might seem arcane philosophical or scientific speculation, unrelated to economics or finance. Perhaps. Except that whether or not economics or finance can be studied as a science depends, at least to some extent, on if and how human behavior can be studied as a science.

Human decision-making shapes the course of economies and financial markets: economics and finance can become a science if human behavior can be scientifically studied, at least at some level of aggregation or observability. Most scientific efforts on human behavior have been devoted to the study of neurodynamics and neurophysiology. We have acquired a substantial body of knowledge on how mental tasks are distributed to different regions of the brain. We also have increased our knowledge of the physiology of nervous tissues, of the chemical and electrical exchanges between nervous cells. This is, of course, valuable knowledge from both the practical and the theoretical points of view.

However, we are still far from having acquired any real understanding of mental processes. Even psychology, which essentially categorizes mental events as if they were physical objects, has not arrived at an understanding of mental events. Surely we now know a lot on how different chemicals might affect mental behavior, but we still have no understanding of the mental processes themselves. For example, a beam of light hits a human eye and the conscious experience of a color is produced. How does this happen? Is it the particular structure of molecules in nerve cells that enables vision? Can we really maintain that structure “generates” consciousness? Might consciousness be generated through complex structures, for example with computers? While it is hard to believe that structure in itself creates consciousness, consciousness seems to appear only in association with very complex structures of nerve cells.

John von Neumann [ 13 ] was the first to argue that brains and, more generally, any information-processing structure can be represented as computers. This has been hotly debated by scientists and philosophers and has captured the popular imagination. In simple terms, the question is: Can computers think? In many discussions, there is a more or less explicit confusion between thinking as the ability to perform tasks and thinking as having an experience.

It was Alan Turing who introduced a famous test to determine if a machine is intelligent. Turing [ 14 ] argued that if a machine can respond to questions as a human would do, then the machine has to be considered intelligent. But Turing's criterion says nothing as regards the feelings and emotions of the machine.

In principle we should be able to study human behavior just as we study the behavior of a computer as both brain and computer are made of earthly materials. But given the complexity of the brain, it cannot be assumed that we can describe the behavior of functions that depend on the brain, such as economic or financial decision-making, with a mathematical or computational model.

A key question in studying behavior is whether we have to include mental phenomena in our theories. The answer is not simple. It is common daily experience that we make decisions based on emotions. In finance, irrational behavior is a well-known phenomenon (see Shiller [ 15 ]). Emotions, such as greed or fear, drive individuals to “herd” in and out of investments collectively, thereby causing the inflation/deflation of asset prices.

Given that we do not know how to represent and model the behavior of complex systems such as the brain, we cannot exclude that mental phenomena are needed to explain behavior. For example, a trader sees the price of a stock decline rapidly and decides to sell. It is possible that we will not be able to explain mathematically this behavior in terms of physics alone and have to take into account “things” such as fear. But for the moment, our scientific theories cannot include mental events.

Let's now summarize our discussion on the study of economies or markets as complex systems. We suggested that the epistemological problems of the study of economies and markets are those of complex systems. Reductionism does not always work in complex systems, as the chains of logical deductions needed to compute the behavior of aggregates might become infinite. Generally speaking, the dynamics of complex systems needs to be studied in itself, independent of the dynamics of the components. There is no unique way to describe complex systems: a multitude of descriptions is possible, corresponding to different levels of aggregation and different conceptual viewpoints. Truly complex systems cannot be described with a single set of laws. The fact that economies and markets are made up of human individuals might require the consideration of mental events and values. If, as we suggest, the correct paradigm for understanding economies and finance is that of complex systems, not that of physics, the question of how to do so is an important one. We will discuss this in Section Econophysics and Econometrics.

Table 1 summarizes our discussion of the epistemology of economics.


Table 1. Similarities and differences between the physical sciences, mainstream economics, and hypothetical future scientific economics.

What Is the Cognitive Value of Neoclassical Economics, Neoclassical Finance?

As discussed above, there are difficult epistemological questions related to the study of economics and finance, questions that cannot be answered in a naïve way, questions such as “What do we want to know?” or “What can we know?”, or questions that cannot be answered at all or that need to be reformulated. Physics went through a major conceptual crisis when it had to accept that physical laws are probabilistic and do not describe reality but are subject to the superposition of events. In economics and finance, we have to think hard to (re)define our questions—they might not be as obvious as we think: economies and markets are complex systems; the problems must be carefully conceptualized. We will now analyze the epistemological problems of neoclassical economics and, by extension, of neoclassical finance.

First, what is the cognitive value of neoclassical economics? As observed above, neoclassical economics is not an empirical science in the modern sense of the term; rather, neoclassical economics is a mathematical model of an idealized object that is far from reality. We might say that neoclassical economics has opened the way to the study of economics as a complex system made up of agents that are intelligent processors of information, capable of making decisions on the basis of their forecasts. However, its spirit is very different from that of complex systems theory.

Neoclassical economics is essentially embodied in Dynamic Stochastic General Equilibrium (DSGE) models, the first of which was developed by Finn Kydland and Edward Prescott, co-recipients of the 2004 Nobel Prize in Economics. DSGEs were created to give a micro foundation to macroeconomics following the Lucas critique discussed above. In itself sensible, anchoring macro behavior to micro-behavior, that is, explaining macroeconomics in terms of the behavior of individuals (or agents), is a challenging scientific endeavor. Sensible because the evolution of economic quantities ultimately depends on the decisions made by individuals using their knowledge but also subject to emotions. Challenging, however, because (1) rationalizing the behavior of individuals is difficult, perhaps impossible, and (2) as discussed above, it might not be possible to represent mathematically (and eventually to compute) the aggregate behavior of a complex system in terms of the behavior of its components.

These considerations were ignored in the subsequent development of economic and finance theory. Instead of looking with scientific humility at the complexity of the problem, an alternative idealized model was created. Instead of developing as a science, mainstream economics developed as highly sophisticated mathematical models of an idealized economy. DSGEs are based on three key assumptions: (1) agents form rational expectations, (2) agents optimize, that is, they maximize a utility function, and (3) markets are in equilibrium.

From a scientific point of view, the idealizations of neoclassical economics are unrealistic. No real agent can be considered a rational-expectations agent: real agents do not have perfect knowledge of future expectations. The idea of rational expectations was put forward by Muth [ 16 ] who argued that, on average, economic agents make correct forecasts for the future—clearly a non-verifiable statement: we do not know the forecasts made by individual agents and therefore cannot verify if their mean is correct. There is little conceptual basis in arguing that, on average, people make the right forecasts of variables subject to complex behavior. It is obvious that, individually, we are uncertain about the future; we do not even know what choices we will have to make in the future.

There is one instance where average expectations and reality might converge, at least temporarily. This occurs when expectations change the facts themselves, as happens with investment decision-making where opinions lead to decisions that confirm the opinions themselves. This is referred to as “self-reflectivity.” For example, if on average investors believe that the price of a given stock will go up, they will invest in that stock and its price will indeed go up, thereby confirming the average forecast. Investment opinions become self-fulfilling prophecies. However, while investment opinions can change the price of a stock, they cannot change the fundamentals related to that stock. If investment opinions were wrong because based on a wrong evaluation of fundamentals, at a certain point opinions, and subsequently the price, will change.

Forecasts are not the only problem with DSGEs. Not only do we not know individual forecasts, we do not know how agents make decisions. The theoretical understanding of the decision-making process is part of the global problem of representing and predicting behavior. As observed above, individual behavior is the behavior of a complex system (in this case, the human being) and might not be predictable, not even theoretically. Let's refine this conclusion: We might find that, at some level of aggregation, human behavior is indeed predictable—at least probabilistically.

One area of behavior that has been mathematically modeled is the process of rational decision-making. Rational decision-making is a process of making coherent decisions. Decisions are coherent if they satisfy a number of theoretical axioms such as: if choice A is preferred to choice B and choice B is preferred to choice C, then choice A must be preferred to choice C. Coherent decisions can be mathematically modeled by a utility function, that is, a function defined on every choice such that choice A is preferred to choice B if the utility of A is higher than the utility of B. Utility is purely formal. It is not unique: there are infinitely many utility functions that correspond to the same ordering of decisions.
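As a small illustration of this formal, non-unique character of utility (the specific choices and numerical values below are hypothetical), any strictly increasing transformation of a utility function induces exactly the same preference ordering, including transitivity:

```python
# Two utility functions that induce the same preference ordering over choices.
choices = {"A": 3.0, "B": 2.0, "C": 1.0}          # hypothetical utility levels
u1 = lambda c: choices[c]                          # one utility function
u2 = lambda c: 10 * choices[c] ** 3 + 5            # a strictly increasing transformation

def preferred(u, x, y):
    """Return the choice with the higher utility under utility function u."""
    return x if u(x) >= u(y) else y

# Both utilities rank the choices identically: A over B, B over C, and hence A over C.
assert preferred(u1, "A", "B") == preferred(u2, "A", "B") == "A"
assert preferred(u1, "B", "C") == preferred(u2, "B", "C") == "B"
assert preferred(u1, "A", "C") == preferred(u2, "A", "C") == "A"   # transitivity
```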

The idea underlying the theory of decisions and utility functions is that the complexity of human behavior, the eventual free will that might characterize the decision-making process, disappears when we consider simple business decisions such as investment and consumption. That is, humans might be very complex systems but when it comes to questions such as investments, they all behave in the same formal way, i.e., they all maximize the utility function.

There are several difficulties with representing agent decision-making as utility maximization. First, real agents are subject to many influences and mutual interactions that distort their decisions. Numerous studies in empirical finance have shown that people do not behave according to the precepts of rational decision-making. Although DSGEs can be considered a major step forward in economics, the models are, in fact, mere intellectual constructions of idealized economies. Even assuming that utility maximization does apply, there is no way to estimate the utility function(s) of each agent. DSGEs do not describe real agents, nor do they describe how expectations are formed. Real agents might indeed make decisions based on past data from which they make forecasts; they do not make decisions based on true future expectations. Ultimately, a DSGE model in its original form cannot be considered scientific.

Because creating models with a “realistic” number of agents (whatever that number might be) would be practically impossible, agents are generally collapsed into a single representative agent by aggregating utility functions. However, as shown by the Sonnenschein-Mantel-Debreu theorem [ 17 ], collapsing agents into a single representative agent does not preserve the conditions that lead to equilibrium. Despite this well-known theoretical result, models used by central banks and other organizations represent economic agents with a single aggregate utility functional.

Note that the representative agent is already a major departure from the original objective of Lucas, Kidland, and Prescott to give a micro foundation to macroeconomics. The aggregate utility functional is obviously not observable. It is generally assumed that the utility functional has a convenient mathematical formulation. See for example, Smets and Wouters [ 18 ] for a description of one such model.

There is nothing related to the true microstructure of the market in assuming a global simple utility function for an entire economy. In addition, in practice DSGE models assume simple processes, such as AutoRegressive processes, to make forecasts of quantities, as, for example, in Smets and Wouters [ 18 ] cited above.

What remains of the original formulation of the DSGE is the use of Euler equations and equilibrium conditions. But this is only a mathematical formalism to forecast future quantities. Ultimately, in practice, DSGE models are models where a simple utility functional is maximized under equilibrium conditions, using additional ad hoc equations to represent, for example, production.

Clearly, in this formulation DSGEs are abstract models of an economy without any descriptive power. Real agents do not appear. Even the forward-looking character of rational expectations is lost because these models obviously use only past data which are fed to algorithms that include terms such as utility functions that are assumed, not observed.

How useful are these models? Empirical validation is limited. Being equilibrium models, DSGEs cannot predict phenomena such as boom-bust business cycles. The most cogent illustration has been their inability to predict recent stock market crashes and the ensuing economic downturns.

Econophysics and Econometrics

In Section The Epistemological Foundations of Economics, we discussed the epistemological issues associated with economics; in Section What Is the Cognitive Value of Neoclassical Economics, Neoclassical Finance? we critiqued mainstream economics, concluding that it is not an empirical scientific theory. In this and the following section we discuss new methods, ideas, and results that are intended to give economics a scientific foundation as an empirical science. We will begin with a discussion of econophysics and econometrics.

The term Econophysics was coined in 1995 by the physicist Eugene Stanley. Econophysics is an interdisciplinary research effort that combines methods from physics and economics. In particular, it applies techniques from statistical physics and non-linear dynamics to the study of economic data and it does so without the pretense of any a priori knowledge of economic phenomena.

Econophysics obviously overlaps the more traditional discipline of econometrics. Indeed, it is difficult to separate the two in any meaningful way. Econophysics also overlaps economics based on artificial markets formed by many interacting agents. Perhaps a distinguishing feature of econophysics is its interdisciplinarity, though one can reasonably argue that any quantitative modeling of financial or economic phenomena shares techniques with other disciplines. Another distinguishing feature is its search for universal laws; econometrics is more opportunistic. However, these distinctions are objectively weak. Universality in economics is questionable and econometrics uses methods developed in pure mathematics.

To date, econophysics has focused on analyzing financial markets. The reason is obvious: financial markets generate huge quantities of data. The availability of high-frequency data and ultra-high-frequency data (i.e., tick-by-tick data) has facilitated the use of the methods of physics. For a survey of Econophysics, see in particular Lux [ 19 ] and Chakraborti et al. [ 20 ]; see Gallegati et al. [ 21 ] for a critique of econophysics from inside.

The main result obtained to date by econophysics is the analysis and explanation of inverse power law distributions empirically found in many economic and financial phenomena. Power laws have been known and used for more than a century, starting with the celebrated Pareto law of income distribution. Power law distributions were proposed in finance in the 1950s, for example by Mandelbrot [ 22 ]. More recently, econophysics has performed a systematic scientific study of power laws and their possible explanations.

Time series or cross sectional data characterized by inverse power law distributions have special characteristics that are important for economic theory as well as for practical applications such as investment management. Inverse power laws characterize phenomena such that very large events are not negligibly rare. The effect is that individual events, or individual agents, become very important.

Diversification, which is a pillar of classical finance and investment management, becomes difficult or nearly impossible if distributions follow power laws. Averages lose importance as the dynamics of phenomena is dominated by tail events. Given their importance in explaining economic and financial phenomena, we will next briefly discuss power laws.

Let's now look at the mathematical formulation of distributions characterized by power laws and consider some examples of how they have improved our understanding of economic and financial phenomena.

The tails of the distribution of a random variable r follow an inverse power law if the probability of the tail region decays hyperbolically:

P(|r| > x) ∝ x^(−α), for large x.
The properties of the distribution critically depend on the magnitude of the exponent α. Values α < 2 characterize Levy distributions with infinite variance and infinite mean if α < 1; values α > 2 characterize distributions with finite mean and variance.
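A short derivation sketch (standard material, not from the original text) shows why the exponent α governs which moments exist: if the tail probability decays as x^(−α), the density decays as x^(−(α+1)), and the k-th moment is finite only when the corresponding tail integral converges.

```latex
% Tail P(|r| > x) \propto x^{-\alpha} implies a density f(x) \propto x^{-(\alpha+1)} for large x, so
\mathbb{E}\!\left[|r|^{k}\right] \;\sim\; \int_{x_0}^{\infty} x^{k}\, x^{-(\alpha+1)}\, dx
  \;=\; \int_{x_0}^{\infty} x^{\,k-\alpha-1}\, dx ,
% which is finite only for k < \alpha: the mean requires \alpha > 1, the variance \alpha > 2.
```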

Consider financial returns. Most studies place the value of α for financial returns at around 3 [ 19 ]. This finding is important because values α < 2 would imply invariance of the distribution, and therefore of the exponent, with respect to summation of variables. That is, the sum of returns would have the same exponent as the summands. This fact would rule out the possibility of any diversification and would imply that returns at any time horizon have the same distribution. Instead, values of α at around 3 imply that variables become normal after temporal aggregation over sufficiently long time horizons. This is indeed what has been empirically found: returns become approximately normal over periods of 1 month or more.
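One common way such a tail exponent is estimated in practice is the Hill estimator applied to the largest observations. The sketch below is a minimal illustration; the synthetic Pareto sample (with true exponent 3) and the cutoff k are illustrative assumptions, not a recommendation.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimate of the tail exponent alpha from the k largest observations."""
    x = np.sort(np.abs(x))[::-1]          # order statistics, largest first
    tail = x[:k]                          # the k largest values
    x_k = x[k]                            # threshold: the (k+1)-th largest value
    return 1.0 / np.mean(np.log(tail / x_k))

# Synthetic Pareto-tailed data with true alpha = 3, mimicking return magnitudes.
rng = np.random.default_rng(1)
sample = rng.pareto(3.0, size=100_000) + 1.0
print(hill_estimator(sample, k=1000))     # should be close to 3
```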

Power laws have also been found in the autocorrelation of volatility. In general the autocorrelation of returns is close to zero. However, the autocorrelation of volatility (measured by the autocorrelation of the absolute value of returns, or of squared returns) decays as an inverse power law:

corr(|r_t|, |r_{t+τ}|) ∝ τ^(−β).
The exponent β found empirically is typically around 0.3. Such a small exponent implies long-term dependence of volatility.
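The decay exponent can be estimated from data with a log-log regression of the sample autocorrelation of absolute returns on the lag. The sketch below uses a simulated GARCH(1,1) series only as a stand-in for real returns; the GARCH parameters and the lag range are illustrative assumptions, and only on actual market data would one expect a fitted exponent near the 0.3 cited above.

```python
import numpy as np

def autocorr_abs(returns, lags):
    """Sample autocorrelation of |returns| at the given lags."""
    a = np.abs(returns) - np.abs(returns).mean()
    denom = (a * a).mean()
    return np.array([(a[:-L] * a[L:]).mean() / denom for L in lags])

def fit_decay_exponent(returns, lags):
    """Fit corr(tau) ~ tau**(-beta) by least squares on a log-log scale."""
    c = autocorr_abs(returns, lags)
    mask = c > 0                                   # keep lags with positive correlation
    slope, _ = np.polyfit(np.log(np.asarray(lags)[mask]), np.log(c[mask]), 1)
    return -slope                                  # beta

# Volatility-clustered toy data from a GARCH(1,1); real return series are the intended input.
rng = np.random.default_rng(2)
n, omega, alpha, beta = 50_000, 1e-6, 0.09, 0.90
r, var = np.empty(n), omega / (1 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(var) * rng.standard_normal()
    var = omega + alpha * r[t] ** 2 + beta * var   # variance for the next period

print(fit_decay_exponent(r, lags=range(1, 200)))
```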

Power laws have been found in other phenomena. Trading volume decays as a power law; power laws have also been found in the volume and number of trades per time unit in high-frequency data; other empirical regularities have been observed, especially for high-frequency data.

As for economic phenomena more generally, power laws have been observed in the distribution of loans, in the market capitalization of firms, and in income distribution (the original Pareto law). Power laws have also been observed in non-financial phenomena such as the distribution of the size of cities 4 .

Power law distributions of returns and of volatility appear to be a universal feature in all liquid markets. But Gallegati et al. [ 21 ] suggest that the supposed universality of these empirical findings (which do not depend on theory) is subject to much uncertainty. In addition, there is no theoretical reason to justify the universality of these laws. It must be said that the level of validation of each finding is not extraordinarily high. For example, distributions other than power laws have been proposed, including stretched exponentials and tempered distributions. There is no consensus on any of these findings. As observed in Section The Epistemological Foundations of Economics, the empirical content of individual findings is low; it is difficult to choose between the competing explanations.

Attempts have been made to offer theoretical explanations for the findings of power laws. Econophysicists have tried to capture the essential mechanisms that generate power laws. For example, it has been suggested that power laws in one variable naturally lead to power laws in other variables. In particular, power law distributions of the size of market capitalization or similar measures of the weight of investors explain most other financial power laws [ 23 ]. The question is: How were the original power laws generated?

Two explanations have been proposed. The first is based on non-linear dynamic models. Many, perhaps most, non-linear models create unconditional power law distributions. For example, the ARCH/GARCH models create unconditional power laws of the distribution of returns though the exponents do not fit empirical data. The Lux-Marchesi dynamic model of trading [ 24 ] was the first model able to explain power laws of both returns and autocorrelation time. Many other dynamic models have since been proposed.

A competing explanation is based on the properties of percolation structures and random graph theory, as originally proposed by Cont and Bouchaud [ 25 ]. When the probability of interaction between adjacent nodes approaches a critical value that depends on the topology of the percolation structure or the random graph, the distribution of connected components follows a power law. Assuming financial agents can be represented by the nodes of a random graph, demand created by aggregation produces a fat-tailed distribution of returns.

Random Matrices

Econophysics has also obtained important results in the analysis of large covariance and correlation matrices, separating noise from information. For example, financial time series of returns are weakly autocorrelated in time but strongly correlated across assets. Correlation matrices play a fundamental role in portfolio management and many other financial applications.

However, estimating correlation and covariance matrices for large markets is problematic due to the fact that the number of parameters to estimate (i.e., the entries of the covariance matrix) grows with the square of the number of time series, while the number of available data is only proportional to the number of time series. In practice, the empirical estimator of a large covariance matrix is very noisy and cannot be used. Borrowing from physics, econophysicists suggest a solution based on the theory of random matrices which has been applied to solve problems in quantum physics.

The basic idea of random matrices is the following. Consider a sample of N time series of length T . Suppose the series are formed by independent and identically distributed zero-mean normal variables. In the limit of N and T going to infinity with a constant ratio Q = T/N , the distribution of the eigenvalues of the sample covariance matrix of these series was derived by Marchenko and Pastur [ 26 ]. Though the law itself is a simple algebraic function, its derivation is complicated. The remarkable finding is that there is a universal theoretical distribution of eigenvalues, supported on an interval that depends only on Q .

This fact suggested a method for identifying the number of meaningful eigenvalues of a large covariance matrix: only those eigenvalues that fall outside the support of the Marchenko-Pastur law are significant (see Plerou et al. [ 27 ]). A covariance matrix can therefore be made robust by computing a Principal Components Analysis (PCA) and using only those principal components corresponding to meaningful eigenvalues. Random matrix theory has been generalized to include correlated and autocorrelated time series and non-normal distributions (see Burda et al. [ 28 ]).
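A minimal sketch of this recipe, assuming a (T × N) matrix of returns and using one common convention (replacing the "noise" eigenvalues inside the Marchenko-Pastur bulk by their average while keeping those above the upper edge), might look as follows. The pure-noise example at the end only checks that almost all eigenvalues of an i.i.d. sample fall inside the predicted interval; the shapes and seed are illustrative.

```python
import numpy as np

def mp_bounds(T, N, sigma2=1.0):
    """Marchenko-Pastur support [lambda_minus, lambda_plus] for Q = T/N and variance sigma2."""
    Q = T / N
    return sigma2 * (1 - np.sqrt(1 / Q)) ** 2, sigma2 * (1 + np.sqrt(1 / Q)) ** 2

def clean_correlation(returns):
    """Keep eigenvalues above the Marchenko-Pastur upper edge; flatten the rest; rebuild the matrix."""
    T, N = returns.shape
    C = np.corrcoef(returns, rowvar=False)
    eigval, eigvec = np.linalg.eigh(C)
    _, lam_plus = mp_bounds(T, N)
    noise = eigval <= lam_plus                     # eigenvalues inside the MP bulk
    eigval_clean = eigval.copy()
    eigval_clean[noise] = eigval[noise].mean()     # replace bulk eigenvalues by their average
    C_clean = eigvec @ np.diag(eigval_clean) @ eigvec.T
    np.fill_diagonal(C_clean, 1.0)
    return C_clean

# Pure-noise example: nearly all eigenvalues should fall inside the MP bulk.
rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 200))               # T = 2000 observations, N = 200 series
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
lo, hi = mp_bounds(2000, 200)
print((eigvals < lo).sum() + (eigvals > hi).sum(), "eigenvalues outside the MP bulk")
C_clean = clean_correlation(X)                     # denoised correlation matrix
```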

Econometrics and VAR Models

As mentioned above, the literature on econophysics overlaps with the econometric literature. Econometricians have developed methods to capture properties of time series and model their evolution. Stationarity, integration and cointegration, and the shifting of regimes are properties and models that come from the science of econometrics.

The study of time series has opened a new direction in the study of economics with the use of Vector Auto Regressive (VAR) models. VAR models were proposed by Christopher Sims in the 1980s (for his work, Sims shared the 2011 Nobel Prize in Economics with Thomas Sargent). Given a vector of variables x_t, a VAR model of order p represents the dynamics of the variables as the regression of each variable on lagged values of all variables:

x_t = c + A_1 x_{t−1} + A_2 x_{t−2} + … + A_p x_{t−p} + ε_t,

where c is a vector of constants, the A_i are coefficient matrices, and ε_t is a vector of error terms.
The use of VAR models in economics is typically associated with dimensionality reduction techniques. Since tens or even hundreds of economic time series are now available, PCA or similar techniques are used to reduce the number of variables so that the VAR parameters can be estimated.
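As a minimal sketch of what estimating such a model involves, the code below fits a two-variable VAR(1) by ordinary least squares on simulated data; the coefficient matrix, noise level, and sample size are illustrative assumptions, not taken from the text.

```python
import numpy as np

def fit_var1(X):
    """OLS estimate of x_t = c + A x_{t-1} + e_t for a (T, N) data matrix X."""
    Y = X[1:]                                         # left-hand side: x_t
    Z = np.hstack([np.ones((len(Y), 1)), X[:-1]])     # regressors: constant and x_{t-1}
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)         # (N+1, N) coefficient matrix
    c, A = B[0], B[1:].T
    return c, A

# Simulate a stable two-variable VAR(1) and recover its coefficients.
rng = np.random.default_rng(4)
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.3]])
X = np.zeros((5000, 2))
for t in range(1, 5000):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(2)

c_hat, A_hat = fit_var1(X)
print(np.round(A_hat, 2))                             # should be close to A_true
```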

What are the similarities and differences between econometrics and econophysics? Both disciplines try to find mathematical models of economic and/or financial variables. One difference between the two disciplines is perhaps the fact that econophysics attempts to find universal phenomena shared by every market. Econometricians, on the other hand, develop models that can be applied to individual time series without considering their universality. Hence econometricians focus on methods of statistical testing because the applicability of models has to be tested in each case. This distinction might prove to be unimportant as there is no guarantee that we can find universal laws. Thus far, no model has been able to capture all the features of financial time series: Because each model requires an independent statistical validation, the empirical content is weak.

New Directions in Economics

We will now explore some new directions in economic theory. Let's start by noting that we do not have a reasonably well-developed, empirically validated theory of economics. Perhaps the most developed areas are the analysis of instabilities and economic simulation. The main lines of research, however, are clear and represent a departure from neoclassical theory. They can be summarized thus:

1. Social values and objectives must be separated from economic theory, that is, we have to separate political economics from pure economic theory. Economies are systems in continuous evolution. This fact is not appreciated in neoclassical economics which considers only aggregated quantities.

2. The output of economies is primarily the creation of order and complexity, both at the level of products and social structures. Again, this fact is ignored by neoclassical economics, which takes a purely quantitative approach without considering changes in the quality of the output or the power structure of economies.

3. Economies are never in a state of equilibrium, but are subject to intrinsic instabilities.

4. Economic theory needs to consider economies as physical systems in a physical environment; it therefore needs to take into consideration environmental constraints.

Let's now discuss how new directions in economic theory are addressing the above.

Economics and Political Economics

As mentioned above, economic theory should be clearly separated from political economics. Economies are human artifacts engineered to serve a number of purposes. Most economic principles are not laws of nature but reflect social organization. As in any engineering enterprise, the engineering objectives should be kept separate from the engineering itself and the underlying engineering principles and laws. Determining the objectives is the realm of political economics; engineering the objectives is the realm of economic theory.

One might object that there is a contradiction between the notion of economies as engineered artifacts and the notion of economies as evolving systems subject to evolutionary rules. This contradiction is akin to the contradiction between studying human behavior as a mechanistic process and simultaneously studying how to improve ourselves.

We will not try to solve this contradiction at a fundamental level. Economies are systems whose evolution is subject to uncertainty. Of course the decisions we make about engineering our economies are part of the evolutionary process. Pragmatically, if not philosophically, it makes sense to render our objectives explicit.

For example, Acemoglu, Robinson, and Verdier wrote an entry on VOX, CEPR's Policy Portal ( http://www.voxeu.org/article/cuddly-or-cut-throat-capitalism-choosing-models-globalised-world ), noting that we have the option to choose between different forms of capitalism (see, for example, Hall and Soskice [ 29 ]), in particular, between what they call “cuddly capitalism” or “cut-throat capitalism.” It makes sense, pragmatically, to debate what type of system, in this case of capitalism, we want. An evolutionary approach, on the other hand, would study what decisions were or will be made.

The separation between objectives and theory is not always made clear, especially in light of political considerations. Actually, there should be multiple economic theories corresponding to different models of economic organization. Currently, however, the mainstream model of free markets is the dominant model; any other model is considered either an imperfection of the free-market competitive model or a failure, for example Soviet socialism. This is neither a good scientific attitude nor a good engineering approach. The design objectives of our economies should come first, then theory should provide the tools to implement the objectives.

New economic thinking is partially addressing this need. In the aftermath of the 2007–2009 financial crisis and the subsequent questioning of mainstream economics, some economists are tackling socially-oriented issues, in particular, the role and functioning of the banking system, the effect of the so-called austerity measures, and the social and economic implications of income and wealth inequality.

There is a strain of economic literature, albeit small, known as meta-economics, that is formally concerned with the separation of the objectives and the theory in economics. The term metaeconomics was first proposed by Karl Menger, an Austrian mathematician and member of the Vienna Circle 5 . Influenced by David Hilbert's program to give a rigorous foundation to mathematics, Menger proposed metaeconomics as a theory of the logical structure of economics.

The term metaeconomics was later used by Schumacher [ 30 ] to give a social and ethical foundation to economics, and is now used in this sense by behavioral economists. Metaeconomics, of course, runs contrary to mainstream economics which adheres to the dogma of optimality and excludes any higher-level discussion of objectives.

Economies as Complex Evolving Systems

What are the characteristics of evolutionary complex systems such as our modern economies? An introduction can be found in Beinhocker [ 31 ]. Associated with the Institute for New Economic Thinking (INET) 6 , Beinhocker attributes to Nicholas Georgescu-Roegen many of the new ideas in economics that are now receiving greater attention.

Georgescu-Roegen [ 32 ] distinguishes two types of evolution: slow biological evolution and the fast cultural evolution typical of modern economies. Hence the term bioeconomics. The entropy accounting of the second law of thermodynamics implies that any local increase of order is not without cost: it requires energy and, in the case of modern economies, produces waste and pollution. Georgescu-Roegen argued that because classical economics does not take into account the basic laws of entropy, it is fundamentally flawed.

When Georgescu-Roegen first argued his thesis in the early 1970s, economists did not bother to respond. Pollution and depletion of natural resources were not on any academic agenda. But if economics is to become a scientific endeavor, it must consider the entropy accounting of production. While now much discussed in public debate, themes such as energy sources, sustainability, and pollution are still absent from the considerations of mainstream economics.

It should be clear that these issues cannot be solved with a mathematical algorithm. As a society, we are far from being able, or willing, to make a reasonable assessment of the entropy balance of our activities, economic and other. But a science of economics should at least be able to estimate (perhaps urgently) the time scales of these processes.

Economic growth and wealth creation are therefore based on creating order and complexity. Understanding growth, and eventually business cycles and instabilities, calls for an understanding of how complexity evolves—a more difficult task than understanding the numerical growth of output.

Older growth theories were based on simple production functions and population growth. Assuming that an economy produces a kind of composite good with an appropriate production function, one can demonstrate that, by setting aside part of its output as capital at each time step, the economy increases its production capabilities and exhibits exponential growth. But this is a naïve view of the economy. An increase of complexity is the key ingredient of economic growth.
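
As a toy illustration of the mechanical growth logic criticized here, the following sketch (all parameters are invented) shows how a composite-good, AK-style production function with a fixed saving rate generates exponential growth with no reference to complexity or quality.

```python
# Toy illustration of the growth logic described above: a composite good,
# a fixed saving rate, and an AK-style production function give exponential
# growth mechanically, with no role for changes in complexity or quality.
A = 0.5        # productivity of the composite technology
s = 0.25       # fraction of output set aside as new capital each period
delta = 0.05   # depreciation rate
K = 10.0       # initial capital stock

for year in range(1, 41):
    Y = A * K                        # output of the composite good
    K = (1 - delta) * K + s * Y      # capital accumulation
    if year % 10 == 0:
        print(f"year {year:2d}: output = {Y:10.2f}")
# Output grows at the constant net rate s*A - delta per period, i.e., exponentially.
```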

The study of economic complexity is not new. At the beginning of the twentieth century, the Austrian School of Economics introduced the idea, typical of complex systems, that order in market systems is a spontaneous, emerging property. As mentioned above, this idea was already present in Adam Smith's invisible hand that coordinates markets. The philosopher-economist Friedrich Hayek devoted much theoretical thinking to complexity and its role in economics.

More recently, research on economies as complex systems started in the 1980s at The Santa Fe Institute (Santa Fe, New Mexico). There, under the direction of the economist W. Brian Arthur, researchers developed one of the first artificial economies. Some of the research done at the Santa Fe Institute is presented in three books titled The Economy as an Evolving Complex System , published by The Santa Fe Institute.

At the Massachusetts Institute of Technology (MIT), the Observatory on Economic Complexity gathers and publishes data on international trade and computes various measures of economic complexity, including the Economic Complexity Index (ECI) developed by Cesar Hidalgo and Ricardo Hausmann. Complexity economics is now a subject of research at many universities and economic research centers.

How can systems increase their complexity spontaneously, thereby evolving? Lessons from biology might help. Chaitin [ 12 ] proposed a mathematical theory based on the theory of algorithmic complexity that he developed to explain Darwinian evolution. Chaitin's work created a whole new field of study—metabiology—though his results are not universally accepted as proof that Darwinian evolution works in creating complexity.

While no consensus exists, and no existing theory is applicable to economics, it is nevertheless necessary to understand how complexity is created if we want to understand how economies grow or eventually fail to grow.

Assuming the role of complexity in creating economic growth and wealth, how do we compare the complexity of objects as different as pasta, washing machines and computers? And how do we measure complexity? While complexity can be measured by a number of mathematical measures, such as those of the algorithmic theory of complexity, there is no meaningful way to aggregate these measures to produce a measure of the aggregate output.

Mainstream economics uses price—the market value of output—to measure the aggregate output. But there is a traditional debate on value, centered on the question of whether price is a measure of value. A Marxist economist would argue that value is the amount of labor necessary to produce that output. We will stay within market economies and use price to measure aggregate output. The next section discusses the issues surrounding aggregation by price.

The Myth of Real Output

Aggregating so many (often rapidly changing) products 7 quantitatively by physical standards is an impossible task. We can categorize products and services, such as cars, computers, and medical services, but what quantities do we associate with them?

Economics has a conceptually simple answer: products are aggregated in terms of price, the market price in free-market economies as mentioned above, or centrally planned prices in planned economies. The total value of goods produced in a year is called the nominal Gross National Product (GNP). But there are two major problems with this.

First, in practice, aggregation is unreliable: Not all products and services are priced; many products and services are simply exchanged or self-produced; black and illegal economies do exist and are not negligible; data collection can be faulty. Therefore, any number which represents the aggregate price of goods exchanged has to be considered uncertain and subject to error.

Second, prices are subject to change. If we compare prices over long periods of time, changes in prices can be macroscopic. For example, the price of an average car in the USA increased by an order of magnitude, from a few thousand dollars in the 1950s to tens of thousands of dollars in the 2010s. Certainly cars have changed over the years, adding features such as air conditioning, but the amount of money in circulation has also changed.

The important question to address is whether physical growth corresponds to the growth of nominal GNP. The classical answer is no, as the level of prices changes. But there is a contradiction here: to measure an increase in the price level, we should be able to measure physical growth and compare it with the growth of nominal GNP. But there is no way to measure physical growth realistically; any parameter is arbitrary.

The usual solution to this problem is to consider the price change (increase or decrease) of a panel of goods considered to be representative of the economy. The nominal GNP is divided by the price index to produce what is called real GNP. This process has two important limitations. First, the panel of representative goods does not represent a constant fraction of the total economy nor does it represent whole sectors, such as luxury products or military expenditures. Second, the panel of representative goods is not constant as products change, sometimes in very significant ways.

Adopting an operational point of view, the meaning of real GNP is defined by how it is constructed: it is the nominal GNP deflated by a price index based on some average panel of goods. Many similar constructions would be possible, depending on the choice of the panel of representative goods. There is therefore a fundamental arbitrariness in how real GNP is measured. The growth of real GNP represents only one of many different possible concepts of growth. Growth does exist in some intuitive sense, but quantifying it in some precise way is largely arbitrary. Here we are back to the fundamental issue that economies are complex systems.
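
This arbitrariness can be made concrete with a small numerical sketch. All figures below are invented; the point is simply that the same nominal GNP, deflated with two different but equally defensible panels of goods, yields two different "real" growth rates.

```python
# Illustrative sketch of how real GNP depends on the chosen panel of goods.
nominal_gnp = {2010: 1000.0, 2020: 1500.0}

# Prices of two goods in the two years, and two candidate "representative" panels.
prices = {2010: {"food": 10.0, "electronics": 100.0},
          2020: {"food": 15.0, "electronics": 110.0}}
panels = {"food-heavy": {"food": 0.8, "electronics": 0.2},
          "tech-heavy": {"food": 0.2, "electronics": 0.8}}

for name, basket in panels.items():
    # Fixed-basket (Laspeyres-style) price index with 2010 as the base year.
    base = sum(basket[g] * prices[2010][g] for g in basket)
    now = sum(basket[g] * prices[2020][g] for g in basket)
    index = now / base
    real_2020 = nominal_gnp[2020] / index          # 2020 GNP in 2010 prices
    growth = real_2020 / nominal_gnp[2010] - 1
    print(f"{name:10s}: price index = {index:.3f}, implied real growth = {growth:.1%}")
```

With these numbers the food-heavy panel implies roughly 24% real growth and the tech-heavy panel roughly 35%, even though the underlying economy is the same.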

Describing mathematically the evolution of the complexity of an economy is a difficult, perhaps impossible, task. When we aggregate by price, the problem becomes more tractable because there are constraints to financial transactions essentially due to the amount of money in circulation and rules related to the distribution of money to different agents.

But it does not make sense to aggregate by price the output of an entire country. We suggest that it is necessary to model different sectors and understand the flows of money. Some sectors might extend over national boundaries. Capital markets, for example, are truly international (we do not model them as purely national); the activity of transnational corporations can span a multitude of countries. What is required is an understanding of what happens under different rules.

Here we come upon what is probably the fundamental problem of economics: the power structure. Who has the power to make decisions? Studying human structures is not like studying the behavior of a ferromagnet. Decisions and knowledge are intertwined in what the investor George Soros has called the reflexivity of economies.

Finance, the Banking System, and Financial Crises

In neoclassical economics, finance is transparent; in real-world economies, this is far from being the case. Real economies produce complexity and evolve in ways that are difficult to understand. Generally speaking, the financial and banking systems allow a smooth evolution of the economic system, providing the money necessary to sustain transactions, thereby enabling the sale and purchase of goods and services. While theoretically providing the money needed to sustain growth, the financial and banking systems might either provide too little money and thereby constrain the economy, or provide too much money and thereby produce inflation, especially asset inflation.

Asset inflation is typically followed by asset deflation, as described by Minsky [ 33 ] in his financial instability hypothesis. Minsky argued that capitalist economies exhibit asset inflations due to the creation of excess money, followed by debt deflations that, because of fragile financial systems, can end in financial and economic crises. Since Minsky first formulated his financial instability hypothesis, much has changed and much additional analysis has been done.

First, it has become clear that the process of money creation is endogenous, whether money is created by central banks or by commercial banks. What has become apparent, especially since the 2007–2009 financial crisis, is that central banks can create money greatly in excess of economic growth and that this money might not flow uniformly throughout the economy but follow special, segregated paths (or flows), eventually remaining in the financial system, thereby producing asset inflation but little to no inflation in the real economy.

Another important change has been globalization, with the free flow of goods and capital in and out of countries, depending on where capital earns the highest returns or results in the lowest tax bill. As local economies lost importance, countries have been scrambling to transform themselves to compete with low-cost/low-tax countries. Some Western economies have been successful in specializing in added-value sectors such as financial services. These countries have experienced huge inflows of capital from all over the world, creating an additional push toward asset inflation. In recent years, indexes such as the S&P 500 have grown at multiples of the nominal growth of their reference economies.

But within a few decades of the beginning of globalization, some of those economies that produced low-cost manufactured goods have captured the entire production cycle from design, engineering, manufacturing, and servicing. Unable to compete, Western economies started an unprecedented process of printing money on a large scale with, as a result, the recurrence of financial crashes followed by periods of unsustainable financial growth.

Studying such crises is a major objective of economics. ETH Zurich's Didier Sornette, who started his career as a physicist specialized in forecasting rare phenomena such as earthquakes, has made a mathematical analysis of financial crises using non-linear dynamics and following Minsky's financial instability hypothesis. Together with his colleague Peter Cauwels, Sornette hypothesizes [ 34 ] that financial crises are critical points in a process of superexponential growth of the economy.

Artificial Economies

As discussed above, the mathematical analysis of complex systems is difficult and might indeed be an impossible task. To overcome this problem, an alternative route is the development of agent-based artificial economies. Artificial economies are computer programs that simulate economies. Agent-based artificial economies simulate real economies by creating sets of artificial agents whose behavior resembles the behavior of real agents.

The advantage of artificial economies is that they can be studied almost empirically without the need to perform mathematical analysis, which can be extremely difficult or impossible. The disadvantage is that they are engineered systems whose behavior depends on the engineering parameters. The risk is that one finds exactly what one wants to find. The development of artificial markets with zero-intelligence agents was intended to overcome this problem, studying those market properties that depend only on the trading mechanism and not on agent characteristics.
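
A minimal sketch of a zero-intelligence market in this spirit, loosely inspired by that line of work but not taken from any of the cited papers, could look as follows; all parameters are illustrative.

```python
import random

# Toy zero-intelligence market: agents post random bids and asks with no
# strategy, subject only to the constraint of not trading at a loss, so any
# regularities come from the trading mechanism rather than agent intelligence.
random.seed(42)

def zero_intelligence_session(n_traders=100, n_rounds=200):
    """Each round a random buyer and seller post random quotes; trade if they cross."""
    buyer_values = [random.uniform(0, 100) for _ in range(n_traders)]   # private valuations
    seller_costs = [random.uniform(0, 100) for _ in range(n_traders)]   # private costs
    prices = []
    for _ in range(n_rounds):
        b = random.randrange(n_traders)
        s = random.randrange(n_traders)
        bid = random.uniform(0, buyer_values[b])      # never bid above own value
        ask = random.uniform(seller_costs[s], 100)    # never ask below own cost
        if bid >= ask:
            prices.append((bid + ask) / 2)            # trade at the midpoint
    return prices

prices = zero_intelligence_session()
if prices:
    print(f"{len(prices)} trades, mean price = {sum(prices) / len(prices):.1f}")
```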

There is by now a considerable literature on the development of artificial economies and the design of agents. See Chakraborti et al. [ 20 ] for a recent review. Leigh Tesfatsion at Iowa State University keeps a site which provides a wealth of information on agent-based systems: http://www2.econ.iastate.edu/tesfatsi/ace.htm .

Evolution of Neoclassical Economics

Among neoclassical economists, efforts are underway to bring the discipline closer to an empirical science. One of them is David Colander, who has argued that the term "mainstream" economics does not reflect current reality because of the many ramifications of mainstream theories.

Some of the adjustments underway are new versions of DSGE theories which now include a banking system and deviations from perfect rationality as well as the question of liquidity. As observed above, DSGE models are a sort of complex system made up of many intelligent agents. It is therefore possible, in principle, to view complex systems as an evolution of DSGEs. However, most basic concepts of DSGEs, and in particular equilibrium, rational expectations, and the lack of interaction between agents, have to be deeply modified. Should DSGEs evolve as modern complex systems, the new generations of models will be very different from the current generation of DSGEs.

Conclusions

In this paper we have explored the status of economics as an empirical science. We first analyzed the epistemology of economics, remarking on the necessity to carefully analyze what we consider observations (e.g., volatility, inflation) and to pose questions that can be reasonably answered based on observations.

In physics, observables are processes obtained through the theory itself, using complex instruments. Physical theory responds to empirical tests in toto ; individual statements have little empirical content. In economics, given the lack of a comprehensive theory, observations are elementary observations, such as prices, and theoretical terms are related to observables in a direct way, without cross validation. This weakens the empirical content of today's prevailing economic theory.

We next critiqued neoclassical economics, concluding that it is not an empirical science but rather the study of an artificial idealized construction with little connection to real-world economies. This conclusion is based on the fact that neoclassical economics is embodied in DSGE models which are only weakly related to empirical reality.

We successively explored new ideas that hold the promise of developing economics more along the lines of an empirical science. Econophysics, an interdisciplinary effort to place economics on a sure scientific grounding, has produced a number of results related to the analysis of financial time series, in particular the study of inverse power laws. But while econophysics has produced a number of models, it has yet to propose a new global economic theory.

Other research efforts are centered on looking at economies as complex evolutionary systems that produce order and increasing complexity. Environmental constraints due to the accounting of energy and entropy are beginning to gain attention in some circles. As with econophysics, the study of the economy as a complex system has not yet produced a comprehensive theory.

The most developed area of new research efforts is the analysis of instabilities, building on Hyman Minsky's financial instability hypothesis. Instabilities are due to interactions between a real productive economy subject to physical constraints, and a financial system whose growth has no physical constraints.

Lastly, efforts are also being made among neoclassical economists to bring their discipline increasingly into the realm of an empirical science, adding, for example, the banking system and boundedly rational behavior to DSGE models.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

1. ^ By “neoclassical economics,” we refer to an economic theory based on the notions of optimization, the efficient market hypothesis, and rational expectations. Among the major proponents of neoclassical economic thinking are Robert Lucas and Eugene Fama, both from the University of Chicago and both recipients of the Nobel Prize in Economics. Because neoclassical economic (and finance) theory is presently the dominating theory, it is also often referred to as “mainstream” theory or “the prevailing” theory. Attempts are being made to address some of the shortfalls of neoclassical economics, such as the consideration of the banking system, money creation and liquidity.

2. ^ For example, in the Special Relativity Theory, the concept of simultaneity of distant events is not an a priori concept but depends on how we observe simultaneity through signals that travel at a finite speed. To determine simultaneity, we perform operations based on sending and receiving signals that travel at finite speed. Given the invariance of the speed of light, these operations make simultaneity dependent on the frame of reference.

3. ^ The term “metaeconomics” is currently used in a different sense. See Section Econophysics and Econometrics below. Here we use metaeconomics in analogy with Chaitin's metabiology.

4. ^ Power laws are ubiquitous in physics where many phenomena, such as the size of ferromagnetic domains, are characterized by power laws.

5. ^ Karl Menger was the son of the economist Carl Menger, the founder of the Austrian School of Economics.

6. ^ The Institute for New Economic Thinking (INET) is a not-for-profit think tank whose purpose is to support academic research and teaching in economics “outside the dominant paradigms of efficient markets and rational expectations.” Founded in 2009 with the financial support of George Soros, INET is a response to the global financial crisis that started in 2007.

7. ^ Beinhocker [ 31 ] estimates that, in the economy of a city like New York, the number of Stock Keeping Units (SKUs), with each SKU corresponding to a different product, is on the order of tens of billions.

1. Fabozzi FJ, Focardi SM, Jonas C. Investment Management: A Science to Teach or an Art to Learn? Charlottesville: CFA Institute Research Foundation (2014).


2. van Orman Quine W. Two Dogmas of Empiricism (1951). Available online at: http://www.ditext.com/quine/quine.html )

3. Bridgman WP. The Logic of Modern Physics (1927). Available online at: https://archive.org/details/logicofmodernphy00brid

4. Hempel CG. Aspects of Scientific Explanation and Other Essays in the Philosophy of Science . New York: Free Press (1970).

5. Feyerabend P. Against Method . 4th ed. London: Verso (1975).

6. Kuhn TS. The Structure of Scientific Revolutions . 3rd ed. Chicago: University of Chicago Press (1962).

7. Popper KR. Conjectures and Refutations: The Growth of Scientific Knowledge. London: Routledge Classics (1963).

8. Friedman M. Essays in Positive Economics . Chicago: Chicago University Press (1953).

9. Black F. Noise. J Finance (1986) 41:529–43.

10. Anderson PW. More is different. Science (1972) 177 :393–6. doi: 10.1126/science.177.4047.393


11. Gu M, Weedbrook C, Perales A, Nielsen MA. More really is different. Phys D (2009) 238 :835–9. doi: 10.1016/j.physd.2008.12.016


12. Chaitin G. Proving Darwin: Making Biology Mathematical. New York: Pantheon Books (2012).

13. von Neumann J. The Computer and the Brain . New Haven; London: Yale University Press (1958).

14. Turing A. Computing machinery and intelligence. Mind (1950) 49 :433–60. doi: 10.1093/mind/LIX.236.433

15. Shiller RJ. Irrational Exuberance. 3rd ed. Princeton: Princeton University Press (2015).

16. Muth JF. Rational expectations and the theory of price movements. Econometrica (1961) 29 :315–35. doi: 10.2307/1909635

17. Mantel R. On the characterization of aggregate excess demand. J Econ Theory (1974) 7 :348–53. doi: 10.1016/0022-0531(74)90100-8

18. Smets F, Wouters R. An estimated stochastic dynamic general equilibrium model of the euro area. In: Working Paper No. 171, European Central Bank Working Paper Series. Frankfurt (2002). doi: 10.2139/ssrn.1691984

19. Lux T. Applications of statistical physics in finance and economics. In: Kiel Working Papers 1425. Kiel: Kiel Institute for the World Economy (2008).

20. Chakraborti A, Muni-Toke I, Patriarca M, Abergel F. Econophysics review: I. empirical facts. Quant Finance (2011) 11 :991–1012. doi: 10.1080/14697688.2010.539248

21. Gallegati M, Keen S, Lux T, Ormerod P. Worrying trends in econophysics. Phys A (2006) 370 :1–6 doi: 10.1016/j.physa.2006.04.029

22. Mandelbrot B. Stable Paretian random functions and the multiplicative variation of income. Econometrica (1961) 29 :517–43. doi: 10.2307/1911802

23. Gabaix X, Gopikrishnan P, Plerou V, Stanley HE. A theory of power-law distributions in financial market fluctuations. Nature (2003) 423 :267–70. doi: 10.1038/nature01624

24. Lux T, Marchesi M. Scaling and criticality in a stochastic multi-agent model of a financial market. Nature (1999) 397 :498–500.

25. Cont R, Bouchaud J-P. Herd behavior and aggregate fluctuations in financial markets. Macroecon Dyn. (2000) 4 :170–96. doi: 10.1017/s1365100500015029

26. Marchenko VA, Pastur LA. Distribution of eigenvalues for some sets of random matrices. Mat Sb (1967) 72:507–36.

27. Plerou V, Gopikrishnan P, Rosenow B, Amaral LAN, Guhr T, Stanley HE. Random matrix approach to cross correlations in financial data. Phys Rev E (2002) 65 :066126-1—18. doi: 10.1103/physreve.65.066126

28. Burda Z, Jurkiewicz J, Nowak MA, Papp G, Zahed I. Levy matrices and financial covariances (2001). arXiv:cond-mat/0103108

29. Hall P, Soskice D. Varieties of Capitalism: The Institutional Foundations of Comparative Advantage . Oxford: Oxford University Press (2001). doi: 10.1093/0199247757.001.0001

30. Schumacher EF. Small is Beautiful: A Study of Economics as Though People Mattered. London: Blond & Briggs Ltd. (1973). 352 p.

31. Beinhocker ED. The Origin of Wealth: The Radical Remaking of Economics and What It Means for Business and Society . Boston: Harvard Business Review Press (2007).

32. Georgescu-Roegen N. The Entropy Law and the Economic Process. Cambridge, MA: Harvard University Press (1971). doi: 10.4159/harvard.9780674281653

33. Minsky HP. Stabilizing an Unstable Economy . New Haven: Yale University Press (1986).

34. Cauwels P, Sornette D. The Illusion of the Perpetual Money Machine. Swiss Finance Institute Research Paper No. 12-40. Geneva (2012). Available online at: http://ssrn.com/abstract=2191509 (Accessed October 27, 2012).

Keywords: economics, empirical science, epistemology of economics, econophysics, complex systems, financial crises

Citation: Focardi SM (2015) Is economics an empirical science? If not, can it become one? Front. Appl. Math. Stat . 1:7. doi: 10.3389/fams.2015.00007

Received: 07 April 2015; Accepted: 29 June 2015; Published: 21 July 2015.


Copyright © 2015 Focardi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sergio M. Focardi, Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, NY 11794, USA, [email protected]


The empirical shift in economics


What’s at stake: Rather than being unified by the application of the common behavioral model of the rational agent, economists increasingly recognize themselves in the careful application of a common empirical toolkit used to tease out causal relationships, creating a premium for papers that mix a clever identification strategy with access to new data.

Economics imperialism in methods

Noah Smith writes that the ground has fundamentally shifted in economics – so much that the whole notion of what "economics" means is undergoing a dramatic change. In the mid-20th century, economics changed from a literary to a mathematical discipline. Now it might be changing from a deductive, philosophical field to an inductive, scientific field. The intricacies of how we imagine the world must work are taking a backseat to the evidence about what is actually happening in the world. Matthew Panhans and John Singleton write that while historians of economics have noted the transition in the character of economic research since the 1970s toward applications, less understood is the shift toward quasi-experimental work.

Matthew Panhans and John Singleton  write that the missionary's Bible is less Mas-Colell and more Mostly Harmless Econometrics. In 1984, George Stigler pondered the “imperialism" of economics. The key evangelists named by Stigler in each mission field, from Ronald Coase and Richard Posner (law) to Robert Fogel (history), Becker (sociology), and James Buchanan (politics), bore University of Chicago connections. Despite the diverse subject matters, what unified the work for Stigler was the application of a common behavioral model. In other words, what made the analyses “economic" was the postulate of rational pursuit of goals. But rather than the application of a behavioral model of purposive goal-seeking, “economic" analysis is increasingly the empirical investigation of causal effects for which the quasi-experimental toolkit is essential.


Nicola Fuchs-Schuendeln and Tarek Alexander Hassan write that, even in macroeconomics, a growing literature relies on natural experiments to establish causal effects. The "natural" in natural experiments indicates that a researcher did not consciously design the episode to be analyzed, but researchers can nevertheless use it to learn about causal relationships. Whereas the main task of a researcher carrying out a laboratory or field experiment lies in designing it in a way that allows causal inference, the main task of a researcher analyzing a natural experiment lies in arguing that the historical episode under consideration in fact resembles an experiment. To show this, identifying valid treatment and control groups, that is, arguing that the treatment is in fact randomly assigned, is crucial.

[Figure omitted. Source: Nicola Fuchs-Schuendeln and Tarek Alexander Hassan]
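
To make the treatment-and-control logic concrete, here is a hedged, self-contained sketch with synthetic data, not taken from the authors above: a policy hits one region in the second period, and a simple difference-in-differences comparison recovers the effect.

```python
import numpy as np

# Synthetic "natural experiment": region 1 is exposed to a policy in period 1,
# region 0 never is. Difference-in-differences removes region and time effects.
rng = np.random.default_rng(7)
n = 2000

region = rng.integers(0, 2, n)        # 1 = treatment region, 0 = control region
period = rng.integers(0, 2, n)        # 1 = after the policy, 0 = before
true_effect = 2.0

# Outcome: region and time effects, policy effect only for treated-after, plus noise.
y = (1.0 * region + 0.5 * period
     + true_effect * region * period
     + rng.normal(0, 1, n))

def group_mean(mask):
    return y[mask].mean()

did = ((group_mean((region == 1) & (period == 1)) - group_mean((region == 1) & (period == 0)))
       - (group_mean((region == 0) & (period == 1)) - group_mean((region == 0) & (period == 0))))
print(f"difference-in-differences estimate: {did:.2f} (true effect {true_effect})")
```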

Data collection, clever identification and trendy topics

Daniel S. Hamermesh writes that top journals are publishing many fewer papers that represent pure theory, regardless of subfield, somewhat less empirical work based on publicly available data sets, and many more empirical studies based on data collected by the author(s) or on laboratory or field experiments. The methodological innovations that have captivated the major journals in the past two decades – experimentation, and obtaining one's own unusual data to examine causal effects – are unlikely to be any more permanent than was the profession's fascination with variants of micro theory, growth theory, and publicly available data in the 1960s and 1970s.

Barry Eichengreen writes that, as recently as a couple of decades ago, empirical analysis was informed by relatively small and limited data sets.  While older members of the economics establishment continue to debate the merits of competing analytical frameworks, younger economists are bringing to bear important new evidence about how the economy operates. A first approach relies on big data. A second approach relies on new data. Economists are using automated information-retrieval routines, or “bots,” to scrape bits of novel information about economic decisions from the World Wide Web. A third approach employs historical evidence. Working in dusty archives has become easier with the advent of digital photography, mechanical character recognition, and remote data-entry services.

Tyler Cowen writes that top plaudits are won by quality empirical work, but lots of people have good skills.  Today, there is thus a premium on a mix of clever ideas — often identification strategies — and access to quality data.   Over time, let’s say that data become less scarce, as arguably has been the case in the field of history. Lots of economics researchers might also eventually have access to “Big Data.”  Clever identification strategies won’t disappear, but they might become more commonplace. We would then still need a standard for elevating some work as more important or higher quality than other work.  Popularity of topic could play an increasingly large role over time, and that is how economics might become more trendy.

Noah Smith (HT Chris Blattman ) writes that the biggest winners from this paradigm shift are the public and policymakers as the results of these experiments are often easy enough for them to understand and use. Women in economics also win from this shift towards empirical economics. When theory doesn’t rely on data for confirmation, it often becomes a bullying/shouting contest where women are often disadvantaged. But with quasi-experiments, they can use reality to smack down bullies, as in the sciences. Beyond orthodox theory, another loser from this paradigm shift is heterodox thinking as it is much more theory-dominated than the mainstream and it wasn't heterodox theory that eclipsed neoclassical theory. It was empirics. 


About the authors

Jérémie Cohen-Setton

Jérémie Cohen-Setton is a Research Fellow at the Peterson Institute for International Economics. Jérémie received his PhD in Economics from U.C. Berkeley and worked previously with Goldman Sachs Global Economic Research, HM Treasury, and Bruegel. At Bruegel, he was Research Assistant to Director Jean Pisani-Ferry and President Mario Monti. He also shaped and developed the Bruegel Economic Blogs Review.


Empirical Strategies in Economics: Illuminating the Path from Cause to Effect

The view that empirical strategies in economics should be transparent and credible now goes almost without saying. The local average treatment effects (LATE) framework for causal inference helped make this so. The LATE theorem tells us for whom particular instrumental variables (IV) and regression discontinuity estimates are valid. This lecture uses several empirical examples, mostly involving charter and exam schools, to highlight the value of LATE. A surprising exclusion restriction, an assumption central to the LATE interpretation of IV estimates, is shown to explain why enrollment at Chicago exam schools reduces student achievement. I also make two broader points: IV exclusion restrictions formalize commitment to clear and consistent explanations of reduced-form causal effects; compelling applications demonstrate the power of simple empirical strategies to generate new causal knowledge.
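
The Wald/IV logic behind the LATE framework can be illustrated with a small synthetic example. This is a generic sketch, not the charter- or exam-school applications discussed in the lecture: a randomly assigned offer shifts treatment take-up and affects the outcome only through take-up, so the ratio of the outcome difference to the take-up difference recovers the causal effect, while a naive regression does not.

```python
import numpy as np

# Binary instrument Z (e.g., a random offer) shifts treatment D but affects the
# outcome Y only through D (the exclusion restriction). The Wald ratio then
# recovers the causal effect of D, while OLS is contaminated by the confounder.
rng = np.random.default_rng(123)
n = 100_000

ability = rng.normal(0, 1, n)                 # unobserved confounder
z = rng.integers(0, 2, n)                     # randomly assigned offer
d = (0.5 * z + 0.3 * ability + rng.normal(0, 1, n) > 0.4).astype(float)  # take-up
beta = 1.5                                    # true causal effect of D on Y
y = beta * d + 1.0 * ability + rng.normal(0, 1, n)

ols = np.cov(d, y)[0, 1] / np.var(d, ddof=1)  # biased: picks up the confounder
wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(f"naive OLS slope: {ols:.2f}   IV (Wald) estimate: {wald:.2f}   truth: {beta}")
```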

This is a revised version of my recorded Nobel Memorial Lecture posted December 8, 2021. Many thanks to Jimmy Chin and Vendela Norman for their help preparing this lecture and to Noam Angrist, Hank Farber, Peter Ganong, Guido Imbens, and Parag Pathak for comments on an earlier draft. Thanks also go to my coauthors and Blueprint Labs colleagues, from whom I’ve learned so much over the years. Special thanks are due to my co-laureates, David Card and Guido Imbens, for their guidance and partnership. We three share a debt to our absent friend, Alan Krueger, with whom we collaborated so fruitfully. This lecture incorporates empirical findings from joint work with Atila Abdulkadiroğlu, Sue Dynarski, Bill Evans, Iván Fernández-Val, Tom Kane, Victor Lavy, Yusuke Narita, Parag Pathak, Chris Walters, and Román Zárate. The views expressed herein are those of the author and do not necessarily reflect the views of the National Bureau of Economic Research.

The work discussed here was funded in part by the Laura and John Arnold Foundation, the National Science Foundation, and the W.T. Grant Foundation. Joshua Angrist's daughter teaches in a Boston charter school.


What is Economics – Definition, Methods, Types


What is Economics

Definition:

Economics is the study of how people use resources. It looks at how people use their time, land, and money to produce and consume goods and services. The term economics comes from the Greek οἰκονομία (oikonomia, “management of a household, administration”).

Economics focuses on the behavior and interactions of economic agents; how they use scarce resources to produce various commodities and how these activities impact upon one another.

History of Economics

A brief history of economics would start with the Ancient Greeks, who developed ideas that would eventually become the foundation for modern economic thought. The first economist is usually considered to be Xenophon, an Athenian philosopher who wrote about economic principles in his work Oeconomicus.

Other ancient Greek philosophers who made significant contributions to economic thought include Aristotle and Plato. Aristotle’s ideas on natural resources and labor value were particularly influential, while Plato’s work Republic laid out many of the principles that would later be codified in Adam Smith’s Wealth of Nations.

The next major figure in the history of economics is Smith himself, whose groundbreaking book was published in 1776. Smith argued that a free market economy based on competition and self-interest would ultimately lead to greater wealth and prosperity for all.

Branches of Economics

Economics is divided into two main branches:

Microeconomics

Macroeconomics

Beyond these two branches, several applied fields are commonly distinguished:

  • Public economics
  • International economics
  • Labor economics
  • Development economics

Microeconomics is the study of how people manage their limited resources to satisfy their unlimited wants. It focuses on understanding and predicting human behavior in small economic units such as firms or households.

Microeconomic theory typically begins with the study of a single rational and utility maximizing individual. This individual has several constraints that limit his or her choices: time, income, wealth, preferences, and technology. With these variables held constant and given the individual’s preferences, microeconomic theory predicts how much of each good or service the person will purchase.

Microeconomics Key Factors:

  • Production, cost, and efficiency
  • Specialization
  • Supply and demand
  • Uncertainty and game theory
  • Market failure

In microeconomics, production is the conversion of inputs into outputs. Costs are the opportunity costs of the resources used in production. Efficiency is the efficient use of resources to produce the desired output.

Production functions are used to describe the relationship between inputs and outputs. They can be linear or nonlinear. Linear production functions exhibit constant returns to scale, while nonlinear production functions can exhibit increasing or diminishing returns to scale.

Costs can be classified as fixed or variable. Fixed costs are those that do not change with changes in output, while variable costs do change with changes in output. Total cost is the sum of fixed and variable costs. Average cost is total cost divided by output. Marginal cost is the change in total cost when output changes by one unit.
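
A small numerical sketch, with invented cost figures, shows how total, average, and marginal cost relate:

```python
# Illustrative cost schedule: one fixed cost plus a variable cost that rises with output.
FIXED_COST = 100.0

def variable_cost(q):
    return 5.0 * q + 0.5 * q ** 2      # rises (and accelerates) with output

def total_cost(q):
    return FIXED_COST + variable_cost(q)

for q in range(1, 6):
    tc = total_cost(q)
    ac = tc / q                                   # average cost per unit
    mc = total_cost(q) - total_cost(q - 1)        # marginal cost of the q-th unit
    print(f"q={q}: total={tc:6.1f}  average={ac:6.1f}  marginal={mc:5.1f}")
```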

In microeconomics, specialization is the process of focusing on the production of a particular good or service. Specialization allows for greater efficiency in the production process, as workers are able to develop expertise in a particular area. This increased efficiency can lead to lower costs and higher profits.

Supply and demand are the two most important factors in any market economy. They are what drive prices and determine how much of a good or service is produced. The law of supply and demand is a basic economic principle that states that when there is more demand for a good than there is supply, the price of the good will go up. Conversely, when there is more supply than there is demand, the price will go down. The amount of product that a company is willing to produce at a given price is its supply curve, while the amount of product that consumers are willing to buy at a given price is its demand curve.
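
The following sketch, with made-up linear curves, illustrates how the equilibrium price and quantity are pinned down where quantity demanded equals quantity supplied:

```python
# Illustrative linear supply and demand curves (all parameters invented).
def quantity_demanded(price):
    return 100.0 - 2.0 * price      # demand falls as price rises

def quantity_supplied(price):
    return -20.0 + 4.0 * price      # supply rises with price

# Solve 100 - 2p = -20 + 4p  ->  6p = 120  ->  p* = 20, q* = 60.
equilibrium_price = 120.0 / 6.0
equilibrium_quantity = quantity_demanded(equilibrium_price)
print(f"equilibrium price = {equilibrium_price:.0f}, quantity = {equilibrium_quantity:.0f}")

# Below the equilibrium price there is excess demand; above it, excess supply.
for p in (15, 20, 25):
    print(f"p={p}: demand={quantity_demanded(p):.0f}, supply={quantity_supplied(p):.0f}")
```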

Firms in microeconomics are economic units that produce goods and services. In a free market economy, firms are private businesses that are owned by individuals or groups of people. Firms can be small businesses, such as corner stores, or large corporations, such as General Motors. The size of the firm does not matter; what matters is that firms produce goods and services that people want to buy.

In microeconomics, uncertainty is an important concept that helps us understand and predict human behavior. Game theory is a branch of microeconomics that deals with the strategic interaction between two or more people.

In microeconomics, market failure is a situation in which the allocation of resources by the market fails to produce the desired outcome. It can be due to a variety of factors, including imperfect information, externalities, and government intervention.

Market failure does not mean that the market is not working. It simply means that the market is not working as efficiently as it could be.

In microeconomics, welfare is defined as the overall well-being of an individual or a group. It encompasses all aspects of an individual’s or group’s life, including their health, happiness, and economic security.

Macroeconomics is the study of large-scale economic factors, such as interest rates, inflation, and unemployment. It focuses on how these factors affect the economy as a whole.

Macroeconomics is a branch of economics that looks at the big picture of the economy. It asks questions about what drives economic growth, how recessions happen, and what policies can be used to stabilize the economy.

Macroeconomics is important because it helps us understand how the economy works and identify problems early on. By understanding macroeconomic principles, we can make better decisions about personal finance, business strategy, and public policy.

Key factors in Macroeconomics:

  • Business cycle
  • Unemployment
  • Inflation and monetary policy
  • Fiscal policy

In macroeconomics, growth is defined as an increase in the production of goods and services in an economy. This can be measured by an increase in gross domestic product (GDP) or per capita income. Growth can be either positive or negative, but most economists are interested in understanding and promoting positive economic growth.

A business cycle is the natural rise and fall of economic growth that occurs over time. The cycle is typically defined as four phases: expansion, peak, contraction, and trough.

In an expansionary phase, the economy grows at an increasing rate, culminating in a peak. After the peak, the economy enters a period of contraction during which growth slows and eventually reaches a trough. From the trough, the economy begins to expand again.

The business cycle is caused by a variety of factors, including changes in consumer spending, government policy, and international conditions. While some economists argue that the business cycle is inevitable, others believe that it can be managed through active economic policy.

Unemployment is a term used to describe when people are looking for work but cannot find a job. There are different types of unemployment, but the most common is cyclical unemployment, which occurs when the economy is going through a recession.

The official unemployment rate does not include people who are underemployed or who have given up looking for work. The real unemployment rate, which includes these people, is often much higher than the official rate.

Inflation is a general increase in the prices of goods and services in an economy. The main cause of inflation is too much money chasing too few goods. This results in higher demand for goods, which then causes companies to raise prices. Monetary policy is the process by which the government, central bank, or monetary authority manages the money supply to achieve specific goals. The main goals of monetary policy are to stabilize prices and keep unemployment low.

Fiscal policy is the use of government spending and taxation to influence the economy. Fiscal policy can be used to stabilize the economy, promote economic growth, or distribute resources.

The government uses fiscal policy to influence the level of economic activity in the economy. By changing the level of government spending and taxation, the government can affect aggregate demand and inflation. Fiscal policy can also be used to redistribute resources and income.

Public Economics

Public economics is the study of how government policy choices impact economic outcomes. It is a broad field that encompasses many different areas, such as tax policy, government spending, and the regulation of markets.

Public economics is a relatively new field of study, but it has already had a significant impact on policymaking around the world. For example, public economists have played a key role in designing and evaluating policies to reduce poverty and improve economic efficiency.

International Economics

International economics is the study of economic interactions between countries. It covers a wide range of topics, including trade, investment, migration, and exchange rates.

International economics is a broad field that covers many different topics. Trade is one of the most important topics in international economics. It includes the study of how countries trade with each other and the factors that affect trade patterns. Investment is another important topic in international economics. It includes the study of foreign direct investment and portfolio investment. Migration is also an important topic in international economics. It includes the study of both legal and illegal immigration. Exchange rates are another key topic in international economics. They can have a major impact on a country’s economy.

Labor Economics

Labor economics is the study of how workers are employed and compensated, as well as how their labor market behavior affects the economy. It is a branch of economics that focuses on the market for labor services and the determination of wages.

In general, labor economics focuses on three main areas: Labor markets, human capital, and industrial relations. Labor markets analyze the supply and demand for labor and the factors that affect employment levels and wages. Human capital examines the investment in workers through education and training, which can impact an individual’s productivity and earnings potential. Industrial relations looks at the interaction between employers and employees, including issues such as unionization, job security, and working conditions.

Development Economics

Development economics is a field of economics that focuses on improving economic conditions in underdeveloped and developing countries. The ultimate goal of development economics is to reduce poverty and improve the standard of living in these countries.

Economics Research Methods

Economics research methods are the methods economists use to study economic phenomena. These methods can be divided into two broad categories:

Empirical Research

Theoretical research.

Empirical research is a type of research that uses data that has been collected from experiments or observations to answer a question or test a hypothesis. This type of research is often used in economics, as it can be used to test economic theories and predict future economic trends.

The empirical research method involves collecting data and then analyzing it to see if there is a relationship between the two variables. For example, if an economist wants to know if there is a relationship between inflation and unemployment, they would collect data on both variables and then analyze it to see if there is a correlation.
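
A minimal sketch of this workflow in Python, using synthetic data in place of official statistics (a real study would use published inflation and unemployment series), might look like this:

```python
import numpy as np
import statsmodels.api as sm

# Generate synthetic data with an assumed negative relationship between
# unemployment and inflation (illustrative only), then estimate it by OLS.
rng = np.random.default_rng(0)
n = 120                                             # e.g., 120 quarterly observations

unemployment = rng.uniform(3, 10, n)                # percent
inflation = 6.0 - 0.4 * unemployment + rng.normal(0, 1, n)

X = sm.add_constant(unemployment)                   # regressors: intercept + unemployment
model = sm.OLS(inflation, X).fit()
print(model.summary())                              # slope estimate, standard error, R^2
```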

Theoretical research is a method used in economics to evaluate and explain economic phenomena. It typically involves the use of mathematical models to derive predictions about how the economy will behave. Theoretical research can be used to understand past economic events, as well as to predict future economic behavior.

There are a number of different theoretical research methods that economists use, including game theory, microeconomic theory and macroeconomic theory.

  • Game theory is often used to study how firms compete with each other in markets.
  • Microeconomic theory is typically used to study individual consumer behavior.
  • Macroeconomic theory is usually used to study the overall behavior of the economy.

Purpose of Economics

In any society, the production and consumption of goods and services is a central activity. Economics is the social science that studies how people use their limited resources to satisfy their unlimited wants. The study of economics helps us to understand human behavior, which in turn helps us make better decisions in our own lives and improve the world around us.

The purpose of economics is to help us understand how people use their resources to satisfy their needs and wants. By understanding economic principles, we can make better decisions about how to use our own resources. We can also use economics to help solve social problems and improve the quality of life for everyone.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Empirical Analysis

Living reference work entry, first online 01 January 2014.

This entry is a preview of subscription content, available via institutional access.


Author information

Authors and affiliations.

SRH Hochschule Berlin, Berlin, Germany

Alexander J. Wulf

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Alexander J. Wulf .

Editor information

Editors and affiliations.

Lehrst. Finanzwissenschaft/ Finanzsoziologie, University of Erfurt, Erfurt, Thüringen, Germany

Jürgen Backhaus

Rights and permissions

Reprints and permissions

Copyright information

© 2014 Springer Science+Business Media New York

About this entry

Cite this entry.

Wulf, A.J. (2014). Empirical Analysis. In: Backhaus, J. (eds) Encyclopedia of Law and Economics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-7883-6_161-1

Download citation

DOI : https://doi.org/10.1007/978-1-4614-7883-6_161-1

Received : 18 August 2014

Accepted : 18 August 2014

Published : 08 September 2014

Publisher Name : Springer, New York, NY

Online ISBN : 978-1-4614-7883-6

eBook Packages : Springer Reference Economics and Finance Reference Module Humanities and Social Sciences Reference Module Business, Economics and Social Sciences

  • Publish with us

Policies and ethics

  • Find a journal
  • Track your research

  2. What is Empirical Research? Definition, Methods, Examples

    This empirical research is crucial for understanding the ecological consequences of climate change and informing conservation efforts. Business and Economics. In the business world, empirical research is essential for making data-driven decisions. Consider a market research study conducted by a business seeking to launch a new product.

  3. PDF Empirical Modeling in Economics

    during 40 years of teaching and research to discuss the con-ceptual difficulties associated with empirical work in economics. He has always argued that the bridge between economic theory and applied economics should be a sturdy structure, across which it was both necessary and safe for practitioners to go in both directions. He is also one of ...

  4. Empirical Strategies in Economics: Illuminating the Path From Cause to

    The view that empirical strategies in economics should be transparent and credible now goes almost without saying. By revealing for whom particular instrumental variables (IV) estimates are valid, the local average treatment effects (LATE) framework helped make this so.

  5. Is there really an empirical turn in economics?

    This is why the transformation of economics is the last decades is better characterized as an "applied turn" rather than an "empirical turn.". Applied has indeed become a recurring work in John Bates Clark citations. "Roland Fryer is an influential applied microeconomist," the 2015 citation begins. Matt Gentzkow (2014) is a leader ...

  6. An empirical turn in economics research

    An empirical turn in economics research. A table of results in an issue of the American Economic Review. Over the past few decades, economists have increasingly been cited in the press and sought by Congress to give testimony on the issues of the day. This could be due in part to the increasingly empirical nature of economics research.

  7. The case for economics

    The case for economics — by the numbers. A multidecade study shows economics increasingly overlaps with other disciplines, and has become more empirical in nature. Caption A new study examines 140,000 economics papers published from 1970 to 2015, tallying the "extramural" citations that economics papers received in 16 other academic ...

  8. PDF Empirical Strategies in Economics: Illuminating the Path From Cause to

    empirical strategy that reliably captures the causal effects of government training programs inspired me and others at Princeton to explore the econometrics of program evaluation.1 An empirical strategy for program or policy evaluation is a research plan that encompasses data collection, identification, and estimation.

  9. The Credibility Revolution in Empirical Economics: How Better Research

    The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics by Joshua D. Angrist and Jörn-Steffen Pischke. Published in volume 24, issue 2, pages 3-30 of Journal of Economic Perspectives, Spring 2010, Abstract: Since Edward Leamer's memorable...

  10. Is economics an empirical science? If not, can it become one?

    Today's mainstream economics, embodied in Dynamic Stochastic General Equilibrium (DSGE) models, cannot be considered an empirical science in the modern sense of the term: it is not based on empirical data, is not descriptive of the real-world economy, and has little forecasting power. In this paper, I begin with a review of the weaknesses of neoclassical economic theory and argue for a truly ...

  11. (PDF) Methods Used in Economic Research: An Empirical Study of Trends

    The methods used in economic research are analyzed on a sample of all 3,415 regular research papers published in 10 general interest journals every 5th year from 1997 to 2017. The papers are ...

  12. PDF Economic Theory: Economics, Methods and Methodology

    enquiry.2 Thus, both economics and methodology belong in the social sci-ences, where the former deals with economic behavior, and the latter deals with the behavior of economists. Methods, by contrast, are tools that are designed to be used by scientists, but do not model a reality. We focus on theoretical rather than empirical work. The ...

  13. The empirical shift in economics

    Women in economics also win from this shift towards empirical economics. When theory doesn't rely on data for confirmation, it often becomes a bullying/shouting contest where women are often disadvantaged. But with quasi-experiments, they can use reality to smack down bullies, as in the sciences.

  14. Transaction Cost Economics: An Assessment of Empirical Research in the

    We show how TCE has branched out from its economic roots to examine empirical phenomena in several other areas. We find TCE is increasingly being applied not only to business-related fields such as accounting, finance, marketing, and organizational theory, but also to areas outside of business including political science, law, public policy ...

  15. Introduction

    In our call for papers to selective potential submitters in late 2021, we pointed out that Peter Schmidt stepped down as an Associate Editor of Empirical Economics after serving in that position for over 24 years. We noted his distinguished service at the Journal and that among his lifelong accomplishments in academics were important contributions to many areas of econometric research ...

  16. Empirical Strategies in Economics: Illuminating the Path from Cause to

    Joshua D. Angrist, 2022. "Empirical Strategies in Economics: Illuminating the Path From Cause to Effect," Econometrica, Econometric Society, vol. 90 (6), pages 2509-2539, November. citation courtesy of. Founded in 1920, the NBER is a private, non-profit, non-partisan organization dedicated to conducting economic research and to disseminating ...

  17. Empirical Analysis

    Empirical analysis in business refers to the systematic examination and interpretation of real-world data to gain insights, draw conclusions, and make informed decisions. It involves collecting, organizing, and analyzing data from various sources. Hence, these sources include market research, financial statements, customer surveys, operational ...

  18. (PDF) Research Methods for Economics

    Applied research can also vary according to the nature of method. used in analysis. There are mainly four categories of applied research: 1) statistical and econometric. analysis 2) calibration ...

  19. What is Economics

    Definition: Economics is the study of how people use resources. It looks at how people use their time, land, and money to produce and consume goods and services. ... Empirical research is a type of research that uses data that has been collected from experiments or observations to answer a question or test a hypothesis. This type of research is ...

  20. Milton Friedman'S Empirical Approach to Economics: Searching for

    Milton Friedman is usually presented as an economist characterized by his empirical approach to economics. His binary classification of economics into positive means and normative ends relies on the empirical content of predictions. Throughout his career, he used extensive, data-based statistical techniques.

  21. Empirical Analysis

    Empirical analysis uses empirical research methods from economics and the social sciences in an attempt to provide answers to research questions in the field of law, i.e., in order to investigate the operative and functional aspects of law and legal consequences. The goal of empirical legal research is to make a contribution to all subjects and ...

  22. PDF Distinguishing Between Economic Importance and Statistical Significance

    In this paper, we use Feminist Economics' editorial policy on communicating significance and the ongoing debate about the meaning of statistical significance as launching points to develop a set of guidelines for distinguishing between statistical and substantive significance when presenting results of empirical research.

  23. Empirical methods in the economics of education

    Empirical research in the economics of education often addresses causal questions. Does an educational policy or practice cause students' test scores to improve? ... can control for fixed effects of each individual in that it accounts for an indicator variable that takes out mean differences between individuals, so that only changes over time ...