Category Archives: Economic Policy Analysis (EC406)

THE EFFECT OF INDUSTRIAL POLICY

THE EFFECT OF INDUSTRIAL POLICY ON CORPORATE PERFORMANCE: EVIDENCE FROM PANEL DATA

C. Criscuolo et al.

A Short Summary 

In a Nutshell

Most governments have industrial policies that claim to foster productivity and employment, but it is hard to tell whether they simply finance activities that would have been undertaken regardless of state funding. Comparing outcomes with other firms does not provide a valid counterfactual, because sorting into the firms that receive funding is far from random. This paper uses data from a quasi-experiment in which changes in the eligibility criteria for the UK Regional Selective Assistance (RSA) programme altered which areas were eligible to receive funds. Using RSA data, the authors find a large effect of treatment on the treated for employment, investment and the probability of exit, and these effects are seriously underestimated if endogeneity is ignored. They cannot, however, rule out negative aggregate productivity effects from protecting inefficient incumbents.

Further Details

The EU changed the eligibility rules such that areas were subject to different constraints on how much funding they could receive from the government. Assistance generally applied to manufacturing firms that needed funds for capital expenditure to create jobs in a viable project. The applicant had to demonstrate need and show that the remaining expenses would be met by the firm itself or from the private sector.

Specification

Yjt = αDjt + βXjt + ηj + τt + vjt  (4)

Due to data limitations, they aggregate across all plants in the same firm and run the above regression at the firm level. Note, however, that they use plant-level data to later analyse the area-level impact of industrial policy (thus capturing general equilibrium effects).

yjt is the outcome of interest for firm j at time t. The authors consider three outcome variables: employment, investment and productivity. Xjt are covariates used as controls that vary depending on the outcome of interest.

Djt is the participation dummy; the authors mainly use a binary indicator of whether the firm received any treatment.

They instrument for Djt with Zjt, the level of the maximum investment subsidy (Net Grant Equivalent, NGE) available in the area where the firm's oldest plant is located (the oldest plant is used because its location decision cannot have been made in response to changes to the EU assistance map). Baseline results use mutually exclusive dummies for each of the different rates.

They instrument for participation in the programme using the changes to the system imposed by the EU.
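To make the estimation concrete, the sketch below shows how the firm-level version of equation (4) could be estimated by 2SLS, instrumenting the participation dummy with the NGE-rate dummies. This is an illustrative reconstruction only, not the authors' code: the file name, column names and use of the linearmodels package are my own assumptions.

import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical panel: one row per firm-year, with log employment (outcome),
# an RSA participation dummy, the maximum NGE rate category in the area of the
# firm's oldest plant (the instrument), plus firm and year identifiers.
df = pd.read_csv("rsa_firm_panel.csv")

# Firm and year fixed effects via dummies (eta_j and tau_t in equation (4));
# the endogenous participation dummy is instrumented with the NGE-rate dummies.
formula = "log_emp ~ 1 + C(firm) + C(year) + [rsa ~ C(nge_rate)]"
res = IV2SLS.from_formula(formula, data=df).fit(
    cov_type="clustered", clusters=df["firm"]
)
print(res.params["rsa"])  # estimated effect of treatment on the treated (log points)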

The data set is a panel that combines administrative data on the RSA participants, matched to a firm-level database with employment, investment and entry/exit information. This means they can track firms before and after participation and compare them to a control group that did not participate.

Results

OLS results indicate a 37% increase in employment, rising sensibly with the level of subsidy. The IV estimate is much larger, suggesting serious downward bias in the OLS estimates. It falls to a much more plausible level when firm-level dummies are included to control for heterogeneity in the response to treatment, but remains sensible and much larger than the OLS estimates with the same dummies included.

They do the same for labour productivity (measured as the ratio of gross output to employment), but the effects are small and insignificant.

They find a positive effect on employment at the area level (based on travel-to-work areas), which indicates that there are spillovers from the RSA, and there is also a rise in the number of plants in an area. However, the employment effects on incumbents are stronger than the incentive to new entry, as the RSA dampens reallocation (through less exit). Since there is no productivity effect from receiving RSA, the scheme appears to support less productive firms, which could dampen aggregate productivity (especially since the summary statistics show that firms receiving RSA tend to be larger than the control firms).

Implications

Positive effect of the programme on investment and employment but not on productivity. As the RSA helps firms to expand, this could have a negative impact on aggregate productivity. The scheme looks more like a welfare payment.

 


IDENTIFYING AGGLOMERATION SPILLOVERS

IDENTIFYING AGGLOMERATION SPILLOVERS: EVIDENCE FROM WINNERS AND LOSERS OF LARGE PLANT OPENINGS

M. Greenstone, R. Hornbeck & E. Moretti

 

Principal Research Question and Key Result Are there economic spillovers that accrue to incumbent plants from agglomeration, and through what mechanisms do such benefits arise? The paper finds positive spillover effects on total factor productivity (TFP) in incumbent firms, reaching 12% five years after the location of a "million dollar plant" (MDP) in the vicinity. This productivity gain occurs only for plants that are close in economic distance, meaning that they share worker flows and employ similar technologies. This is consistent with labour-market and knowledge spillovers. There is no evidence for input/output-based spillovers (see theory).
Theory The entry of a new firm creates spillovers, and this leads to the entry of firms that want to benefit from these spillovers. This leads to competition for inputs and hence labour/land/other local input values rise. This continues until the value of the increased output is equal to the increased cost of production (as firms are assumed to be price takers and so cannot raise prices). This simple model yields four testable hypotheses:

  1. The opening of a new plant increases TFP of incumbent plants
  2. The increase may be larger for firms that are economically closer to the new plant
  3. Density will increase as new firms move in to take advantage of the spillovers created by the new plant (if they are large enough)
  4. The price of locally supplied factors of production will increase.

What are the possible channels for these spillover effects?

  1. Labour market – the labour market is "thicker" thanks to agglomeration. This can reduce search frictions and improve the match between worker and firm (this implies increased productivity). Alternatively (or complementarily), the availability of a larger local workforce reduces the possibility that positions go unfilled (this does not necessarily imply improved productivity, only that posts will be filled).
  2. Transportation costs – transportation costs for local suppliers of intermediate inputs and services will be lower under agglomeration, which reduces production costs in dense areas.
  3. Knowledge spillovers – the sharing of knowledge and skills through formal and informal interactions may generate production externalities across workers. This may be important particularly in hi-tech areas (think Silicon Valley). This implies increased productivity. However, it is not clear who will gain from this productivity, as the knowledge spillovers may lead to increased investment in new technologies in which case the benefit will accrue to capital, or it may increase worker productivity in which case labour wages will gain.
  4. Amenities – local amenities are valued differently by different types of worker, and firms need to locate so as to attract the right kind of worker. This implies no productivity differences between high- and low-density areas once the type of worker is controlled for.
  5. Natural advantages – oil companies near oil fields, wine makers in the Loire. Since most natural advantages are fixed over time this is not relevant for empirical analysis which looks at changes in agglomeration over time.

 

Motivation Increasingly, local governments compete for big plants to locate in their region by offering generous subsidies. The main economic rationale for doing so is that these plants create agglomeration spillovers which benefit the local economy. Yet there is little rigorous evidence on these effects, and if they are in fact small, this calls into question the use of taxpayers' money to finance such subsidies.
Data They identify 47 usable MDP openings. They combine this with information on incumbent plants in the winning/losing county (there had to be at least one pair of incumbents for the MDP to qualify), including capital stocks, materials, value of shipments, etc. from annual manufacturing surveys. The focus on existing plants eliminates problems of endogenous openings of new plants. In order to investigate mechanisms they code variables relating to the % of output sold to manufacturers, the % of inputs from the same three-digit industry, labour market transitions between industries, the % of patents in each three-digit industry, and R&D expenditure.

 

Strategy Firms do not locate randomly, so a simple comparison of regions is inappropriate for obvious reasons of endogeneity. The authors therefore use an industry property publication to compare counties that won MDPs with the counties that narrowly lost out on those same plants, on the assumption that the two are sufficiently similar to allow the "loser" to serve as the counterfactual for the "winner". Formally, the assumption is that, but for the location of the MDP, TFP in the winning and losing counties would have evolved identically. The summary statistics show that observables are generally balanced between winning and losing counties relative to the rest of the USA, and even more balanced between firms in winning and losing counties, indicating that the identifying assumption is plausible. The authors state that even if it is not perfect, this is still a better method than simple comparison. The dependent variable is TFP, with output measured as the total value of shipments adjusted for changes in inventory. The right-hand side includes time-trend dummies, a dummy equal to 1 if the plant is in a winning county and 0 otherwise, and a dummy that turns on when the MDP is open. Then there is:

α[(Winner)*(Open)], the difference-in-differences term, where alpha is the effect of being in a winning county once an MDP has opened.

β[(Winner)*(Open)*(Trend)], where Trend is the time trend. Beta shows the differential effect that being in a winning county with an MDP has on the time trend beyond the year the plant opens and the Open dummy turns on.
They actually do two estimations: a simple DiD (which sets the Winner*Open*Trend coefficient to 0), and a full estimation where it is allowed to vary, i.e. the specification allows for a mean shift and a trend break.

They include plant, industry and region fixed effects.
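As a concrete illustration, a minimal sketch of the two specifications (mean shift only, and mean shift plus trend break) might look as follows; the data file and column names are hypothetical, and this is not the authors' code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical plant-year panel: tfp (log TFP), winner (1 if the plant is in a
# winning county), open (1 in years after the MDP opens), trend (years since
# opening, 0 before), plus plant, year and county identifiers.
df = pd.read_csv("incumbent_plants.csv")

# Model 1: simple DiD (mean shift only)
m1 = smf.ols("tfp ~ winner:open + C(plant) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["county"]}
)

# Model 2: mean shift plus a trend break after the opening
m2 = smf.ols(
    "tfp ~ winner:open + winner:open:trend + C(plant) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["county"]})

print(m1.params["winner:open"])        # alpha: mean shift
print(m2.params["winner:open:trend"])  # beta: differential trend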

 

Results Differences in TFP between winning and losing counties in the years before the opening of an MDP are all small and insignificant. In the years following the opening, the coefficients on the Winner*Open year-specific dummies become significant at the 5% level. This reveals a sharp upward break in the difference between the TFP of the counties (in fact a decline in the losing counties relative to a flattening out in the winning counties, showing the importance of the losing-county counterfactual: without it, negligible results would have been found). The coefficients suggest a mean shift of 4.8% in TFP in winning counties relative to losing counties. Moreover, when the second model with separate time dummies is used, the effect appears to strengthen over time, so that having an MDP is associated with a 12% increase in TFP in winning counties five years after opening, which confirms the importance of including the trend break. These numbers translate into an increase in total output of approximately $170m in year 1 and $430m in year 5. The coefficient on the pre-opening trend is not significant, which lends support to the identifying assumption.

There is significant heterogeneity, and in some specific cases there are negative effects of introducing a MDP.

The increase in TFP in incumbent plants seems to come from incumbent plants producing more with less after the MDP opening.

They investigate the mechanisms at work. When they compare results for incumbents in the same two-digit industry as the MDP, the effects are much greater, and no significant effects are found for other two-digit industries. They construct measures of PROXIMITY that capture worker flows, technological proximity and input-output flows, and re-estimate with an OPEN*WINNER*PROXIMITY interaction. All coefficients are significant except the flow of goods/services. Thus there is evidence that intellectual or technological linkages and the sharing of workers increase the spillover, i.e. knowledge spillovers.

If these spillovers are big enough there should be new entrants, and indeed a DID with log(plants) as depvar is positive and significant – i.e. new economic activity was attracted. They also find that wages increase as demand for labour increases.

 

Robustness There could be unobserved productivity shocks coincident with the opening of the MDP. They do a variety of specification checks; they allow inputs to be endogenous. Also, since MDPs are associated with public-good investment etc., this could be what is driving the results, but they find no meaningful relationship between government expenditure in the area and the plant opening. There could have been differential attrition; however, 72% of plants in winning counties remained at the end of the period, versus 68% in losing counties, i.e. very similar.

 

Problems The theoretically correct dependent variable is quantity, but this is not comparable across plants, so they have to use value measures. In that sense, the results could reflect a change in prices rather than a change in actual productivity. Do the results hold for smaller plants and plants outside the manufacturing industry?

 

Implications There do seem to be externalities associated with the location of MDPs. However, it is not clear that this justifies offering generous subsidies. In particular, the plant may locate domestically no matter what, in which case subsidies are wasteful from a national perspective. Even at the local level it is probable that all the gains will be bargained away, making subsidy competition a zero-sum game. However, given the significant heterogeneity of effects, there could still be a place for subsidies. For example, as spillovers are greatest where there is economic proximity, MDPs should be encouraged to locate where the spillovers will be largest. The MDP may locate only where its profits will be highest (as it does not itself benefit from the spillovers it creates). In this case, some subsidy should be offered to internalise the benefit it provides and thereby encourage it to locate where the spillovers will be greatest.

TEACHER PERFORMANCE PAY

TEACHER PERFORMANCE PAY: EXPERIMENTAL EVIDENCE FROM INDIA

K. Muralidharan & V. Sundararaman

NBER Working Paper No. 15323 (2009) 

Principal Research Question and Key Result Does performance-based pay for teachers improve student performance? In an experiment in India, students whose teachers were subject to performance incentives scored 0.28 and 0.16 standard deviations better (in maths and language respectively) than those in comparison schools.

 

Theory It is not clear that monetary incentives will always align the preferences of the principal and the agent. In some cases they may crowd out intrinsic motivation leading to inferior outcomes. Psychological literature indicates that if incentives are perceived by workers as a means of exercising control they will tend to reduce motivation, whereas if they are seen as reinforcing the norms of professional behaviour then this can enhance intrinsic motivation.

Additionally whether incentives are at a class or school level will be of importance. This is because in the school results model (how schools perform on aggregate) there will be incentives to free ride. This is not the case if incentives operate at the individual teacher level. The problem may be reduced in small schools where teachers are better able to monitor each other’s efforts at a relatively low cost.

 

Motivation There are generally two lines of thought regarding how to improve school quality. The first argues that increased inputs are needed. This might include textbooks, extra teachers, better facilities etc. The other option is to implement incentive-based policies to improve the use of existing infrastructure, and perhaps improve selection of individuals into the teaching sector.

 

Experiment/Data The experiment took place in Andhra Pradesh which has been part of the Education for All campaign in India, but sees absence rates of around 25% and low student level outcomes. There were 100 control schools, 100 group bonus schools (all teachers received same bonus based on average performance of the school), and 100 individual bonus schools (incentive based on performance of students of a particular teacher). Focussing on average scores ensures that teachers do not just focus on getting those kids near the threshold up, thus excluding less able children. No student is likely to be wholly excluded given the focus on averages. Additionally, there was no incentive to cheat, as children that took the baseline test, but not the end of year test were assigned a grade of 0 which would reduce the average of the class.

A test was administered at the start of the programme/school year which covered material from the previous school year. Then at the end of the programme a similar test was given, with similar content, and then a further test which examined the material from the current school year (that they have just completed). The same procedure was done at the end of the second year. Having overlap in the exams means that day specific measurement error is reduced. The tests included mechanical and conceptual questions.

 

Specification

T_ijkm(Y_n) = α + β·T_ijkm(Y_0) + δ·(Incentives) + γ·Z_m + ε_k + ε_jk + ε_ijk

T is the test score, where i, j, k, m index student, grade, school and mandal (region) respectively. Y_0 indicates the baseline test and Y_n the end-of-year test. The baseline score is included to improve efficiency by controlling for autocorrelation in test scores across years. Z_m is a vector of mandal dummies (fixed effects) and standard errors are clustered at the school level. Delta, the coefficient on the incentive dummy, is the coefficient of interest.
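For concreteness, a minimal sketch of this specification is below; the column names and data file are my own assumptions rather than the authors' code, with the mandal fixed effects entered as dummies and standard errors clustered by school, as described above.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data: end-of-year score, baseline score, an
# incentive-school dummy, and mandal and school identifiers.
df = pd.read_csv("ap_test_scores.csv")

res = smf.ols(
    "score_end ~ score_baseline + incentive + C(mandal)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})

print(res.params["incentive"])  # delta: the incentive treatment effect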

 

Results Students in incentive schools scored 0.15 standard deviations higher than comparison schools at the end of the first year and 0.22 at the end of the second. This averages across maths and language (disaggregated, the effect for maths was higher). NB whilst the year 1 vs year 0 comparison is valid, and year 2 vs year 0 as well, technically the comparison of year 2 with year 1 (column three of Table II) is not an experimental estimate, as year 1 results are already post-treatment outcomes.

They examine heterogeneous treatment effects by interacting relevant variables with the INCENTIVE dummy, and find that none of them (number of students, school proximity, school infrastructure, parental literacy, caste, sex) shows a differential effect from the programme, indicating that the benefits are broad-based and not conditional on a set of predetermined characteristics. The only interaction with a small effect is household affluence. These, then, are broad-based gains. Since the variance of test scores within individual schools went up, teachers may have responded differently to the incentives, even though there appear to have been no barriers preventing any type of child or school from benefitting (no heterogeneous effects).

When they include teacher characteristics such as education and training, they see no significant effect, but when they interact these measures with the INCENTIVES dummy the interactions are positive and significant, indicating that high-quality teachers alone may not be sufficient if they are not incentivised to use their skills to maximum effect.

Teachers who were paid more responded less, presumably because they are more experienced (less conducive to change) and the bonus represented a smaller fraction of their total income.

Happily, the results were similar for both the conceptual and mechanical questions, indicating that real learning is taking place rather than just rote reproduction. Additionally, students in incentive schools performed better in non-incentive subjects like science. NB it is possible that teachers diverted energy from teaching non-incentive subjects towards incentive subjects for obvious reasons; this result does not disprove that, but it shows that, in the context studied, improvement in teaching certain subjects can spill over into other subjects.

Both group and individual incentives were effective. However, school size was typically between 3 and 5 teachers, probably too small to separate the two effects. Group incentives may not work in larger schools.

Interestingly there was no increase in teacher attendance. In interviews after the experiment teachers said they gave extra classes, and were more likely to have set and graded homework.

Robustness
  • They tested the equality of observable characteristics across the control/treatment groups and could not reject the null that they were equal indicating that randomization was successful. Additionally, all schools (including control) were given the same information and monitoring, to ensure that differences in the treatment were not merely due to the Hawthorne effect.
  • There was no significant difference in attrition, and the average teacher turnover was the same across schools indicating that there was no sorting of teachers into the incentive schools.
  • They control for school and household characteristics which does not change the estimated value of delta, thus confirming the randomization.
  • A parallel study provided schools with money to purchase extra inputs, and the incentive levels were set such that they came to a similar amount of funds as the input schools. The input schools did see a positive effect, but to a much lesser degree. Additionally, the incentive programme actually ended up costing much less.

 

Interpretation Programme design is extremely important. In particular how the teachers feel about incentives may affect performance, and the size of schools may mean that benefits from group incentives are not seen due to the ability of teachers to freeride on the back of their colleagues.

Given that the study was compared with an input study in the same region and found better results, it would seem that funding should be allocated to incentive schemes rather than input schemes. In addition, rather than raising pay by 3% each year, that 3% could be allocated to the bonus scheme, so it would cost virtually nothing to run (other than administering the tests etc.). However, a mix of policies is probably a good idea, especially since the incentive scheme did not improve absence rates. As other literature has shown that improving infrastructure etc. can lead teachers to be present more, this could be one option for the input schemes.

 

DISEASE AND DEVELOPMENT

DISEASE AND DEVELOPMENT: THE EFFECT OF LIFE EXPECTANCY ON ECONOMIC GROWTH

D. Acemoglu & S. Johnson

Journal of Political Economy, Vol. 115 No. 6 (2007) pp. 925-85

Principal Research Question and Key Result Have the 20th century's increases in life expectancy translated into increased economic growth? No statistically significant effect on growth is found, and in fact per capita indicators show a significant negative association with life expectancy, indicating that any aggregate growth gains are more than offset by increases in population and birth rates.

 

Theory In neoclassical growth theory, increased life expectancy raises the population, which initially reduces the capital-to-labour ratio and thus depresses income per capita. This may be compensated in the medium/long run by higher output as more people enter the labour market and more capital is accumulated. This compensation may eventually exceed the initial per capita losses if there are significant productivity benefits from longer life expectancy. It may not occur, however, if some factors of production are supplied inelastically (the idea being that if a factor such as land is fixed, a larger population mechanically lowers output per worker). None of this should be taken as an indication that welfare will not be greatly increased by longer life expectancy, only that there is no discernible relationship with GDP.

 

Motivation There is a growing consensus that improving health outcomes can have indirect payoffs through accelerating economic growth. Whilst macro studies of these effects are plagued by problems of comparability, and by the fact that countries characterized by ill health are also disadvantaged in other ways, micro studies (which show positive effects of health outcomes on growth) cannot account for general equilibrium effects. The most important of these is that, because there are diminishing returns to effective units of labour, micro studies that cannot account for the population growth that follows improvements in health will tend to overstate the economic returns from those improvements. This paper takes a novel approach by instrumenting fairly successfully for life expectancy, exploiting what the authors term the international epidemiological transition of the 1940s, essentially the development and diffusion of curative and preventative medicine.

 

Data  
Strategy There is a long differences strategy (primarily to look at effects of life expectancy on other demographic variables), and an IV strategy (to look at effects of life expectancy on wealth variables).

The long differences strategy compares variables in 1940 (prior to the transition) with 1980 (prior to HIV). By taking differences they exclude all country-fixed components of the growth model (technology, initial human capital etc.), with an additional vector of controls. The dependent variables of interest are GDP, population, births, and the age composition of the country. The primary explanatory variable is life expectancy.

The IV strategy uses predicted mortality to instrument for life expectancy. They collect comparable data for 15 of the most important infectious diseases, and create time dummies for when global interventions against each disease occurred at the medical frontier. The instrument is constructed as follows:

M_it = Σ_d [ (1 − I_dt)·M_di40 + I_dt·M_dFt ]

M_dit denotes mortality in country i from disease d at time t. I_dt is a dummy for an intervention against disease d at time t (and thus equal to 1 for all dates after the intervention). M_di40 is the pre-intervention mortality from disease d in country i, and M_dFt is the mortality from disease d at the technological frontier after the intervention.

(1 − I_dt)·M_di40 – this part of the expression equals the country's mortality rate before the intervention: I_dt only equals 1 once there has been an intervention, so before the intervention 1 − I_dt = 1 and the term equals pre-intervention mortality. Once the intervention occurs, I_dt switches to 1 and this term equals zero.

I_dt·M_dFt – this part of the expression equals zero in the pre-intervention period, and equals the mortality rate at the frontier once the intervention switches the dummy to 1.

Critically, the predicted mortality derived in this way does not depend on how successfully a country implements the intervention. The dummy switches to 1 for all countries at the same time, which strengthens the exclusion restriction, as variation in predicted mortality is not correlated with the country-specific error. Additionally, M_dFt is set to zero, so that predicted mortality from a disease equals zero after the intervention and is uncorrelated with the error.
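The following sketch shows how such an instrument could be assembled from pre-intervention mortality rates and global intervention dates. It is purely illustrative: the data layout, file name and intervention years are my own assumptions, not the authors' code, and frontier mortality after an intervention is simply set to zero as described above.

import pandas as pd

# Hypothetical inputs: 1940 mortality by country and disease, and the (global)
# year in which an effective intervention became available for each disease.
m40 = pd.read_csv("mortality_1940.csv")   # columns: country, disease, rate_1940
intervention_year = {"tuberculosis": 1946, "malaria": 1947}  # illustrative dates only

rows = []
for _, r in m40.iterrows():
    for t in range(1940, 1981):
        i_dt = int(t >= intervention_year.get(r["disease"], 9999))
        # (1 - I_dt) * M_di40 + I_dt * M_dFt, with frontier mortality set to zero
        predicted = (1 - i_dt) * r["rate_1940"]
        rows.append({"country": r["country"], "year": t, "predicted": predicted})

# Sum across diseases to obtain predicted mortality M_it for each country-year
m_it = (pd.DataFrame(rows)
          .groupby(["country", "year"], as_index=False)["predicted"].sum())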

They show that there is a strong first-stage relationship between predicted mortality and life expectancy, and this holds even when the richer countries are excluded from the sample.

Results Long differences: the 1940-80 estimate of the effect of life expectancy on population is 1.6, which suggests that a 1% increase in life expectancy causes a 1.6% increase in population. The coefficient on births is 2.35 and the coefficient on the % of the population under 20 is 0.94. These are robust to using longer time windows (1940-2000). When the GDP measures are the dependent variable, a 1% increase in life expectancy is associated with a 0.85% increase in GDP, but this is insufficient to compensate for the population effects, so GDP per capita and GDP per working-age population are both significantly negative. These results are only tentative, as there are endogeneity problems, i.e. richer countries may invest more in health.

The first stage yields a coefficient implying that the fall in predicted mortality improved life expectancy by 21%, significant at the 1% level. This remains the case when shorter time windows are used. They run 2SLS IV estimates with different dependent variables:

  • Population – countries with a larger decline in predicted mortality experienced larger increases in population. The coefficient on the baseline sample is 1.65 and is significant at the 1% level.
  • Births – coefficients vary between 2.15 and 2.9 which are quite high
  • % Population under 20 – the coefficient is large and significant, but using the longer time window becomes insignificant indicating that the new drugs saved the lives of young people initially, but this effect was averaged out over time.
  • GDP – the coefficient on life expectancy is now only 0.32 and is not significant, although given the size of the standard errors, economically significant effects are impossible to rule out. This effect is smaller when only looking at the subsample of poorer countries.
  • GDP per capita – any increase in GDP was more than offset by the population growth as the coefficient is now negative and significant, and this seems to have affected working age population as much as any.

 

Robustness
  • They use some alternative instruments which I will not go into, results are similar.
  • In the IV estimates they control for institutions, initial GDP, continent dummies etc. and there are no significant changes to the estimates.
  • They do a falsification test by showing that lead predicted mortality has no effect on life expectancy, and they further check whether predicted mortality in 1940 is a good predictor of life expectancy over 1900-1940, which it is not, indicating that pre-existing trends are not driving the validity of the instrument.
Problems
  • The analysis only takes into account mortality. If the epidemiological transition has also greatly reduced morbidity and this has had an effect on individual productivity, then there may have been economic effects not captured by life expectancy measures.
  • If the disease environment in 1940 was in some way a reflection of, or caused by, the growth trajectory of a country, and that trajectory is persistent over time, then using the 1940-80 window may cause endogeneity issues, as outcomes in 1940 and 1980 will both be correlated with this unobserved trajectory, which could bias results.
  • The results may not be applicable to the world today, as the transition was a very singular event and the disease environment today is very different, particularly taking into account HIV, which tends to affect adults in the prime of their lives, rather than the diseases analysed here, whose burden fell predominantly on young people outside the workforce.
Implications Increasing life expectancy increases the population, and birth rates did not decline sufficiently for there to be any meaningful compensation in terms of increased productivity. Overall this has led to a decrease in income per capita. This would seem to suggest that for economic (as opposed to simply welfare) benefits to be felt from improved health outcomes, there need to be concurrent efforts to improve productivity, such that capital can be accumulated faster than population growth diminishes capital per worker. Naturally, none of this means that promoting health outcomes does not have its own intrinsic value.

 

 

THE ECONOMIC BURDEN OF MALARIA

THE ECONOMIC BURDEN OF MALARIA

J.L. Gallup & J.D. Sachs

The American Journal of Tropical Medicine and Hygiene, Vol. 64 (2001) pp. 85-96

A Summary

What effect does malaria have on GDP growth in malaria-endemic regions? They estimate that countries with severe malaria incidence have growth rates that are on average 1.3% lower per year.

It is quite clear that poor countries predominate in the same regions as malaria, and that economic growth in those regions is much lower than elsewhere. What is not clear is whether the observed correlation between malaria and low growth is causal or merely coincidental. Specifically, malaria could merely be proxying for other determinants of low growth such as poor-quality tropical soils, inaccessibility to world markets, different patterns of colonization etc. However, in regressions that include these types of geographic variables, malaria is still found to have a significant effect. This is also the case when Africa is excluded from the sample, which shows that it is not merely Africa that is driving the results.

What is not clear, however, is whether malaria is a cause of poor growth or a product of poor growth. If, for example, malaria eradication/prevention becomes possible only when countries reach a certain level of development, then the true effect runs from growth to malaria, not the other way around. The authors claim that this interpretation does not tally with the facts. They claim that malaria eradication is determined by ecology and climate, and not by personal behaviour, general development, or the extent of urbanization (these may be important, but they are of second-order importance). This can be seen in the fact that the regions with the worst malaria in 1965 saw the least reduction in malaria over the next three decades. They claim that in endemic areas of Africa, with up to 300 infectious bites per night, control simply is not possible, so it cannot be argued that control is the effect of growth.

Whilst seductive, this line of reasoning is not really sufficient to prove the case. As they acknowledge, there have been successful eradications of malaria. In Greece, where up to a quarter of the population was infected in the malaria season, it was successfully eradicated, as it was in Italy and other parts of Southern Europe. Whilst the virulence and prevalence of the disease in those areas may indeed have been less severe than in Africa, it is not at all clear that eradication there was not made possible by comparatively higher levels of development. The largely anecdotal evidence presented in the paper is not really sufficient to substantiate the claim that causality runs from malaria to growth and not vice-versa.

Additionally, they show growth patterns for Taiwan, which successfully eradicated malaria, and compare its pre- and post-eradication growth rates with those of the rest of East Asia; the simple difference-in-differences is only +0.9%, which is hardly compelling, and it is not estimated using regression techniques so no statistical significance can be evaluated. Additionally, Mauritius did not see growth after eradicating malaria, although this may have been due to the closed nature of its economy. This indicates either that the effect does not run from malaria to growth, or that malaria interacts with other features of the economy such that simple eradication does not guarantee a growth episode.

They run a cross-country growth regression using a new malaria index, which is the fraction of the population living in areas of high malaria risk in 1965 multiplied by the fraction of malaria cases in 1990 due to the most severe form of the disease. On the right-hand side are initial income levels, human capital, institutional, geographic and economic variables. The results indicate that both initial levels of the index and subsequent changes are significantly associated with GDP growth. One might argue that the malaria index only captures the worst type of malaria, and there seems to be little sense in leaving out other types, as restricting to the worst variety essentially confines the analysis to Africa and Haiti. Indeed, when Africa is excluded the results persist, but this could be driven exclusively by Haiti, which is extremely poor and burdened with very high malaria levels. Nevertheless, the results show that a 10% reduction in the malaria index would on average be associated with a 0.3% increase in growth.

In order to combat remaining problems of endogeneity, they instrument for the malaria index using the prevalence of mosquito vectors in each country in 1952. No first stage is reported, so it is not possible to evaluate its strength. Additionally, it is not clear that the instrument and the index are substantially different; they both seem basically to measure the same thing. However, the IV results do not substantially change the OLS estimates.

When they include other tropical diseases in the regressions they do not find any significant effects. Whilst this indicates that malaria is not proxying for other diseases, it is not theoretically clear why malaria should have significant effects but yellow fever etc. should not. This leads me to question the results.

There is little agreement as to what channels malaria works through to lead to lower GDP. Suggested in the paper are:

  • Lower productivity due to morbidity (although potentially mitigated by partial immunity)
  • Lower levels of cognitive development (see for example the Spanish flu paper in EC454)
  • Malaria keeps away tourists and investors.
  • Malaria limits internal movement of people and hence goods/services.

If the results are taken at face value, then a huge amount of emphasis should be put on prevention and eradication. This is in fact what we see in the development community. LLINs are being widely distributed and provide a very cost effective way of reducing the incidence of malaria, particularly since only 50% of a community need to sleep under a net in order for spillovers to be created (the mosquito dies when it lands on the net so is unable to bite anyone else). Naturally these interventions are not made solely with GDP in mind as there are welfare benefits from not being sick that are potentially more of concern than long run GDP growth.

 

THE ECONOMIC COSTS OF CONFLICT

THE ECONOMIC COSTS OF CONFLICT: A CASE STUDY OF THE BASQUE COUNTRY

A. Abadie & J. Gardeazabal

The American Economic Review, Vol. 93, No. 1 (2003) pp. 113-32

Principal Research Question and Key Result Did the conflict in the Basque country affect the economy? The results suggest a 10% loss of GDP due to the terrorism.
Theory Terrorism could affect GDP in various ways. The most important is likely to be through investment. If the return on investment becomes uncertain, because the return may be extorted or the entrepreneur killed, this acts as a random yet significant tax on investment. Under such circumstances investment is depressed, which affects output and hence GDP. Foreign investment in the affected region could also be reduced if conducting business there is thought to be risky; the mechanism is exactly the same, but it operates on international rather than domestic actors.

 

Motivation Political instability is often said to have strong effects on economic prosperity. However, studies to date have largely been cross-country studies, which suffer from comparability issues (as conflicts are rarely similar). This study seeks to explain how the richest region in Spain subsequently dropped to sixth position in terms of GDP per capita. As it focuses on only one conflict, the heterogeneity issues outlined above are circumvented to a certain extent (although, as ever, at the expense of external validity).

 

Data They have panel data for 1968-1997 which include variables on killings and kidnappings, as well as GDP and other variables thought to determine GDP, such as the investment ratio and human capital measures.

 

Strategy They exploit the fact that ETA was created in 1959 but did not implement large-scale terror operations until the mid-70s. Additionally, in 1998 a ceasefire was declared and subsequently called off, and this provides a testing ground for looking at how economic outcomes varied both during the scale-up of violence (largely killings and kidnappings) and during the ceasefire.

They essentially do a DiD; however, they cannot simply compare the Basque country to another region, as there was no comparable region: the Basque country was the richest, most industrialized etc. So they construct a synthetic control group. They do so by identifying a list of variables thought to be drivers of economic growth (agriculture share etc., Table III) and assigning weights to the other candidate control regions such that, when aggregated, the weighted averages of these variables resemble the observed values for the Basque country, subject to the constraint that the variable that should best be reproduced is Basque GDP per capita in the 1960s. When this is done, they end up with a synthetic control group comprised of roughly 85% Catalonia and 15% Madrid.
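The weighting step can be illustrated with a small numerical sketch: choose non-negative weights summing to one so that the weighted donor regions match the Basque country's pre-period characteristics as closely as possible. The numbers below are made up, and the sketch omits the paper's additional step of weighting the predictors themselves so as to best reproduce pre-period Basque GDP.

import numpy as np
from scipy.optimize import minimize

X_basque = np.array([0.21, 0.35, 0.62])     # illustrative predictor values
X_donors = np.array([[0.18, 0.25, 0.30],    # one row per predictor,
                     [0.30, 0.40, 0.33],    # one column per donor region
                     [0.58, 0.70, 0.50]])
n_donors = X_donors.shape[1]

def loss(w):
    # distance between Basque predictors and the weighted donor combination
    return np.sum((X_basque - X_donors @ w) ** 2)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)  # weights sum to one
bounds = [(0.0, 1.0)] * n_donors                          # and are non-negative
w0 = np.full(n_donors, 1.0 / n_donors)
res = minimize(loss, w0, method="SLSQP", bounds=bounds, constraints=cons)
print(res.x)  # estimated weights (in the paper, roughly 85% Catalonia, 15% Madrid)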

During the ceasefire they look at the cumulative abnormal returns of stocks listed as Basque stocks relative to other stocks on the Spanish market. Asset prices should reflect all available information, and if instability matters then Basque stocks should have performed better when the ceasefire was announced and became credible, and worse as the ceasefire broke down. The stocks were classified as Basque or non-Basque by market professionals.
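A much-simplified sketch of this kind of event-study comparison is below: regress the daily return gap between a Basque and a non-Basque portfolio on good-news and bad-news event dummies. The variable names are hypothetical, and the paper's actual specification (cumulative abnormal returns around classified events) is richer than this.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily data: portfolio returns plus dummies marking ceasefire-
# related good-news and bad-news days.
returns = pd.read_csv("portfolio_returns.csv")
returns["gap"] = returns["basque_ret"] - returns["other_ret"]

es = smf.ols("gap ~ good_news + bad_news", data=returns).fit(cov_type="HC1")
print(es.params[["good_news", "bad_news"]])  # expect positive on good news, negative on bad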

 

Results They plot the GDP path of the synthetic control and of the Basque country; the two follow each other closely until the mid-70s, when the Basque country falls behind. This suggests a loss of 10% of GDP due to the terror. The gap between the two GDP series appears to spike at the same time as deaths from terrorism in the Basque country.

The results of the ceasefire study are that the coefficients are significant and positive for Basque stocks on good-news days and negative on bad-news days.

Robustness They do a placebo study, comparing Catalonia with a synthetic Catalonia (constructed as above but excluding the Basque country from the donor pool), and find no significant gap in GDP, although the real Catalonia did outperform the synthetic one by about 4% around the time of the Barcelona Olympics.

 

Problems
  • The synthetic control is made up almost exclusively of Catalonia, so it is not very balanced or impervious to idiosyncratic shocks in that region. Additionally, it is not clear that selecting weights so that GDP is matched is the best strategy, as similar GDP levels in the 60s do not guarantee that what is salient for future growth has been captured.
  • They do not appear to estimate the DiD using regression techniques, so we have no idea what the standard errors are, or what other factors were significant in determining outcomes in the Basque country. This does not allow us to verify how important, for example, industrial decline was in explaining Basque GDP. Without such results it cannot be said conclusively that the higher pre-terror industrial share in the Basque country was not driving lower GDP in the face of industrial decline post-terror.
  • The authors state that Catalonia and the Basque country were both highly industrialized regions. If one of the effects of terror was to incentivize entrepreneurs or businesses to move away from the Basque country due to instability, the chances are they would relocate to somewhere that was similar to the Basque country, which surely would be Catalonia. As the synthetic control is made up predominantly of Catalonia, any significant movement of human capital from the Basque to Catalonia could have affected GDP outcomes in Catalonia, and hence this would tend to overstate the results.
  • It is not clear that they have isolated anything to do with property rights as such.

 

Implications Conflict can harm the economy. This is not a new idea. Not sure what the policy implications are, other than avoid civil conflict if possible.

 

 

RADIO’S IMPACT ON PUBLIC SPENDING

RADIO’S IMPACT ON PUBLIC SPENDING

D. Stromberg

The Quarterly Journal of Economics Vol. 119, No. 1 (2004) pp. 189-221

Principal Research Question and Key Result Does penetration of mass media such as radio create better-informed voters who consequently receive more favourable policies? In the context of early radio expansion, this paper finds that a 1% increase in the share of households with a radio increases relief spending in that area by 0.54%.
Theory Mass media creates a distribution of informed and uninformed citizens. Informed citizens may be able to achieve better policy outcomes. For this to occur they must vote, and they must know whether their representative has done something for them; information from the media helps with this. Thus it is more costly for politicians to neglect voters with access to political information via the media. This indicates that government spending should be higher on groups that have access to the media and on groups where more people vote, and that voter turnout should be higher where people have access to media. The model indicates that if:

x_i·u_c(z_c) − β_i − η > u_i

then the incumbent will be re-elected. x is 1 if the population knows that something has been done for them; u is the utility received from the amount of spending z; beta is the ideological preference for the challenger; and eta is the general popularity of the other candidate.

The governor knows that the voter will vote with some probability t, and that the voter learns of his responsibility for the relief programme with some probability α, which is an increasing function of r (radio coverage).

This generates the following propositions:

  1. If the voters cannot know if money is spent in their county or not (x = 0) then the politician has no incentive to spend there, as he will get no political credit for doing so.
  2. If Beta is distributed such that the ideological preference for the challenger is such that the incumbent cannot win, then he will not spend in that county as he will not be reelected in any case.
  3. He will allocate more funds where there are more gains to be had on the margin i.e. where turnout is higher, and there are more radios, there are more swing voters and the need for relief spending is high (where Uc is high).

 

Motivation  
Data 2500 US counties in panel from 1920-1940. This was in the middle of radio expansion, and also during the FERA programme which distributed funds to those whose income was inadequate to meet their needs. It was locally administered and local officials decided who would and would not receive the assistance. Governors were the main arbiters.
Strategy

ln(z_c) = α·ln(r_c) + β·ln(t_c) + δ′x_c + ε_c

State-specific fixed effects are also included and standard errors are clustered within state. The main hypothesis is that α > 0.
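A minimal sketch of this baseline regression follows; the data file and column names are my own assumptions rather than the author's code, with state fixed effects entered as dummies and standard errors clustered by state.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical county-level data: relief spending, share of households with a
# radio, turnout, unemployment and other controls, plus a state identifier.
df = pd.read_csv("fera_counties.csv")

res = smf.ols(
    "np.log(spend) ~ np.log(radio_share) + np.log(turnout) + unemployment + C(state)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

print(res.params["np.log(radio_share)"])  # alpha: elasticity of spending w.r.t. radio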

 

Results Factors indicating low socio-economic status are positively correlated with spending, indicating that income assistance was directed to places where its utility was likely to be highest (i.e. where it was needed most). The elasticity of spending with respect to radio ownership implies that increasing radio coverage by 1% would raise spending by 0.54%, and increasing turnout by 1% would increase spending by 0.57%. The most important explanatory variable is unemployment, which indicates that this was not just pork-barrel politics: spending was directed where it was needed.

If radio use increases turnout, and turnout increases spending, then this is another mechanism through which radio works. A fixed-effects panel regression is estimated with turnout as the dependent variable and a host of controls. The coefficient on radio coverage is 0.117 and significant at the 5% level. Thus increasing radio coverage by 10% would increase turnout by about 1.2%. Since each 1% increase in turnout increases spending by 0.57%, the effect of radio on spending through turnout is roughly 0.117 × 0.57 ≈ 0.07%.

 

Robustness It is recognized that there could be bias in the estimates. Specifically, if richer counties (not otherwise captured by controls) have lots of radios but no need of assistance, the results would be downward biased; but if people who seek out radio ownership are also better at getting their preferred policies, this would create upward bias. In recognition of this, he implements an IV strategy that uses geological features, ground conductivity and woodland cover, as instruments for radio ownership (as these variables both affect the quality of the received signal). The F-stats in the first stage are all strong. Exogeneity might be questioned, as geological features, especially wood cover, could be correlated with poverty or exclusion and hence with relief spending, which would downward bias the IV estimates. However, despite these concerns, the IV results are actually larger than the OLS results. The author therefore takes the OLS results as his baseline; the IV just indicates the direction of the bias (i.e. people who seek out radios are better at getting what they want), so the main results of the paper are conservative, which lends credence to the story. Property values, employment statistics, income, wages, bank deposits etc. are all controlled for, as well as the share of votes in the last election, voter density etc.

If the model is correct there should be more spending where elections were more closely fought. This is tested by excluding noncompetitive states, and the coefficients are nearly twice as large.

The effects should be larger in rural areas, as urban dwellers had better access to other types of media. When the specification is tested on a rural subsample the coefficient increases by nearly 50%.

If radio use is simply proxying for some other variable relating to the use of consumer durables, then we should see similar results for other durables, e.g. car ownership. Indeed, gasoline sales are correlated with wages, employment etc. (just as radio is), but gasoline sales per capita are not related to spending in the regressions.

 

Problems
  • This is a cross-section, with data pooled cumulatively. Panel data would have been ideal, as we could see how outcomes changed with increased radio penetration. In particular, if funds are limited, then as radio coverage becomes near universal the limited pot of FERA funds may not be large enough to be used as political capital in all counties covered by radio; this would be akin to a general equilibrium effect, and panel data would have allowed it to be examined.
  • Sadly, no interaction terms are used. For example, an interaction between unemployment and radio coverage could have given an estimate of the differential effect that radio coverage has at a given level of unemployment, or need. This would have been interesting to see, as the levels estimates are not as readily intelligible.

 

Implications Mass media can carry politically relevant information to voters, who can then use it to update their voting decisions. This can make politicians more accountable, as people are more likely to vote. Simply extending the franchise to the poor is not enough, as this paper makes clear. What matters is how informed people are: if certain sections are not informed about the spending policies of the government, then such spending may be cut without fear of losing votes and redirected to areas with less objective need for it.

As the inclusion of welfare indicators made the estimates stronger it seems clear that spending was not just directed at those who were rich enough to own radios.

The bottom line is that radio improved the relative ability of rural America to attract government transfers.

 

 

THE POLITICAL ECONOMY OF GOVERNMENT RESPONSIVENESS

THE POLITICAL ECONOMY OF GOVERNMENT RESPONSIVENESS: THEORY AND EVIDENCE FROM INDIA

T. Besley & R. Burgess

Quarterly Journal of Economics (2002)

Principal Research Question and Key Result Does access to mass media, in particular to newspapers, increase the responsiveness of governments to the needs of the people? In other words does mass media mitigate political agency problems by providing information to voters? In the context of India, the authors find that newspaper circulation does indeed increase the amount of government responsiveness. A 1% increase in newspaper circulation is associated with a 2.4% increase in food distribution and a 5.5% increase in calamity spending.

 

Theory The general idea is that media enables vulnerable population to assess the actions of incumbents in order to inform their voting decisions.

 

Voters are of two types: 1) vulnerable, meaning vulnerable to some shock (weather etc.), and 2) non-vulnerable. Within 1) there are a) the needy, those for whom a shock actually materializes in the given period, and b) the non-needy, vulnerable voters who are not actually affected by a shock.

 

Incumbents are of three types: 1) selfish, who will never help the vulnerable; 2) altruistic, who will always help; and 3) opportunistic, who will help if it increases their chances of re-election. In order to help, the incumbent has to exert effort, which is a cost to him.

The needy always observe how much effort has been applied, but the rest of the vulnerable population learns from the media. Effort is more likely to be learned about when the effort is greater, and the marginal impact of effort will be greater when there is more media.

 

Those who are needy in the first period, and those who are vulnerable, realize they may be affected by a shock in the next period. Thus, when they elect the official in the election between periods, they want to maximize the chances of getting a politician who will help them. (Formally, as there are only two periods in this set-up, the opportunistic politician will not help in period 2, as he has no further re-election concerns, so voters want an altruistic politician; however, they cannot observe type directly.) Thus they will always vote for an incumbent who helped them in period 1, as he is definitely not selfish and may turn out to be altruistic. By backward induction, this means that effort by an opportunistic incumbent is higher when:

 

  1. Voters have more media access
  2. There is higher turnout
  3. There is a larger vulnerable population
  4. The incumbent has a low advantage

 

Non-vulnerable citizens are thought to vote along ideological lines.

 

This can all be summarized thus: greater media activity raises the marginal value of effort because it is more likely that reports of the effort will find their way to voters. Higher turnout increases the effectiveness of effort by turning it into support at the ballot box, and the same is true when the vulnerable population is larger. Effort is also greater when there is more political competition.

Motivation In the absence of well-functioning markets, the vulnerable sections of society are often reliant upon government action for protection. Of concern then is what institutions can be developed to ensure that the government does so protect its people. This question is particularly important given that poor people are less likely to be informed about politics, and also less likely to vote, so without good institutional design they could be totally excluded from benefitting from government, and also changing government.

 

Data Data are from Indian states that were responsible for administering public distribution of food and calamity relief. When the local governments were given this power there was also a huge increase in the number of newspapers that were being published, including a rise in local language publications. The press was relatively free and independent.

A panel from 1958-1992 is constructed that details public food distribution and calamity relief expenditure by state. The need for intervention is proxied by food grain productions and flood damage to crops variables. Newspaper circulation proxies for media penetration.

 

Strategy Fixed effects model.

 

git = αi + βt + γsit + δ(sit)(zit) + θ(zit) + εit

 

Where g is the outcome in state i in time t. Alpha is a state fixed effect and beta is a time fixed effect. S captures the need for state intervention, whose effect is captured by γ – effectively the “activist” component of government action, i.e. how much the government is likely to respond to a crisis. Z is a host of political variables that may affect government responsiveness, including the media penetration variables, whose direct effect is captured by θ. The real coefficient of interest, however, is δ, as this captures the true “responsiveness” of government, in other words the differential response of governments to crisis in the presence of media (etc.). This will pick up whether responses are greater given more media, turnout, competition etc.
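As a rough illustration only, the two-way fixed effects specification above could be estimated along the following lines. This is a sketch under assumed data: the DataFrame df and the column names (state, year, relief, need, circulation) are hypothetical placeholders, not the authors' variables.

```python
# Sketch of g_it = alpha_i + beta_t + gamma*s_it + delta*(s_it*z_it) + theta*z_it + e_it
# Assumes a hypothetical pandas DataFrame `df` with columns: state, year,
# relief (g), need (s, e.g. food grain production) and circulation (z).
from linearmodels.panel import PanelOLS

df = df.set_index(["state", "year"])                # entity = state, time = year
df["need_x_media"] = df["need"] * df["circulation"]

mod = PanelOLS.from_formula(
    "relief ~ need + circulation + need_x_media + EntityEffects + TimeEffects",
    data=df,
)
res = mod.fit(cov_type="clustered", cluster_entity=True)  # cluster SEs by state
print(res.params["need_x_media"])                   # delta: the responsiveness term
```

The coefficient on the interaction term (delta) is the object of interest, exactly as in the discussion above.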

 

Results
  • The effects of newspaper circulation are large and significant. A 1% increase in newspaper circulation is associated with a 2.4% increase in food distribution and a 5.5% increase in calamity spending.
  • Turnout in the last election, a measure of political competition, and dummies that indicate when elections are near at hand are also included. Turnout does not seem to affect responsiveness. Competition is only significantly associated with food distribution, not calamity relief; the same goes for being in an election year.
  • The coefficient on the interaction term food production * newspaper circulation is negative, indicating that for a given level of newspaper penetration, a fall in food production elicits a greater response in terms of food distribution. Similarly the interaction flood damage * media penetration is positive, indicating that for a given level of newspaper circulation, more flood damage increases the amount of calamity relief offered.

 

Robustness
  • They include a number of economic variables such as population density, income per capita etc. (as wealth etc. may increase media presence and relief spending), but none of the variables enter significantly. Thus it appears that economic factors have limited influence on government responsiveness.
  • They predict values of food grain production by regressing the food grain production variable on state/year effects and the drought/flood variables, and use the predicted value (essentially the component of grain production driven by the weather shock) in the main specification. The results show that there is no relationship between the shock value of grain production and the outcomes, but there is a relationship for the shock value * media penetration interaction, which supports the interaction interpretation offered above (a sketch of this two-step check appears after this list).
  • They split out the papers by language and find that local language papers are much more important than English papers etc. (presumably as they are more likely to report local news, and the vulnerable population is more likely to read in their local language).
  • There could be some omitted variable problem that is not accounted for, so they instrument for media penetration using ownership, on the basis that private ownership is more likely to be associated with wider circulation, as state-owned media is more biased and thus there is less demand for its product.
  • They interact the other political variables with the proxies for need, and find that greater turnout is associated with greater responsiveness, as is political competition, although the effects for food distribution continue to be larger than for calamity relief.
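A sketch of the two-step "predicted production" check referenced above, again with hypothetical variable names and an assumed flat DataFrame `states` (state and year as ordinary columns). Note the second step ignores the generated-regressor correction to the standard errors.

```python
# Step 1: project food grain production on state/year effects and the weather
# shock; the fitted values are the weather-driven component of production.
import statsmodels.formula.api as smf

first = smf.ols("food_production ~ crop_damage + C(state) + C(year)",
                data=states).fit()
states["pred_production"] = first.fittedvalues
states["pred_x_media"] = states["pred_production"] * states["circulation"]

# Step 2: re-run the main specification with the predicted value and its
# interaction with newspaper circulation in place of actual production.
second = smf.ols(
    "relief ~ pred_production + pred_x_media + circulation + C(state) + C(year)",
    data=states,
).fit()
print(second.params["pred_x_media"])
```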

 

Problems
  • The results may confirm the main hypothesis of the model (that increased media increases government responsiveness). However, beyond this, the results are quite mixed. In particular the other hypotheses of the model are not borne out for both food distribution and calamity relief. The authors claim that this is because food distribution is a more visible form of relief (and therefore easier to cash in on politically), but we might wonder whether this is sufficient.
  • It is not clear that newspapers should be the most important form of information dissemination. For example, if literacy is an issue in Indian states, then newspaper circulation may be informing a very specific subset of the population, and this may not be the vulnerable population. As the non-vulnerable population are said to vote on ideological grounds, then they cannot affect government responsiveness to crisis, and thus newspapers cannot be the driving force behind the observed responsiveness. Some measure of TV/radio penetration could have been included to see if/how the different forms of media substitute for each other. If TV/Radio are more likely to be in areas with high newspaper circulation (due to a high demand for information), then the newspaper variable could be picking up the effects of these other forms of media. The amount of these other media will be varying over time and by state so the fixed effects model cannot completely control for them.
  • The IV strategy is not great. The instruments are pretty weak (F = c. 5.5) and exogeneity is not well argued for i.e. greater private ownership of media sector could be associated with all sorts of political variables that might also affect relief spending. However, the estimates returned are much larger than the OLS estimates, which is a comfort, as the OLS estimates can then be thought of as lower bounds (perhaps due to attenuation bias from measurement error).

 

Implications Whilst democracy may be important for development, it is clear from this paper that simply amending the rules of the game will do little to change outcomes without a concurrent change in complementary institutions. This paper shows that mass media and open political institutions can affect government activism and responsiveness. This confirms Amartya Sen's claim that there have been no famines in India since the advent of democracy partly because newspapers make the facts known, forcing governments to face the issues. The results indicate that civil society is thus a key component of a functioning democracy.

 

 

Income, Schooling and Ability

A Summary for the EC406 class so it's not in my usual format… 

Topic: Returns to Education

Income, Schooling and Ability: Evidence from a New Sample of Identical Twins

O. Ashenfelter & C. Rouse, QJE Vol. 113 No. 1 (1998)

       Key Findings An extra year of schooling increases earnings by 9%  

Relevant Theory

One of the key challenges for estimating returns to education is that ability levels differ by individual, and as ability is unobserved and can only be proxied for on a rough basis this biases OLS results. IV techniques have been used with a varying degree of success, but one other line of attack is to look at twins. Essentially the argument runs that identical twins are identical in terms of their ability due to their shared genetic makeup. Therefore the schooling investments made by genetic twins should be the same apart from random deviations that are not related to the determinants of schooling choices (ability etc.)

 

 
Empirical Strategy Generalised Least Squares and Fixed Effects Model.  

Specification

Y1j = βS1j + θ[(S1j + S2j)/ 2] + λXj + Vj + ε1j           (1)

And

Y2j = βS2j + θ[(S1j + S2j )/ 2] + λXj + Vj + ε2j           (2)

 

Where Y is log wage for twin 1 or 2 of family j. S is schooling indexed similarly. X is a vector of covariates that determine wages across families but not within families (race, age etc.). The average education of the family, ((S1j + S2j)/2), is added as this controls for what they term family ability (i.e. what you inherit thanks to the conditions under which the twins grew up). Vj is then the innate ability of the individual, which is the same across both equations as the twins are assumed to be identical.

 

This specification can then be estimated using GLS, and this will allow for an estimation of the family ability, but not the innate ability which remains in the error term. However, as innate ability is equal across specifications subtracting (1) from (2) will difference away the innate ability and a fixed effects model estimates:

 

Y2j – Y1j = β(S2j – S1j) + (ε2j – ε1j)                          (3)

 

The advantage of (3) is that innate ability is differenced away, at the cost of no longer being able to estimate the family ability effect. Additionally, (3) relies on there being no difference in ability within a twin pair: any within-pair deviation in ability will create biased estimates of β.

 

There are well documented problems with measurement error in self-reported years of education, which are exacerbated in first-differenced equations (see lecture notes for more on this, including proofs). Thus in the paper they implement an IV strategy to deal with the reporting errors. That is, they ask not only for self-reported years of education, but also ask each twin how many years of education were attained by their sibling. They then use the report of one twin to instrument for the self-reported education of the other. This eliminates the person-specific element of the measurement error.
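A minimal sketch of the within-pair estimator with the cross-report instrument (my illustration, not the authors' code; the DataFrame `twins` and all of its column names are assumed).

```python
# Assumes one row per twin pair with log wages (lwage1, lwage2), self-reported
# schooling (educ1_self, educ2_self), and each twin's report of the sibling's
# schooling (educ2_rep_by1 = twin 2's schooling as reported by twin 1, and
# educ1_rep_by2 = twin 1's schooling as reported by twin 2). Names are hypothetical.
from linearmodels.iv import IV2SLS

twins["d_lwage"] = twins["lwage2"] - twins["lwage1"]
twins["d_educ_self"] = twins["educ2_self"] - twins["educ1_self"]      # noisy regressor
twins["d_educ_sib"] = twins["educ2_rep_by1"] - twins["educ1_rep_by2"]  # instrument

# Equation (3): instrument the self-reported schooling difference with the
# cross-reported difference to purge person-specific measurement error.
fd_iv = IV2SLS.from_formula(
    "d_lwage ~ 1 + [d_educ_self ~ d_educ_sib]", data=twins
).fit(cov_type="robust")
print(fd_iv.params["d_educ_self"])   # within-pair return to schooling (beta)
```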

 

 

Data

 

Data are drawn from 700 interviews collected at a twins fair in Ohio, where the emphasis is on similarity, so most attendees are identical twins, often dressed the same etc. The data are in panel format from 1989-1993, and if respondents were interviewed more than once, their answers were averaged.  

Main Table

 

So column (1) reports the GLS estimate of around 0.1 log points increase in wages for each year of education. Column (2) reports the fit of the equations (I have no idea what that means), and column (3) reports three stage least squares, which in a nutshell uses the IV strategy to cope with measurement error before moving on to the GLS.

Column (4) reports the first difference estimator, where the coefficient falls to around 0.07, suggesting that innate ability is important for determining both years of education and subsequent earnings. The coefficient is restored to around 0.09 in column (5) when the IV strategy is implemented to cope with the measurement error. Columns (6) – (10) repeat the same process but controlling for a host of family level covariates.

 

 

Critique

It may well be thought that twins do in fact differ in innate ability. Whilst they attempt to control for family ability by using the average level of education attained by the twins, this is an incredibly rough approximation, and is probably not fine enough to capture how twins may experience different learning environments. They do also control for who was born first, which is good as it is often said that the first born is heavier and this determines life chances.

From the summary statistics presented at the front of the paper it is clear that the twins interviewed differed positively from the population at large in terms of earnings, employment, and education. Thus the sample is not hugely representative, and it may be hard to generalize the results to the population at large.

 

 

Policy Implications

Increasing access to education can improve wage opportunities based presumably on productivity increases for the better educated.

 

 

When I Grow Up, I Want To Be:

The type of person who knows what the hell GLS is.  

THE NATURE AND EXTENT OF DISCRIMINATION IN THE MARKETPLACE

THE NATURE AND EXTENT OF DISCRIMINATION IN THE MARKETPLACE: EVIDENCE FROM THE FIELD

J. A. LIST

The Quarterly Journal of Economics Vol. 119, No. 1 (2004) pp. 49-89

Principal Research Question and Key Result Do minorities experience discrimination in a particular bilateral bargaining market (the market for baseball cards at baseball card fairs)? If so, is this discrimination prejudice-based, based on different bargaining ability, or statistical? The results of these various experiments indicate that minorities do experience discrimination in the market place, but that this discrimination is knowing statistical discrimination rather than prejudicial/taste-based discrimination.

 

Theory Discrimination can come in a variety of different forms. Taste-based discrimination is discrimination based upon prejudice, and is considered to be morally wrong – i.e. treating two otherwise identical people differently simply because you have a distaste for the race of one of them. Discrimination could also be statistical, which means that people use observable characteristics of an individual in order to make inferences about unobserved characteristics that determine different levels of treatment. For example, if women generally have an unobserved tendency to drive harder bargains when negotiating a sale, then potential buyers may be more willing than they are with men to strike a deal that erodes their own surplus. Different sale prices for men and women would then not be evidence that men were being discriminated against on the basis of their sex, but rather that sex was being used as a proxy for some other characteristic that determined the outcome (men being rather weak negotiators).

 

Motivation In general, regression analysis of existing data rests on the unstated assumption that the unobserved characteristics of majority and minority individuals are the same, whereas they may not be. This means that such studies cannot differentiate between taste-based and statistically based discrimination. In general it is argued that only differences in treatment based upon taste should be considered true discrimination.

 

Data The data are drawn from a series of experiments that were performed at three baseball card shows in the same geographic area.

Experiment I and II

Participants were males, females and minorities aged 20-30, and men over 60. They entered the market to purchase a specific card for as little as possible from a randomly selected dealer who did not know he was part of an experiment. Participants did not know it was a study on discrimination, and they were also guaranteed a $10 fee irrespective of how much below the reservation price they managed to secure the card. This happened twice, once with a reasonable reservation value, and once with one too low to guarantee them participation in the market.  As there were 120 participants, this yielded 240 observations. This is experiment (B) for Buying. A similar treatment was to do with selling (S) whereby participants were asked to sell a card. 60 such participants approached 5 dealers each yielding 300 observations. No individuals participated in more than one treatment. In order to check if they had had prior dealings the subjects were asked, and only 3-5 had. The bargaining process was timed.

Data about the dealers that were approached by the participants was also collected.

Experiment III

This is a dictator game, where the dealers in question were given $5 and told to divide it anonymously between themselves and one of the participants (who was not present). The only information the dealer received was which group the individual he was sharing with belonged to. The design was such that no one could know which dealer made what offer, in order to allow them to behave as naturally as possible. This can be used to see whether the dealers that participated exhibit taste-based discrimination or not.

Experiment IV

Here both participant and dealer know they are part of an experiment. The buyers are given a card with a reservation price on it, which they must try to get below. This is done 5 times with different dealers and different prices. In half of the interactions the dealer understands that the reservation price is randomly assigned, and in the other half this is not stated (so that he genuinely believes they are arguing for their true reservation price). The dealers all have targets too, so the amount of dealer surplus that is lost in the bargaining process can be quantified.

 

Strategy

Pij = βXij + αj + εij

Where P is the initial/final price offered by dealer j to individual i. X contains dummies for the group to which the participant belongs (the coefficients of interest) together with controls for years of experience in the market, education, transaction intensity (how many transactions they have made previously), an income dummy, a past experience dummy etc. Alpha is a dealer fixed effect and epsilon is the error term. 
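A rough sketch of how such a regression might be run (the DataFrame `cards` and every column name here are hypothetical placeholders, not List's actual variables).

```python
# Offer price on buyer-group dummies and controls, with dealer fixed effects
# (a dummy for each dealer absorbing alpha_j); SEs clustered by dealer.
import statsmodels.formula.api as smf

price_mod = smf.ols(
    "offer_price ~ female + nonwhite_male + older_male"
    " + years_experience + transaction_intensity + income_dummy + C(dealer_id)",
    data=cards,
).fit(cov_type="cluster", cov_kwds={"groups": cards["dealer_id"]})
print(price_mod.params[["female", "nonwhite_male", "older_male"]])
```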

Results Experiment I and II

(B): minorities received higher initial and final prices when buying (summary stats). The regression tends to confirm the summary stats. Differences are large and significant for women and men over 60, but are positive though insignificant for non-white males.

(S): the effects of discrimination were much more pronounced when selling, with all minorities experiencing significant discrimination that amounted to nearly a 30% reduction in initial offer prices. These differences are attenuated but not eliminated by the bargaining process. In both experiments years of experience is significant. He splits the results up by whether the participant is experienced in dealing or not and finds that whilst the effects remain for inexperienced participants (although again not significantly so for non-white men in experiment B), for participants with lots of dealing experience the effects are attenuated by the bargaining process such that final offer prices are pretty similar. However, minorities have to spend longer bargaining to get to that position. These results say nothing about the type of discrimination that is occurring.

He uses data on the dealers approached to regress a measure of discrimination based on different offers they make, on the observable characteristics, and finds that the level of discrimination rises with the amount of experience a dealer has.

Experiment III

The only statistically significant difference in the offer in the dictator game is that women are in general offered slightly more, which fits with other psychological studies where the men are the dictators, and possibly reflects some kind of chivalry impulse. This indicates that dealers do not exhibit noneconomic taste based discrimination.

Experiment IV

He runs an OLS of individual dealer lost surplus on dummies for the group to which the buyer belonged, and dummies indicating the experimental session in which the bargain occurred. The results indicate that majority buyers outperform minority buyers when dealers do not know that reservation values are determined randomly, but that minority and majority buyers perform similarly when the dealer knows the reservation price is randomly allocated. This indicates that discrimination only occurs when the dealer believes that he can get a better price for his commodity; when he knows that reservation prices are random, he is willing to lose an equal amount of his surplus to each group of buyers. This is consistent with the idea that dealers knowingly statistically discriminate.
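One way this comparison could be operationalised is with an interaction between minority status and an indicator for the sessions in which the dealer knows reservation prices are random (a sketch with hypothetical names, not the paper's exact specification).

```python
# Dealer surplus conceded to the buyer regressed on minority status, the
# "dealer knows prices are random" indicator, their interaction, and session
# dummies. A negative minority coefficient offset by a positive interaction is
# the pattern consistent with knowing statistical discrimination.
import statsmodels.formula.api as smf

surplus_mod = smf.ols(
    "lost_surplus ~ minority * knows_random + C(session)", data=exp4
).fit(cov_type="HC1")
print(surplus_mod.params[["minority", "minority:knows_random"]])
```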

This is tested further. If there is statistical discrimination, then dealers who know that certain groups have a larger variance of reservation prices will tend to make higher initial/final offers to those groups, in an attempt to secure the best deal for themselves when selling their goods. Information about willingness to pay for a certain card was collected from participants at the show by getting them to put in a bid for a card (being told that if theirs was the best bid they would receive the card, although they would have to pay for it). Similar information was collected on willingness to accept: potential sellers wrote down the least offer they would take for the card and were told that one person would randomly win the card, and if that person happened to have made the lowest offer, they would also receive cash to the value of the next lowest bid. When summarized, these data indicate that minority reservation price distributions are more widely dispersed than the majority reservation prices.
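The dispersion claim could be checked along these lines (a sketch: the DataFrame `wtp` with columns group and bid is assumed, and Levene's test is my choice of variance comparison, not necessarily the paper's).

```python
# Compare the spread of stated willingness-to-pay bids across groups.
from scipy.stats import levene

majority_bids = wtp.loc[wtp["group"] == "majority", "bid"]
minority_bids = wtp.loc[wtp["group"] != "majority", "bid"]
stat, pval = levene(majority_bids, minority_bids)
print(majority_bids.var(), minority_bids.var(), pval)  # wider minority dispersion?
```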

The various group-level variances were then taken into the field, and dealers were asked to match reservation price bands to the different groups, to test whether they accurately perceive the different distributions. This experiment indicated that dealers do recognize the different price distributions, and experienced dealers are better at doing so.

In sum results suggest that there is statistical discrimination. Dealers seem to know that minorities have different willingness to pay/accept prices, and they use their identifiable traits (race/sex) as proxies for the unobserved reservation price potential, and this explains the differential treatment of the groups.  

 

Robustness This whole article is robustness
Problems Sample size is pretty small, which may prevent him from finding stronger, more precisely estimated results. Also there could be some sample selection effects. For example, the finding that experienced minorities can achieve similar results to majorities if they bargain for longer may not be a real-world effect if participants only press for a negotiated deal because they know they are part of an experiment concerned with selling/buying baseball cards. In real life they might just walk away.

External validity is not very high. This is a quite specific market that does not bear a huge amount of similarity to markets in general. For example, very little experience is needed to deal in the market for groceries or electricity, whereas for baseball cards, knowing what the value of the product is is itself the product of a certain amount of experience. It is hard to generalize these results.

It is not clear why nonwhite males should not experience significant effects in the buying experiment.

 

Implications In seeking to address social policy to questions of discrimination it is important to understand what the sources of the discrimination are in order to best design policy. Given that the intricate nature of this experiment is the means of identifying the different channels through which discrimination can operate, it is hard to see how it could be replicated in other types of environment such as schooling or labour market outcomes. Perhaps the best that can be hoped for from those studies is an overall finding of discrimination, whether taste-based or statistical. Although taste-based discrimination is generally thought to be the morally reprehensible one, we might further question whether, just because women are prepared to accept a lower wage, an employer should be allowed to exploit this and pay them less than a male colleague, which is essentially what statistical discrimination is. If we decide that this is not acceptable then the regression type methods become more useful.

 

 

AUTOMOBILE EXTERNALITIES AND POLICIES

AUTOMOBILE EXTERNALITIES AND POLICIES

I.W.H. Parry, M. Walls & W. Harrington

Journal of Economic Literature Vol. XLV (2007) pp. 373-399 

In a Nutshell

This is a wide ranging and very detailed paper that describes a host of negative externalities that are imposed by drivers on the road, and the policy responses that seek to address them. The main conclusion is that for the most persistent externalities that cause gridlock in so many places, the best hope is offered by congestion charging via electronic road pricing. To improve highway safety, some mileage-based insurance policies should be considered. Whilst it is unlikely that these measures will internalize global-warming-type externalities, it is argued that to do so with transport policy would not be the most efficient approach. To reduce carbon emissions some form of carbon trading is preferable, or a tax on all oil products, not just on gasoline. These measures could be buttressed by R&D into new technologies to aid in the shift away from fossil fuels.

 

A Typology of Externalities

Local Air Pollution: gasoline vehicles emit harmful elements into the atmosphere which can be damaging to the health of those located in the vicinity. These pollutants can be reduced by reducing VKT and by lowering emissions through technology. A fuel tax may achieve the former, but it cannot achieve the latter and so should not be pursued in isolation. In fact emissions have fallen dramatically due to progressively stringent emissions standards. A willingness-to-pay estimate for the cost of avoiding poor health puts the value of the externality at 2.3 cents per mile.

 

Global Air Pollution: cars etc. account for 20% of nationwide carbon emissions. A fuel tax is effectively a tax on carbon emissions, as there are no available technologies for reducing carbon emissions from light vehicles. Various estimates of the cost of a rise in temperature have been proffered, and they differ wildly, largely due to the different social discount rates used. They yield estimates of 5, 12 and 72 cents per gallon of gasoline.

 

Oil Dependency: dependence exposes the country to volatility and price manipulation.  Like the vulnerability to oil price volatility and the cost of military presence in the Middle East, this externality is pretty murky and ill defined, and is not clearly related to market failure. See the paper for more details.

 

Congestion: I do not go again into how congestion is related to externalities. Estimates of the externality imposed in terms of reduced speed indicate a tax of $1.05 per gallon. However, if this were applied to fuel the effect may not be as intended, as peak driving decisions are often much more inelastic with respect to price – people have to get to work. Thus such a fuel tax would probably only have an effect on the least congested roads.

 

Traffic Accidents: there are around 40,000 deaths per year on the US highways. What is needed is a tax on VMT reflecting differences in marginal costs across drivers, vehicles and regions. Using quality adjusted life years estimates the value would appear to be 15 cents per VMT.

 

Noise Costs: are estimated at 0.4 cents per mile for passenger vehicles.

 

Others considered are highway maintenance, urban sprawl, parking subsidies etc. 

Policies

Fuel Tax: fuel taxes have declined in real terms as they have not kept up with inflation and improved fuel economy. Behaviour changes in response to fuel prices, but the elasticity of VMT with respect to the fuel price is in the range -0.1 to -0.6. This could thus be an avenue for policy. However, there are issues regarding equity (fuel is a proportionately larger part of the poor's budget, so the tax is thought to be regressive), although these could be partially solved by recycling the tax dollars in the direction of pro-poor policy. There are also political issues with a strong auto/oil lobby.

 

Fuel Economy Standards: economy standards reduce emissions and dependence although they may increase other externalities as people are encouraged to drive more VKTs.

 

Alternative fuel technologies are another possible response.

 

Congestion Tolls: This is attractive. Building new roads is now hard (given high levels of existing development) and may not even be efficient. New income is needed for highway maintenance as fuel tax revenues have fallen. Congestion charges can now be collected electronically, which reduces bottlenecks due to toll booths etc. On the other hand, it may be hard to make meaningful estimates of what the marginal pricing structure should be, and it may represent a substantial information barrier to which the consumer is not able to react efficiently. There are political problems too.

THE FUNDAMENTAL LAW OF ROAD CONGESTION: EVIDENCE FROM US CITIES

G. Duranton & M.A. Turner

NBER Working Paper 15376

Principal Research Question and Key Result Does increasing the size of the interstate highway system relieve congestion? The key finding is that the elasticity of vehicle kilometres traveled to highway lane kilometres is almost 1 across all specifications which indicates that the amount of traffic increases proportionately with the size of the highway network. In other words, building roads is not a good means of reducing congestion.

 

Theory When deciding whether to enter the road system a driver assesses the marginal benefit of driving an extra kilometer (which is assumed to be a decreasing function), and the marginal cost (including his time, fuel etc.) of driving that kilometer. He drives until the marginal benefit equals marginal cost. However, the marginal social cost of him driving that kilometer is higher than the marginal personal cost as he imposes an externality on other drivers by being on the road (i.e. his presence on the road adds to congestion in general). Thus the social optimum equilibrium of the amount he drives will be lower than the private equilibrium. Transport policy can intervene in order to better equate the social and private costs such that the social optimum is reached.

This can occur in a variety of ways. Fuel tax could be used for example. However, this has been shown to affect largely leisure trips and not the travel to work trips (which are presumably less elastic with respect to price) that are the main cause of congestion. Another option would be to charge per metre of road used, with different pricing mechanism for the time of day and the amount of traffic. This option would be hard to implement and also it would be hard for an individual to respond rationally to a complex and changing charging mechanism. Congestion charging is a limited form of this, a point that will be returned to in later summaries.

The option under examination here is to build more roads. As this will increase capacity on the road network it should reduce the amount of negative externality the individual driver imposes on others, thus moving the marginal social cost nearer to the marginal private cost, and bringing congestion nearer to the socially optimal level. However, maybe more roads simply attract more drivers, in which case all equilibria are simply shifted outwards, and the result will be no nearer to the social optimum than the previous equilibrium.
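A toy numerical illustration of this wedge (my own made-up functional forms and numbers, purely to fix ideas): the driver equates marginal benefit to marginal private cost, while the planner would equate it to marginal social cost.

```python
# Linear marginal benefit, rising private cost, and a congestion externality
# that grows with total kilometres driven. Numbers are illustrative only.
from scipy.optimize import brentq

mb  = lambda q: 10.0 - 0.5 * q      # marginal benefit of the q-th km
mpc = lambda q: 2.0 + 0.1 * q       # marginal private cost (time, fuel)
ext = lambda q: 0.2 * q             # congestion cost imposed on other drivers
msc = lambda q: mpc(q) + ext(q)     # marginal social cost

q_private = brentq(lambda q: mb(q) - mpc(q), 0.0, 100.0)  # about 13.3 km
q_social  = brentq(lambda q: mb(q) - msc(q), 0.0, 100.0)  # about 10.0 km
toll = ext(q_social)                # Pigouvian toll (2.0) that decentralises the optimum
print(q_private, q_social, toll)
```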

 

Motivation The cost of congestion is huge. Between 1995 and 2001 the time spent on household travel increased 10% whilst distances remained constant, which is equivalent to billions of dollars’ worth of lost time.

 

Data Using Metropolitan Statistical Areas (MSA) they use official highway data to generate variables that detail the lane kilometres, vehicle kilometers traveled (VKT), and the average annual daily traffic (AADT).  They then do the same for other major roads. The summary stats show that the AADT increased from 4,832 vehicles per kilometer lane of highway in 1983, to 9,361 per lane kilometer in 2003. They have three cross sections of data.

 

Strategy There are a variety of strategies. Firstly they pool the cross sections and run a simple OLS regression with VKT on the left-hand side and lane kilometres on the right, with geographic, climatic, socio-economic and population controls.

Using the panel format they then control for MSA fixed effects and time fixed effects by differencing the data.

They recognize that there could be endogeneity issues. Specifically, if VKT is correlated with some unobserved demand for driving, and planners respond to that demand by building roads, then the coefficient will be overestimated, as the increase in vehicle kilometres traveled will be due to the demand for driving rather than a consequence of the road building. Thus they have an IV strategy.

IV1: Planned highway kilometres from the 1947 Highway plan. This was a plan to connect the major population centres as directly as possible. Clearly this will be very relevant, and they argue it is exogenous as the plan was drawn up to connect the population centres of the time, without a thought for future traffic demand. The instrument is only valid conditional on population, i.e. exogeneity is argued to hold once population controls are included, since the plan was drawn to serve the population centres of its day.

IV2: Rail network in 1898. Railroad travel connected a lot of cities and towns in the 19th century, and as the importance of railways waned, roads were built that followed their routes as substitutes. Given that the economy was very different when the railroads were constructed, and that they were built primarily by private companies concerned with relatively short-term gain, it is unlikely that they were laid out with future traffic flows in mind, and this they argue adds credibility to the exogeneity argument. They claim the instrument need only be exogenous conditional on the controls, so controlling for historical populations and geographic variables is sufficient to guarantee exogeneity. (check this)

IV3: Expedition routes between 1835 and 1850. Again they control for historical populations and geography and say it is hard to imagine how the explorers were selecting routes with travel between future cities in mind [I don’t see how that is the point particularly].

They then instrument for lane kilometres using all of the instruments together (though they also test them separately). As the F-stat in the first stage is less than 10, they do a LIML estimation as well, which is supposedly more robust to weak instrument problems.
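A minimal sketch of the instrumenting step (hypothetical DataFrame `msa` and variable names; most controls are omitted for brevity, and LIML is included as the weak-instrument check mentioned above).

```python
# Log VKT on log lane kilometres, instrumented with the 1947 plan, 1898 rail
# and exploration-route kilometres; LIML re-run as a weak-instrument check.
from linearmodels.iv import IV2SLS, IVLIML

formula = (
    "ln_vkt ~ 1 + ln_population"
    " + [ln_lane_km ~ ln_plan1947_km + ln_rail1898_km + ln_expedition_km]"
)
tsls = IV2SLS.from_formula(formula, data=msa).fit(cov_type="robust")
liml = IVLIML.from_formula(formula, data=msa).fit(cov_type="robust")
print(tsls.params["ln_lane_km"], liml.params["ln_lane_km"])
print(tsls.first_stage)   # inspect first-stage strength (F statistics)
```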

Results They have a coefficient of around 1 in all specifications (adding controls one at a time). This is the case when instruments are used one at a time, and also when amalgamated into a single first stage.

It appears then that new road capacity is met with a proportional increase in driving, thus confirming what Downs called the fundamental law of road congestion.

 

Robustness That the coefficients are robust to a wide variety of specifications is fairly good evidence that the results are not being driven by the nature of the model. (a more pessimistic interpretation would be that all specifications are affected equally by endogeneity).

They use data on availability of public transport and find that increasing public transport does not affect congestion. This is because public transport may take some people off the road, but as that effectively increases road capacity in a similar way as building new roads, the VKT demand response is the same.

Using data on what type of vehicles are using the road network over time they try to decompose VKT to understand where the extra demand is coming from. They find that commercial traffic accounts for between 10-20% of the increased VKT. Individuals account for around 11-45% of the increase. Population is thought to increase due to new highways as economic activity is increased. They find that a 10% increase in the road network causes a 1.3% increase in population in a MSA over 10 years and this accounts for around 5-15% of the extra VKT. Another mechanism could be diversion from other roads to the highways, but when they test this they find only very small results suggesting that traffic creation, not diversion is the problem (the mechanism for testing is regressing the VKT for highways on non-highway lane measures).

 

Problems Bad controls – including socioeconomic controls could be dangerous as they are themselves outcomes of the independent variable (kms of highway). Introducing outcome measures as additional controls biases estimates in indeterminable ways. Some comfort is taken from the fact that the results do not change significantly.

IV – exogeneity concerns remain particularly for the 1947 highway plan. Comfort is taken from the fact that the results are broadly the same across all specifications.

Weak Instruments – the instruments are weak, and it is not totally clear that the LIML estimation solves this problem. Again, it is comforting that all estimates are broadly similar across specifications.

 

Implications New road capacity is met with a proportional increase in driving, confirming what Downs called the fundamental law of road congestion.

Public transport probably will not affect congestion levels. They do back of the envelope welfare calculations and find that the time saved by new highways is probably not worth the cost, whereas improvements in public transport are most likely to be welfare improving.