Category Archives: Spatial Regression Models



T. Lyytikainen SERC Discussion Paper 82

A Brief Summary 

This paper uses spatial instrumental variables to estimate the neighbourhood effects of tax rate setting. Using the IV approach, the empirical results suggest that there is no significant interaction in tax rate choices among Finnish municipalities.


The Finnish case was chosen because in 2000 the system was reformed. Property tax rates were previously selected by municipalities from within a band of possible rates set by central government. In 2000 the lower limit was raised by the government from 0.2% to 0.5%. This new lower limit was binding for 40% of the municipalities.


He discusses the spatial lag and spatial instrumental variables models that are generally used. The method ultimately adopted is similar to the traditional spatial IV technique with one key difference: the policy intervention is used as the instrument rather than a higher order weighted average.


The actually imposed tax rate changes are not observable, as there is no information about what rates would have been chosen had the lower bound not been altered. However, he constructs a measure of the predicted imposed increase in tax rates that looks like this:


Zi,2000 = D(T2000 > Ti,1998)(T2000 − Ti,1998)


Where D(·) is a dummy variable equal to 1 if the municipality's 1998 tax rate was below the lower limit newly imposed in 2000, and the second term is the size of the imposed increase. He then uses this as an instrument for the spatially lagged tax rate change. He states that the instrument is relevant in the first stage (but does not report it), and conducts a placebo test.
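As a rough illustration of the construction (with made-up rates; the actual Finnish data are not reproduced here), the instrument can be computed as:

```python
import numpy as np

# Hypothetical 1998 municipal property tax rates (%) -- illustrative only
tax_1998 = np.array([0.20, 0.35, 0.60, 0.45, 0.80])
new_floor = 0.50  # the year-2000 statutory lower limit

# D(.) = 1 if the 1998 rate was below the new floor; the instrument is
# the gap the municipality was forced to close, and 0 otherwise
below = (tax_1998 < new_floor).astype(float)
z_2000 = below * (new_floor - tax_1998)  # predicted imposed increase
```

For the municipalities already above the new floor the instrument is zero, which is what makes the reform usable as a source of variation.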


The weighting system is nearest neighbour, but for robustness purposes, population weighting and a combination weighting scheme are also used.


Although the coefficient on neighbours' tax rates is positive, it is very small and statistically insignificant. This result is robust across the different weighting schemes.

Interestingly, he also estimates the model using the SAR and conventional spatial IV methods and finds large, highly significant coefficients. This casts doubt on the reliability of those methods, given that his policy-based IV should be more credible owing to diminished endogeneity concerns.



J.K. Brueckner & L.A. Saavedra, National Tax Journal Vol 56, No. 2

A Brief Summary 

In a Nutshell

The authors use city level data from the US to estimate a model of strategic tax competition, and specifically the tax reaction function. They find that this function has a non-zero slope, indicating that changes in a local competitor's rates affect the choices made by a different community.


The data are drawn from a sample of 70 cities that comprise the Boston Metropolitan area. Working under the assumption that community i‘s tax decision is a function of the tax rates in other communities they use a SAR model of weighted averages of neighbouring jurisdictions as a spatial lag. To check their results are not driven by the weighting scheme used (as it is arbitrary), they test different weighting schemes as part of their robustness checks (contiguous neighbour, distance decay, population weighting, and combinations thereof).

They are aware of the simultaneity problem and the bias it would introduce under OLS, so their estimations are made using maximum likelihood.

The principal finding is that the coefficient on the spatial lag is positive and significant, robust to the different weighting schemes. This implies that over the period strategic tax rate setting occurred, and that the best response of a community faced with increased rates in a neighbouring community was to raise its own rates. In game theory terms, the communities' tax rates are strategic complements.



S. Gibbons & H. Overman

A Summary, with some additions from the lecture and from “Under the Hood: Issues in the Specification and Interpretation of Spatial Regression Models”, L. Anselin, Agricultural Economics 27 (2002)

Spatial Models and Their Motivation

The inclusion of spatial effects is typically motivated on theoretical grounds: some spatial or social interaction is thought to affect economic outcomes, and evidence of this appears as spatial interdependence. Models are thus created that seek to answer how interaction between agents can lead to emergent collective behaviour and aggregate patterns. These might also be termed neighbourhood effects.

To start with a basic linear regression:


yi = x'iβ + µi


Where xi is a vector of explanatory variables and β is a vector of parameters, with µ as ever being the error term. This basic format assumes that each observation is independent of the others, which is generally too strong an assumption in a spatial context, as events in one place often affect events in another, particularly when they are close together. A simple way of capturing the effects that nearby observations have on each other is to define, for each observation i, a weights vector wi that reflects how other observations affect it (for example through distance weighting). Multiplying these weights by y gives the scalar w'iy, which for observation i is a linear combination of all the y values to which it is connected. If the weights sum to 1 then this gives a weighted average of the neighbours of i.
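A minimal sketch of this construction, assuming a simple contiguity scheme for four regions on a line (a real application would use whatever weighting scheme the theory suggests):

```python
import numpy as np

# Contiguity weights for 4 regions on a line: a 1 marks a shared border.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
W = W / W.sum(axis=1, keepdims=True)   # row-normalize so each row sums to 1

y = np.array([1.0, 2.0, 3.0, 4.0])
Wy = W @ y                             # spatial lag: each entry is the
                                       # average outcome of i's neighbours
```

Row i of W picks out i's neighbours, so after row-normalization the i-th entry of Wy is exactly the weighted average w'iy described above.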


Spatial Autoregressive Model

This weighted average can then be used to construct the spatial autoregressive model (SAR) which is also known as the “spatial y” model, and is referred to as a spatial lag. This model attempts to uncover the spatial reaction function, or spillover effect. The model looks like this:


yi = ρw'iy + x'iβ + µi


The idea is that an individual observation is affected both by its own characteristics and by the recent outcomes of other nearby agents capable of influencing its behaviour. One example: when deciding at what price to sell one's house, individual characteristics such as the number of bedrooms are taken into account, as well as the prices achieved by other properties in the vicinity. In this case Beta captures the effects of the individual characteristics and Rho captures the causal effect of neighbourhood actions.
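Data obeying the SAR model can be generated from its reduced form y = (I − ρW)⁻¹(Xβ + ε). The sketch below does this under an assumed circular nearest-neighbour weighting and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, beta = 50, 0.4, 2.0

# circular nearest-neighbour weights, row-normalized (an assumption
# made purely for illustration)
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

X = rng.normal(size=n)
eps = rng.normal(size=n)

# reduced form: y = (I - rho*W)^(-1) (X*beta + eps)
y = np.linalg.solve(np.eye(n) - rho * W, X * beta + eps)

# the generated y satisfies the structural SAR equation exactly
resid = y - (rho * W @ y + X * beta + eps)
```

The solve step is what encodes the feedback: every observation's outcome ends up depending on every other observation's characteristics and error.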


Spatial X Model

Alternatively we may drop the assumption that yi  is affected by neighbouring y outcomes, and instead assume that it is affected by spatial lags of the observable characteristics. This is then a spatial x model (SLX):


yi = x'iβ + w'iXγ + µi


This assumes that the observable neighbourhood characteristics are determinants of yi. As in the above example this could be the characteristics of neighbourhood housing such as appearance, size etc. influencing individual price decisions. Beta is as above, and Gamma is the causal effect of neighbourhood characteristics.
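Because only exogenous variables appear on the right-hand side, the SLX model can (absent sorting) be estimated by least squares on x and its spatial lag Wx. A sketch with simulated data and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, gamma = 200, 1.5, 0.8

# assumed circular nearest-neighbour weights, row-normalized
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

x = rng.normal(size=n)
# SLX data generating process: own x plus neighbours' x, plus noise
y = beta * x + gamma * (W @ x) + 0.1 * rng.normal(size=n)

# OLS on [x, Wx] recovers beta and gamma
Z = np.column_stack([x, W @ x])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
```

With exogenous x this is just ordinary regression; the identification problems arise only when E(ε | Wx) ≠ 0, as discussed below.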


Spatial Durbin Model

The spatial Durbin model (SD) combines SAR and SLX:


yi = ρw'iy + x'iβ + w'iXγ + µi


Interpretation combines those of the SAR and SLX models above.


Spatial Error Model

This model drops the assumption that outcomes are explained by spatial lags of the explanatory variables, and instead assumes a SAR-type autocorrelation process in the error term. This yields:


yi = x'iβ + µi ;  µi = ρw'iµ + vi


This model assumes that outcomes are dependent upon the unobservable characteristics of the neighbours.
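A short simulation, under an assumed circular nearest-neighbour weighting, showing how the SAR error process makes the disturbances of connected observations correlated even though the regression mean is unaffected:

```python
import numpy as np

rng = np.random.default_rng(5)
n, rho = 400, 0.6

# assumed circular nearest-neighbour weights, row-normalized
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

# 500 replications of the innovation v, columns are replications
v = rng.normal(size=(n, 500))

# spatial error process: mu = rho*W*mu + v  =>  mu = (I - rho*W)^(-1) v
mu = np.linalg.solve(np.eye(n) - rho * W, v)

# errors of two adjacent observations, across replications
corr_mu = np.corrcoef(mu[0], mu[1])[0, 1]   # strongly positive
corr_v = np.corrcoef(v[0], v[1])[0, 1]      # near zero by construction
```

The innovations v are independent across observations, but the transformed errors mu inherit correlation from the weighting structure, which is why OLS standard errors go wrong in this model.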



OLS with a spatially lagged y variable (SD and SAR) yields inconsistent estimates unless Rho equals 0. This is because w'iy is correlated with the error term: the average neighbouring dependent variable includes the neighbour's error term, the neighbour's neighbour's error term, and so on, such that any observation i depends to some extent on the error terms of all the other observations. Note that this would still be the case even if the weighting scheme included only the nearest neighbour, since the dependence runs both ways. The intuition behind this problem is that you are your neighbour's neighbour. In the simple i-j case the following occurs:


yi = ρyj + xiβ + εi (1)

yj = ρyi + xjβ + εj (2)


Substituting (2) into (1) we get:


yi = ρ(ρyi + xjβ + εj) + xiβ + εi


Which shows that yi is dependent in part upon itself. Solving gives yi = (xiβ + ρxjβ + εi + ρεj)/(1 − ρ²), and symmetrically for yj, so each outcome depends on both error terms: the regressor yj in (1) is correlated with εi, which is exactly what OLS rules out.
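This inconsistency can be seen in a small Monte Carlo: generating data from a SAR process (here with no x variables, for simplicity, and an assumed circular nearest-neighbour weighting) and regressing y on Wy by OLS systematically overstates Rho:

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 500, 0.5

# assumed circular nearest-neighbour weights, row-normalized
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

# reduced-form multiplier, computed once
A_inv = np.linalg.inv(np.eye(n) - rho * W)

estimates = []
for _ in range(200):
    y = A_inv @ rng.normal(size=n)          # pure SAR data: y = rho*W*y + eps
    Wy = W @ y
    estimates.append((Wy @ y) / (Wy @ Wy))  # OLS slope of y on Wy

bias = np.mean(estimates) - rho             # clearly positive on average
```

The positive bias arises because Wy mechanically contains the neighbours' errors, which feed back into y itself.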


Using OLS for the SLX model is also problematic, as the assumption underlying OLS is that the error term is not correlated with the regressors. For the SLX model this means E(ε | x) = 0 and E(ε | Wx) = 0. However, if there is spatial sorting, for example when motivated parents locate themselves near good schools, then this assumption is violated, as E(ε | Wx) ≠ 0.


The SE model may generate consistent coefficient estimates, since the assumption that the error is uncorrelated with the regressors holds; however, the standard errors will be inconsistent because by definition the model has autocorrelated error terms. This can lead to mistaken inference.


Standard errors are inconsistently estimated for all models.


Additionally, the different types of model are difficult to distinguish without assuming prior knowledge of the data generating process which in practice we do not have.



Maximum Likelihood

These problems can be circumvented using Maximum Likelihood estimation, which provides consistent estimators. The likelihood is the probability of observing the data y given values for the parameters Rho and Beta; for spatially lagged models it includes a determinant term that accounts for the simultaneity between outcomes, which is what distinguishes it from OLS. A computer uses iterative numerical maximization techniques to find the parameter values that maximize the likelihood function.
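A sketch of what the maximization involves for a simple SAR model without covariates, y = ρWy + ε with ε normal. The likelihood shown is concentrated over the error variance, and a grid search stands in for the iterative optimizer that real software uses; the weighting scheme and parameter values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho_true = 200, 0.5

# assumed circular nearest-neighbour weights, row-normalized
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

# generate data from the SAR process
y = np.linalg.solve(np.eye(n) - rho_true * W, rng.normal(size=n))

def concentrated_loglik(rho):
    e = y - rho * (W @ y)                       # residuals given rho
    sigma2 = e @ e / n                          # ML variance given rho
    _, logdet = np.linalg.slogdet(np.eye(n) - rho * W)  # Jacobian term
    return logdet - 0.5 * n * np.log(sigma2)    # up to a constant

grid = np.linspace(-0.9, 0.9, 181)
rho_hat = grid[np.argmax([concentrated_loglik(r) for r in grid])]
```

The ln|I − ρW| term penalizes the mechanical correlation between Wy and the error; dropping it would reduce the problem to the biased OLS regression discussed earlier.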


The issue with this specification is that it assumes that the spatial econometric model estimated is the true data generating process. This is an incredibly strong assumption that is unlikely to hold in any circumstance.


Instrumental Variables

In theory, second order spatial lags of the exogenous variables, w2'ix (or even third and fourth order), can be used as instruments for w'iy, and this “exogenous” variation in the neighbourhood outcome can then be used to determine yi, under the assumption that the instruments are correlated with Wy but not directly with yi. The first stage would look like this:


w'iy = w'ixβ + ρw2'ixβ + ρ2w3'ixβ + …

and then the predicted values of wy would be used in the second stage regression with yi as the dependent variable.


There are also problems with this technique. Firstly, the true nature of w is unlikely to be known, and its correct specification is crucial to the model: for example, the x variables may have an effect over a 5km distance while the weighting system incorrectly restricts the analysis to 2km. Secondly, the higher order lags of the x variables could still have a direct effect upon yi, in which case the exogeneity restriction is violated and the 2SLS results are biased. Lastly, the different spatial lags are likely to be highly correlated, leaving little independent variation: essentially a weak instruments problem. Weak instruments can severely bias the second stage coefficients, which will additionally be measured imprecisely.
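A sketch of the two-stage procedure on simulated SAR data, using Wx and W²x as instruments for Wy. The weighting scheme and parameter values are assumptions for illustration, and in this simulation the exclusion restriction holds by construction, so 2SLS recovers the parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho, beta = 300, 0.4, 2.0

# assumed circular nearest-neighbour weights, row-normalized
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

x = rng.normal(size=n)
y = np.linalg.solve(np.eye(n) - rho * W, beta * x + rng.normal(size=n))

Wy = W @ y
X = np.column_stack([Wy, x])                 # regressors: spatial lag and x
Z = np.column_stack([W @ x, W @ W @ x, x])   # instruments: Wx, W^2 x (and x)

# 2SLS: project X onto the instruments, then regress y on the projections
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)        # projection onto span of Z
Xhat = P @ X
coef, *_ = np.linalg.lstsq(Xhat, y, rcond=None)
rho_hat, beta_hat = coef
```

In real data the danger flagged above is precisely that Wx and W²x move together, so the first stage can be weak even when the code runs without complaint.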


The Way Forward

  • Panel data can allow for differencing over time to control for fixed effects, but the problems above remain, only now in the context of differenced data.
  • In terms of the IV strategies, genuinely exogenous instruments should be found such as changes to institutional rules [see later tax paper summary].
  • They argue that the SAR model should be dropped, and if neighbourhood effects cannot be identified using genuine instruments, a reduced form of the SLX model should be used.
  • Natural experimental techniques from other economic literatures should also be borrowed e.g. DID, Matching. These techniques may help us to find causal effects but the tradeoff is that they are only relevant to some sub-set of the population (as in the Local Average Treatment Effect for IVs).




E.L. Glaeser and J.A. Scheinkman

A Brief Summary


Identifying Social Interactions

Inequality, concentrations of poverty and other outcomes may be the partial result of social interactions. Thus interventions that seek to address these phenomena may operate both through their incentives upon individual actors and through these social interactions. However, as policies are generally aimed at the former, it is difficult to quantify what effect, if any, a policy has through social networks. Whilst there is abundant theory attesting to the effect these interactions can have on the distribution of private outcomes, simply studying the outcomes tells us nothing about whether the interactions or the private incentives were responsible.


Different methods have been used to identify the interactions. One way is to look for multiplier effects, i.e. identifying the social effects as those that operate above and beyond the private effect. However, this requires being able to state exactly what the private effects should be in order to look at the difference, and this is generally not possible. A more promising approach looks at the results of interventions that operate directly on the social interactions and not on the private incentives. Three approaches are relevant:

  1. Interventions that change group membership. If group membership is changed (with no change to private incentives), then any change in outcomes could plausibly be attributed to the group effect. The problem here is that private incentives often change alongside membership, and such interventions are hard to enforce: individuals may simply revert back to their original groupings.
  2. Changing the private incentives for a sub-set and seeing if there are effects on others whose private incentives are not changed.
  3. Interventions that seek to directly challenge social norms such as mass media campaigns. The identification issue here is that the changes may simply affect private preferences rather than acting directly upon the social norms.

Econometric identification of social interactions is very hard, and perhaps the strongest evidence for these phenomena is the persistent degree of stratification amongst populations.


Econometric Possibilities

The basic concept of social interactions is that one individual’s actions are made based in part upon the actions of another individual or grouping of individuals. Various techniques are used to empirically test these interactions [see later summaries for spatial context]. In general however, these specifications are subject to three problems:

  1. Simultaneity: A’s actions may be affected by B’s, but B’s will most likely be affected by A’s at the same time. This means that any regression that includes B’s actions as an explanation of A’s will suffer from endogeneity. Endogenous and exogenous interactions cannot be separately identified.
  2. The correlated unobservables problem and the related errors-in-variables problem. This arises if there is some group-specific component of the error term that varies across groups and is correlated with the exogenous characteristics of the individuals. The unobservables could arise from preferences or from environmental settings.
  3. Endogenous membership problem – people may sort into groups based on unobservable characteristics. This is similar to selection bias.


The challenge then is to see whether these issues, which can generally be lumped together as endogeneity problems, can be circumvented using techniques such as instrumental variables, quasi-experiments, or randomized control trials.