Zen of modeling

  1. Your model should have some theoretical basis.
  2. Your model, when simulated, should produce outcomes with a density similar to that of the observed values. It should not place weight on the impossible (like negative quantities, or binary outcomes that aren’t binary), and it should place non-zero weight on possible but unlikely outcomes. (A quick sketch of such a check follows the list.)
  3. Think deeply about what is a random variable and what is not. A good rule of thumb: random variables are those things we do not know for certain out of sample. Your model is a joint density over the random variables.
  4. You never have enough observations to distinguish one possible data generating process from another process that has different implications. You should model both, giving both models weight in decision-making.
  5. The point of estimating a model on a big dataset is to estimate a rich model (one with many parameters). Using millions of observations to estimate a model with dozens of parameters is a waste of electricity.
  6. Unless you have run a very large, very well-designed experiment, your problem has unobserved confounding information. If this problem does not occupy a lot of your time, you are doing something wrong.
  7. Fixed effects normally aren’t. Mean reversion applies to most things, including unobserved information. Don’t be afraid to shrink.
  8. Relationships observed in one group can almost always help us form better understanding of relationships in another group. Learn and use partial pooling techniques to benefit from this.
  9. For decision-making, your estimated standard deviations are too small; your estimated degrees of freedom are too big; or you have confused one for the other. Remember, the uncertainty produced by your model is the amount of uncertainty you should have if your model is correct and the process you are modeling does not change.
  10. You always have more information than exists in your data. Be a Bayesian, and use this outside information in your priors.
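To make points 2 and 8 concrete, here is a minimal sketch using the rstanarm package (which I recommend below). The data frame and variable names are made up; treat it as a pattern rather than a recipe.

    library(rstanarm)

    # Point 8: partial pooling. Group-level intercepts are estimated jointly and
    # shrunk towards the overall mean, so each group borrows strength from the others.
    # (my_data, outcome, predictor and group are hypothetical names.)
    fit <- stan_lmer(outcome ~ predictor + (1 | group), data = my_data)

    # Point 2: simulate outcomes from the fitted model and compare their density
    # to the density of the observed outcomes.
    pp_check(fit, plotfun = "dens_overlay")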


Notes on causal inference workshop

Yesterday, I gave a workshop on an introduction to causal inference. The slides and materials are here (the slides are called causality.htm; you will need to download the HTML file, and it is best not viewed in Chrome). Most of the attendees came from finance/BI/predictive fields, and so I think the material was actually new to most of them, which is great!

The central point of the workshop is that causal problems are hard, and the intuition of a predictive modeler can lead them astray. In particular, we need to think about bad controls/moderators and unobserved confounders. If we blindly apply machine learning to large datasets without causal reasoning, we get precise estimates of a meaningless number.
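To make that last point concrete, here is a toy simulation in R (my own illustration, not from the workshop materials): an unobserved confounder drives both the treatment and the outcome, so the naive regression recovers a precise estimate of an effect that isn't there.

    # Unobserved confounder u drives both the "treatment" x and the outcome y.
    set.seed(1)
    n <- 10000
    u <- rnorm(n)             # unobserved confounder
    x <- 0.8 * u + rnorm(n)   # treatment, partly caused by u
    y <- 0.5 * u + rnorm(n)   # outcome: x has no causal effect on y
    summary(lm(y ~ x))        # yet the coefficient on x is large and "significant"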

After the workshop, a few of the students requested some readings. So below are some introductory papers/slides that I have found helpful in shaping my own thinking and generating course notes.

The Three Layer Causal Hierarchy. A very short piece that highlights the difference between associative and causal reasoning. A very good entry point.

For Objective Causal Inference, Design Trumps Analysis – Rubin. Donald Rubin has written many, many great pieces saying pretty similar things. This is one! It’s a good review piece that talks through the Rubin Causal Model, including the potential outcomes framework and the treatment assignment mechanism. Very clear.

Mastering Metrics by Angrist and Pischke, a fantastic introductory book on causal analysis, entertaining enough to be read in bed. They walk through regression, panel data, instrumental variables, regression discontinuity design, and difference-in-differences. Should have a “gateway drug” warning. If you know this stuff already, then check out Mostly Harmless Econometrics, their graduate-level treatment of the subject.

This fantastic conversation between Card and Krueger gives you a bit more of a back-story about how the techniques discussed in Mastering Metrics made their way from medicine into applied economics. It’s very entertaining.

Causal mediation – a few days ago, Andrew Gelman had a fairly short post on causal mediation (a deeper way of thinking about bad controls). One of those few occasions on the internet where the comments are better than the post.

Chapters 9, 10 and 23 from Gelman and Hill are a superb place to get a practical start in causal inference. Lots of examples, code, etc. If you want to implement the models from chapter 23, you might (should) want to use Stan rather than BUGS. The examples have been translated to Stan here. If you want to implement models from the book, I recommend using the R package rstanarm, and prepending “stan_” to most of the model calls.
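For example (a sketch with a hypothetical regression; the data frame and variable names are made up), the classical call and its Bayesian counterpart differ only in the “stan_” prefix and the option to specify priors:

    library(rstanarm)

    # Classical logistic regression (df, y, x1, x2 are hypothetical names)
    fit_classical <- glm(y ~ x1 + x2, family = binomial, data = df)

    # The same model fit in Stan via rstanarm: prepend "stan_" and, if you
    # like, add priors that encode your outside information.
    fit_bayes <- stan_glm(y ~ x1 + x2, family = binomial, data = df,
                          prior = normal(0, 1))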

There’s an exciting new field in using machine learning methods (especially BART) to estimate heterogeneous treatment effects. I know the machine-learny people will want to start here, but I will tell you not to. Once you understand confounding, bad controls, fixed effects, etc., then it might be safe to play with this stuff. One great application is in this paper by Don Green and Holger Kern. Jennifer Hill has also been pushing for this sort of analysis recently, for example, this paper and these slides.

Finally, Pearl. I have read bits and bobs of the book, and have found the language and notation off-putting. What I do find extremely helpful is how clearly he thinks about causality, and his real-life arguments (as in the Gelman post linked above).

 


New post at the Lendable blog

Lendable recently launched a new website, lendablemarketplace.com. I have a post there on the risk engine I’ve been building over the last year. It’s a pretty cool piece of tech that models individual credit risk (the risk of an individual defaulting given what we know about them) and portfolio risk (the risk of many defaulting simultaneously) jointly.

Enjoy:

http://lendablemarketplace.com/blog/2016/03/29/risk/

 


Tutorial on Bayesian econometrics in Stan

I put together a quick-start tutorial on building a simple Bayesian model in Stan. Enjoy!

Here it is!


How would an historian do time-series econometrics? The TL;DR version

A couple of weeks ago I finished a paper required for a Masters degree at the University of Melbourne. If you want to read the whole thing, you can do so here. But the point of this post is to boil it down to something you can chew through in a few minutes.

One of the big issues I see with contemporary econometrics is that its practitioners don’t read widely from other empirical fields, such as statistics and machine learning. Recent papers by Hal Varian, Guido Imbens and Susan Athey have started to bridge this gap, but there is still much that econometricians can learn from other fields. My paper is one such attempt at bridging the gap.

The paper proposes an answer to the following simple question: how much history should I include in a time-series econometric model? Include too much history and I end up estimating parameters that describe the past but are not useful today; include too little and I risk not learning from important history. Implicit in estimating a time-series model is the belief that something can be learned from the past. Yet the world changes over time, and sometimes a model estimated in one world will be useless in another.

Many prominent researchers have done a lot of work to try to capture a changing world. A common approach (Koop and Korobilis, Cogley and Sargent) is to build sophisticated time-varying-parameter models. Another method most closely associated with the work of James Hamilton is a regime-switching framework, where the economic model flips between states, with each state associated with its own set of parameters.

Don’t get me wrong, I love these models; they’re statistically principled and they work well. But they’re extremely difficult to build well, even for smart people. And we may be able to do better by letting technology guide our beliefs about how the world is changing.

My paper illustrates a method for estimating analogy weights—weights that tell us how “similar” various periods of history are to today. These weights have two applications. First, because they give weight to analogous periods of history, the researcher does not need to choose a data window on which to estimate their model. They just estimate their model using the usual methods, but with a weighting vector. Because of this weighting vector, the model shifts markedly as the state of the world shifts, improving predictive performance. Second, the weights tell the researcher when the world is very different from all periods of history. This should set off warning bells—DO NOT USE YOUR MODEL; A MODEL ESTIMATED UNDER ENTIRELY DIFFERENT CONDITIONS IS DANGEROUS.

This all sounds great, but how do I cook up these analogy weights? The trick was getting a deep understanding of how the Random Forest works (I won’t explain it here, but you should read the original Breiman paper). The Random Forest is a very popular tool from machine learning. As a by-product of fitting a Random Forest, the researcher is rewarded with a proximity matrix—a matrix that describes how “proximate” each data point is to every other data point in the dataset. Proximity here is not based on a simple distance metric; instead, it captures how similar two periods of history are in terms of all the characteristics that help predict the outcome variable. Thus it is a distance measure that is relatively robust to the inclusion of new X variables that may not have systematic relationships with the outcome variable.

So what do I find?

Using analogy weights tends to improve the predictive performance of time-series models.

I ran a horse-race using a small VAR-system in three specifications: the first was completely vanilla. The second was vanilla but used my analogy weights. The third was a sophisticated time-varying-parameter model. Unfortunately, I set up the comparison between the models in such a way that the sophisticated model is not directly comparable to the others (I used posterior predictive checking, not true out-of-sample checking). I’ll have to go back and re-do this bit to get better comparisons. Still, the weighted model performed extremely well in pure out-of-sample prediction, on some measures beating the sophisticated model (which was assessed in-sample).

I also back-tested a portfolio volatility model that uses analogy weights for a small stock portfolio over the Global Financial Crisis. The analogy-weighted volatility model outperformed both the common CCC-GARCH model and the state-of-the-art DCC-GARCH model. Most of the relative gains for the weighted model came at the beginning and end of the Global Financial Crisis, when the weighted model was quicker to recognise the regime shift, and then quicker to return to tempered volatility forecasts afterwards.

These results don’t prove that analogy weighting is a clear winner, but they do suggest that Random Forest analogy weights are at least worth exploring.

 


Upcoming talks

I have a couple of upcoming talks/courses scheduled. If you want to hear or learn about the intersection of data science, strategy, causality and risk, then come on down!

Melbourne:

14 July, York Butter Factory at 6pm: Intro to Data Science

4 August – 27 August at Collective Campus: Data Science Short Course

7 September, RMIT Green Brain Room at 12pm: Brown Bag Lunch #2: Tidy Data for Killer Analysis

Sydney:

21 July, Atlassian Sydney at 5.30pm. Strategy, (causal) economics, and randomness for data scientists: why the data you’ll never see is important, and how to think about it

 


Fun with Point of Sale data from Rex Tremendae

[Edit: charts below updated, with feedback from Tim Cameron.]

About a year ago, my brother and I signed our first commercial lease, a claim to use 18 square metres up the dodgy end of Flinders Lane for up to nine years. Today, Rex Tremendae, our cafe, is ticking along quite well. Our fantastic customers—basically everyone’s a regular—appear to like us, and on the whole it’s beginning to look as though it won’t be the world’s worst investment.

Below are a few charts from some analysis I pulled together, based on transaction-level data from our point-of-sale system. The time period covered is October 2014 – January 2015. All the charts are produced in ggplot2 (from within R). Y axes are sales in dollars, but I’ve censored these as I’d prefer not to tell our competitors how much we’re selling!

Note that the figures have been scaled down to fit within this awful WordPress theme. Click through for full size.

How does the weekly cycle look?

When ordering inputs (milk, pastries, deli items etc.) we need to have a bit of an idea which days are more likely to be busy. One thing is for sure—there is a cycle through the week, with Thursdays and Fridays quite a bit busier. The red line is my very rough estimate of total daily running costs—of course, costs increase with sales, but marginal costs in the cafe business are very small compared to fixed (or semi-fixed) costs.

[Figure: sales_week_1 (total sales by day of the week, with the red line showing rough daily running costs)]
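For the curious, a chart along these lines can be pulled together in ggplot2 in a few lines. This is a rough sketch only: the transactions data frame, its column names, and the cost figure are all invented.

    library(dplyr)
    library(ggplot2)

    # Hypothetical transaction-level data: one row per sale, with a timestamp
    # and a dollar amount.
    daily <- transactions %>%
      mutate(day = weekdays(as.Date(timestamp))) %>%
      group_by(day) %>%
      summarise(sales = sum(amount))

    ggplot(daily, aes(x = day, y = sales)) +
      geom_col() +
      geom_hline(yintercept = 950, colour = "red") +  # rough daily running costs (invented)
      theme(axis.text.y = element_blank())            # censor the dollar values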

Are same-day sales going in the right direction?

The big question here is how sales are going, comparing like days with each other. I’ve fitted a linear trend line to each series, with 95% confidence intervals around the line. On the whole, the slopes are in the right direction, though we don’t have enough observations yet to be sure that it’s not noise. Again, the red line gives me some indication of average daily running costs.

[Figure: dayoftheweek (same-day sales over time, by day of the week, with linear trend lines and 95% confidence bands)]

 

When are we busy?

Rex is very small, and has no indoor seating. While we do make the most delicious toasties on the planet (salami, chevre, tomato tapenade and spinach is my fav), our customers don’t buy many of them. Instead, they seem happy to queue out the door for the delicious coffee (roasted by Rob, my brother) each morning.

The plot below illustrates the sales through the day. The x-axis is the time, the y-axis is the sales (in dollars, again censored), and the curve fit through them is a smoothed sales profile. The red horizontal line gives the average hourly cost through the day. As you can see, afternoons aren’t an especially profitable time for us (though the true costs of being open then tend to be lower, as one of the staff knocks off, or Rob uses the time to go and roast beans/work with wholesale clients).

[Figure: sales_hour_1 (sales through the day, with a smoothed sales profile and the average hourly cost in red)]

 

Are we winning business within certain times?

Another question I’ve been asking is whether the after-lunch coffee market is improving. If there’s no improvement, then we need to think about changes in strategy. Thankfully, there does seem to be some improvement over time.

The chart below illustrates sales through the day and sales-by-the-hour over time. Each box represents an hour of the day; the points on the left come from October, and the points on the right are from December. The lines fitted are linear trend lines with 95% confidence bands. As you can see, there is considerable growth in the 10–11am segment, and from 1–4pm, though the trend lines in the afternoon aren’t too strong.

[Figure: sales_hour_2 (hourly sales over time, one panel per hour of the day, with linear trend lines and 95% confidence bands)]

How bad is January really?

Bad. I’d heard about January being a crap month in hospitality, but our January was really quite crappy. It didn’t help that Rob and Effi (his partner) spent half the month in Germany seeing Effi’s folks. Or that all of our customers were on holidays. Or that the weather was bad. It was crap!

The chart below illustrates this. The height of the lines indicates the sales per hour, and the X axis is the time of day. The red line is January.

[Figure: January (hourly sales profiles, with January in red)]

 

That’s all for now folks. If there are any cuts of the data you’d like to see, do let me know.

Jim


3 things we should see in tomorrow’s macroeconometric modelling

Macroeconometric modelling is a funny sort of field. The stakes of doing it wrong (or right) are extremely high, while the data used are infrequently and poorly measured. In many cases, estimating a model on a large-enough sample to do useful inference involves including observations from a long time in the past—I’m talking the 60s and 70s—and believing that the data were both correctly measured and coming from the same economy.

A skeptical macroeconometrician may ask: “how much of my view about how the world works today should I inform using data points from the 1960s, 70s, or 80s?” and they’d have a good point.

Here’s a field where it’s basically impossible to know anything—at least to any scientific standard—yet which has enormous impact and policy relevance. It’s really no wonder that it attracts a ‘spectrum of personalities’, vying with one another for the ears of our political leaders.

At the same time, macroeconometrics done right is useful. There is not nothing to be learned from history, so long as macroeconometricians are honest about what can and cannot be learned from historical data.

To these ends, I thought I’d put together a list of 3 characteristics that we should expect in tomorrow’s empirical macro models, with a few notes on how to implement them. All of these exist already, but are not standard features of commonly used empirical macro models.

1. Model uncertainty and sensible confidence intervals

Most readers here would expect any forecast to come with forecast confidence intervals, normally 95%. The implication to the reader is that the forecaster is “95% sure” that future values will fall inside the confidence band. An alternative interpretation may be that “95% of possible futures” fall inside the confidence band.

Almost all of the time, these confidence bands are poorly constructed, resulting in the reader being too sure about the future. This is because confidence intervals constructed the usual way—using historical forecast errors—assume that the underlying economic model is true. That is, using the normal approach, a 95% confidence band contains 95% of potential futures given the underlying economic model is a perfect representation of the world.

Of course, economic models are not perfect representations of the world, and so the 95% confidence band here is useless. I highly doubt that, had the Australian Treasury used its current technique for constructing confidence intervals over the last decade, those bands would have included 95% of the realised outcomes.

[Figure: spaghetti]

Introducing model uncertainty—uncertainty over how well the model actually represents the world—helps to overcome this. There are ways of introducing model uncertainty to a macro model, often by bootstrapping (which I have issues with, as I don’t believe that historical data come from the same model), and more commonly by using Bayesian techniques, with priors that reflect how little we actually know. These tools are used quite frequently by many macro modellers, though, unfortunately, not many who matter.

2. Coherent weighting schemes/model shifts

When building an empirical macro model, often one of the most difficult choices is how much data to include. Macroeconomic data are recorded fairly infrequently—monthly for unemployment and trade, quarterly for prices and the national accounts, annually for state accounts, etc. Many of these series don’t really move about too much, which makes it difficult to pin down the relationships between macro variables. This means that the empirical macroeconomist often needs to estimate their models on long histories.

This is a tough choice: include a long history and you end up estimating a useless value (the average relationship between variables over the whole period, rather than their relationship today); include a short history and you end up throwing out a lot of data that may have value. One common work-around is to use a weighting scheme that gives more importance to recent observations, and less importance to historical observations. But is the recent past really a better predictor than the distant past? Can we learn nothing from history?

Once you’re in the world of time-series modelling, you’re implicitly saying that relationships between historical variables are of some use. If this is the case, then why not go the whole way and say that more can be learned from more relevant histories? 

One fairly simple way of doing this is to give more weight to relevant histories when we build our models. But how do you know which histories are relevant and which are not? My method is to do the following:

1. Train a random forest on the relevant dependent variable, using a wide range of independent variables. The random forest is a tool from machine learning that will throw out irrelevant independent variables, so you can afford to put many in.

2. Save the proximity matrix from the random forest. This symmetric matrix gives us a measure of similarity between every pair of observations. Importantly, it measures how similar two observations are in all the ways that matter for predicting the dependent variable. I have written on this elsewhere; I consider it to be one of the most important tools for the future of inferential economic research.

Here are the first five rows and columns of a proximity matrix from the demonstration below.

         1979Q3    1979Q4    1980Q1    1980Q2    1980Q3
1979Q3   1.000000  0.240876  0.164179  0.197080  0.402878
1979Q4   0.240876  1.000000  0.169355  0.212598  0.222222
1980Q1   0.164179  0.169355  1.000000  0.132231  0.103704
1980Q2   0.197080  0.212598  0.132231  1.000000  0.115108
1980Q3   0.402878  0.222222  0.103704  0.115108  1.000000

3. Run your regression model, taking the appropriate row of the proximity matrix to be the weighting vector. This will normally be the last row, as you’re interested in finding similar histories to today.

It’s really that simple; a rough sketch of the three steps in R is below.
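This is only a sketch of the pattern, not the code from the demo (that is linked in the next paragraph): the data frame, variable names and regression formula here are all invented.

    library(randomForest)

    # 1. Train a random forest on the dependent variable, keeping proximities.
    #    (macro_data, productivity_growth and d_unemployment are invented names.)
    rf <- randomForest(productivity_growth ~ ., data = macro_data, proximity = TRUE)

    # 2. The proximity matrix: entry [i, j] is the share of trees in which
    #    observations i and j land in the same terminal node.
    prox <- rf$proximity

    # 3. Take the row for the most recent observation as the weighting vector,
    #    and pass it to the regression of interest.
    w <- prox[nrow(prox), ]
    fit <- lm(productivity_growth ~ d_unemployment, data = macro_data, weights = w)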

So how much of the data actually gets used in this method? To illustrate, I’ve put together a little demo (code and data—which downloads automatically on running the script—available here). In it, I’m trying to model labour productivity growth, in particular, how much it appears to be affected by changes to unemployment. Note that this is for illustrative purposes, and I’m not making any claims about whether the parameter is well identified.

The figure below illustrates the weights that are being given to historical data observations, along with the fitted values and predicted values.

[Figure: proxweights (weights given to historical observations, along with fitted and predicted values)]

 

If we use this method, we can see how the relationship between changes in unemployment and changes in productivity varies over time, when we give more weight to relevant histories. The line in the middle is what we would estimate today if we gave equal weight to all observations. As we can see, in some histories, changes to productivity do appear to move together with changes in unemployment.

 

[Figure: time_varying_coefficient (the estimated relationship between changes in unemployment and changes in productivity, over time)]

 

These charts wrap up my spiel on using relevant histories, though I’ll probably write some more on it in the future.

 

3. The ability to inform the user when the model should not be used

One of the major shortcomings of macro models today is that they lack an intuitive way of knowing when a forecast or policy simulation should not be performed because the model was estimated on data from a different world. Instead, a model is typically just a bunch of coefficients (or occasionally distributions) that we multiply with our hypothetical x variables. It doesn’t care that those x variables may be nothing like the ones the model was estimated on.

This is one of the big areas of abuse of models, sometimes with catastrophic consequences. We estimate a model on the good times, and wonder why it doesn’t work during the bad. Wouldn’t it be wonderful if the model just reported enormous confidence bands whenever it was being asked to do something unreasonable?

Well, this can be done too, using the weighting scheme discussed above. If we’re in the middle of an unusual economy, then there will be very few histories proximate to the present, and confidence intervals can be adjusted accordingly (as the model is effectively being estimated on fewer data points). On the other hand, if we’re estimating the behaviour of a fairly regular economy, we have lots of relevant histories and our confidence intervals will be smaller.
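One simple way to operationalise this (my own illustration here, not something from the paper) is to report the effective number of observations implied by the analogy weights. When today resembles very little of history, this number collapses, and the intervals should widen accordingly.

    # Kish effective sample size of the analogy weights w from the sketch above.
    # A small value means the model is effectively estimated on very few
    # observations, so its intervals should be correspondingly wide.
    w_norm <- w / sum(w)
    n_eff  <- 1 / sum(w_norm^2)
    n_eff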

—–

There are many things we should be asking of our macroeconomic modellers. My top three are that they appropriately model what they do not know, that they stop building models on useless data, and that they do not use their models out of context. That’s not too much to ask.


The Kangaroo Jack effect: what happens when nobody says no

Some big ideas are objectively terrible, yet are followed through on, resulting in a predictably terrible product. Kangaroo Jack comes to mind [1], as does the Bollywood production of Fight Club, in which some budding entrepreneurs start a for-profit fight club (market research and all). Here are some other terrible ideas from the startup world.

The interesting thing about these terrible ideas is that there were presumably several people who could have just said “no”. Political scientists call these people veto players. There are the financiers, the production houses, the directors, producers and so on. It’s an interesting dynamic that leads veto players to allow so many people to pour so much human effort into these thought farts.

I’m not sure why these ideas end up going ahead. It could be some combination of:

– Hindsight bias. Maybe Kangaroo Jack, Fight Club the musical etc. weren’t obviously terrible ideas at the outset?
– In some communities—think Hollywood and Silicon Valley—maybe everyone owes everyone, so people work on each other’s projects in order to curry favour.
– A lot of people do say no, yet the persistence of the bad idea’s champions ends up finding the dumb money or weak/stupid veto players.
– Both film production firms and VCs just play the numbers game, aware that many apparently dumb ideas end up paying off. Indeed, the Kangaroo Jack effect may be a feature of investment markets in which most investments make nothing, and a few make a huge amount.

I’d be interested to hear of your explanations/examples.

[1] Kangaroo Jack, despite being a terrible film, did turn a small profit, but “Bollywood version of Fight Club effect” doesn’t quite sound the same. The opportunity cost of funding Kangaroo Jack isn’t not making Kangaroo Jack; it’s not making The LEGO movie. I’m pretty sure the world is worse off due to Kangaroo Jack.


Update from Chicago

Here’s my much-promised, less-delivered update on Chicago: what we’re doing, how it’s going, and if we’re going to come back.

In reverse order, yes. Sue and I need to leave the country at the end of August, when my visa (but not hers) expires. But gee, it’ll not be without regrets. Escaping an apocalyptic Melbourne winter to come to this great city has been immensely enjoyable. More on that below. So yes, all the rumours are false; we’re not staying on, even if we really want to.

The reason we made the trip over here is so that I could take up the Eric and Wendy Schmidt Data Science for Social Good fellowship, run through the University of Chicago. The fellowship itself came about from Eric Schmidt’s involvement in the Obama 2012 campaign, the data-intensive side of which was run by Rayid Ghani.

So the story goes: Schmidt, who has made a little bit of money running Google, wanted Ghani to put the skills of young data scientists to the public good, and funded the fellowship. In return, Ghani and his team run the fellowship, which brings together some seriously bright folk from around the world (but mainly the US) to Chicago for 12 weeks over summer. These fellows work in groups of four with partner organisations—mainly not-for-profits and government agencies—to lend their skills to help solve some difficult problems.

The team I’ve landed in is fantastic. The members are Diana Palsetia, Pete Lendwehr, and Sam Zhang. Diana is a PhD student at Northwestern who specialises in high-performance parallel computing on large datasets. She’s the sort of person who interjects sparingly, from the back of the room, with extremely useful insights. Pete is a PhD student at Carnegie Mellon University, specialising in ‘advanced computation’, but seems to spend as much time pondering theatre and hunting down excellent coffee beans. He also turns my ideas—good and bad—into deployable Python code. Sam is a brilliant 21-year-old who just finished up at Swarthmore, an elite liberal arts college. He’s an all-round hacker, writer and statistician who makes me reconsider the wisdom of having wasted so much of my 20s.

Our team has been partnered with two organisations: the first is Enroll America, a well-funded not-for-profit tasked with getting as many people as possible signed up for health insurance under the Affordable Care Act/Obamacare; the second is Get Covered Illinois, run out of the Illinois Governor’s office, which is attempting to do the same, though only in Illinois.

Both of these organisations have limited budgets, and the same aim—get uninsured people covered. The big question for them is who to target with their limited resources. There are some people who will never take out insurance no matter how much they’re pestered, and sad as it is, it’s a waste of money trying to convince them to do so. There are other folk who are far more interested in taking out insurance, but have not because, frankly, the system takes a bit of work. And you can always do it tomorrow, right?

There is no shortage of data here. As many of the people working on this problem (mainly on the Enroll America side) also worked on the Obama 2012 campaign, they use similar datasets to those used to find persuadable voters. That means that some of these datasets are quite big—a row for almost every American, with plenty of details (mostly best guesses) on each.

The approach our team is taking is to build several statistical models to help Enroll America and GCI work out who they should be spending money contacting. The first model gives, for each person, the probability that that person is uninsured. There is no point contacting someone who is insured. What is surprising about this model is that a lot of people you’d not expect to have health insurance do have it, and so it’s quite difficult to build a good predictive model that sorts the insured from the uninsured.

The second model tells us how persuadable someone is, given their probability of being uninsured. Thankfully, Enroll America ran a randomised controlled trial in March, in which they randomly selected a ‘control group’ who would not be pestered during the telephone and email campaign. They then compared this group to a ‘matched treatment’ group—similar folk who were pestered—looking at the difference in insurance rates after the enrolment period ended. The result was quite profound: people who were pestered by email and phone were about 6% more likely to have taken up health insurance.

While the ‘treatment effect’ of being pestered is about 6% on average, the interesting question for our team is working out what the treatment effect is for an individual person. This is an extremely difficult problem, to which we have been devoting most of our time. Our current solution is here.

There are other problems that we’ve not done as much work on. For instance, what is the best contact language? Where should tabling events be held? How can we best guess someone’s income (which will determine how large a subsidy they will receive)? These are for the coming weeks.

Sue and Emi have also been busy, making friends in our neighbourhood–right next to the University of Chicago–and spending long days at the beach. Emi has learned to run, and Sue has learned to spot enclosed playgrounds.

A few words on Chicago. Picture this: one in three days in this city you can freeze to death without trying hard. Yet almost 10 million people decide to live in and around the city. Why would so many people make such a choice, surely crazy to the outsider?

I’m not 100 per cent sure, but it must have something to do with the fact that it somehow combines being an extremely large city with a small-town feel. Traffic is no worse—probably better—than Melbourne. Public Transit certainly isn’t Singapore, but is cheap and effective (especially during the rush). The music, theatre and intellectual scenes are full and exciting. The beaches are fun, the food great, and the people are extremely friendly; some combination of northern-Midwestern Nice and Southern hospitality, a remnant of the Great Migrations. Finally—this was unexpected—the summer is delicious. 29C every day, often cooled by large storms at night. Splendid.

The 10 million people who live in Chicagoland aren’t mad. They could live in the Eternal Spring that is Southern California, but don’t. If Southern California had Chicago’s winter, nobody would live there.

