This is a summary of our JPER journal article (available here) about what Craigslist rental listings reveal about U.S. housing markets.
Rentals make up a significant portion of the U.S. housing market, but much of this market activity is poorly understood due to its informal characteristics and historically minimal data trail. The UC Berkeley Urban Analytics Lab collected, validated, and analyzed 11 million Craigslist rental listings to discover fine-grained patterns across metropolitan housing markets in the United States. I’ll summarize our findings below and explain the methodology at the bottom.
But first, 4 key takeaways:
- There are incredibly few rental units below fair market rent in the hottest housing markets. Some metro areas like New York and Boston have only single-digit percentages of Craigslist rental listings below fair market rent (a calculation sketched after this list). That’s really low.
- This problem doesn’t exclusively affect the poor: in cities like New York and San Francisco, the share of income that the typical household would spend on the typical rent exceeds the threshold for “rent burden.”
- Rents are more “compressed” in soft markets. For example, in Detroit, most of the listed units are concentrated within a very narrow band of rent/ft² values, but in San Francisco rents are much more dispersed. Housing vouchers may end up working very differently in high-cost vs low-cost areas.
- Craigslist listings correspond reasonably well with Department of Housing and Urban Development (HUD) estimates, but provide more up-to-date data, including unit characteristics, at scales from the neighborhood to the nation. For example, we can see how rents are changing, neighborhood by neighborhood, in San Francisco in a given month.
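As a toy illustration of that below-fair-market-rent calculation (not the paper’s actual methodology), here’s how you might compute each metro’s share with pandas; the listings and fair market rents below are entirely hypothetical:

```python
# A hypothetical toy example, not the paper's methodology: compute each
# metro's share of listed units priced below its HUD fair market rent.
import pandas as pd

listings = pd.DataFrame({
    'metro': ['new york', 'new york', 'detroit', 'detroit'],
    'rent': [3200, 1400, 650, 900],  # made-up monthly rents
})
fmr = {'new york': 1500, 'detroit': 800}  # made-up fair market rents

listings['below_fmr'] = listings['rent'] < listings['metro'].map(fmr)
print(listings.groupby('metro')['below_fmr'].mean())  # share below FMR
```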
Continue reading Craigslist and U.S. Rental Housing Markets
Tools like WalkScore visualize how “walkable” a neighborhood is in terms of access to different amenities like parks, schools, or restaurants. It’s easy to create accessibility visualizations like these ad hoc with Python and its pandana library. Pandana (pandas for network analysis – developed by Fletcher Foti during his dissertation research here at UC Berkeley) performs fast accessibility queries over a network. I’ll demonstrate how to use it to visualize urban walkability. My code is in these IPython notebooks in this urban data science course GitHub repo.
First I give pandana a bounding box around Berkeley/Oakland in the East Bay of the San Francisco Bay Area. Then I load the street network and amenities from OpenStreetMap. In this example I’ll look at accessibility to restaurants, bars, and schools, but you can create any basket of amenities you’re interested in – basically visualizing a personalized “AnythingScore” instead of a generic WalkScore for everyone. Finally I calculate and plot the distance from each node in the network to the nearest amenity:
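Here’s a minimal sketch of that workflow. Pandana’s OSM loader and POI functions have changed signatures across versions (newer releases point you to OSMnx for network downloads), so treat this as a pattern rather than a drop-in script:

```python
# Sketch: distance from every street network node to the nearest restaurant.
import matplotlib.pyplot as plt
from pandana.loaders import osm

# bounding box around Berkeley/Oakland: (lat_min, lng_min, lat_max, lng_max)
bbox = (37.76, -122.35, 37.90, -122.15)

# load the OSM walking network and restaurant POIs within the bounding box
network = osm.pdna_network_from_bbox(*bbox)
pois = osm.node_query(*bbox, tags='"amenity"="restaurant"')

# precompute contraction hierarchies out to 1km, register the POIs, then
# query each node's network distance to its single nearest restaurant
network.precompute(1000)
network.set_pois('restaurants', 1000, 1, pois['lon'], pois['lat'])
distances = network.nearest_pois(1000, 'restaurants', num_pois=1)

# scatter-plot every node, colored by distance to the nearest restaurant
plt.scatter(network.nodes_df.x, network.nodes_df.y,
            c=distances[1], s=1, cmap='viridis_r')
plt.colorbar(label='meters to nearest restaurant')
plt.show()
```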
Continue reading How to Visualize Urban Accessibility and Walkability
I recently wrote about visualizing my Foursquare check-in history and mapping my Google location history, and it inspired me to mount a more substantial project: mapping everywhere I’ve ever been in my life (!!). I’ve got 4 years of Foursquare check-ins and Google location history data. For everything pre-smartphone, I typed up a simple spreadsheet of places I’d visited in the past and then geocoded it with the Google Maps API. All my Python and Leaflet code is available in this GitHub repo and is easy to re-purpose to visualize your own location history.
I’ll show the maps first, then run through the process I followed below. First off, I used Python and matplotlib’s basemap toolkit to create this map of everywhere I’ve ever been:
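The full code is in the repo; as a minimal sketch of the two steps (geocode the spreadsheet, then plot the points), assuming a hypothetical places.csv with a ‘place’ column and a placeholder Google Maps API key:

```python
# Sketch: geocode place names with the Google Maps geocoding API, then
# scatter-plot them on a basemap world map. File name and key are placeholders.
import time
import matplotlib.pyplot as plt
import pandas as pd
import requests
from mpl_toolkits.basemap import Basemap

GEOCODE_URL = 'https://maps.googleapis.com/maps/api/geocode/json'
API_KEY = 'your-api-key'  # placeholder

def geocode(place):
    # look up a place name and return its lat/lng (None if no match)
    time.sleep(0.1)  # pause between calls to respect rate limits
    params = {'address': place, 'key': API_KEY}
    results = requests.get(GEOCODE_URL, params=params).json()['results']
    if not results:
        return pd.Series({'lat': None, 'lng': None})
    loc = results[0]['geometry']['location']
    return pd.Series({'lat': loc['lat'], 'lng': loc['lng']})

places = pd.read_csv('places.csv')  # assumed to have a 'place' column
places = pd.concat([places, places['place'].apply(geocode)], axis=1)
places = places.dropna(subset=['lat', 'lng'])

# draw a Robinson-projection world map and plot each visited place on it
m = Basemap(projection='robin', lon_0=0, resolution='c')
m.drawcoastlines(color='gray')
m.fillcontinents(color='#eeeeee', lake_color='white')
x, y = m(places['lng'].values, places['lat'].values)
m.scatter(x, y, s=20, color='r', alpha=0.6, zorder=3)
plt.show()
```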
Continue reading Mapping Everywhere I’ve Ever Been in My Life
I recently wrote about visualizing my Foursquare check-in history and it inspired me to map my entire Google location history data – about 1.2 million GPS coordinates from my Android phone between 2012 and 2016. I used Python and its pandas, matplotlib, and basemap libraries. The Python code is available in this notebook in this GitHub repo, and it’s simple to re-use to visualize your own location history.
Just download your JSON file from Google then run the code. First I load the JSON file and parse the latitude, longitude, and timestamp with pandas. Then I map my worldwide data set:
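A minimal parsing sketch, assuming the Takeout export format of that era (a top-level ‘locations’ list with latitudeE7, longitudeE7, and timestampMs fields); Google has changed the export format since, so adjust the keys to match your file:

```python
# Sketch: load Google's LocationHistory.json into a pandas DataFrame.
import json
import pandas as pd

with open('LocationHistory.json') as f:
    raw = json.load(f)

df = pd.DataFrame(raw['locations'])

# convert E7-scaled integer coordinates to decimal degrees, and millisecond
# timestamps to pandas datetimes
df['lat'] = df['latitudeE7'] / 1e7
df['lng'] = df['longitudeE7'] / 1e7
df['datetime'] = pd.to_datetime(df['timestampMs'], unit='ms')
print(df[['datetime', 'lat', 'lng']].head())
```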
Continue reading Mapping Your Google Location History with Python
Last.fm is a web site that tracks your music listening history across devices (computer, phone, iPod, etc) and services (Spotify, iTunes, Google Play, etc). I’ve been using Last.fm for nearly 10 years now, and my tracked listening history goes back even further when you consider all my pre-existing iTunes play counts that I scrobbled (ie, submitted to my Last.fm database) when I joined Last.fm.
Using Python, pandas, matplotlib, and Leaflet, I downloaded my listening history from Last.fm’s API, analyzed and visualized the data, downloaded full artist details from the MusicBrainz API, then geocoded and mapped all the artists I’ve played. All of my code used to do this is available in this GitHub repo, and is easy to re-purpose for exploring your own Last.fm history. All you need is an API key.
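The repo has the full pipeline; as a sketch, the download step boils down to paging through the API’s user.getrecenttracks method (the username and API key below are placeholders):

```python
# Sketch: page through Last.fm's user.getrecenttracks and build a DataFrame.
import pandas as pd
import requests

URL = 'http://ws.audioscrobbler.com/2.0/'
params = {'method': 'user.getrecenttracks', 'user': 'your-username',
          'api_key': 'your-api-key', 'format': 'json', 'limit': 200}

rows, page, total_pages = [], 1, 1
while page <= total_pages:
    params['page'] = page
    data = requests.get(URL, params=params).json()['recenttracks']
    total_pages = int(data['@attr']['totalPages'])
    for track in data['track']:
        if 'date' in track:  # skip any "now playing" track without a date
            rows.append({'artist': track['artist']['#text'],
                         'track': track['name'],
                         'album': track['album']['#text'],
                         'timestamp': int(track['date']['uts'])})
    page += 1

scrobbles = pd.DataFrame(rows)
scrobbles['datetime'] = pd.to_datetime(scrobbles['timestamp'], unit='s')
```

With the scrobbles in a DataFrame, counting plays per artist is then a one-liner like `scrobbles['artist'].value_counts().head(10)`.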
First I visualized my most-played artists, above. Across the dataset, I have 279,769 scrobbles (aka, song plays). I’ve listened to 26,761 different artists and 66,377 different songs across 38,026 different albums from when I first started using iTunes circa 2005 through the present day. This includes pretty close to every song I’ve played on anything other than vinyl during that time.
Continue reading Analyzing Last.fm Listening History
I started using Foursquare at the end of 2012 and stuck with it even after it became the pointless muck that is Swarm. Since I’ve now got 4 years of location history (ie, check-in) data, I decided to visualize and map it with Python, matplotlib, and basemap. The code is available in this GitHub repo. It’s easy to re-purpose to visualize your own check-in history: just plug in your Foursquare OAuth token and run the notebook.
First the notebook downloads all my check-ins from the Foursquare API. Then it maps all of them using matplotlib basemap.
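Here’s a minimal sketch of the download step, paging through Foursquare’s checkins endpoint with a placeholder OAuth token; mapping the resulting lat/lng columns then follows the same basemap pattern as in the posts above:

```python
# Sketch: page through the Foursquare checkins endpoint into a DataFrame.
import pandas as pd
import requests

URL = 'https://api.foursquare.com/v2/users/self/checkins'
params = {'oauth_token': 'your-oauth-token', 'v': '20160101', 'limit': 250}

rows, offset = [], 0
while True:
    params['offset'] = offset
    response = requests.get(URL, params=params).json()
    items = response['response']['checkins']['items']
    if not items:
        break
    for checkin in items:
        venue = checkin.get('venue', {})
        location = venue.get('location', {})
        rows.append({'venue': venue.get('name'),
                     'lat': location.get('lat'),
                     'lng': location.get('lng'),
                     'created_at': checkin['createdAt']})
    offset += len(items)

checkins = pd.DataFrame(rows)
checkins['datetime'] = pd.to_datetime(checkins['created_at'], unit='s')
```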
Continue reading Visualize Foursquare Location History
A guide to setting up the Python scientific stack, well-suited for geospatial analysis, on a Raspberry Pi 3. The whole process takes just a few minutes.
The Raspberry Pi 3 was announced two weeks ago and presents a substantial step up in computational power over its predecessors. It can serve as a functional Wi-Fi connected Linux desktop computer, albeit an underpowered one. However, it’s perfectly capable of running the Python scientific computing stack, including Jupyter, pandas, matplotlib, scipy, scikit-learn, and OSMnx.
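Once the install steps below are done, a quick sanity check (a sketch, not part of the guide itself) is to import the stack and print each library’s version:

```python
# confirm the scientific stack imports on the Pi and report each version
import matplotlib
import pandas
import scipy
import sklearn

for lib in (pandas, scipy, matplotlib, sklearn):
    print(lib.__name__, lib.__version__)
```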
Despite (or because of?) its low power, it’s ideal for the low-overhead, repetitive tasks that researchers and engineers often face, including geocoding, web scraping, scheduled API calls, and recurring statistical or spatial analyses (with small-ish data sets). It’s also a great way to set up a simple server or experiment with Linux. This guide is aimed at newcomers to the world of Raspberry Pi and Linux who have an interest in setting up a Python environment on these $35 credit card sized computers. We’ll run through everything you need to do to get started (if your Pi is already up and running, skip steps 1 and 2).
Continue reading Scientific Python for Raspberry Pi
Google Takeout lets you download an archive of your data from various Google products. I downloaded my Gmail archive as an mbox file and visualized all of my personal Gmail account traffic since signing up back in July 2004. This analysis excludes work and school email traffic (as well as my other Gmail account for signing up for web sites and services), as I have separate dedicated email accounts for each. It also excludes the Hangouts/chats that Google includes in your mbox archive. So, this analysis just covers personal communication.
This also demonstrates working with time series in Python and pandas. All of my code is on GitHub as an IPython notebook. You can re-purpose it for your own inbox – just download your Gmail archive then run my code.
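As a minimal sketch of the approach (the notebook handles the headers more carefully), you can parse the archive with Python’s built-in mailbox module and count messages per month with pandas; the mbox filename below is a placeholder:

```python
# Sketch: parse a Gmail Takeout mbox and resample message counts by month.
import mailbox
import pandas as pd

mbox = mailbox.mbox('your-gmail-archive.mbox')  # placeholder filename

# collect each message's Date header
dates = [message['Date'] for message in mbox if message['Date'] is not None]

# coerce the Date headers to datetimes, then count messages per month
s = pd.to_datetime(pd.Series(dates), errors='coerce', utc=True).dropna()
monthly = s.dt.tz_localize(None).dt.to_period('M').value_counts().sort_index()
print(monthly.tail())
```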
Continue reading Visualizing a Gmail Inbox
Also check out this follow-up analysis of stadium attendance.
The 2016 college football championship game between Clemson and Alabama was held at University of Phoenix Stadium, where the NFL’s Arizona Cardinals play. Interestingly, this NFL stadium (ironic, given its collegiate name) is considerably smaller than the home stadiums of either Clemson or Alabama. In fact, every NFL stadium is considerably smaller than the largest college stadiums. Outside of North Korea, the 8 largest stadiums in the world are college football stadiums, and the 15 largest college football stadiums are all larger than any NFL stadium.
Americans are obsessed with college football, but how much is too much? Today most athletic departments are subsidized by their schools. Public universities increased their annual football spending by $1.8 billion between 2009 and 2013 while racking up huge debts to finance stadiums with little chance of profit. This interactive map shows each NCAA Division I college football team’s home stadium: collectively they seat 8.5 million people. Click any point for details about stadium capacity and year built:
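The original map is hand-built with Leaflet; as a rough Python equivalent, folium can generate a similar clickable map, assuming a hypothetical CSV of stadiums with name, capacity, year built, and coordinates:

```python
# Sketch: a clickable folium/Leaflet map of stadium points with popups.
import folium
import pandas as pd

stadiums = pd.read_csv('stadiums.csv')  # hypothetical input file

# center the map on the continental U.S.
m = folium.Map(location=[39.8, -98.6], zoom_start=4, tiles='cartodbpositron')
for _, row in stadiums.iterrows():
    popup = '{}: capacity {:,}, built {}'.format(
        row['stadium'], row['capacity'], row['year_built'])
    folium.CircleMarker(location=[row['lat'], row['lng']], radius=4,
                        popup=popup, fill=True).add_to(m)
m.save('stadiums.html')
```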
Continue reading America’s College Football Stadiums
The U.N. World Population Prospects data set depicts the U.N.’s projections for every country’s population, decade by decade through 2100. The 2015 revision was recently released, and I analyzed, visualized, and mapped the data (methodology and code described below).
The world population is expected to grow from about 7.3 billion people today to 11.2 billion in 2100. While the populations of Eastern Europe, Taiwan, and Japan are projected to decline significantly over the 21st century, the U.N. projects Africa’s population to grow by an incredible 3.2 billion people. This map depicts each country’s projected percentage change in population from 2015 to 2100:
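The core calculation behind this map is simple; as a sketch, assuming a hypothetical tidy CSV with one row per country and population columns by year:

```python
# Sketch: percentage change in projected population, 2015 to 2100, per country.
import pandas as pd

df = pd.read_csv('un_wpp_2015.csv', index_col='country')  # hypothetical file

df['pct_change'] = (df['2100'] - df['2015']) / df['2015'] * 100
print(df['pct_change'].sort_values(ascending=False).head())
```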
Continue reading World Population Projections