There are dozens of ways to rank cities. Here’s what they all get wrong
Every year, the world’s most “liveable” cities are celebrated in rankings published by Mercer, the Economist Intelligence Unit and Monocle magazine. Global institutions including UN Habitat and the OECD are joining consultancies, media organisations and think tanks in the quest to compare living standards and well-being in cities. At the last count, there were more than 30 urban liveability indexes produced worldwide, and more than 500 measures and benchmarks comparing cities.
Each new round of results triggers debate over the biases and blind spots of these comparisons. Yet the sheer volume and variety of data and benchmarks, as well as the growing number of institutions interested in producing them, suggest that city rankings are here to stay.
The question, then, is how to improve the measurements used, to ensure that the rankings align with the public interest, and help those in positions of leadership – such as lawmakers, local governments and urban planners – to better understand the huge array of data they have access to.
Comparisons have consequences
Most city rankings weren’t intended to guide policy, but they certainly attract the attention of mayors and city leaders all over the world. Larger cities often have departments that draw on city performance measures, and monitoring teams that keep track of relevant studies to manage the risks and opportunities the results may present. A bad ranking can add significant pressure to city leaders, while positive news can help them to argue the merits of past or future policies.
City comparisons are also used outside government, for example by global firms looking to relocate staff internationally or promote their expertise or services. Such studies are increasingly conducted by companies and think tanks as a way to engage with government clients at city and national levels.
The rise of liveability rankings and other comparative information has generated a trove of data. Much of it is welcomed by cities confronting issues such as housing affordability, ageing populations and spikes in air pollution and congestion.
Rankings provide a memorable tool for organising complex information, but they face several common challenges, which can inadvertently cause them to caricature the complex realities of urban life. For example, it is difficult to compare data collected at different scales, and sources aren’t always up to date. Often, proxy variables stand in for attributes that are hard to measure directly.
Progress in addressing these dilemmas has been slow. Many of the organisations producing liveability comparisons lack strong incentives to innovate, since they already benefit from a captive media audience and stable revenue streams from supplying data to a trusted set of customers. New entrants try to improve on established rankings, including by crowdsourcing data and consulting residents or target demographics about their needs. But because it is so difficult to meaningfully compare a diverse group of cities, many new studies still borrow heavily from existing datasets.
Ranking risks and rewards
Now and then, liveability comparisons do open up productive discussions about what cities should be like. But reflecting on recent research and practical experience with city governments, we have found some common flaws in the way city leaders use and interpret such insights.
For one thing, they can be tempted to use whichever ranking tells the best story for their city, ignoring the broader spectrum of studies that paint a more varied and complex picture. Trying to build a city’s reputation around its position in just one or two rankings is fraught, especially if scores hinge on things a city has limited control over – such as the weather or terrorist attacks.
City leaders also misstep when they try to draw direct comparisons with every other city. Cities inherit geographies, climates, demographics, governance arrangements, investment capabilities and cultural norms, all of which shape the quality of life there. Tracking progress against cities that are on an entirely different trajectory is unhelpful and unrealistic.
Whether misuse of city rankings results in ill-conceived projects or unattainable goals, it often leads to disappointment and reduced public confidence in city policies.
Filling the gaps
There are ways to address these problems and measure liveability so that it is more useful for urban leaders and more beneficial to local people. Indicators are already evolving to include a broader set of data points, including access to public transport and jobs, gender discrimination, housing affordability, democratic participation, inclusion, public space and environmental resilience. They can also consider how people experience the city differently according to ethnicity, gender, sexuality and income group.
Of course, simply adding up all of these measures doesn’t produce useful information; there are important choices to make about how to weight different metrics and represent the overall performance of a city. But incorporating these measures will raise residents’ awareness of liveability issues that may have been neglected, and compel decision makers to confront previously hidden weaknesses.
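To see why those weighting choices matter, consider a minimal sketch. The city names, indicators and scores below are entirely invented for illustration; real indexes use many more indicators and more sophisticated normalisation. The point is that the same two cities can swap places depending on how heavily one dimension is weighted.

```python
# Hypothetical example: how weighting choices reorder a composite
# "liveability" ranking. All names and scores are invented.
cities = {
    "Alphaville": {"transport": 0.9, "housing": 0.4, "air": 0.7},
    "Betatown":   {"transport": 0.5, "housing": 0.8, "air": 0.6},
}

def composite(scores, weights):
    """Weighted average of normalised (0-1) indicator scores."""
    total = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total

def rank(weights):
    """Order cities from best to worst composite score."""
    return sorted(cities, key=lambda c: composite(cities[c], weights),
                  reverse=True)

# Equal weights favour Alphaville; quadrupling the weight on housing
# affordability flips the order in favour of Betatown.
print(rank({"transport": 1, "housing": 1, "air": 1}))  # ['Alphaville', 'Betatown']
print(rank({"transport": 1, "housing": 4, "air": 1}))  # ['Betatown', 'Alphaville']
```

A ranking, in other words, is never just a measurement: it encodes a judgement about which aspects of urban life count most.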
It’s also vital to fill in the gaps in the data that hamstring existing efforts to compare cities. Research can build the evidence base on housing, infrastructure, public spaces, social inequality, strength of community, economic resilience, quality of government and public finances. This research needs to be brought together, kept up to date and made available to all. Initiatives such as the Newcastle Urban Observatory and the Gauteng City-Region Observatory are good examples.
Livability needs the long view
For urban leaders and governments, knowing how your city’s public transport compares with another’s has some value. But it is vastly more useful to know how recent investments, reforms and regulations have improved a city’s public transport performance over a five-, ten- or twenty-year period.
City leaders can move beyond short-term, reactive responses to rankings by looking past one-off comparisons of performance and drawing on nuanced, longer-term insights about urban change. These give better feedback on the effectiveness of existing policies, and take account of the time it takes for policies to produce measurable impacts.
With the majority of the world’s population now living in cities, there is a great deal at stake in the measurement (or mis-measurement) of quality of life. Cities are striving to address multiple challenges: social cohesion, environmental sustainability, affordability and economic prosperity. Fixing the flaws of liveability rankings and forging ahead with new tools will be one small victory in a much larger battle to measure progress, in a way that benefits all urban citizens.
Jenny McArthur is a lecturer in urban infrastructure and policy at UCL, and Tim Moonen is an honorary lecturer at UCL. This article is republished from The Conversation under a Creative Commons license. Read the original article.