Traffic Accidents Have Fallen in NYC Since Congestion Pricing Started
Open data provides an immediate answer to a pressing policy question.
New York City implemented congestion pricing on January 5, 2025. Per the New York Times, under this policy “Most passenger cars will now have to pay $9 to enter Manhattan south of 60th Street at peak hours...Small trucks will have to pay $14.40; large trucks, $21.60. Discounted rates will be offered overnight when there is less traffic.”
The above NYT article also sums up the resistance to the policy, saying: “But suburban commuters, residents of the city’s so-called transit deserts and public officials of both parties say congestion pricing will do little to reduce traffic while punishing drivers from outside Manhattan with few other travel options.”
I don’t pretend to know the intricacies of how the policy impacts people who live in New York City versus the surrounding areas. I do know, however, that a policy was implemented on a specific date and a police department has publicly available data that can help us analyze the initial immediate impacts of that policy. And the data clearly points to a sizable reduction in traffic as measured by traffic accidents.
NYPD has publicly available data on motor vehicle collisions to measure this issue, but there are a few challenges with working with the NYPD’s dataset. Not every traffic accident gets reported to police, and the dataset appears to lack information on minor fender benders with minimal damage. Per NYPD:
“The Motor Vehicle Collisions data tables contain information from all police reported motor vehicle collisions in NYC. The police report (MV104-AN) is required to be filled out for collisions where someone is injured or killed, or where there is at least $1000 worth of damage.”
Still, that’s going to be most accidents and should clearly show the trends.
Another challenge to working with the data is that a lot of accidents (about a third) don’t have a borough or zip code listed. Congestion pricing only directly impacts Manhattan south of 60th Street, so I used zip codes to approximate that zone as best I could, but I didn’t try to geocode accidents to correct this discrepancy in reporting.
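As a sketch of the zip-code approximation, the filtering looks roughly like this. The zip list below is illustrative, not the exact set used for the analysis, and the column name `zip_code` is an assumption about the open-data schema:

```python
import pandas as pd

# Hypothetical subset of Manhattan zip codes south of 60th Street;
# the real congestion zone spans a longer list.
CONGESTION_ZONE_ZIPS = {"10001", "10002", "10003", "10010", "10036"}

def in_zone(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only collisions whose zip code falls in the approximate zone.

    Rows with a missing zip code (about a third of the data) simply drop
    out, which undercounts zone collisions rather than geocoding them.
    """
    zips = df["zip_code"].astype("string").str.strip()
    return df[zips.isin(CONGESTION_ZONE_ZIPS)]

# Tiny illustrative sample in place of the full NYPD extract.
sample = pd.DataFrame({
    "collision_id": [1, 2, 3, 4],
    "zip_code": ["10001", "11201", None, "10036"],
})
zone = in_zone(sample)
```

The key design choice here is dropping unlocated records instead of geocoding them, so the zone counts are a floor, not an exact tally.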
The final challenge is that the policy is barely a month old. It’s possible that some traffic accidents haven’t yet been entered into the system, and it’s also possible that any benefits seen in the data will be short-term.
Yet by any measure there has been a clear and sudden drop in motor vehicle collisions since the policy went into effect.
Most boroughs saw declining vehicle collisions in the 30 days after congestion pricing went into effect, though Manhattan had the most pronounced decline. These numbers can go up and down over short stretches like this, so it’s certainly plausible that other factors such as seasonality are at play as well. Still, every borough saw at least some decline in accidents in January and into February relative to December.
Using zip codes to isolate accidents in the congestion pricing zone points to a clearer impact from the policy’s implementation. There were 536 traffic accidents in the zone in the 30 days prior to congestion pricing’s initiation and 375 in the 30 days after. There were 248 people injured in traffic accidents in the zone in the 30 days prior and 163 in the 30 days after.
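The 30-day before/after comparison boils down to counting records on either side of the policy’s start date. A minimal sketch, assuming a `crash_date` column (the actual dataset’s field names may differ):

```python
import pandas as pd

POLICY_START = pd.Timestamp("2025-01-05")

def before_after_counts(df: pd.DataFrame, days: int = 30) -> tuple[int, int]:
    """Count collisions in the `days`-day windows before and after POLICY_START."""
    dates = pd.to_datetime(df["crash_date"])
    window = pd.Timedelta(days=days)
    before = ((dates >= POLICY_START - window) & (dates < POLICY_START)).sum()
    after = ((dates >= POLICY_START) & (dates < POLICY_START + window)).sum()
    return int(before), int(after)

# Synthetic example: three crashes before the start date, one after.
sample = pd.DataFrame({
    "crash_date": ["2024-12-10", "2024-12-20", "2025-01-04", "2025-01-20"],
})
before, after = before_after_counts(sample)
```

Equal-length windows on both sides keep the comparison symmetric, though as noted above they can’t rule out seasonal effects.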
Taking a step back shows accidents in the congestion pricing zone at the lowest level since January 2023. This context helps to show that the declines are substantial but the level of accidents and injuries is hardly unprecedented.
There are obviously quite a few layers of politics and other complexities tied into this issue. Evaluating the open data, however, shows a pretty clear connection between congestion pricing being implemented and fewer traffic accidents and injuries. Whether this trend holds for the long term, of course, remains to be seen.