What do subway riders want?
They want short waits, trains that arrive regularly, a chance for a seat, a clean car, and understandable announcements that tell them what they need to know. That’s what MTA New York City Transit’s own polling of rider satisfaction measures.1
This “State of the Subways” Report Card tells riders how their lines do on these key aspects of service. We look at six measures of subway performance for the city’s 20 major subway lines, using recent data compiled by MTA New York City Transit.2 Some of the information has not been released publicly before on a line-by-line basis. Most of the measures are for all or the last half of 2013.
Our Report Card has three parts:
First, we compare service on the 20 lines, as detailed in the attached tables.
Second, we give an overall “MetroCard Rating”3 to 19 of the 20 major lines.4
Third, the report contains one-page profiles on each of the 20 lines. These are intended to provide riders, officials and communities with an easy-to-use summary of how their line performs compared to others.
This is the sixteenth Subway Report Card by the Straphangers Campaign since 1997.5
Our findings show the following picture of how New York City’s subways are doing:
1. The best subway line in the city was the 7 with a MetroCard Rating of $2.00. The 7 has ranked number one in seven out of our sixteen State of the Subways reports. The 7 ranked highest because it tied for best in the system on frequency of service — and also performed above average on three measures: delays caused by mechanical breakdowns; seat availability at the most crowded point during rush hour; and subway car cleanliness. The line did not get a higher rating because it performed below average on regularity of service and subway car announcements. The 7 runs between Times Square and Main Street Flushing in Queens.
2. For the second time in sixteen Report Cards, the 2 performed worst in the subway system, with a MetroCard Rating of $1.30. The 2 performed below average on three measures: regularity of service; delays caused by mechanical breakdowns; and seat availability during rush hour. The line did not get a lower rating as it tied for best in the system on subway car announcements and performed near average on subway car cleanliness and amount of scheduled service. The 2 operates between Brooklyn College in Brooklyn and Wakefield in the Bronx.
3. Systemwide, across the twenty lines, we found the following on the three of our six measures that can be compared over time: car breakdowns, car cleanliness, and in-car announcements.
a) The car breakdown rate worsened from an average mechanical failure every 172,700 miles to one every 153,382 miles, comparing the 12-month periods ending December 2011 and December 2013 — a decline of 11.2%. In a recent letter, transit officials acknowledged the problem, writing: “We are aware of the trend in our operating statistics…and we are concerned.”6 We found that thirteen lines declined (2, 4, 5, 6, B, C, E, F, G, J/Z, L, M, and Q), and seven improved (1, 3, 7, A, D, N, and R).
b) Subway cars went from 90% rated clean in our last report to 92% in our current report — a slight improvement of 2.2%. We found that six lines declined (1, 6, 7, E, G, and Q) and fourteen improved (2, 3, 4, 5, A, B, C, D, F, J/Z, L, M, N, and R).
c) Accurate and understandable subway car announcements improved slightly, also going from 90% in our last report to 92% in the current report — an increase of 2.2%. We found ten lines improved (2, 3, 5, 6, 7, C, D, E, L, and R), six declined (1, 4, A, B, F, and G) and four did not change (J/Z, M, N, and Q).
4. There are large disparities in how subway lines perform.
a) Breakdowns: The E had the best record on delays caused by car mechanical failures: once every 546,744 miles. The C was worst, with a car breakdown rate nearly ten times higher: every 58,859 miles.
b) Cleanliness: The C and J/Z are the cleanest lines, with only 4% of cars having moderate or heavy dirt, while the dirtiest line — the Q — had 17% of its cars rated moderately or heavily dirty, a rate more than four times higher.
c) Chance of getting a seat: We rate a rider’s chance of getting a seat at the most congested point on the line. We found the best chance is on the R, where riders had a 66% chance of getting a seat during rush hour at the most crowded point. The 2 ranked worst and was much more crowded, with riders having only a 23% chance of getting a seat, nearly three times worse.7
d) Amount of scheduled service: The 6 and 7 lines had the most scheduled service, with two-and-a-half minute intervals between trains during the morning rush hour. The C ranked worst, with nine- or ten-minute intervals between trains all through the day.
e) Regularity of service: The C line had the greatest regularity of service, arriving within 25% of its scheduled interval 83% of the time. The most irregular line was the 5, which performed with regularity only 71% of the time.
f) Announcements: Five lines — the 2, 5, 6, E, and Q lines — had a perfect performance for accurate and understandable announcements made in subway cars, missing no announcements and reflecting the automation of announcements. The C line was worst, missing or garbling announcements 23% of the time.
The NYPIRG Straphangers Campaign reviewed extensive MTA New York City Transit data on the quality and quantity of service on 20 subway lines. We used the latest comparable data available, largely from 2013.8 Several of the data items have not been publicly released before on a line-by-line basis. MTA New York City Transit does not conduct a comparable rider count on the G line, which is the only major line not to go into Manhattan. As a result, we could not give the G line a MetroCard Rating, although we do issue a profile for the line.
We then calculated a MetroCard Rating — intended as a shorthand tool to allow comparisons among lines — for 19 subway lines, as follows:
First, we formulated a scale of the relative importance of measures of subway service. This was based on a survey we conducted of a panel of transit experts and riders, and an official survey of riders by MTA New York City Transit. The six measures were weighted as follows:
| Measure | Weight |
| --- | --- |
| Scheduled amount of service | 30% |
| Percent of trains arriving at regular intervals | 22.5% |
| Chance of getting a seat | 15% |
| Delays caused by car mechanical breakdowns | 12.5% |
| Subway car cleanliness | 10% |
| Adequacy of in-car announcements | 10% |
Second, for each measure, we compared each line’s performance to the best- and worst-performing lines in this rating period.
A line equaling the system best in 2013 would receive a score of 100 for that indicator, while a line matching the system low in 2013 would receive a score of 0. Under this rating scale, a small difference in performance between two lines translates to a small difference between scores.
These scores were then multiplied by the percentage weight of each indicator, and added up to reach an overall raw score. Below is an illustration of calculations for a line, in this case the 4.
| Indicator | 4 line value (best and worst in system shown for 5 indicators) | 4 line score out of 100 | Weight | 4 line adjusted raw score |
| --- | --- | --- | --- | --- |
| Scheduled Service | AM rush: 4 min, 15 sec; PM rush: 4 min, 15 sec | | 30% | |
| Regularity of Service | 74% (best: 83%; worst: 71%) | | 22.5% | |
| Breakdown Rate | 125,714 miles (best: 546,744 miles; worst: 58,859 miles) | 14 | 12.5% | 2 |
| Crowding | 36.6% seated (best: 66%; worst: 23%) | 29 | 15% | 4 |
| Cleanliness | 90% clean (best: 96%; worst: 83%) | 53 | 10% | 5 |
| Announcements | 99% adequate (best: 100%; worst: 77%) | 96 | 10% | 10 |
| Adjusted Score Total | | | | 48 points |
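For readers who want to follow the arithmetic, the rescaling step described above can be sketched in a few lines of Python. This is our own illustrative sketch, not NYC Transit’s or the Campaign’s published code; the helper name is hypothetical, and small differences from the published table can arise from rounding.

```python
def score(value, best, worst):
    """Rescale a line's value so the system best maps to 100 and the worst to 0."""
    return (value - worst) / (best - worst) * 100

# Two rows from the 4 line illustration above. For breakdowns, more miles
# between failures is better, so "best" is the largest figure in the system.
breakdown = score(125_714, best=546_744, worst=58_859)
announcements = score(99, best=100, worst=77)

print(round(breakdown))       # prints 14, matching the table
print(round(announcements))   # prints 96, matching the table

# Each score is then multiplied by its weight and the results are summed
# into the line's raw total, e.g. round(breakdown) * 0.125 gives about 2 points.
```

Because the rescaling is linear, a small performance gap between two lines produces a correspondingly small gap in their scores, as the report notes.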
Third, the summed totals were then placed on a scale that emphasizes the relative differences between scores nearest the top and bottom of the scale. (See Appendix I.)
Finally, we converted each line’s summed raw score to a MetroCard Rating. We created a formula with assistance from independent transit experts. A line scoring, on average, at the 50th percentile of the lines for all six measures would receive a MetroCard Rating of $1.50. A line that matched the 90th percentile of this range would be rated $2.50, the current base fare. The 4 line, as shown above, falls at a weighted 48th percentile over six measures, corresponding to a MetroCard Rating of $1.45.
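The percentile-to-fare conversion described above is consistent with a simple linear mapping anchored at the two points the report states: $1.50 at the 50th percentile and $2.50 at the 90th. The sketch below is our reconstruction, with a function name of our own choosing; the Campaign’s exact formula may differ in detail.

```python
def metrocard_rating(percentile):
    """Map a line's weighted percentile to a dollar rating: 50th -> $1.50, 90th -> $2.50."""
    return 1.50 + (percentile - 50) * (2.50 - 1.50) / (90 - 50)

print(f"${metrocard_rating(48):.2f}")  # prints $1.45
```

As a check, the 48th percentile computed for the 4 line above comes out to $1.45 under this mapping, the same rating the report assigns that line.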
New York City Transit officials reviewed the profiles and ratings in 1997. They concluded: "Although it could obviously be debated as to which indicators are most important to the transit customer, we feel that the measures that you selected for the profiles are a good barometer in generally representing a route’s performance characteristics… Further, the format of your
profiles…is clear and should cause no difficulty in the way the public interprets the information."
Their full comments can be found in Appendix I, which presents a more detailed description of our methodology. Transit officials were also sent an advance summary of the findings for this year's State of the Subways Report Card.
For our first five surveys, we used 1996 — our first year for calculating MetroCard Ratings — as a baseline. As we said in our 1997 report, our ratings “will allow us to use the same formula for ranking service on subway lines in the future. As such, it will be a fair and objective barometer for gauging whether service has improved, stayed the same, or deteriorated over time.”
However, in 2001, 2003, 2004, 2005, 2008, 2009, 2010, 2011, and 2013, transit officials made changes in how performance indicators are measured and/or reported. Transit officials rejected our request to re-calculate measures back to 1996 in line with their adopted changes. As a result, in this report we were forced to redefine our baseline with current data, and considerable historical comparability was lost.
Also due to changes in the measuring and/or reporting of data by Transit officials, it was necessary to make modest adjustments to the MetroCard Rating calculation and scale—as was the case in several earlier State of the Subways reports. In selecting this scale we attempted to create a single measure which we felt accurately and fairly represents the relative performance priorities listed in our original 1996 poll of riders, community leaders and independent transit experts.
Why does the Straphangers Campaign publish a yearly report card on the subways?
First, riders are looking for information on the quality of their trips, especially for their line. Our profiles seek to provide this information in a simple and accessible form.
In recent years, the MTA has moved forward and backward on providing detailed performance measures on a line-by-line basis:
- In 2009, the MTA began posting monthly performance data for subway car breakdowns on each of the 20 subway lines on its website at http://web.mta.info/persdashboard/performance14.html. However, sometime in 2013, the MTA stopped reporting this information;
- In 2010, it made part of its performance measurement databases available publicly on its “developer resources” webpage, but in 2014 it began requiring an API key to access the database, which delayed and complicated access by demanding time and technical expertise that many users do not have; and
- In 2011, NYC Transit developed a new line-by-line statistic that combines three service measures and weights them, not unlike our combined MetroCard rating.
Second, our report cards provide a picture of how the subways are doing. Riders can consult our profiles and ratings and see how their subway line compares to others, disparities and all. They can also see the recent modest improvement in subway car cleanliness and announcements, as well as the negative trend for subway car breakdowns. Future performance will be a challenge given the MTA’s tight budget.
Lastly, we aim to give communities the information they need to win better service. We often hear from riders and neighborhood groups. They will say, “Our line has got to be the worst.” Or “We must have the most crowded trains.” Or “Our line is much better than others.” For riders and officials on lines receiving a poor level of service, our report will help them make the case for improvements, ranging from increases in service to major repairs.
That’s not just a hope. In past years, we’ve seen riders win improvements, such as on the B, N, and 5 lines. For those on better lines, the report can highlight areas for improvement. For example, riders on the 7 — now the frontrunner in the system — have pointed to past declines and won increased service.
This report is part of a series of surveys on subway and bus service. For example, we issue annual surveys on subway car cleanliness and announcements and on the conditions of subway station platforms, as well as give out the Pokey Awards for the slowest city bus routes.
Our reports can be found online at www.straphangers.org, as can our profiles. We hope that these efforts — combined with the concern and activism of many thousands of city transit riders — will win better subway and bus service for New York City.
1 New York City Residents’ Perceptions of New York City Transit Service, 1999 Citywide Survey, prepared for MTA New York City Transit.
2 The measures are: frequency of scheduled service; how regularly trains arrive; delays due to car mechanical problems; chance to get a seat at peak period; car cleanliness; and in-car announcements. Regularity of service is reported in an indicator called wait assessment, a measure of gaps in service or bunching together of trains.
3 We derived the MetroCard Ratings with the help of independent transportation experts. Descriptions of the methodology can be found in Section II and Appendix I. The rating was developed in two steps. First, we decided how much weight to give each of the six measures of transit service. Then we placed each line on a scale that permits fair comparisons. Under a formula we derived, a line whose performance fell exactly at the 50th percentile in this baseline would receive a MetroCard rating of $1.50 in this report. Any line at the 90th percentile of this range would receive a rating of $2.50, the current base fare.
4 We were unable to give an overall MetroCard Rating to the system’s three permanent shuttle lines — the Franklin Avenue Shuttle, the Rockaway Park Shuttle, and the Times Square Shuttle — because data is not available. The G line does not receive a MetroCard Rating as reliable data on crowding for that line is not available.
5 No Report Card was issued in 2013 given concerns about the impact of Superstorm Sandy on the subway system. Based on similar concerns, that was also the case in 2002 following the attack on the World Trade Center. As a result, the Straphangers Campaign has issued subway Report Cards sixteen times in eighteen years. Since we did not release a report in 2013 in the wake of Sandy, comparisons (where possible) are made between the last half of 2013 and the last half of 2011.
6 Letter to Joan Byron, John Raskin, Gene Russianoff and Veronica Vanterpool from Carmen Bianco, president, MTA New York City Transit, June 20, 2014.
7 The most recent crowding data available is drawn from New York City Transit's Year 2012 Cordon Count, completed immediately before Superstorm Sandy. Following Sandy, the R line was split into two separate halves. Crowding conditions listed above may not accurately reflect patterns observed as of the printing of this report.
8 See Appendix I for a complete list of MTA New York City Transit data cited in this report.