I. Findings

What do subway riders want?

They want short waits, trains that arrive regularly, a chance for a seat, a clean car and understandable announcements that tell them what they need to know. That’s what MTA New York City Transit’s own polling of its riders shows.1

This “State of the Subways” Report Card tells riders how their lines do on these key aspects of service. We look at six measures of subway performance for the city’s 22 major subway lines, using recent data compiled by MTA New York City Transit.2 Much of the information has not been released publicly before on a line-by-line basis. Most of the measures are for all or the last half of 2009.

Our Report Card has three parts:

First, we compare service on the 22 lines, as detailed in the attached tables.

Second, we give an overall “MetroCard Rating”3 to 21 of the 22 lines.4

Third, the report contains one-page profiles on each of the 22 lines. These are intended to provide riders, officials and communities with an easy-to-use summary of how their line performs compared to others.

This is the thirteenth Subway Report Card issued by the Straphangers Campaign since 1997.5

Our findings show the following picture of how New York City’s subways are doing:

1. For the second year in a row, the best subway line in the city was the 7 with a “MetroCard Rating” of $1.60. The 7 ranked highest because it performs best in the system on subway car cleanliness and above average on four measures: frequency of scheduled service, regularity of service, delays caused by mechanical breakdowns, and seat availability at the most crowded point. The line did not get a higher rating because it performed below average on announcements. The 7 runs between Times Square in Manhattan and Flushing in Queens.

2. For the second year in a row, the C was ranked the worst subway line, with a MetroCard Rating of 55 cents. The C line performs below average on five measures: amount of scheduled service, delays caused by mechanical breakdowns and announcements (all three next to worst); regularity of service; and cleanliness. The line performed better than average on one measure: chance of getting a seat at rush hour. The C line operates between East New York in Brooklyn and Upper Manhattan.

3. Overall, we found an improved picture for subway service on the three measures we can compare over time — car breakdowns, car cleanliness and announcements. (We are unable to compare the three remaining measures due to changes in reporting by New York City Transit.)

  • The car breakdown rate improved from an average mechanical failure every 134,795 miles in 2008 to one every 170,314 miles in the 12-month period ending May 2010 — a gain of 26%. This positive trend reflects the arrival of new model subway cars and better maintenance of Transit’s aging fleet. We found sixteen lines improved (3, 4, 5, 6, 7, B, E, F, J/Z, L, M, N, Q, R, V and W), while five lines worsened (2, A, C, D and G) and one stayed the same (1).
  • Subway cars went from 91% rated clean in our last report to 95% in our current report. We found that twenty lines improved (1, 3, 4, 5, 6, 7, A, B, C, D, E, F, G, L, M, N, Q, R, V and W) and two worsened slightly (2 and J/Z).
  • Accurate and understandable subway car announcements improved, going from 90% in our last report to 91% in the current report. This likely reflected in part the increasing use of automated announcements on “new technology” cars. We found eleven lines improved (1, 3, B, D, E, F, G, J/Z, L, Q and W), five worsened (2, 6, 7, R and V) and six did not change (4, 5, A, C, M and N).

4. There are large disparities in how subway lines perform.6

  • Breakdowns: The M had the best record on delays caused by car mechanical failures: once every 1,045,886 miles. The G was worst, with a car breakdown rate sixteen times higher: every 60,039 miles.
  • Cleanliness: The 7, L and V were the cleanest lines, with only 1% of cars having moderate or heavy dirt, while 11% of cars on the dirtiest lines — the J/Z and R — had moderate or heavy dirt, a rate more than ten times higher.
  • Chance of getting a seat:7 We rate a rider’s chance of getting a seat at the most congested point on the line. We found the best chance is on the B line, where riders had a 68% chance of getting a seat during rush hour at the most crowded point. The 2 ranked worst and was much more overcrowded, with riders having only a 27% chance of getting a seat.
  • Amount of scheduled service: The 6 line had the most scheduled service, with two-and-a-half minute intervals between trains during the morning and evening rush hours. The M ranked worst, with ten-minute intervals between trains all through the day.
  • Regularity of service: The J/Z line had the greatest regularity of service, arriving within two to four minutes of its scheduled interval 93% of the time. The most irregular line is the A, which performed with regularity only 83% of the time.
  • In-car announcements: The 5, E, L, M and W lines had perfect performance for adequate announcements in their subway cars, missing none, a result that reflects the automation of announcements. The R was worst, missing announcements 25% of the time.

II. Summary of Methodology

The NYPIRG Straphangers Campaign reviewed extensive MTA New York City Transit data on the quality and quantity of service on 22 subway lines. We used the latest comparable data available, largely from 2009.8 Several of the data items have not been publicly released before on a line-by-line basis. MTA New York City Transit does not conduct a comparable rider count on the G line, which is the only major line not to go into Manhattan. As a result, we could not give the G line a MetroCard Rating, although we do issue a profile for the line.

We then calculated a MetroCard Rating — intended as a shorthand tool to allow comparisons among lines — for 21 subway lines, as follows:

First, we formulated a scale of the relative importance of measures of subway service. This was based on a survey we conducted of a panel of transit experts and riders, and an official survey of riders by MTA New York City Transit. The six measures were weighted as follows:

Amount of service
• scheduled amount of service 30%
Dependability of service
• percent of trains arriving at regular intervals    22.5%
• breakdown rate 12.5%
Comfort/usability
• chance of getting a seat 15%
• interior cleanliness 10%
• adequacy of in-car announcements 10%

Second, for each measure, we compared each line’s performance to the best- and worst-performing lines in this rating period.

A line equaling the system best in 2009 would receive a score of 100 for that indicator, while a line matching the system low in 2009 would receive a score of 0. Under this rating scale, a small difference in performance between two lines translates to a small difference between scores.

These scores were then multiplied by the percentage weight of each indicator, and added up to reach an overall raw score. Below is an illustration of calculations for a line, in this case the 4.

Figure 1

Indicator | 4 line value (system best and worst shown for 5 indicators) | 4 line score (out of 100) | Percentage weight | 4 line adjusted raw score
Scheduled service | AM rush—4 min; noon—8 min; PM rush—4 min | 72 | 30% | 22
Service regularity | 87% (best—93%; worst—83%) | 38 | 22.5% | 9
Breakdown rate | 206,214 miles (best—1,045,886 miles; worst—60,093 miles) | 15 | 12.5% | 2
Crowding | 31% seated (best—68%; worst—27%) | 8 | 15% | 1
Cleanliness | 98% clean (best—99%; worst—89%) | 90 | 10% | 9
Announcements | 99% adequate (best—100%; worst—75%) | 96 | 10% | 10
Adjusted score total | | | | 4 line—52 pts.
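To make the arithmetic in Figure 1 concrete, here is a minimal sketch in Python of how a line's raw score can be assembled from the published weights. It is not the Campaign's actual code: the linear 0-to-100 scaling is our reading of the description above, the scheduled-service score is copied from the table because its best and worst intervals are not shown there, and small differences from the published scores reflect rounding of the report's inputs.

```python
# A rough sketch (not the Campaign's actual code) of the raw-score arithmetic
# illustrated in Figure 1, using the published weights and the 4 line's values.

# Published weights for the six measures (Section II).
WEIGHTS = {
    "scheduled_service": 0.30,
    "service_regularity": 0.225,
    "breakdown_rate": 0.125,
    "crowding": 0.15,
    "cleanliness": 0.10,
    "announcements": 0.10,
}


def scale_score(value: float, best: float, worst: float) -> float:
    """Place a line's value on a 0-100 scale, with the system worst at 0
    and the system best at 100 (our linear reading of the description above)."""
    return 100.0 * (value - worst) / (best - worst)


# Figure 1 inputs for the 4 line. The scheduled-service score is taken
# directly from the table, since its best and worst values are not shown.
scores = {
    "scheduled_service": 72.0,
    "service_regularity": scale_score(87, best=93, worst=83),
    "breakdown_rate": scale_score(206_214, best=1_045_886, worst=60_093),
    "crowding": scale_score(31, best=68, worst=27),
    "cleanliness": scale_score(98, best=99, worst=89),
    "announcements": scale_score(99, best=100, worst=75),
}

# Multiply each 0-100 score by its weight and sum to get the overall raw score.
raw_score = sum(WEIGHTS[m] * scores[m] for m in WEIGHTS)

# Prints roughly 53; Figure 1's published total of 52 differs slightly because
# the report's scores are computed from unrounded underlying data.
print(f"4 line raw score: {raw_score:.0f} points")
```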


Third, the summed totals were then placed on a scale that emphasizes the relative differences between scores nearest the top and bottom of the scale. (See Appendix I.)

Finally, we converted each line’s summed raw score to a MetroCard Rating. We created a formula with assistance from independent transit experts. A line scoring, on average, at the 50th percentile of the lines for all six performance measures would receive a MetroCard Rating of $1.15. A line that matched the 95th percentile of this range would be rated $2.25.
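As a rough illustration only (the actual conversion, described in Appendix I, rescales scores non-linearly toward the top and bottom of the range), the two anchor points stated above can be read as a simple straight-line mapping from a line's percentile standing to a dollar rating:

```python
# A simplified, hypothetical reading of the MetroCard conversion: straight-line
# interpolation between the two anchor points stated in the text ($1.15 at the
# 50th percentile, $2.25 at the 95th). The Campaign's actual formula is
# non-linear, so treat this only as an illustration of the idea.

LOW_PCTL, LOW_RATING = 50.0, 1.15    # 50th percentile -> $1.15
HIGH_PCTL, HIGH_RATING = 95.0, 2.25  # 95th percentile -> $2.25 (the base fare)


def metrocard_rating(percentile: float) -> float:
    """Map a line's percentile standing to a dollar rating by interpolating
    between the two published anchor points."""
    slope = (HIGH_RATING - LOW_RATING) / (HIGH_PCTL - LOW_PCTL)
    return LOW_RATING + slope * (percentile - LOW_PCTL)


# Under this simplified mapping, a line halfway between the two anchor points
# (the 72.5th percentile) would rate about $1.70.
print(f"${metrocard_rating(72.5):.2f}")  # prints $1.70
```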

New York City Transit officials reviewed the profiles and ratings in 1997. They concluded: “Although it could obviously be debated as to which indicators are most important to the transit customer, we feel that the measures that you selected for the profiles are a good barometer in generally representing a route’s performance characteristics...Further, the format of your profiles...is clear and should cause no difficulty in the way the public interprets the information.”

Their full comments can be found in Appendix I, which presents a more detailed description of our methodology. Transit officials were also sent an advance summary of the findings for this year's State of the Subways Report Card.

For our first six surveys, we used 1996 — our first year for calculating MetroCard Ratings — as a baseline. As we said in our 1997 report, our ratings “will allow us to use the same formula for ranking service on subway lines in the future. As such, it will be a fair and objective barometer for gauging whether service has improved, stayed the same, or deteriorated over time.”

However, in 2001, 2003, 2004, 2005, 2008 and 2009, transit officials made changes in how performance indicators were measured and/or reported. The Straphangers Campaign unsuccessfully urged MTA New York City Transit to reconsider its new methodologies, because of our concerns about the fairness of these measures and the loss of comparability with past indicators. Transit officials also rejected our request to recalculate measures back to 1996 in line with their adopted changes. As a result, in this report we were forced to redefine our baseline with current data, and considerable historical comparability was lost.

III. Why a Report Card on the State of the Subways?

Why does the Straphangers Campaign publish a yearly report card on the subways?

First, riders are looking for information on the quality of their trips. In the past, the MTA has resisted putting detailed line-by-line performance measures on its website. That has been gradually changing. In 2009, for example, the MTA began posting monthly performance data for subway car breakdown rates on its website, www.mta.info. More is planned. Other aspects of service — such as car cleanliness — are not broken down by line. Our profiles seek to fill this gap.

Second, our report cards provide a picture of where the subways are headed. As detailed in the findings above, we found an improved picture for subway service on the three measures we can compare over time — car breakdowns, car cleanliness and announcements. We were unable to compare the other three measures due to changes in methodology by transit officials.


Future performance will be a challenge given the MTA’s tight budget.

Lastly, we aim to give communities the information they need to win better service. We often hear from riders and neighborhood groups. They will say, “Our line has got to be the worst.” Or “We must have the most crowded trains.” Or “Our line is much better than others.”

For riders and officials on lines receiving a poor level of service, our report will help them make the case for improvements, ranging from increases in service to major repairs. That’s not just a hope. In past years, we’ve seen riders win improvements, such as on the B, N and 5 lines.

For those on better lines, the report can highlight areas for improvement. For example, riders on the 7 — now the best in the system — have pointed to past declines and won increased service.

This report is part of a series of studies on subway and bus service. For example, we issue annual surveys on payphone service in the subways, subway car cleanliness and subway car announcements, as well as give out the Pokey Awards for the slowest city bus routes.

Our reports and line profiles are available on the Straphangers Campaign website. We hope that these efforts — combined with the concern and activism of many thousands of city transit riders — will win better subway and bus service for New York City.

 

1 New York City Residents’ Perceptions of New York City Transit Service, 1999 Citywide Survey, prepared for MTA New York City Transit.

2 The measures are: frequency of scheduled service; how regularly trains arrive; delays due to car mechanical problems; chance to get a seat at peak period; car cleanliness; and in-car announcements. Regularity of service is reported in an indicator called wait assessment, a measure of gaps in service or bunching together of trains.

3 We derived the MetroCard Ratings with the help of independent transportation experts. Descriptions of the methodology can be found in Section II and Appendix I. The rating was developed in two steps. First, we decided how much weight to give each of the six measures of transit service. Then we placed each line on a scale that permits fair comparisons. Under a formula we derived, a line whose performance fell exactly at the 50th percentile in this baseline would receive a MetroCard rating of $1.15 in this report. Any line at the 95th percentile of this range would receive a rating of $2.25, the current base fare.

4 We were unable to give an overall MetroCard Rating to the system’s three permanent shuttle lines — the Franklin Avenue Shuttle, the Rockaway Park Shuttle, and the Times Square Shuttle — because data is not available. The G line does not receive a MetroCard Rating as reliable data on crowding for that line is not available.

5 We did not issue a report in 2002. Because of the severe impact on the subways from the World Trade Center attack, ratings based on service at the end of 2001 would not have been appropriate.

6 For some measures, small differences in rounding scores explain the first- and last-place rankings.

7 New York City Transit does not include G line passenger counts in its annual Cordon Count, as the G is the only one of the 22 major lines not to enter Manhattan’s central business district. For this reason, Straphangers Campaign does not give a MetroCard Rating to the G.

8 See Appendix I for a complete list of MTA New York City Transit data cited in this report.