I. Findings

What do subway riders want?

They want short waits, trains that arrive regularly, a chance for a seat, a clean car, and understandable announcements that tell them what they need to know. That’s what MTA New York City Transit’s own polling of rider satisfaction measures.1

This “State of the Subways” Report Card tells riders how their lines do on these key aspects of service. We look at six measures of subway performance for the city’s 20 major subway lines, using recent data compiled by MTA New York City Transit.2 Some of the information has not been released publicly before on a line-by-line basis. Most of the measures are for all or the last half of 2011.

Our Report Card has three parts:

First, we compare service on the 20 lines, as detailed in the attached tables.

Second, we give an overall “MetroCard Rating”3 to 19 of the 20 major lines.4

Third, the report contains one-page profiles on each of the 20 lines. These are intended to provide riders, officials and communities with an easy-to-use summary of how their line performs compared to others.

This is the fifteenth Subway Report Card by the Straphangers Campaign since 1997.5

Our findings show the following picture of how New York City’s subways are doing:

1. The best subway line in the city was the Q with a “MetroCard Rating” of $1.60. The Q ranked number one in the system for the first time since 2001. The Q ranked highest because it tied for best in the system on announcements — and also performed above average on three measures: delays caused by mechanical breakdowns, seat availability at the most crowded point during rush hour, and subway car cleanliness. The line did not get a higher rating because it performed below average on the amount of scheduled service and only average on regularity of service. The Q runs between Coney Island-Stillwell Avenue in Brooklyn and Astoria-Ditmars Boulevard in Queens.

2. For the fourth year in a row, the C was ranked the worst subway line, with a MetroCard Rating of 85 cents. The C line performed worst or next to worst in the system on four measures: amount of scheduled service, delays caused by mechanical breakdowns, subway car cleanliness and announcements. The line did not get a lower rating as it performed above average in the system on regularity of service and on chance of getting a seat at rush hour. The C operates between East New York in Brooklyn and Washington Heights in Manhattan.

3. The subways are a story of winners and losers. Riders on the best line — the Q — have much more reliable cars, more frequent service, cleaner cars and better announcements than riders on the worst, the C. Sharp disparities among subway lines can be seen throughout the system.

  • Breakdowns: The E had the best record on delays caused by car mechanical failures: once every 816,935 miles. The C was worst, with a car breakdown rate more than twelve times higher: every 64,324 miles.

  • Cleanliness: The 1 was the cleanest line, with only 3% of cars having moderate or heavy dirt, while the dirtiest line — the C — had 25% of its cars rated moderately or heavily dirty, a rate more than eight times higher.

  • Chance of getting a seat: We rate a rider’s chance of getting a seat at the most congested point on the line. We found the best chance is on the R, where riders had a 71% chance of getting a seat during rush hour at the most crowded point. The 5 ranked worst and was much more overcrowded, with riders having only a 23% chance of getting a seat, three times worse.

  • Amount of scheduled service: The 6 line had the most scheduled service, with two-and-a-half minute intervals between trains during the morning and evening rush hours. The C ranked worst, with nine- or ten-minute intervals between trains all through the day.

  • Regularity of service: The J/Z line had the greatest regularity of service, arriving within 25% of its scheduled interval 82% of the time. The most irregular line was the 5, which performed with regularity only 70% of the time.

  • Announcements: The 4 and Q lines had a perfect performance for adequate announcements made in subway cars, missing no announcements and reflecting the automation of announcements. The 7 line was worst, missing announcements 29% of the time.


4. System-wide, for twenty lines, we found the following on three of six measures that we can compare over time: car breakdowns, car cleanliness and announcements. (We cannot compare the three remaining measures due to changes in definitions by New York City Transit.)

  • The car breakdown rate improved slightly, from an average of one mechanical failure every 170,217 miles to one every 172,700 miles, during the 12-month period ending December 2011 — a gain of 1.5%. This positive trend reflects the arrival of new model subway cars in recent years and better maintenance of Transit’s aging fleet. We found eleven lines improved (1, 2, 3, 5, 6, C, E, F, G, N, and Q), while nine lines worsened (4, 7, A, B, D, J/Z, L, M, and R).

  • Subway cars went from 94% rated clean in our last report to 90% in our current report — a decline of 4.3%. We found that fifteen lines declined (2, 3, 4, 6, B, C, D, E, F, J/Z, L, M, N, Q, and R), four improved (1, 7, A, and G) and one remained unchanged (5).

  • Accurate and understandable subway car announcements improved, going from 87% in our last report to 90% in the current report — an increase of 3.4%. We found ten lines improved (1, 2, 4, B, C, D, F, G, J/Z, and N), six declined (3, 5, 7, A, E, and M), and four did not change (6, L, Q, and R).
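The percentage changes quoted in the three comparisons above are simple relative changes against the prior report's figure for each measure. As an illustration (in Python, our choice for this sketch, not anything the Campaign published), using the numbers cited in the text:

```python
# Relative change, in percent, against the prior report's figure.
# A positive result for breakdown miles is an improvement (more miles
# between failures); a negative result for cleanliness is a decline.

def pct_change(old, new):
    """Percent change from old to new."""
    return 100 * (new - old) / old

print(round(pct_change(170_217, 172_700), 1))  # miles between breakdowns: 1.5
print(round(pct_change(94, 90), 1))            # cars rated clean: -4.3
print(round(pct_change(87, 90), 1))            # adequate announcements: 3.4
```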

II.  Summary of Methodology

The NYPIRG Straphangers Campaign reviewed extensive MTA New York City Transit data on the quality and quantity of service on 20 subway lines. We used the latest comparable data available, largely from 2011.6 Several of the data items have not been publicly released before on a line-by-line basis. MTA New York City Transit does not conduct a comparable rider count on the G line, which is the only major line not to go into Manhattan. As a result, we could not give the G line a MetroCard Rating, although we do issue a profile for the line.

We then calculated a MetroCard Rating — intended as a shorthand tool to allow comparisons among lines — for 19 subway lines, as follows:

First, we formulated a scale of the relative importance of measures of subway service. This was based on a survey we conducted of a panel of transit experts and riders, and an official survey of riders by MTA New York City Transit. The six measures were weighted as follows:

Amount of service
  • scheduled amount of service: 30%

Dependability of service
  • percent of trains arriving at regular intervals: 22.5%
  • breakdown rate: 12.5%

Comfort/usability
  • chance of getting a seat: 15%
  • interior cleanliness: 10%
  • adequacy of in-car announcements: 10%

    Second, for each measure, we compared each line’s performance to the best- and worst-performing lines in this rating period.

    A line equaling the system best in 2011 would receive a score of 100 for that indicator, while a line matching the system low in 2011 would receive a score of 0. Under this rating scale, a small difference in performance between two lines translates to a small difference between scores.

    These scores were then multiplied by the percentage weight of each indicator, and added up to reach an overall raw score. Below is an illustration of calculations for a line, in this case the 4.


    Figure 1: Sample raw-score calculation for the 4 line
    (best and worst in system shown for 5 indicators)

    Indicator             4 line value                                  Score (0-100)   Weight   Adjusted score
    Scheduled service     AM rush 4 min; noon 8 min; PM rush 4 min            71         30%          21
    Service regularity    72% (best 82%; worst 70%)                           18         22.5%         4
    Breakdown rate        160,930 miles (best 816,935; worst 64,324)          13         12.5%         2
    Crowding              23% seated (best 71%; worst 23%)                      1        15%           0
    Cleanliness           89% clean (best 97%; worst 75%)                     64         10%           6
    Announcements         100% adequate (best 100%; worst 71%)               100         10%          10
    Adjusted score total                                                                              43
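The calculation illustrated above can be sketched in a few lines of code. This is an illustrative reconstruction, not the Campaign's actual program: the scheduled-service score of 71 is taken as given because the report does not publish a system best and worst for that measure, and because the published inputs are rounded, an individual row's score can differ from the printed figure by a point. The weighted total still rounds to 43.

```python
# Reconstruction of the Figure 1 raw-score calculation for the 4 line,
# using the published 2011 values.

def score(value, best, worst):
    """Scale a value linearly so the system worst maps to 0 and the best to 100."""
    return 100 * (value - worst) / (best - worst)

weighted_scores = [
    (0.300, 71),                                # scheduled service (score as published)
    (0.225, score(72, 82, 70)),                 # service regularity (%)
    (0.125, score(160_930, 816_935, 64_324)),   # breakdown rate (miles between failures)
    (0.150, score(23, 71, 23)),                 # chance of getting a seat (%)
    (0.100, score(89, 97, 75)),                 # interior cleanliness (%)
    (0.100, score(100, 100, 71)),               # adequate announcements (%)
]

raw = sum(weight * s for weight, s in weighted_scores)
print(round(raw))  # adjusted score total: 43
```

Because the mapping from system worst to system best is linear, a small performance difference between two lines translates to a correspondingly small difference in their scores.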


    Third, the summed totals were then placed on a scale that emphasizes the relative differences between scores nearest the top and bottom of the scale. (See Appendix I.)

    Finally, we converted each line’s summed raw score to a MetroCard Rating. We created a formula with assistance from independent transit experts. A line scoring, on average, at the 50th percentile of the lines for all six measures would receive a MetroCard Rating of $1.25. A line that matched the 90th percentile of this range would be rated $2.25, the current base fare. The 4 line, as shown above, falls at a weighted 43rd percentile over six measures, corresponding to a MetroCard Rating of $1.15.

    New York City Transit officials reviewed the profiles and ratings in 1997. They concluded: "Although it could obviously be debated as to which indicators are most important to the transit customer, we feel that the measures that you selected for the profiles are a good barometer in generally representing a route’s performance characteristics... Further, the format of your profiles... is clear and should cause no difficulty in the way the public interprets the information."

    Their full comments can be found in Appendix I, which presents a more detailed description of our methodology. Transit officials were also sent an advance summary of the findings for this year's State of the Subways Report Card.

    For our first five surveys, we used 1996 — our first year for calculating MetroCard Ratings — as a baseline. As we said in our 1997 report, our ratings “will allow us to use the same formula for ranking service on subway lines in the future. As such, it will be a fair and objective barometer for gauging whether service has improved, stayed the same, or deteriorated over time.”

    However, in 2001, 2003, 2004, 2005, 2008, 2009, 2010, and 2011, transit officials made changes in how performance indicators are measured and/or reported. The Straphangers Campaign unsuccessfully urged MTA New York City Transit to re-consider its new methodologies, because of our concerns about the fairness of these measures and the loss of comparability with past indicators. Transit officials also rejected our request to re-calculate measures back to 1996 in line with their adopted changes. As a result, in this report we were forced to redefine our baseline with current data, and considerable historical comparability was lost.

    Also due to changes in the measuring and/or reporting of data by Transit officials, it was necessary to make modest adjustments to the MetroCard Rating calculation and scale—as was the case in several earlier State of the Subways reports. In selecting this scale we attempted to create a single measure which we felt accurately and fairly represents the relative performance priorities listed in our original 1996 poll of riders, community leaders and independent transit experts.

    III.  Why A Report Card on the State of the Subways?

    Why does the Straphangers Campaign publish a yearly report card on the subways?

    First, riders are looking for information on the quality of their trips, especially for their line. Our profiles seek to provide this information in a simple and accessible form.

    In the past, the MTA resisted developing detailed line-by-line performance measures. That has been gradually changing, to the agency’s credit:

    • In 2009, the MTA began posting monthly performance data for subway car breakdown rates on its website, www.mta.info. It now includes subway car “mean distance between failures” in its monthly NYC Transit Committee agenda. The agency also provides a measure of regularity—“wait assessment”—by subway line and key bus routes;

    • In 2010, it made some of the performance measurement databases available publicly on its “developer resources” webpage; and

    • In 2011, NYC Transit developed a new line-by-line statistic that combines three service measures and weights them, not unlike our combined rating.

    Second, our report cards provide a picture of where the subways are. Riders can consult our profiles and ratings and see how their subway line compares to others, disparities and all. They can also see the current positive trend for subway car breakdown rates and announcements, as well as the negative direction for subway car cleanliness. Future performance will be a challenge given the MTA’s tight budget.

    Lastly, we aim to give communities the information they need to win better service. We often hear from riders and neighborhood groups. They will say, “Our line has got to be worst.” Or “We must have the most crowded trains.” Or “Our line is much better than others.” For riders and officials on lines receiving a poor level of service, our report will help them make the case for improvements, ranging from increases in service to major repairs.

    That’s not just a hope. In past years, we’ve seen riders win improvements, such as on the B, N and 5 lines. For those on better lines, the report can highlight areas for improvement. For example, riders on the 7 — now a frontrunner in the system — have pointed to past declines and won increased service.

    This report is part of a series of surveys on subway and bus service. For example, we issue annual surveys on subway car cleanliness and announcements and on the conditions of subway station platforms, as well as give out the Pokey Awards for the slowest city bus routes.

    We hope that these efforts — combined with the concern and activism of many thousands of city transit riders — will win better subway and bus service for New York City.

     

    1 New York City Residents’ Perceptions of New York City Transit Service, 1999 Citywide Survey, prepared for MTA New York City Transit.

    2 The measures are: frequency of scheduled service; how regularly trains arrive; delays due to car mechanical problems; chance to get a seat at peak period; car cleanliness; and in-car announcements. Regularity of service is reported in an indicator called wait assessment, a measure of gaps in service or bunching together of trains.

    3 We derived the MetroCard Ratings with the help of independent transportation experts. Descriptions of the methodology can be found in Section II and Appendix I. The rating was developed in two steps. First, we decided how much weight to give each of the six measures of transit service. Then we placed each line on a scale that permits fair comparisons. Under a formula we derived, a line whose performance fell exactly at the 50th percentile in this baseline would receive a MetroCard rating of $1.25 in this report. Any line at the 90th percentile of this range would receive a rating of $2.25, the current base fare.

    4 We were unable to give an overall MetroCard Rating to the system’s three permanent shuttle lines — the Franklin Avenue Shuttle, the Rockaway Park Shuttle, and the Times Square Shuttle — because data is not available. The G line does not receive a MetroCard Rating as reliable data on crowding for that line is not available.

    5 We did not issue a report in 2002. Because of the severe impact on the subways from the World Trade Center attack, ratings based on service at the end of 2001 would not have been appropriate.

    6 See Appendix I for a complete list of MTA New York City Transit data cited in this report.