I have not seen detailed proposals from either CTU or CPS on the teacher contract terms, and therefore I cannot comment on whether one side is less reasonable than the other in the negotiations. However, the 6 June letter from Jean-Claude Brizard fails to inspire confidence in CPS’s credibility.
First, Brizard claims that the recent changes in the Illinois Education Reform Legislation were “designed specifically to foster negotiation and avoid a strike.” Either Brizard is wrong or the legislative majority was deeply stupid. The 2011 amendments specifically exclude a set of issues from the jurisdiction of mediators or fact-finders (115 ILCS 5 Sec. 12(b)). Among other items, the exclusions include: “Decisions to determine class size, class staffing and assignment, class schedules, academic calendar, length of the work and school day, length of the work and school year, hours and places of instruction, or pupil assessment policies” (115 ILCS 5 Sec. 4.5(a)(4)). These exclusions apply only to Chicago. Why the length of the work day would be excluded from contract negotiations is unclear (the work-day provision was part of the 2011 amendments). In short, CTU and CPS must discuss these issues directly as they relate to compensation. If anything, the law was designed to impede negotiation, and the predictable effect was that the prospects of a strike increased.
Second, the 2011 changes created an incentive for CTU to seek a strike authorization vote earlier than it would otherwise seek one. The law requires CTU to get 75% of its total membership to vote to strike, not a simple majority of a quorum of teachers. In effect, this pushes the required percentage of those actually voting well above 75%. If ten percent of the membership were not available to vote, then CTU would need 83.33% of those voting to vote for a strike. Therefore, CTU needs to vote when most of its membership is available, which is not during the summer. As a result, the vote is not premature; it is timed based on the requirements the legislature set.
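The arithmetic here can be checked directly. A quick sketch (the 10% unavailability figure is this post's hypothetical, not actual turnout data):

```python
def required_share_of_voters(threshold=0.75, turnout=1.0):
    """Share of those actually voting who must vote yes when the law
    requires `threshold` of *total* membership. `turnout` is the share
    of the membership available to vote."""
    return threshold / turnout

# With 10% of members unavailable, the effective bar among voters:
print(round(required_share_of_voters(turnout=0.90), 4))  # 0.8333
```

The smaller the summer turnout, the higher the effective bar, which is why the timing of the vote follows from the statute rather than from union strategy alone.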
Third, Brizard states that the fact-finding process now underway should be completed before a strike vote is taken. But the fact-finding process can only address compensation issues, not the Sec. 4.5 issues; therefore, the completion of that process will not reveal any more information about those issues. Teachers already know that CPS did not accept CTU’s initial proposals, and so the teachers are well informed of the range of feasible bargains.
Fourth, as CTU points out, this vote authorizes a strike; it is not a vote to strike now. Instead, CTU leaders will have a credible threat to strike if the authorization occurs. Fifth, the letter is littered with non-sequiturs. How does a strike vote cause “disruption” in classrooms? My seven-year-old read this:
A strike vote causes disruption in our children’s classrooms that are currently in session.
and asked “How does it disrupt the class?” To make sure she knew what a disruption was, I asked her to give an example of one: “It’s like when people are talking in the hall or outside the window when they’re not supposed to and it disturbs us trying to concentrate.” OK, she knows what a disruption is, but she was rather exasperated by the concept of the vote causing a disruption. She knew that teachers were voting because she heard people talking about it, but there was no disruption in her classroom. How does the vote disrupt it, she wanted to know. The answer is that the proposition is false: the vote does not disrupt classrooms.
Our children deserve this legal process to play out as intended by the Illinois Education Reform Law, which was agreed upon by the district and the Chicago Teachers Union.
The legal process is playing out — the state has sought to exclude potential bargains and the union has responded, as anyone familiar with bargaining theory would have predicted, by bolstering the credibility of a strike. Unless there is a feature of Illinois law-making I’m not familiar with, I’m not sure how the district and CTU agreed to the 2011 changes to the Illinois Educational Labor Relations Act. As far as I know, the Illinois General Assembly agreed to that.
By not allowing this legal process to conclude, we send the wrong message to our children and disrupt student learning.
Actually, the message is clear and the children are learning how bargaining works. The lesson is this: if you try to preemptively restrict the scope of an agreement, your counter-party is more likely to leave the talks than negotiate under your constraints. Unfortunately for the children, the majority of the General Assembly never learned this.
There is a debate ongoing in the comments sections at CPS Obsessed on charter schools. As I wrote in the comments there, I do not oppose charters entirely. I think they can be useful for experimenting with different teaching approaches, and they can serve niche purposes in otherwise underserved neighborhoods. But I do not believe that diverting public school funds to create charters is a solution to the problem of under-performing schools. Whether charter schools perform better than comparable public schools is still an open question. I have used some CPS data to make a quick comparison.
These tables cover all charter, open enrollment, and open magnet high schools that had composite ACT scores from 2007 to 2011. The cumulative gain is calculated from 2007, which is not necessarily the first year the school administered the ACT. The median ACT composite score is from the same 2007-2011 range, not the school’s most recent score. Only one magnet high school has open admissions; the others require certain scores for admission. High schools that required similar scores or tests for admission were also excluded. The data is from CPS. I computed the cumulative gains and the median ACT composite scores and identified the schools by type.
As you can see, sorting by cumulative gains in the 2007-2011 period shows open admission public high schools outperforming charter high schools. By contrast, sorting by median ACT composite scores in the same period shows charter schools ahead of open-enrollment schools. The median gain for charters is 0; the median gain for open-admission schools is 0.2. Taking the median—the number in the middle—avoids distortions in averages due to the outsize low or high scores of one school. The average, or mean, gain for charters was -0.1 and the mean gain for open-admission schools was 0.4. For ACT medians, charters scored 17.4 and open-admission schools 14.9. In other words, it is not clear that charters are actually improving their students’ scores consistently over time, as opposed to enrolling students who already achieve more than their peers who were either not admitted or did not apply to the charters. This is consistent with some school-level value-added data, which shows some charters having negative to zero value added. However, we need to study that data more systematically.
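The median-versus-mean point is doing real work here, so a toy illustration may help. The gain figures below are made up to show the effect of a single outlier school; they are not the CPS data:

```python
from statistics import mean, median

# Hypothetical cumulative ACT gains for five schools. One school with
# a large negative gain drags the mean down while the median is unmoved.
gains = [0.1, 0.2, 0.2, 0.3, -2.0]

print(median(gains))          # 0.2
print(round(mean(gains), 2))  # -0.24
```

The same logic is why the post reports both statistics: a charter sector with a median gain of 0 but a mean of -0.1 likely has at least one school pulling the average down.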
Charters do take fewer special education students and have fewer students on reduced or free lunches than open-admission schools, but not by much. For 2012, based on CPS data, charters had a median of 13.73% special education students and 89.86% reduced or free lunch recipients; the respective numbers for open-admission schools were 18.82% and 97.45%. I ran a crude calculation weighting the median ACT scores by the 2012 special ed numbers, and the gap in median scores narrowed only slightly. This is crude because the 2012 data was being applied to scores from 2007-2011. Someone might wish to run a more serious analysis to see if differences in special ed percentages have more of an impact.
Some charter proponents argue that charters are the only choice that low-income, minority students have and that opposition to charters by upper-class, white parents leaves these students behind. However, in my sample, charters serve more white students than the open-admission schools do. The median percentage of white students is 0.7 in charters versus 0.1 in open-admission schools.
In my view, rather than focus on adding charters, CPS should open more open-admission, well-staffed magnet schools that reserve 50% of the seats for students coming from schools that score in the bottom quartile on assessments.
Update: CPS attorneys agreed that the Election Code did not apply to LSC elections in the appellate case Lindsey v. Board of Educ. of City of Chicago (2004).
In my prior post, I was puzzled over the high undervote and the voting distribution at Mayer (not my loss — that was over-determined, as they say). It turns out that a potential reason for the undervote was that 35 of the 325 ballots were declared spoiled by the election judges. With 5 votes for each ballot, that is 175 more votes that might have been cast. If they were, the true undervote was about 14%, not 24% (398-175=223; 223/1625=.137). I have not been able to confirm the spoiled figure officially. Compared to the rate of spoiled ballots at the county-level in general elections, a 10.7% spoilage rate is high. Many have spoilage rates below 1%. This rate is more like Florida in the 2000 general election.
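The spoilage arithmetic above can be checked in a few lines, using only the figures quoted in the post:

```python
ballots, spoiled, votes_per_ballot = 325, 35, 5
undervotes = 398  # from the prior post's count

possible = ballots * votes_per_ballot        # 1625 potential votes
recovered = spoiled * votes_per_ballot       # 175 votes lost to spoilage
adjusted = (undervotes - recovered) / possible
spoilage_rate = spoiled / ballots

print(round(adjusted, 3))       # 0.137 -> ~14% true undervote
print(round(spoilage_rate, 3))  # 0.108 -> ~10.8% spoilage
```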
According to CPS’s Office of Local School Council Relations, however, it is a low spoilage rate. Some schools have a 40% spoilage rate because voters fail to make a cross mark or X in the box; instead, they put a check mark, fill in the box, or do something else. Now, for most elections in Illinois that once used paper ballots, the legislature did mandate that cross (X) marks be made:
(10 ILCS 5/17-11)
(from Ch. 46, par. 17-11)
On receipt of his ballot the voter shall forthwith, and without leaving the inclosed space, retire alone, or accompanied by children as provided in Section 17-8, to one of the voting booths so provided and shall prepare his ballot by making in the appropriate margin or place a cross (X) opposite the name of the candidate of his choice for each office to be filled, or by writing in the name of the candidate of his choice in a blank space on said ticket, making a cross (X) opposite thereto; and in case of a question submitted to the vote of the people, by making in the appropriate margin or place a cross (X) against the answer he desires to give.
And the state supreme court upheld this requirement in 1960 in Scribner v. Sachs. That ruling stands. The 1990 Pullen v. Mulligan decision did not address paper ballots under the code because the ballot at issue was punch-card (chad) based. The decision affirmed that “where the intention of the voter can be ascertained with reasonable certainty from his ballot, that intention will be given effect even though the ballot is not strictly in conformity with the law” (p. 611). It would appear that CPS is on firm ground in rejecting the ballots that lack an X mark, despite the large percentage of ballots it then discards.
Except that the same election code (10 ILCS 5) explicitly exempts LSC elections from the confines of the code:
All Elections - Governed by this Code - Construction of Article 2A.
(a) No public question may be submitted to any voters in this State, nor may any person be nominated for public office or elected to public or political party office in this State except pursuant to this Code, notwithstanding the provisions of any other statute or municipal charter. However, this Code shall not apply to elections for officers or public questions of local school councils established pursuant to Chapter 34 of the School Code, soil and water conservation districts or drainage districts, except as specifically made applicable by another statute.
So Illinois election law does not require cross marks on paper ballots in LSC elections. A cross-mark rule might indeed be the standard for local school boards in other jurisdictions, which are governed by Article 9 of the School Code, but the LSC system was specifically created for Chicago by Chap. 34 after Article 9 was adopted, and the only reason to specifically mention the LSC system in this paragraph is to free it from the restraints of the election code. So when voters in LSC elections are shown this card:
The sentence that begins “According to Illinois law…” is flat wrong. Illinois law does not require this. CPS is improperly discarding a significant portion of the ballots in LSC races. In a limited voting system (more elected positions than voters have votes to cast), undervoting should be low. Wherever it is not, it is fair to assume that many ballots have been discarded for arbitrary reasons.
To make matters worse, the text contradicts the pictures. “A valid cross mark consists of two lines which intersect inside the square,” it says (their emphasis), but the second boxes from the left and the right under “Examples of Invalid Marks” have two lines that do intersect, unless the rules of geometry have changed. We call one street terminating perpendicular to another a T-intersection because, like the horizontal and vertical lines of the letter T, they intersect. Students of Chicago, this is what happens when you don’t pay attention in math class: whole elections are botched.
The votes for the Mayer LSC election are in:
As one can see, I did not fare very well. I was somewhat surprised by the variation in the votes among the victors and the lack of variation among the defeated. The elected candidate with the lowest number of votes (81) was separated from the elected candidate with the highest number (141) by 60 votes. Among the remaining twelve, nearly the same gap, 61 votes, separated the losing candidate with the most votes (78) from the losing candidate with the fewest (17). The greatest gap is between the 2nd and 3rd ranked candidates (31 votes).
I was expecting less variation among the victors and a few losers followed by a sharp drop-off among the remaining losers (red s-curve).
While the tail end is in line with my expectations, the rest is not. There was sizable undervoting despite a strong turn-out. According to the school office, 325 ballots were cast. With each voter eligible to cast five votes, that would mean 1625 votes could have been cast. In fact, only 1227 votes were cast, for 398 undervotes (votes that could have been cast but were not). Put differently, on average, each voter cast 3.78 votes. Why is there a 24% undervote?
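The undervote figures follow directly from the ballot counts; a quick check:

```python
ballots, votes_per_voter, votes_cast = 325, 5, 1227

possible = ballots * votes_per_voter   # 1625 potential votes
undervotes = possible - votes_cast     # 398 votes not cast

print(undervotes)                        # 398
print(round(votes_cast / ballots, 2))    # 3.78 votes per voter
print(round(undervotes / possible, 2))   # 0.24 -> 24% undervote
```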
One answer could be voter fatigue. Usually this applies to ballots that feature far more than two races, where items far down the ballot (e.g., 10th) are skipped: faced with making multiple choices across several races, voters tire and abstain on items listed lower on the ballot. The community rep. race received only 82 votes in total, with the highest-ranking candidate receiving only 33 votes, compared to 1145 votes cast in the parent race. Some voters might have been too fatigued to vote further, but it seems a stretch. Fatigue is unlikely in this race given that there were only two races on the ballot and that persons on the lower half of the ballot received high numbers of votes.
Could the undervoting be the result of a cleavage between parents and community members? If those who voted in the community race voted for only one community candidate and did not vote for any parents, which is a highly unlikely scenario, those 81 voters would account for 324 forgone votes, most of the undervote but not all of it. There would still be 74 undervotes among the remaining 244 voters (325 minus 81), a 6% undervote (74 of the 1220 votes those voters could have cast).
A third potential explanation is strategic voting. For a candidate and anyone who wanted that candidate to win, the rational action to maximize the chance of victory is to vote only for oneself or that candidate, even if you prefer that other candidates also win. Therefore, I should have voted only for myself and none of the other parents (I did not vote this way). But voting for community reps would have no effect on my chances of winning a parent slot. Strategic voters in the parent race should have voted in the community race, casting at least three votes in total. If we had many strategically voting parents, we should have seen more community votes, and we don’t. However, it might be that strategically voting parents lacked sufficient information about community reps to select one. Even if this is so, the number of undervotes is too low to support widespread strategic voting. Assume the six elected candidates each voted strategically and each convinced 19 other people to vote that way, creating four missing votes per person (6·20·4=480): too many missing votes. Under a different assumption, that half the candidates used this method but each could convince only nine other people to do so, the undervotes would number 360 among those 90 voters, leaving 38 undervotes, about 3%, among the remaining 235 voters. While it is possible that, say, 95 voters voted strategically (intentionally voting for only one candidate), I find it implausible that a candidate could successfully deploy such a strategy without word leaking out. (Let me be clear: I see nothing improper about strategic voting in a multi-member race; a candidate has a legitimate interest in maximizing his or her chances of victory.)
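The two strategic-voting scenarios reduce to one formula. A small sketch (the `scenario_undervotes` helper is mine; "half the candidates" is read here as nine parent candidates, which matches the post's 90-voter figure):

```python
def scenario_undervotes(organizers, followers_each, votes_per_voter=5):
    """If each organizing candidate and their followers cast a single
    vote, they each forgo (votes_per_voter - 1) votes. Returns the
    number of such voters and the undervotes they generate."""
    voters = organizers * (followers_each + 1)
    return voters, voters * (votes_per_voter - 1)

print(scenario_undervotes(6, 19))  # (120, 480): exceeds the 398 observed
print(scenario_undervotes(9, 9))   # (90, 360): leaves only 38 elsewhere
```

Either scenario produces an undervote pattern inconsistent with the actual totals, which is the post's point.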
A fourth explanation could be that voters had too little information to vote for more than three or four candidates. Voters had three mechanisms for seeking information about candidates. One was candidate statements posted inside the main door, as CPS prescribes. For most parents or community members, this is ineffective because 1) the main doors are normally locked and 2) students do not enter or leave via the main doors. There is very little reason for most parents to be at the doors, and even less opportunity for community members to be there. The second mechanism was a candidate forum in late March. Having over 20 candidates give three-minute speeches with no opportunity for questions was hardly conducive to voter inquiry. Third, candidates could submit flyers to distribute to students on Monday. None of the community candidates distributed flyers, and four of the twelve parents who lost did not submit flyers. But three of those parents ranked above five others who had distributed flyers. Variation in voting was not due to a lack of available information about some candidates versus others.
Information might have mattered in other ways. First, while information might have been available, there could have been insufficient differences among the candidates for voters to form preferences. One would be hard pressed to identify any wedge issue among the candidates. Indeed, most candidates had roughly the same sets of concerns and objectives (this is a good thing, since it shows consensus among the candidates). Second, there might have been too much information in too short a time. Parents who did not have time to attend the candidate forum had less than 48 hours to read through 14 flyers. Either mechanism could explain low turn-out, but turn-out was good. Nevertheless, we would expect a more random distribution of votes than we see here.
A fifth explanation is a network effect: those who won were more networked in the Mayer community than those who lost. I mean networked in a social-science sense, in which interacting mutual connections create shared information. For example, if ten people know Alice, and ten different people know Zed, and half of Alice’s and Zed’s contacts know each other, then Alice and Zed each have 15 community connections. If the mutual acquaintances share information about voting preferences, then Alice and Zed are at an advantage over those with fewer connections and interactions among them. The difference between Alice and Zed and the others is not popularity, but mutual connections.
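The Alice/Zed example can be sketched with sets; the names and contact labels below are invented purely for illustration:

```python
# Ten people know Alice directly; ten different people know Zed.
alice_direct = {f"A{i}" for i in range(10)}
zed_direct = {f"Z{i}" for i in range(10)}

# Half of Zed's contacts also know Alice, extending her network.
mutual = {f"Z{i}" for i in range(5)}

alice_reach = alice_direct | mutual
print(len(alice_reach))  # 15 community connections
```

The advantage is symmetric: Zed's reach grows the same way through Alice's contacts.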
In this explanation, information matters but it interacts with network effects. With so many candidates, so much issue similarity, and too little time to differentiate among them, voters would find it easier to learn about candidates from others who knew the candidates first- or second-hand than by trying to piece together a ranking on their own (in psychological circles, most people are “cognitive misers”). In this scenario, the median voter (a favorite political science term) might know one candidate well and have two to three friends who know other candidates well. Ideally, the voter would either know, or have a trusted friend who knew, five candidates well. But it is not an ideal world. So many people might have known one or two candidates directly and received recommendations for one or two others, leading to undervoting by one or two votes.
There is some evidence to support this explanation. The six victors all either currently serve on the Friends of Mayer board of directors or chair an FoM committee, had been involved in the Mayer community for three years or more, or had more than one child enrolled. Three of the four next-highest vote-getters were also FoM board members. There are a few outliers: defeated candidates with past or present board membership and long-standing ties to Mayer. They might have suffered from “fratricide.” With at least nine candidates having FoM board membership or long-standing FoM ties and only five votes among those directly networked via FoM, there could have been too few votes to go around. Among the losers with the fewest votes, several had been at Mayer less than two years and had only one child at the school, reducing the number of potential connections.
Why weren’t there more votes for community candidates? With at least 398 undervotes, the answer cannot be that people had too few votes to cast. Rather, community candidates were the least networked among those voting.
Along with limited strategic voting and some community members turning out just to vote for community candidates, the network-based explanation best accounts for who won and for the variation in vote totals among the candidates.
The mayor and CPS have decided on a 7-hour school day for elementary students and 7.5 hours for high school students next year. It is good to see that the mayor and CPS have listened to parents, but disheartening that the more reasonable 6.5-hour day was not adopted. Raise Your Hand and 6.5 to Thrive did excellent work promoting a more reasonable schedule.
It is unclear what this will mean for pre-K programs. Adding an hour and 15 minutes to the current day for 3-5 year-old children rather than an hour and 45 minutes is unlikely to assuage those parents’ concerns.
What is surprising about the mayor’s announcement is that there is still no plan for funding the 7-hour day. Moreover, this decision comes without any agreement with teachers. The groundwork for the extended day is not complete.
Click here for my official candidate statement posted at Mayer, or just scroll down. When I wrote the statement, I thought there were only four people running for six parent representative positions, and since I had no particular agenda, I stated my general views on the role of the LSC, the purpose of public education, and some brief personal background.
Since we now have a highly competitive field, here are key goals, beyond the official duties, that I would pursue if I were to be elected:
- improve communication with parents beyond those already active with the LSC and Friends of Mayer. As a representative, my duty would be to solicit views from parents.
- identify priorities for curriculum and facility improvement and search for means of funding them in cooperation with the faculty and Friends of Mayer. I currently volunteer on the FoM grants committee.
- examine how the upcoming Common Core curriculum will affect the Montessori and MYB programs and the overall School Improvement Plan, and communicate with parents about these changes.
What is the relationship between longer school days in urban school districts and student achievement? CPS has referred to several studies that, as I noted below, do not support the case for a 7.5-hour day, but it has not provided much analysis of its own. CPS did compile the average annual number of instructional minutes for various urban school districts. Indeed, Chicago does have the lowest number. But how much better do the school districts with longer hours do?
Fortunately, the U.S. Department of Education conducts the National Assessment of Educational Progress (NAEP). Formally, it is the National Center for Education Statistics (NCES), within the Institute of Education Sciences of the Dept. of Ed., that does this. The NAEP administers a common test to a representative sample of students at several grade levels. One 2011 NAEP study examined Chicago and a number of other urban school districts in reading and mathematics for the 4th and 8th grades. Using the results from the NAEP, we can get a rough estimate of how annual instructional time affects scores.
Here is the 4th grade data for the urban school districts that are included in the NAEP and for which CPS compiled instructional time. I compiled the time data for Miami-Dade and converted CPS annual minutes into hours.
| School Dist. | Annual hours | NAEP math 4th grade avg. score | NAEP reading 4th grade avg. score |
Chicago has the lowest instructional hours and has the 2nd lowest average math score and the 3rd lowest average reading score. Clearly, Chicago is lagging behind most other urban school districts. But is insufficient time the cause of this gap? Philadelphia has the longest instructional day, but scores only 1 point above Chicago in math and 4 points below Chicago in reading. Below I graph instructional time against NAEP scores. Note that the y-axis of the graph ranges from 100 to 300, while the potential range for NAEP scores is 0 to 500, and I have begun the x-axis (annual instructional time) at 500 hours.
The relationship between instructional time and reading scores is negative here, as indicated by the downward-sloping trend-line. Houston and Dallas put in more time than San Diego, New York City, or Miami, but have lower reading scores. A bivariate regression of reading scores on instructional time gives us a negative and small B-coefficient (-.010). With only eight observations, the coefficient is not reliable, or not “statistically significant” in the lingo of statisticians. In substantive terms, it seems improbable that reduced instruction would degrade reading scores. What this result indicates is that variables other than instructional time alone are at play.
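The bivariate regression here is ordinary least squares with a single predictor. A minimal sketch of the computation (the `hours` and `scores` arrays are placeholders, not the actual district figures from the table):

```python
from statistics import mean

def ols_slope(x, y):
    """Slope (B-coefficient) of a bivariate OLS regression of y on x."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

# Placeholder inputs: annual instructional hours and average scores.
hours = [850, 900, 950, 1000]
scores = [210, 208, 211, 207]
print(round(ols_slope(hours, scores), 3))  # -0.012
```

With only a handful of observations, any such slope estimate carries a wide standard error, which is why the post treats the coefficients as unreliable.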
This graph of math scores tells a similar story:
In this graph, at least the trend-line is sloping upward a little. But once more, despite having a longer day, Dallas has lower scores than Miami, NYC, and San Diego, and Miami outscores Houston, too. Despite having the longest day of all, Philadelphia only marginally outscores Chicago.
A bivariate regression of math scores on time yields a low B-coefficient and, like the one above, one that is not reliable. This is good news, because with a B-coefficient of 0.013, closing the 13-point average math gap between Chicago and Houston would require Chicago to more than double the current school day, adding 965 more hours a year!
None of these results are that surprising because I have examined only one variable (instructional time). Multiple factors are likely to influence NAEP scores, and in complicated ways. What is troubling is that this simple analysis is far more than CPS has done to examine whether increased instructional time would improve student achievement.
We all want to see Chicago move from under-performing against other urban school districts to over-performing compared to them. But the data here suggest that time is not a crucial factor.