
Running the First Deming Cycle at United Schools Network

United Schools Network (USN) is a small, non-profit charter management organisation in Columbus, Ohio. It serves as the district office for four public charter schools in the city: an elementary and a middle school on the east side of Columbus, and an elementary and a middle school on the west side. As of 2022, it serves around 1,000 students. Founder and CEO Andy Boy writes, in the introduction to John Dues’s Win-Win:

Our student body mirrors the neighborhoods we serve: 100% of our students are economically disadvantaged, 86% are students of color, and 19% have identified disabilities. However, our students continue to outscore their neighborhood and district peers, proving what’s possible in public education. At the heart of our success has always been a talented team of educators—eager to serve, to learn, and to improve.

USN started with just one middle school, Columbus Collegiate Academy (CCA)-Main Street, which opened in the fall of 2008 with 57 sixth-grade students and six staff members packed into the basement of a church in the Weinland Park neighbourhood. The early days were difficult: busing was inconsistent; parents who signed up were taking a bet on an unproven school; and the teachers took a pay cut on salaries that were already low compared to neighbouring school districts, and lived with those cuts for the first two years. Worse, staff had to reset the classrooms every Monday, as the church they were renting space from used the basement each Sunday for service. Years later, Boy would write that “we were united around a simple belief: every child, regardless of zip code, deserved an outstanding education, and every child, regardless of background, could achieve outstanding results”, which made some of the hardships more tolerable.

The model worked. At the end of its first year, CCA gained national recognition for its academic results. People began to notice that something special was going on in the school. USN opened its second middle school, Columbus Collegiate Academy-Dana Avenue in Franklinton, four years later, in 2012. It has since scaled to four schools, with staff at both the school and network levels of the organisation.

John Dues was part of the founding team of CCA-Main Street, serving as its Dean of Academics in 2008. In 2014 he became the Chief Learning Officer of the entire United Schools Network, and began applying ideas from statistician and Continuous Improvement pioneer W. Edwards Deming across the four schools under his care.

Some of Deming’s ideas have led to novel, if common-sensical, policies. For instance, in his 2023 book Win-Win, Dues writes that Deming’s recommendation to reason about your organisation as a cohesive system led to an obvious conclusion: schools are systems that produce student outcomes, and if you want to improve a system, you need feedback. It thus makes sense to track student performance five to ten years after graduation, as a way of providing continued feedback to teachers. This requires dedicated resources, both staff and systems, to track down and stay in touch with graduated students and their parents.

Dues writes:

USN also has an alumni services department that assists our 8th grade students and families as they make the transition to high school. This work continues as alumni move through high school and even as they explore college and career options after graduation. They too are working to improve their services and wrote an aim for the department.

(…) Like the performance of production workers, most of the differences in achievement and test scores at the individual student level are caused by sources of variation embedded within the complex and dynamic system in which they are being educated. Peter Scholtes gave us a very useful way to think about this dynamic:

The old adage, “If the student hasn’t learned, the teacher hasn’t taught” is not true or useful. Instead a much more useful characterisation is, “If the learner hasn’t learned, the system is not yet adequate.”

By taking the systems view, numerous opportunities for improvement that were hidden from view nearly jump off the page. This could include a method for regularly collecting feedback from high school graduates five to ten years after graduation. Think of the power of asking graduates from your education system the following question: Did we make a difference in your life? Alumni answers to this question will let you know the true purpose of your organization, and if you are fulfilling the mission statement on the wall.

(…) The example of collecting feedback from alumni five and ten years after high school graduation, as well as the example of the fifth graders matriculating to middle school, introduces an important consideration with the use of feedback loops. That is, what makes understanding the behavior of complex systems like schools so challenging is the existence of delays in the feedback. Feedback delays can be imperceptibly short, as when you are the lead car at a stop light, the light turns green, and the car behind you honks to prompt you to accelerate through the intersection. Other times, the feedback we need about our system is far from immediate, as in the case of students matriculating to middle school. Still longer yet is the feedback we need from alumni five to ten years after graduation from the K-12 system to understand if we have adequately prepared them for success in society.

This systems-level understanding, however, came later.

In the 2018-19 school year, Dues was leading an improvement project called “8th Grade On-Track”. In Win-Win he writes:

I had been exposed to on-track work through a grant application to the Bill & Melinda Gates Foundation. Through this application process, I had learned of the on-track indicator systems being developed by various educational organisations across the country. One such system predicted the 8th graders who went on to graduate from high school with 96% accuracy. It combined several indicators, including GPA, grades in core classes, attendance rate, and discipline events, and each indicator had a threshold students had to meet to be considered on-track. The indicators are displayed below:

On-Track Minimum Criteria

As a part of our project work, the team had adopted an on-track system, and we were studying how our 8th grade students were performing through the lens of these indicators. The on-track data had been collected for each indicator for all 8th grade students dating back to their 6th grade year, and was updated each grading period. It was through this study that we noticed that James’s reading grade dropped from a B in 7th grade to a D during the first trimester of his 8th grade year. The rest of his grades were a C or higher, his attendance rate was above 96%, and he had never been in serious trouble. Most people would look at James’s academic, attendance, and behavior stats and not see a student who needs intervention, but the improvement team felt otherwise. At the very moment that his reading grade dropped, James needed extra support. What we had learned through our research is that off-track 8th graders become off-track 9th graders, and this is especially problematic because freshman year is the make-or-break year for high school graduation. In most schools, a student like James probably wouldn’t get much attention, but research suggests that his grade drop is a leading indicator of things to come. Grades tend to drop in high school, compared to middle school, by about half a GPA point, and students receiving Ds in the middle grades are likely to receive Fs in high school (emphasis added).
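To make the mechanics of such an indicator system concrete, here is a minimal sketch in Python of what an on-track check might look like. The 2.5 GPA cutoff appears later in this case, and James’s Trimester 1 numbers come from the passage above; the remaining thresholds and all field names are illustrative assumptions, not USN’s actual system.

```python
from dataclasses import dataclass

@dataclass
class StudentRecord:
    gpa: float              # GPA for the grading period
    core_grades: list[str]  # letter grades in core classes
    attendance_rate: float  # fraction of days attended, 0.0 to 1.0
    discipline_events: int  # serious discipline incidents

# Illustrative thresholds. The 2.5 GPA cutoff is mentioned later in
# this case; the others are assumptions made for this sketch.
GPA_MIN = 2.5
ATTENDANCE_MIN = 0.90
FAILING_GRADES = {"D", "F"}
MAX_DISCIPLINE_EVENTS = 0

def is_on_track(s: StudentRecord) -> bool:
    """A student is on-track only if every indicator clears its threshold."""
    return (
        s.gpa >= GPA_MIN
        and not any(g in FAILING_GRADES for g in s.core_grades)
        and s.attendance_rate >= ATTENDANCE_MIN
        and s.discipline_events <= MAX_DISCIPLINE_EVENTS
    )

# James in Trimester 1 of 8th grade: a D in reading dropped his GPA
# to 2.0; attendance above 96%; no serious discipline events.
james = StudentRecord(gpa=2.0, core_grades=["C", "C", "D", "C"],
                      attendance_rate=0.96, discipline_events=0)
print(is_on_track(james))  # False: the GPA and the D in reading flag him
```

The useful property of a rule like this is that a single slipping indicator flags a student even when everything else looks fine, which is exactly how James surfaced.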

At this point in time, Dues had not completed an in-depth study of Deming’s ideas, but he had already been exposed to Deming’s Plan-Do-Study-Act cycle, a central tool in the Continuous Improvement canon. The PDSA cycle (sometimes called the ‘Deming cycle’ or the ‘Shewhart cycle’) is the idea that trial and error cycles need to be executed rigorously, especially in an organisational context. In most companies, trial and error loops proceed in an ad-hoc manner: people make a Plan and Do the plan, but do not follow through; that is, they do not Study the results of execution, and therefore do not Act on what they learn in the next iteration of the cycle.

Dues was eager to test this idea. He took a PDSA template from the Institute for Healthcare Improvement and set it up as a document for this particular project. The template (after a few years of iteration) now looks like this:

PDSA Template from United School Network, John A. Dues
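As a rough mental model of what the template captures, here is a sketch of its sections as a Python data structure. The section names mirror the Plan, Do, Study, and Act headers that appear in James’s PDSA later in this case; the exact fields and their encoding are assumptions about how one might represent the document, not the template itself.

```python
from dataclasses import dataclass, field

@dataclass
class PDSACycle:
    # Plan: design the test, including an explicit question, a
    # prediction, logistics, and a plan for collecting data.
    objective: str
    question: str
    prediction: str
    who_what_where_when: str
    data_collection_plan: str

    # Do: run the test on a small scale and record what happened.
    observations: str = ""
    scores: list[float] = field(default_factory=list)

    # Study: analyse the results and compare them to the prediction.
    analysis: str = ""

    # Act: decide whether to adapt, adopt, or abandon the change idea.
    decision: str = ""  # "adapt" | "adopt" | "abandon"
```

Note that the prediction is a first-class field: as Dues emphasises later in this case, recording a prediction before the Do phase is what makes the cycle a genuine test of a theory rather than an ad-hoc experiment.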

Dues would later use this template, alongside a technique called the Five Whys and an ‘empathy interview’, to figure out the root cause of each off-track student’s problems. The PDSA template was then used for iterative testing of interventions to address the root cause. James, as it turns out, was the first student Dues worked with to develop this protocol.

He writes:

Let’s first look at James’s on-track indicators, Five Whys, and empathy interview, as shown in Table 8.2, before turning to his PDSA. It’s worth noting that all of the following snapshots of the on-track system, Five Whys, and empathy interview for James were originally captured on a large sheet of Post-it chart paper, so we could put the information on the wall for the improvement team to see. This arrangement became a protocol as we used the same formula with other students as the project progressed. However, for now we’ll just focus on how these tools were used with James and led directly to the design of his PDSA cycle.

Initial Five Whys Table for James

The very top of the Five Whys included the six on-track indicators for James’s 7th grade year, as well as for the first trimester of 8th grade. In 7th grade, James was doing relatively well, but was considered off-track because his 2.4 GPA fell just below the on-track standard of 2.5. He was on-track in all other areas. At the start of his 8th grade year, his GPA dropped to 2.0 because of the two-letter-grade drop in reading. To transition from the on-track indicators to the Five Whys, we first drafted an explicit problem statement as it was currently understood. In this case, the problem statement was as follows: “James was on-track in 6th grade, just under on-track in 7th grade, and off-track in Trimester 1 of 8th grade.”

At this point, James’s reading teacher (Dr. Brennan) and Dues began the Five Whys process, in which ‘why’ is asked five times, each question responding to the answer before it.

The Five Whys process proceeded like so:

  1. “Why was James off-track in Trimester 1 of 8th grade?” Dr. Brennan and Dues’s answer: “His reading grade dropped from a B in 7th grade to a D in Trimester 1 of 8th grade.”

  2. “Why did James’s Trimester 1 reading grade drop from a B to a D?” Digging into James’s grade data, they answered: “Despite high reading test scores, he has a low homework grade in reading class.”

  3. “Why does James have a low homework grade in reading?”

At this point, Dr. Brennan and Dues stopped asking questions and invited James to participate in the process. Dues writes: “the most important information for why James’s reading grade dropped … was in his head.”

The next question was framed as “Why do you have a low homework grade in reading?” This was asked in the context of an ‘empathy interview’, because James “had now joined Dr. Brennan and me (Dues) in the inquiry, problem-solving and intervention design process.”

Dues continues:

James’s answer to the 3rd layer question was, “I do the easy/less time-consuming homework (math, science, history) first during Focus period at the end of the day” (Focus was a structured homework support period at the school). Based on this answer, the 4th layer question was, “Why do you do your reading homework last?” In response to this question, James shared that “I don’t like doing my reading homework.” In turn, we asked the following in the 5th layer, “Why do you dislike your reading homework?” to which he said, “It is too much work, so I put off doing it until the last possible moment.” It isn’t captured in James’s 5th layer answer, but he also shared that the last possible moment typically meant that he was hurriedly doing his homework on the bus ride to school in the morning. Dr. Brennan, James, and I then took all the information from our conversation and framed the root cause of James’s reading grade drop as follows: “James dislikes doing his reading homework, and as a result does it last, often on the bus ride to school in the morning.”

James's Answers to the Five Whys
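The structure of the exchange is easy to see if you lay it out as data: each answer becomes the subject of the next ‘why’, and the chain terminates in a root-cause statement. A sketch, using the actual questions and answers from James’s case (the representation itself is just an illustration):

```python
# Each layer of the Five Whys: a "why" question paired with the answer
# that prompts the next question. The first two layers were answered by
# Dr. Brennan and Dues; the last three by James himself.
five_whys = [
    ("Why was James off-track in Trimester 1 of 8th grade?",
     "His reading grade dropped from a B in 7th grade to a D."),
    ("Why did his Trimester 1 reading grade drop from a B to a D?",
     "Despite high reading test scores, he has a low homework grade."),
    ("Why do you have a low homework grade in reading?",
     "I do the easy/less time-consuming homework first during Focus."),
    ("Why do you do your reading homework last?",
     "I don't like doing my reading homework."),
    ("Why do you dislike your reading homework?",
     "It is too much work, so I put it off until the last possible moment."),
]

root_cause = ("James dislikes doing his reading homework, and as a result "
              "does it last, often on the bus ride to school in the morning.")

for i, (question, answer) in enumerate(five_whys, start=1):
    print(f"Why #{i}: {question}\n  -> {answer}")
print(f"\nRoot cause: {root_cause}")
```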

Dr. Brennan and Dues now had enough information to design a PDSA cycle. Critically, they included James in the design process.

This is what they filled in for the Plan phase of James’s PDSA:


James's Plan Section for the PDSA Cycle

Objective: Through a Five Whys root cause analysis and an empathy interview, it was discovered that James dislikes doing his reading homework, and as a result does it last, often on the thirty-minute bus ride to school in the morning. Low homework grades caused his Trimester 1 reading grade to drop to a D after earning a B in 7th grade, despite the fact that he earned an 85% on the Trimester 1 Comprehensive Exam. The objective of the PDSA is to have James do his reading homework first, as opposed to last, over the course of the next five school days.

Plan: Plan the test, including a plan for collecting data.

Question and predictions: Question: Will doing his reading homework first be enough to raise his homework grade to a 70% or higher during the five-day intervention?

Prediction: James will score a 70% or higher on each of his reading assignments during the PDSA Cycle.

Who, what, where, when: On 3/22 (Thursday), 3/23 (Friday), 3/26 (Monday), 3/27 (Tuesday), and 3/28 (Wednesday), James has committed to working on his reading homework first during Focus at the back kidney table in the Drew classroom. Ms. Kramer will monitor Focus and ensure James is in fact working on his reading homework first during the five-day intervention.

Plan for collecting data: Dr. Brennan will enter the dates and scores for his five reading homework assignments prior to the intervention (baseline) and for his five reading homework assignments during the intervention. Grey cells in the table are pre-intervention.


Dues wrote that they picked ‘the next five school days’ for two reasons: first, they wanted data back quickly, to see if the intervention was working. The second reason was that James was not very fond of the idea. (Ha!) But Dues and Dr. Brennan got him to agree to try the intervention for one week.

This improvement plan was communicated to Ms. Kramer, who was the teacher assigned to monitor students in Focus. Dr. Brennan also committed to recording each homework grade in a simple table within the PDSA template, under the ‘Do’ section. The table (with James’s results!) looked like this:

James's Reading Homework Results

And the ‘Do’ section of the PDSA template looked like this:


James's Do Section of the PDSA Loop

Do: Run the test on a small scale.

Describe what happened. What data did you collect? What observations did you make? For the past week, Dr. Brennan has collected and graded James’s homework assignments to see if the “Reading Homework First” intervention is working. First, James’s Focus teacher reported that he did indeed work on his reading homework first, as called for in the intervention. He did this during the school’s end-of-day homework period called Focus. In addition to working on his least favourite homework first, James had asked for a seating assignment in the back of the room at a kidney table so he could concentrate. This request was accommodated throughout the intervention period. Five homework assignment grades were collected prior to the start of the intervention, and five additional assignment grades were collected after the start of the intervention.

After initial reservations, James showed enthusiasm for the change in homework routine and the success he found by completing his reading homework first.


Dues writes, years later: “… there is probably something to be said about the agency activated in James as an active participant in this process. This may be at the root of James’s change of attitude during the cycle.”

After execution of this loop, Dr. Brennan and Dues recorded the following observations in the ‘Study’ section of the PDSA template:


Study section of James's PDSA Cycle

Study: Analyse the results and compare them to your predictions.

Summarise and reflect on what you learned: James earned 16 out of 30 points (53%) on the five homework assignments immediately prior to the start of the intervention, which reflects his typical homework scores throughout his 8th grade year. All five of these were failing homework grades. After the intervention began on March 22nd, five additional homework assignments were collected and graded in reading class. James passed four out of five assignments and earned 23.75 out of 30 points (79%) during the intervention period. This was 9 percentage points higher than the predicted homework grade.


Notice that the specific language in the prediction was that James would earn 70% or higher on each assignment. He earned the predicted score on three out of five assignments, but across all five assignments combined, he earned 79% of the points. This was nine percentage points higher than predicted, and 26 percentage points higher than his baseline.
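The arithmetic is easy to check. Here’s a quick computation using the point totals reported in the Study section (the individual assignment scores are not given in the case, so only the aggregates appear here):

```python
# Point totals reported in the Study section of James's PDSA.
baseline_points = 16         # five assignments before the intervention
intervention_points = 23.75  # five assignments during the intervention
points_possible = 30
predicted_pct = 70.0         # the Plan predicted 70% or higher

baseline_pct = baseline_points / points_possible * 100          # 53.3%
intervention_pct = intervention_points / points_possible * 100  # 79.2%

print(f"Baseline:     {baseline_pct:.0f}%")      # 53%
print(f"Intervention: {intervention_pct:.0f}%")  # 79%
# Both gaps are percentage-point differences, not relative changes:
print(f"vs prediction: +{intervention_pct - predicted_pct:.0f} points")  # +9
print(f"vs baseline:   +{intervention_pct - baseline_pct:.0f} points")   # +26
```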

Dr. Brennan and Dues then had to take this information and consider how they would act on what had been learned through the cycle.

The decision was mapped out in the following Act section of the PDSA template:


Act section of James's PDSA cycle

Act: Based on what you learned from the test, make a plan for your next step.

Determine what modifications you should make — adapt, adopt, or abandon: Given the initial success of the “Reading Homework First” intervention, we are going to keep the same design in place. For PDSA Cycle 2, we are going to check in with James to see how he is feeling about the intervention. If he is on board with extending the intervention, we will collect data on his next ten homework assignments, as well as check in with him on his overall trimester reading grade when he receives his next progress report on April 16th.


According to the PDSA template that Dues had adapted from the Institute for Healthcare Improvement, the Act stage of the PDSA cycle can be split into three options (a sketch of this decision logic follows the list):

  1. Abandon — drop a change idea if the results do not indicate that the idea was effective in bringing about the intended outcome. This typically won’t happen after only one testing cycle; a change idea is usually abandoned only after it has been run through several cycles without success.

  2. Adapt — modify the intervention based on the learning from the previous cycle. It may be that the test didn’t go perfectly as planned, but some component may have worked to the extent that the team chooses to build on it in subsequent iterations.

  3. Adopt — adopt and standardise the change idea in your organisation. This typically only occurs after the idea has been tested under various conditions through multiple testing cycles.
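Here is a minimal sketch of that decision logic in Python. The three options come straight from the template; the inputs, their encoding, and the “several cycles” cutoff are assumptions made for illustration:

```python
def act_decision(effective: bool, cycles_run: int,
                 varied_conditions: bool) -> str:
    """Map a cycle's results to one of the three Act options.

    Rules of thumb follow the descriptions above: abandon only after
    several unsuccessful cycles; adopt only after the idea has held up
    under varied conditions across multiple cycles; otherwise adapt
    and keep testing. ("Several" is assumed to mean three or more.)
    """
    SEVERAL = 3  # assumption: the text doesn't pin down a number
    if not effective and cycles_run >= SEVERAL:
        return "abandon"
    if effective and cycles_run >= SEVERAL and varied_conditions:
        return "adopt"
    return "adapt"

# James's case after one promising cycle: keep testing, with a longer
# ten-assignment window. The function agrees with the team's call.
print(act_decision(effective=True, cycles_run=1, varied_conditions=False))
# -> "adapt"
```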

In this particular case, Dr. Brennan, James, and Dues decided to extend the testing of this intervention over the course of an additional ten homework assignments. This probably best falls into the Adapt bucket, because they were going to modify the time period over which the change idea was tested. Dues writes: “It is not Abandon because the change idea is continuing to be tested. It is also not Adopt because we are not going as far as to say that the change idea will be a permanent part of James’s learning system.”

Dues closes this story with the following note:

In thinking about James in relation to Appreciation for a System, it is also worth mentioning something else that is not noted in the PDSA. With all of this attention paid to his reading grade, how were James’s grades in other subject areas impacted? As the PDSA unfolded during this final marking period, I’m happy to report that he maintained his grades in writing, math, science, and social studies while raising his reading grade. This goes back to the idea we covered earlier from Dr. Ackoff that improving one part of the system has the potential to degrade the system as a whole. If I had it to do over again, I would have incorporated this whole-system thinking more explicitly into the design of the PDSA (emphasis added).

Nonetheless, James’s PDSA is an illustration of just how powerful this process can be in bringing about improvement. It brings together the invisible world of theory and observation of the visible world. We had a theory (in James’s case, it was the Reading Homework First change idea) and then we observed what happened in the real world. The PDSA cycle gives us a disciplined framework by which to test our theories in real classrooms and schools, and gives agency to the very people charged with implementing improvement ideas. This is a radical and hopefully refreshing departure from typical approaches to school improvement.

As alluded to earlier in this case, the protocol that Dues developed alongside James and Dr. Brennan was tested repeatedly with other students, before being rolled out to the rest of the USN schools, as part of an overall On-Track program.

But perhaps the most important bit is the observation that Dues makes about the rigour behind the PDSA cycle. He writes that the simplicity of the approach belies some sophisticated thinking (all emphasis added):

One of the most powerful tools that sits at the heart of Deming’s Theory of Knowledge is the Plan-Do-Study-Act (PDSA) cycle. Deductive learning, moving from a theory to the test of the theory, is combined with inductive learning, using results from a test to revise the theory, in rapid succession during the cycle. This is a key differentiator of PDSAs as a learning process as contrasted with using ideas generated through traditional research methods, even gold standard methods such as randomized controlled trials. By their very design, studies that result in evidence-based practices discount externalities instead of solving for them. In other words, evidence-based practices have been tested under specific conditions and those conditions often don’t match our contexts in important ways. The idea that many interventions are effective in some places but almost none of them work everywhere is such a common idea in the education research sector that this phenomenon has its own name—effects heterogeneity.

Beyond the concern of effects heterogeneity, there are several other reasons the PDSA is an effective tool. People learn better when they make predictions as a part of the learning process because making a prediction during the planning phase of the PDSA forces us to think ahead about the outcomes. In my experience, we are often overly optimistic in terms of both the speed and magnitude at which we think improvement will occur. But, because making a prediction causes us to examine the system, question, or theory under study closely, we are forced to think deeply about what it will take to bring about meaningful improvement. By predicting, we also get to see the thinking process of our team members. Learning about your own ability to predict, in addition to your team members’ ability to predict, gives insight into how closely connected your organization’s theories are to its reality.

The PDSA cycle was not originally developed for schools. It was created for manufacturing, rolled out to healthcare, and remains a core part of the Continuous Improvement canon. This is merely one instance of what it looks like, applied to improving educational outcomes.
