Should the Education System Have a Single Design?

Many forces push schools toward a one-size-fits-all approach, even though school does not seem to work all that well for many students. Why is this, and how can we do better? Why don’t we provide “different strokes for different folks” as part of our education system?

Currently, various forces encourage schools to use a simple, evidence-based approach to deciding what learning experiences children should have, and in general, schools use the same basic set of learning experiences for all children, with relatively modest personalization. From various sources, including government, the research world, practitioner media, and corporate marketing, school systems learn of approaches to schooling that have, or are claimed to have, merit. Weighing cost against readily accessible evidence, they pick what they see as the most cost-effective alternative. They make this alternative available to teachers, often mandating its use. Sometimes they even invest in some professional development so teachers can learn to use the new learning tools.

This is a very old approach. Indeed, in the Bible (Daniel 1:8-16), we learn that Daniel conducted an experiment to convince the king’s officers that they could safely allow him and his colleagues to eat a vegetarian diet rather than the royal food the king had ordered. While not a randomized clinical trial, it was a matched group experiment and probably the first to be documented. Most educational policy folks would be pleased to see school districts proceeding even this systematically today.  But, I would argue that Daniel’s approach probably would not have generalized beyond his highly selected colleagues and might not even have worked for all of them for any extended period. For education today, I believe that we need redundant learning opportunities and should expect that different students will need qualitatively different selections of those learning opportunities to prepare themselves adequately for life in the age of smart machines. The rest of this blog develops that argument.

Schools in the U.S. remain under the influence of the scientific management movement that guided factories in the twentieth century. The basic idea was that, with careful study, we could figure out the optimal way for any process to be carried out and then design a system that does it that way. At the time “scientific management” was becoming popular, the psychology of learning was focused on simple conditioning principles and the belief that all the knowledge schools needed to teach could be broken into small pieces, with each piece then conditioned in the learner. If there were a universal process by which learning happened, then schools could be most efficient if teachers applied that process optimally. While it was assumed that students might learn at different speeds, there was no real assumption of any other differences among learners, except perhaps those due to specific challenges such as blindness or deafness.

The very notion of grades in schools was anchored in these universalist beliefs. If a child was in a particular grade, then it was assumed that they had acquired the learning associated with that grade. The responsibility of the teacher was to ensure that each child was drilled sufficiently, using basic conditioning principles, so that they remained on grade level. The idea was to simplify the teaching and learning process so that it could be done efficiently and by teachers with minimal training. Over time, it was realized that specific techniques were needed to teach each subject and perhaps that slightly different approaches were needed for students from different backgrounds. And, of course, it was quickly discovered that some students needed to be motivated to persist in the multiple “trials” needed to condition the knowledge elements of the curriculum. Teacher preparation programs focused on preparing teachers to handle this variability and this need for motivation, or at least they provided “theory” courses that “covered” techniques for teaching each subject and motivating each student. In the best teacher preparation programs, future teachers even studied research comparing alternative approaches to teaching and learning, but virtually always with a view toward finding the one approach that was best for all, or at least almost all.

Even today, more than we sometimes realize, the basic model of schooling, at least the one that guides financial and staffing decisions and that determines what curriculum resources schools will have and how teachers will be “managed,” is one of drill to criterion and lockstep grade levels.  That is, each student will receive the same basic program of instruction, and that program will simply be repeated if it initially fails to produce adequate learning. While we seldom want to admit it, the basic approach is that if our main educational approach fails, we should simply do more of it. Extending the school day or adding days to the school year is just one example of how educational policy responds to this underlying model. In most other areas of life, we assume that if one approach fails, we should try alternatives rather than perseverating with the approach that has proven inadequate.

In recent decades, education researchers have worked to develop alternatives to the drill-and-practice approach that has guided educational policies, though that viewpoint has not disappeared completely. One example is the emergence of the Common Core State Standards: entering the information age triggered changes that eventually produced an understanding that the traditional curriculum is too simplistic. The “mastery tests” associated with the traditional view of learning often are superficial and fail to assess the ability to apply what one has learned to real situations in the world. Also, standardized testing, which uses large numbers of tiny cognitive performances (called test items) to predict more substantial competences, is a force pushing teachers toward excessive superficiality. Education researchers are working to address these issues. In each subject area, there have been hundreds of research efforts aimed at identifying that subject’s unique requirements for effective education.

But, in spite of all of this research effort, we still mostly adhere to the view that a single best general design should be applied to school facilities and curriculum, that all students should be “processed” via that uniform design, and that standardized tests in their current form should be the index of schooling effectiveness. To some extent, this is justified, because many schools that are not tightly controlled for adherence to a single process design tend to do rather poorly in educating their students. Doing something that works for some students is better than rarely doing a decent job at all. But the push for using only the one approach that is most successful is not getting all children educated well. Newspapers run stories showing tiny changes each year, sometimes just a move from 20% of students at “grade level” on a standardized test to 25%, and we settle for that level of schooling effectiveness. The one-size-fits-all approach sometimes is better than just letting school happen, but it is nowhere near good enough.

By focusing on standardized tests that are limited in what they assess, we perpetuate this underperformance. Generally, we measure the success of charter schools and other alternatives to the public system using the same standardized tests that the general public system uses, creating forces that push charters toward uniformity with the public system. This forces charter schools to stick to what produces mediocre results in many public systems. Still, given the current public vision of education, this makes sense. Our only accepted measures of learning outcomes are those standardized tests, and, given that some charters are for-profit businesses, things would likely be worse without any “accountability testing.”

However strong the forces pushing for a single “efficient” approach to schooling and for standardized tests as they exist today, it remains the case that the system fails many students, and we don’t even have the deeper measures that would tell us whether our apparent partial successes mean that some students are prepared for life in the age of artificial intelligence.

Why Not Develop a Better General Design for Schools?

In general, when a major enterprise has a process that does not work well, we study it and try to develop a better alternative.  This has been the approach taken with education, at least with publicly funded education. Indeed, hundreds of educational researchers are conducting studies aimed at producing better ways to teach each subject, to motivate students, and to operate schools. We are, as a nation, heavily invested in finding a better uniform approach to schooling.  However, as we learn more about how people become competent in our world and useful to the community, we are seeing that what someone learns from a given experience depends heavily on what they already know. In a complex society with dramatic experiential differences based upon income and additional differences based upon cultural origins, children come to school with dramatically different prior knowledge and experience. Schools need to get better at adapting to student prior knowledge differences, especially because the differences are qualitative as well as quantitative.

Schools do try to adapt to such differences, but their efforts sometimes are pathological. We have had several decades of concern about fairness, mostly fairness in testing. Parents rightly complain when a test is easier for one group than another simply because its items reflect the favored group’s experiences. We even have good statistical techniques for identifying unfair items on tests, an approach known as differential item functioning analysis. In the end, though, when school learning and testing are restricted to content to which children seem to have equal access, schooling tends to be too much about drill on what Whitehead called inert knowledge, knowledge that doesn’t get used when it should be used.
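For readers curious about what such a check involves, here is a minimal sketch of one common differential item functioning statistic, the Mantel-Haenszel procedure. The function name, data format, and the “reference”/“focal” group labels are illustrative assumptions for this sketch, not a reference to any particular testing program’s implementation.

```python
import math
from collections import defaultdict

def mantel_haenszel_dif(responses):
    """Rough Mantel-Haenszel DIF check for a single test item.

    `responses` is a list of (group, matching_score, item_correct) tuples:
    group is "reference" or "focal", matching_score is the examinee's score
    on the rest of the test (used to form comparable strata), and
    item_correct is 1 or 0 for the item being studied.
    """
    # Build a 2x2 table (group x right/wrong) within each score stratum.
    tables = defaultdict(lambda: [0, 0, 0, 0])  # [ref_right, ref_wrong, foc_right, foc_wrong]
    for group, score, correct in responses:
        idx = (0 if correct else 1) if group == "reference" else (2 if correct else 3)
        tables[score][idx] += 1

    num = den = 0.0
    for ref_right, ref_wrong, foc_right, foc_wrong in tables.values():
        n = ref_right + ref_wrong + foc_right + foc_wrong
        if n == 0:
            continue
        num += ref_right * foc_wrong / n
        den += ref_wrong * foc_right / n
    if num == 0 or den == 0:
        return None  # not enough data to estimate

    odds_ratio = num / den                 # > 1 means the item favors the reference group
    delta = -2.35 * math.log(odds_ratio)   # ETS delta scale; large |delta| values get flagged
    return odds_ratio, delta
```

Operational testing programs typically pair a statistic like this with significance tests and review by content experts before an item is revised or dropped.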

The differences in children’s experiences prior to a particular year of schooling are not small. Consider only one case, verbal experience prior to kindergarten. Experience with words, in speech and especially in conversation, is essential to learning to read. After all, written text is simply an encoding of spoken language. It is not hard to imagine that some children might get, say, 15 minutes more conversation time per day than others with adults who use language broadly, introducing a lot of new words. People speak at about 120 words per minute on average, and in a conversation perhaps half the time is silent as participants prepare to respond to what they just heard. So, conversation might run at, say, 60 words per minute. In 15 minutes, that would be 900 words. Over the course of a year, that comes to about a third of a million words. If the conversations start at age two and continue until the child enters kindergarten at age five, that comes to roughly a million extra words heard or spoken. This is likely a dramatic underestimate of the difference in word experience between children lucky enough to have significant conversation time with adults and those less lucky. Actual differences have been estimated to be in the range of two to three million words.
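As a back-of-the-envelope check, here is that arithmetic laid out explicitly; every input is an illustrative assumption from the paragraph above, not a measured value.

```python
# Back-of-the-envelope estimate of the extra word exposure described above.
# Every input is an illustrative assumption, not a measured value.
speech_rate_wpm = 120          # typical speaking rate, words per minute
talk_fraction = 0.5            # roughly half of conversation time is silence
extra_minutes_per_day = 15     # hypothetical extra daily conversation time
years = 3                      # roughly age two until kindergarten at five

words_per_minute = speech_rate_wpm * talk_fraction        # ~60
words_per_day = words_per_minute * extra_minutes_per_day  # ~900
words_per_year = words_per_day * 365                      # ~330,000
total_extra_words = words_per_year * years                # ~1,000,000

print(f"Extra words per year: {words_per_year:,.0f}")
print(f"Extra words by kindergarten: {total_extra_words:,.0f}")
```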

So, some children arrive at kindergarten with plenty of experience hearing words and thus are quite ready to learn how words are represented on the page while others arrive needing a lot of conversation to prepare them for learning to read. Many other differences like this also exist. Consider, as an extreme example, my own experience growing up as the son of an electrical engineer. At the age of three, I had the task of helping my color-blind father find the right resistors to use in building a TV set. He would tell me what resistance value was needed, and I would use a little device to translate that number into a sequence of colored stripes, after which I would look in a bin of resistors to find the one that had the right color pattern. As a result, I had a much deeper sense of number by the time I got to kindergarten than most of my peers. But, the one method of teaching arithmetic and numbers that was used for all of us in that kindergarten did not take account of the differences. Many learning experiences were wasted on me while other children did not get the experiences they needed.  Most grew up having been taught, as early as kindergarten, that math is hard.  More accurately, math is hard to learn if the way you are taught is not adapted to your prior experience.

In the past, the factory efficiency model may have been necessary given what our society is willing to spend on teachers.  Today, though, those extra 15 minutes of critical experiences need not all be provided by teachers.  Some can be provided by information technologies, either by having an intelligent agent on a screen conversing with the student or by having such intelligent agents engineer the peer interactions among students with particular experiential backgrounds. We really can afford different strokes for different folks, and it’s time to make that happen in all our schools.

Another way of thinking about different strokes for different folks is to describe the varied learning opportunities I’ve just called for as “planned redundancy.” In the next blog, I will discuss the value of a redundant evolutionary approach.
