Open up learning data so that teachers are free to see
Part two of this three-post series looks deeper inside the data closet, and the possibilities, of K-12 schools. Data from multiple sources lurks in every corner, and once unleashed those sources compete for attention; all of a sudden BIG DATA syndrome lurches forward into an unsuspecting audience. Keeping data terminology simple is of paramount importance. Post 3 will discuss how to run with the energy you unlock in this post and deliver a framework for teachers.
I often ask teachers who come along to “data for learning” sessions in schools, “What data really matters to you?” and “What data visibility would help you to help your students?” Surprisingly, most teachers don’t lead with “All I need is Power BI and a consultant to develop charts and data analytics”. Resoundingly, teachers see their priority as doing the best they can to help their students achieve the best they can. True as that is, I never really get an answer. “What have you got?” is the most common response. It’s like a standoff: “show us what you have and we will take a look”. It is a reasonable response, considering many schools don’t have a system for organising, packing and unpacking data in a way that everyone feels culturally a part of. This, unfortunately, describes many of the to-and-fro data machinations schools go through.
More often than not, data in schools is rarely imagined as a fountain of collaborative knowledge. It is thought of as secret silos of collective recount, just another thing to manage and transpose. Not all teachers know where to go to access data, or indeed what they could ask for. Online systems like OARS and SCOUT, among many others, open only a limited window into the data that external providers want you to be happy with. These systems have multiple logins and access points to remember, and never really bring data back into a practical frame of reference for teachers. Data is described and displayed as envisioned by developers. SCOUT, for example, can tell you about one student at a time, not a class or cohort. Most times, fewer than 20% of teachers in a school access these systems; the other 80% either don’t have the time or find access too hard. Not a good run rate for any measure of progress.
So, how do we progress? There is an element of conscious incompetence when it comes to knowing what data there is and what value it holds. Put another way, many teachers don’t know what they don’t know. Until a ‘Kondo’ declutter event happens, both in the physical data closet and in our thinking, there will be no new windscreen through which teachers can explore a hunch or find a new direction. The current rear-vision mirror, useful for lane changing in traffic, will persist. I recommend a simple Kondo-style approach to data organising, starting with a common framework to build momentum.
We can’t advance data conversations without a framework that underpins data. We can’t continually talk about ‘stuff’ nor can we declutter our ‘stuff’ into a containerised system if we don’t have containers. I believe there are six primary data domains in every school. That’s six ways we can declutter data.
- Behavioural – Attendance – Compliance
- Pastoral wellness – Participation
- Academic growth
- Targeted skills development – Formative – reading, writing, maths – and ‘apps’ for that
- Observation – Cognitive capture
- Diagnostic – NAPLAN, PAT, ALLWELL and similar standardised assessments
The list of Data Sources I highlighted in Post 1 all neatly slot into these Domains. In all categorisation challenges, the key phrase to remember is ‘less is more’.
Finally, I get to build out my metaphor for Post 2. De-boned, these domains represent the six faces of a Rubik cube. The Rubik cube represents the complete student learning data story of each school.
When teachers ask, “What teaching and learning data do we have?”, the answer is that we have six Data Domains. Imagine each one as a side of a Rubik cube.
NAPLAN, PAT and ALLWELL are great examples of Diagnostic Data Sources. Every Data Source logically belongs in one of the six Data Domains, and each is a row or box on its Domain’s face of the cube. Each Data Source has a context, a year and other surrounding metadata that makes it either unique or simply more of the same thing. Let’s label the Diagnostic Domain Orange, and each Diagnostic Data Source a row on the Diagnostic cube face. The rows could be organised by year or by more granular learning breakdowns like type of test: Reading, Writing, Comprehension (that’s the easy part, done by data people).
There it is. We have a base structure into which the initial clutter of Diagnostic Data Sources is organised. When teachers want diagnostic information, there is a Domain in which all Diagnostic Data Sources live, and everyone uses the same terminology. Make sense? Now it’s time to go one more level.
The clutter and detail found in most Data Sources is actually in the Data Elements, the pieces of information and specific content contained inside each Data Source. NAPLAN, for example, is a very rich source of many data elements. From Band to Scaled Score to question correctness, multiple data Elements exist within each data source. Don’t worry about the elements now. Good data storage will offer you choices around these data elements, hopefully in a big menu, tick box format.
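The Domain → Data Source → Data Element hierarchy described above can be sketched in code. This is a minimal illustration only, with hypothetical names; the element lists (especially for PAT) are invented for the example, not a real schema:

```python
from dataclasses import dataclass

# A minimal sketch of the Domain -> Data Source -> Data Element hierarchy.
# All names and element lists are illustrative, not a real schema.

@dataclass
class DataSource:
    name: str             # e.g. "NAPLAN 2018"
    elements: list[str]   # the Data Elements inside this source

@dataclass
class Domain:
    name: str                  # one of the six Data Domains
    colour: str                # the cube-face colour for this Domain
    sources: list[DataSource]  # the rows on this face of the cube

diagnostic = Domain("Diagnostic", "Orange", [
    DataSource("NAPLAN 2018", ["Band", "Scaled Score", "Raw Score"]),
    DataSource("PAT 2018", ["Scale Score", "Percentile"]),  # invented elements
])

# Every Data Source lives in exactly one Domain; elements live inside a source.
print(diagnostic.sources[0].elements)  # ['Band', 'Scaled Score', 'Raw Score']
```

The point of the structure is that a new Data Source never needs a new shelf: it simply becomes another row inside an existing Domain.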
Your initial burning question can now be expanded confidently, knowing you have the right ‘stuff’ in the right place.
- I am after Diagnostic information (Domain)
- From NAPLAN 2018 (Data Source)
- …and I would like to see Band, Scaled Score and Raw Score for Reading (Data Elements).
Now, all you have to decide is the volume of data you want. Do you want this information for:
- A Student or Students?
- A Class, your Classes? (because you are a teacher with a roster so that would really help)
- A Cohort / House / Year or other aggregation?
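Put together, the question has three parts plus a volume choice. A hedged sketch of that query shape, with purely illustrative field names (not a real API):

```python
from dataclasses import dataclass

# A hypothetical shape for the question framed above.
# Field names and values are illustrative only.

@dataclass
class DataQuery:
    domain: str          # e.g. "Diagnostic"
    source: str          # e.g. "NAPLAN 2018"
    elements: list[str]  # e.g. ["Band", "Scaled Score", "Raw Score"]
    scope: str           # "student" | "class" | "cohort" | "house" | "year"
    scope_value: str     # who the query is about

query = DataQuery(
    domain="Diagnostic",
    source="NAPLAN 2018",
    elements=["Band", "Scaled Score", "Raw Score"],
    scope="class",
    scope_value="7 English B",  # an invented class name
)
```

Notice that changing the volume of data is just a change of `scope`; the rest of the question stays identical.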
How to think about data across Domains and Sources and then run free.
Imagine this new Rubik Super-Cube in your hands. You have data from your six Data Domains in sight. For a student you can see across all Domains of data, as you can for a cohort or class. The Cube metaphor implies that all volumes of data across all six Domains are available at any level of inquiry. Be it Student, Class, Cohort, Subject, House or Year, the query is the same; the only difference is the amount of information you want to look into.
With this metaphorical magic in your head, you can now frame any question using your general intelligence, something you have and computers don’t.
“Can I see Pastoral data on Personal Development (a Data Source row from the Pastoral Domain – Yellow) against the Reading results from NAPLAN 2018 (a Data Source row from the Diagnostic Domain – Orange)?” I usually add ‘please’, but computers don’t know that word either.
Imagine twisting your Rubik Pastoral – Personal Development Data Source row across the face of the Diagnostic – Reading Data Source. You have your view side by side and the cognitive ability to run with it. As with a real Rubik cube, the twist-and-turn combinations are up to your imagination.
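Under the metaphor, that twist is simply a join on student: line up one Data Source row from each Domain, matched by name. A small sketch, with students and scores invented for illustration:

```python
# "Twisting" two Data Source rows side by side is a join on student.
# All names and scores below are invented for illustration.

pastoral = {   # Pastoral Domain (Yellow) - Personal Development
    "Amy": 78, "Ben": 55, "Cara": 90,
}
diagnostic = {  # Diagnostic Domain (Orange) - NAPLAN 2018 Reading
    "Amy": 610, "Ben": 480, "Cara": 655,
}

# Match the two rows on the students they share.
side_by_side = {
    student: (pastoral[student], diagnostic[student])
    for student in pastoral.keys() & diagnostic.keys()
}

for student, (pd_score, reading) in sorted(side_by_side.items()):
    print(student, pd_score, reading)
```

The teacher supplies the interpretation; the structure just guarantees the two rows can always be laid alongside each other.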
The only remaining question is which Data Elements you would like to see from each Data Source. This defines the level of detail you want from each, to quench your insatiable thirst for learning growth insights.
Everything improves from a solid and consistent starting framework. These examples are just some simple starting gymnastics that you can do with organised Data Domains and Data Sources.
To recap the main points in this post:
- There are six Domains of data in K-12 Schools.
- In each Domain, there are multiple Data Sources. Start by finding a few but do understand that when a new Data Source arrives, it will fit into a Domain. That’s the Kondo ‘less is more’ way!
- In each Data Source, there are Data Elements to see. That is the subject of Post 3.
The real excitement comes when your school has a platform approach to data, one that allows everyone to confidently explore data up, down and across the cube, the way they want to. I started at the top of this post with an image of the end game.
In my final post, I will talk about how all of these Cube formats with Domain and Sources completely mixed, all make sense. Sometimes the answers you seek come from multiple Domain, Source and Element data pieces. Why not? The structure of the data should support the interest you have.
When you have a framework in motion, you can construct any combination of inquiry. The most important feature of any system is the ease with which you can snap back to a fully solved puzzle and start again. This would be like having Marie Kondo come back into the room and declutter all over again. Yes indeed, this is mandatory!
Mark Stanley is CEO and Founder of Literatu: www.literatu.com
In response to recent media coverage of flat or backward NAPLAN results, I engaged in a correspondence with a reporter. Here’s what I wrote:
The perspective I can offer is one that focuses on how schools get the data as opposed to beating up the test, the schools or the government.
I can tell this story in three pictures (from screenshots of our software). This said, my point is not to flog our software, but to highlight the value of EASY ACCESS to data insights and how, without this, the lack of growth is not a surprise, but is, in fact, what we should expect.
All the screens are of actual NAPLAN data, but anonymised so as not to compromise confidentiality.
1) Flat results.
This visualisation shows six years of NAPLAN Band achievement across Years 3, 5, 7 and 9. You can see that the real story here is one of no growth: the results are essentially flat. This is the story your report told today. The reason I see this slightly differently is that we have schools that are just starting to use our software, so 2017/18 is THE FIRST YEAR they have been able to easily see this data (and the next screens). So the point is: without easy access to unpack the band scores into skills and subskills, how were schools and teachers EXPECTED to make improvements? Schools and teachers worked very hard either doing the same things they have always done or guessing what needed fixing.
2) Unpacking the Data – from Skill problems to identifying Subskills
No matter how hard teachers work, doing more of the same doesn’t necessarily address gaps in their students’ skills. Another visualisation shows how the data from the massive spreadsheets can be presented in a way that goes from seeing the problem to seeing what needs targeting. Here, “traffic light colours” signal problems in specific skills, and clicking one of the bubbles reveals the subskills that were assessed. NOW teachers know where to target their teaching:
3) Give teachers Insight into the students right in their classes!
The fact that NAPLAN data is often one to two years old by the time it reaches school and public attention makes it hard to use. The tests assess skills from the preceding year (e.g., Year 3 assesses Year 2 skills), then schools find out the results toward the end of their year with those students. Here we are, almost upon 2018 NAPLAN, and MySchool is only now updated with 2017 NAPLAN data. How is a classroom teacher meant to help the students in their classes today?
In the last screen animation, you can see the “Teacher Dashboard”, where a school’s NAPLAN data is sliced and sorted for the actual students sitting in front of a classroom teacher. Yes, the data may still be a year old, but now the classroom teacher can accommodate and differentiate what they do based upon their students. In the animation, notice that both the data in the cards and the list of students in the right column change as I switch between classes (at the top of the dashboard). When I click on the NAPLAN Weather Report card for Writing, I can see which four students went backward from their 2015 to 2017 NAPLAN tests and which five achieved above expected growth targets. Then, when I click the NAPLAN Skill Focus card (and its backside), I get details about the top four (then eight, when flipped) areas in each of the four NAPLAN domains where this particular class of students scored lowest. Clicking on a card also sorts the students according to the skill clicked, so we can see who needs the most help and who could be extended.
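The “went backward / above expected growth” buckets amount to comparing each student’s two scaled scores against a growth target. A sketch of that idea only, with invented student names, invented scores, and an assumed growth target (the real expected-growth calculation is more nuanced):

```python
# Sketch of the growth-bucket idea: compare each student's 2015 and 2017
# scaled scores. All numbers below are invented for illustration.

scores = {
    "Amy":  {"2015": 520, "2017": 555},
    "Ben":  {"2015": 540, "2017": 530},   # score dropped
    "Cara": {"2015": 500, "2017": 585},   # large gain
}
EXPECTED_GROWTH = 40  # assumed two-year target in scaled-score points

# Students whose 2017 score is below their 2015 score.
backward = [s for s, v in scores.items() if v["2017"] < v["2015"]]

# Students who grew by more than the assumed target.
above = [s for s, v in scores.items()
         if v["2017"] - v["2015"] > EXPECTED_GROWTH]

print("Went backward:", backward)       # ['Ben']
print("Above expected growth:", above)  # ['Cara']
```

The dashboard’s value is not the arithmetic, which is trivial, but putting the result in front of the teacher without a spreadsheet in sight.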
So, to sum up, I see a big part of the problem being that classroom teachers have not been able to access the right kind of information easily enough to use the NAPLAN data (albeit a “snapshot” and a “diagnostic assessment being used as a high-stakes test” – two legitimate complaints against NAPLAN). In fact, we have run into the situation where one of the leading state associations for schools takes the approach of helping schools unpack NAPLAN results through a workshop on using Excel spreadsheets! In 2018!
Our schools are just this year getting such access and we work with them to take charge of their remediation programs and initiatives and expect to see upward trends as they continuously improve their teaching and learning practices.
I’d love to chat, or even take you through this software, as a way to point to solutions other than beating up teachers, schools or the government – not something your reporting has ever done, but these bash-ups tend to be what’s buzzing in the media. Perhaps a better, more productive approach is to use smart software to provide data insights?