Tag Archives: Student Data

Can Better Data Infrastructure Prevent School Violence? We Think So.

Some states want to use federal grant money to put more guns in schools in order to prevent another episode of violence like the one that we saw in Parkland, Florida. It’s a controversial idea, and one that favors grand drama over genuinely thoughtful solutions. While it won’t grab national headlines, we could actually prevent more violence and protect more students for less money with investments in information-sharing technology.

There’s no way to know with certainty what could have prevented the tragedy in Parkland, but we do know one thing: there was enough information out there to paint a troubling picture of a young person in crisis with a desperate need for supportive services. Nikolas Cruz, who returned to his high school armed and killed seventeen people in six minutes, was known to adults as a child in need of additional support and services.

Acting on that information is a different story. Alarmingly, we have recently learned that the adults (like psychiatrists, teachers, and law enforcement officials) who held pieces of Cruz’s story weren’t talking to each other, and there was no system in place for them to share information securely, quickly, and accurately.

Part of the problem is legal: health care, education, and child welfare privacy laws constrain the ways in which systems can share personally identifying information about young people in their care. At school safety panels earlier this summer, the Attorney General and other federal leaders suggested that these statutes are interpreted too broadly and that restricted information-sharing impedes the ability of local authorities to quickly deliver services to students in crisis.

But an important — and overlooked — part of the problem is technical. Even where there are data-sharing agreements in place, and high-quality service programs available to meet every need (and enough resources to go around), databases that track services for young people are quite literally disconnected from each other and unable to connect those services to the kids who need them. Legacy data warehouses within care agencies and schools create data silos that are nearly impenetrable. Not only do systems not talk across their bureaucratic borders, they are often incompatible with their counterparts in the next city or a neighboring county.

And even where the technical infrastructures are more modern, they rarely hold all of the information that exists or hold it in a way that is useful for providers. In fact, many systems still keep paper records or require hard copies of requests for information. As a result, direct-care staff, like nurses and school counselors, end up spending much of their days tracking down paperwork, faxing things back and forth, and cold-calling other offices instead of working with young people.
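To make the silo problem concrete, here is a minimal, hypothetical sketch of the kind of brittle record-matching that disconnected systems force on whoever has to line up two agencies’ exports. The file names, the column names, and the assumption that a name plus a birth date is the only shared identifier are all invented for illustration.

```python
import csv

# Hypothetical illustration: two agencies export records with different,
# incompatible schemas, and someone has to line them up by hand.
# File names and column names are assumptions, not real agency systems.

def load_records(path, name_field, dob_field):
    """Read an agency export and key it by (normalized name, date of birth)."""
    records = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row[name_field].strip().lower(), row[dob_field].strip())
            records[key] = row
    return records

# Each system labels the same child differently, so matching falls back on
# fragile identifiers like name and birth date rather than a shared ID.
school = load_records("school_records.csv", "student_name", "birth_date")
welfare = load_records("welfare_records.csv", "client", "dob")

matched = school.keys() & welfare.keys()
unmatched = school.keys() - welfare.keys()
print(f"{len(matched)} children appear in both systems")
print(f"{len(unmatched)} school records have no child-welfare match")
```

The point of the sketch is what is missing: a shared, privacy-protected identifier across agencies that would make this kind of fragile matching unnecessary.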

Why Can’t We Find Even the Most Basic Info About Schools in Secure Facilities?

Amid recent fuss about the accuracy of the Department of Education’s Office for Civil Rights Data Collection, it’s important to look at how those data errors can meaningfully impact education experiences for young people for whom no other substantive national research exists: students attending school in secure juvenile justice facilities.

Approximately 50,000 young people are incarcerated in juvenile justice facilities across the country on any given day, and they are supposed to attend school while they are in custody. For many of these students, attending school in a secure facility is the first time they have engaged with school consistently in three to five years. Their school experience while in custody is their last best chance to change the trajectory of their lives.

The problem is we know very little about the quality of these educational opportunities.

The biennial data collection conducted by the Department of Education’s Office for Civil Rights (OCR) is intended to be a comprehensive survey of education access in all schools in the country, and it now includes these juvenile justice schools. But our analysis from earlier this year found that states, and OCR at large, have not taken the responsibility for accurate reporting seriously. In fact, inconsistencies and incompleteness render the OCR data nearly meaningless. Alarmingly, the data still do not allow us to answer even the simplest question: How many students were enrolled in a juvenile justice school in 2013-14?
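As a rough illustration of the kind of completeness check an analysis like this relies on, a sketch along these lines shows why a simple national count falls apart when records are missing or implausible. The file name and column names are assumptions, not the actual CRDC layout.

```python
import pandas as pd

# Hypothetical illustration: a basic completeness check on a CRDC-style
# extract of juvenile justice schools. The file name and the "state" and
# "enrollment" column names are assumptions for this sketch.
crdc = pd.read_csv("jj_schools_2013_14.csv")

missing = crdc["enrollment"].isna() | (crdc["enrollment"] <= 0)
print(f"{missing.sum()} of {len(crdc)} juvenile justice schools report "
      "missing or zero enrollment")

# A national enrollment count is only as trustworthy as the weakest state's reporting.
by_state = crdc.groupby("state")["enrollment"].agg(["count", "sum"])
print(by_state.sort_values("sum"))
```

When enough rows are missing or implausible, even the headline number — total enrollment — cannot be trusted.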

Transformative Tech for Youth in Transition

Millions of students every year experience homelessness, a foster care placement, an incarceration, or an unmet mental or physical health need. And while the organizations and individuals that serve these youth act with the best intentions, existing technologies and practices result in fragmentation and poor communication among the adults working with a given young person. Different agencies may only be aware of particular aspects of a student’s life: one agency may know about a student’s health history while another knows about their past foster care placements.

There is hope, however: a number of districts and states have begun to innovate and design technological solutions to resolve the issue of agency fragmentation.

[Image: DC Foster Kids App home page]

In Washington, D.C., the Child and Family Services Agency has developed the DC Foster Kids App, a web-based application that grants foster parents and provider agencies access to important information about the youth in their care. The application includes medical contact information, important dates such as court hearings, and licensing and training requirements for the foster parent. Easy access to this information helps the student and the adults in their lives stay aware of milestones and use the data to best serve the youth.
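As a rough sketch only (the agency’s actual data model is not described here), the record such an app surfaces to a foster parent might look something like the following; every field name is an assumption.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of the kind of record an app like DC Foster Kids might
# surface to a foster parent. Every field name here is an assumption, not the
# agency's actual data model.

@dataclass
class MedicalContact:
    provider_name: str
    phone: str
    specialty: str

@dataclass
class YouthRecord:
    youth_id: str
    medical_contacts: list[MedicalContact] = field(default_factory=list)
    court_dates: list[date] = field(default_factory=list)          # e.g., upcoming hearings
    licensing_deadlines: list[date] = field(default_factory=list)  # foster-parent training and renewals

    def upcoming_milestones(self, today: date) -> list[date]:
        """Dates the foster parent and provider agency should be tracking."""
        return sorted(d for d in self.court_dates + self.licensing_deadlines if d >= today)
```

The value is less in the structure itself than in putting all of it in one place that foster parents can actually reach.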

Is Pearson’s Scanning of Students’ Social Media Spying or Smart Security?

This month the Washington Post reported that testing giant Pearson has been monitoring students’ social media accounts, looking for evidence of test security violations on the PARCC assessment. The story broke in New Jersey, but given Pearson’s “yep, we did it” response, it’s probably reasonable to expect that it’s happening elsewhere. Cue outrage from parents, politicians, and the AFT.

Frankly, the only thing about this that is surprising is that it’s surprising to anyone.

If you have a presence on the Internet, you are being monitored. This is not black helicopter stuff. It’s just reality. The fact of the matter is that social media is, well, social. Is a person, an employer, or a testing company that looks at something you actively put in public space spying on you?

Sure, it seems kind of creepy that old man Pearson is lurking on kids’ Twitter accounts and Facebook pages. But I didn’t read any evidence that they did anything other than monitor information that’s already public. The company has a responsibility to maintain the integrity of its product. Lots of districts are using it. It’s an important and consequential test for kids and schools. And they aren’t all administering it simultaneously, creating an opportunity for malfeasance. States invested a lot of resources in these assessments, and that investment must be protected. It’s part of what they paid for.
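For what it’s worth, “monitoring” public posts likely amounts to something no more exotic than the following sketch: filtering text that is already public for test-related terms. The posts and the keyword list here are invented for illustration; nothing about this reflects Pearson’s actual process.

```python
# Hypothetical illustration of keyword filtering over public posts. The posts
# and keyword lists are invented; this is not Pearson's actual process.
public_posts = [
    {"user": "student_a", "text": "ugh, PARCC again tomorrow"},
    {"user": "student_b", "text": "question 7 on today's PARCC was about photosynthesis"},
]

test_terms = ("parcc",)
disclosure_terms = ("question", "answer", "item")

flagged = [
    p for p in public_posts
    if any(t in p["text"].lower() for t in test_terms)
    and any(t in p["text"].lower() for t in disclosure_terms)
]
for post in flagged:
    print(f"possible item disclosure by {post['user']}: {post['text']}")
```

None of this requires access to anything a user has not already made public.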

This story strikes me as a red herring on two fronts. First, the anti-testing crowd is using inflammatory words like “spying” to gin up support for their side. Second, it’s getting conflated with real concerns about the security of student data. With multi-million-dollar companies like Target settling class-action lawsuits over giant data breaches, the ability of government entities that collect massive amounts of data to protect it is a serious issue that warrants serious debate.

Instead of demonizing Pearson or testing in general, it strikes me that there are two legitimate takeaways here. For one, students who were posting about test items shouldn’t have been, so someone should talk to them about that. Plus, it’s a great opportunity to talk to students about the complete and utter lack of privacy the Internet affords. There is evidence that kids are dangerously naive on this front.

Are Pearson’s actions here wrong? I don’t think so. Are they discomforting? Yes. The fact that it involves kids makes it seem worse; and the fact that Pearson’s practices got singled out makes it seem egregious. But they aren’t substantively different from the practices of countless other companies (and schools, colleges, etc.) that scan all of our Internet activities every day, regardless of how old we are.