Evaluating an ECIDS

ECDataWorks has collaborated with state partners to develop openly available resources that improve the delivery of ECIDS.

Do Your Analytics Deserve a Gold Star? Appropriately Defining ECIDS Success

By Phil Sirinides and Missy Coffey

Throughout this series we have discussed the concept of resilient analytics – the need to build early childhood integrated data systems (ECIDS) that can adapt to and ultimately meet the emerging needs and everyday challenges we see in the field.

This is a worthwhile – and we would argue crucial – goal, but it is not the ultimate goal. All the data tools, all the resilient analytics, all the time spent on organizational planning and shared ownership, all the stakeholder engagement and outreach, all of it leads to one end: usefulness.

Do your ECIDS analytics make it easier for policymakers, practitioners, and other stakeholders to effectively use data in their work? Will they lead to positive changes that meet the needs of your communities and address long-standing challenges for children and families?

At this point we all instinctively know that simply providing data won’t lead to change. But how, then, do we know whether the ECIDS – and the analytics reported from it – are actually informing policy and practice decisions in your state? (Hint: the answer is not Google Analytics.)

In 2019, a group of national ECIDS leads partnered with ECDataWorks staff to answer this exact question. The result was a set of indicators that state partners found relevant in defining the success of an analytic tool. They found that ECIDS and analytic tools are successful when:

Organizational and specific analytic factors are distinctly evaluated. Separate the success measures for the analytic tool from the organizational indicators. Both need to be assessed, because both affect the success of the analytics, but they are improved using different strategies and need to be measured in different ways.

There is a clear use case. A use case articulates the goal of an analytic tool, the intended audience, and the action that would be informed if the information were available. The tool can then be evaluated on how well it aligns with the requested information need. (A simple structured sketch of a use case follows this list.)

It has relevant information. Relevant information should be evidence-based: the analytics need to incorporate the most recent evidence on practice. When building an analytic tool on school readiness, for example, team members need to understand the research on readiness and build each factor that influences readiness into the tool.
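To make the idea of a use case concrete, here is a minimal sketch, in Python, of how a project team might write down a use case and its success indicators as structured data. The class names, field names, and the school-readiness example are our own illustrative assumptions, not part of the ECDataWorks rubric itself.

```python
from dataclasses import dataclass

# Illustrative sketch only: these class and field names are assumptions for this
# post, not the ECDataWorks rubric. The point is simply to show a use case and
# its indicators written down in a form a project team can revisit over time.

@dataclass
class UseCase:
    goal: str             # what the analytic tool is meant to help answer
    audience: str         # who is expected to act on the information
    informed_action: str  # the action that would be informed if the information were available

@dataclass
class SuccessIndicator:
    name: str
    kind: str     # "analytic" or "organizational" -- tracked and assessed separately
    measure: str  # how the team will check this indicator over time

# A hypothetical school-readiness example.
readiness_use_case = UseCase(
    goal="Identify communities with the largest gaps in kindergarten readiness",
    audience="State early childhood policymakers",
    informed_action="Target pre-K outreach and funding to high-need communities",
)

indicators = [
    SuccessIndicator(
        name="Alignment with the requested information need",
        kind="analytic",
        measure="Periodic review of the tool against the use case with its intended audience",
    ),
    SuccessIndicator(
        name="Shared ownership across partner agencies",
        kind="organizational",
        measure="Documented governance roles and a regular meeting cadence",
    ),
]
```

Writing the use case down in a structured form like this makes it easier to check, over time, how well the analytic tool still aligns with the stated information need, and to keep the analytic and organizational indicators distinct.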

Read and download our full Early Childhood Integrated Data Analytic Self-Assessment Rubric here.

Simply obtaining data analytics from your ECIDS is not sufficient to inform practice or policy. And, as many teams know, there is often not enough time for a full evaluation of a tool’s influence or impact. Defining success and tracking indicators that provide formative data for ECIDS project teams can be a useful strategy for maintaining buy-in among team members and continuing to move a project forward.

Hear more on this topic on our latest podcast, featuring interviews with Steven Matherly of the Utah Department of Health; Richard Gonzales of the US Department of Health and Human Services Administration for Children and Families; and Anita Larson and Jennifer Verbrugge of the Minnesota Department of Education.


Phil Sirinides and Missy Coffey are the principal investigators for ECDataWorks, a nationwide, nonprofit initiative dedicated to advancing early childhood policy and programs through the strategic use of integrated data.

This blog is part of the ECDataWorks Community initiative, produced with support from the Heising-Simons Foundation.

More Resources


ECDW Publications Repository

  • Search our Publications

  • Federal Investment in Early Childhood Data Systems

  • Early Childhood Integrated Data Analytic Self-Assessment Rubric

  • ECDataWorks Programmatic Use Case for the Development of an Analytic Tool