Exploring error identification to improve data and evidence on children in care

Full Application: Funded

Our discovery highlighted that keeping Looked After Children (LAC) data accurate is time-consuming and difficult. Consequently, leadership often don’t have the reliable insights they need for key decisions. (See p.22 for full user needs.) In alpha, we’ll explore different solutions to this problem and test our biggest assumptions and risks, following agile methodology and the GDS Service Standard.

The discovery partners (GMCA, Manchester, Stockport and Wigan) will lead the work, with the DfE. We’ll user test and assess technical feasibility with councils of different sizes and structures (mets/unitaries/counties/trusts) to ensure we solve a common problem – including partners West Berkshire, Milton Keynes, Isle of Wight (IOW), Buckinghamshire, Bracknell Forest, East Sussex and others e.g. Slough.

Our discovery concluded that an error-identification tool could meet user needs by enabling year-round error cleaning – currently not possible (p.38). We’ll build and test a prototype that checks data against the DfE’s LAC data-validation code and local validation rules. We’ll explore ways the tool can improve error-cleaning for analysts, such as automating cleaning and notifications to social workers, and develop reusable patterns for others.
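
To illustrate the error-identification approach, here is a minimal sketch in Python. The Episode fields and the two rules are hypothetical stand-ins – the DfE’s actual validation code and our local rules aren’t reproduced here – so this shows the shape of the prototype rather than its implementation:

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable, Optional

# Hypothetical episode record; the real DfE LAC return fields will differ.
@dataclass
class Episode:
    child_id: str
    date_started: date
    date_ended: Optional[date]
    placement_type: str

@dataclass
class Rule:
    code: str                          # e.g. a DfE validation rule code or a local rule ID
    message: str
    check: Callable[[Episode], bool]   # returns True if the record passes

# Illustrative rules only – the prototype would load DfE and local rules instead.
RULES = [
    Rule("LOCAL_01", "Episode end date is before its start date",
         lambda e: e.date_ended is None or e.date_ended >= e.date_started),
    Rule("LOCAL_02", "Placement type is missing",
         lambda e: bool(e.placement_type.strip())),
]

def find_errors(episodes: list[Episode], rules: list[Rule] = RULES) -> list[dict]:
    """Run every rule against every episode and return one row per failure."""
    errors = []
    for ep in episodes:
        for rule in rules:
            if not rule.check(ep):
                errors.append({"child_id": ep.child_id,
                               "rule": rule.code,
                               "message": rule.message})
    return errors

if __name__ == "__main__":
    sample = [Episode("C001", date(2019, 5, 1), date(2019, 4, 1), "U1"),
              Episode("C002", date(2019, 6, 1), None, "")]
    for err in find_errors(sample):
        print(err)
```

An error list in this shape could feed both the analyst view and the automated notifications to social workers explored below.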

To plan for alpha, we held a workshop to identify key hypotheses underlying these ideas and develop a testing approach:

  • Better data will lead to better decisions

In discovery, leadership said they need accurate data to improve decisions on LAC (p.21). In alpha, we’ll use A/B testing with cleaned and uncleaned data to quantify the impact of data quality on decisions.

  • Analysts will use the tool and clean errors identified

Analysts need to identify errors in LAC data year-round (p.41). In alpha, we’ll test how to automate cleaning, track usage and error counts over time to see whether quality improves, and conduct user research with analysts using the prototype to assess the tool’s impact.
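
As a sketch of how error counts could be tracked over time, assuming the prototype logs one row per error per validation run (the column names and data below are illustrative, not from the prototype):

```python
import pandas as pd

# Hypothetical log of validation runs: one row per error found, per run.
error_log = pd.DataFrame({
    "run_date": pd.to_datetime(["2019-09-02", "2019-09-02", "2019-09-09", "2019-09-16"]),
    "council": ["Stockport", "Stockport", "Stockport", "Stockport"],
    "rule": ["LOCAL_01", "LOCAL_02", "LOCAL_02", "LOCAL_02"],
})

# Weekly error counts per council: a falling trend would suggest quality is improving.
weekly_counts = (error_log
                 .groupby(["council", pd.Grouper(key="run_date", freq="W")])
                 .size()
                 .rename("errors"))
print(weekly_counts)
```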

  • Automatic notifications will help social workers fix errors

Analysts spend time chasing social workers and business-support to fix errors (p.51). In alpha, we’ll use Wizard of Oz prototype testing and observation with social workers to test if automatic notifications improve data quality at input stage.

  • The tool will be feasible and scalable across all councils

In alpha, we’ll use semi-structured interviews, data analysis and moderated usability testing across the ten partner councils and existing networks (e.g. Regional Information Groups (RIGs), the South-East Sector-Led Improvement Partnership (SESLIP) and the National Performance and Information Management Group (NPIMG)) to test scalability, and assess applicability to other statutory returns (134 in total, p.11).

Our discovery highlighted two key linked unmet user needs:

  1. Analysts need the ability to identify and fix errors year-round, to prevent errors from building up. Impact: they can only fix errors in an intensive, time-consuming three-month period, leaving little time for analysis.
  2. Leadership need accurate, up-to-date data so they can rely on evidence when making decisions. Impact: leadership find “data quality makes the analysis unreliable”, meaning “evidence on how well things are working is limited.”

There is a gap in the market to address this problem. These needs exist in all 14 councils we’ve spoken to directly, regardless of case management system, and there is no common error-checking tool. However, a common solution is feasible: every council submits the same dataset to the DfE, so error-checking is applicable to all.

There are also significant potential benefits from the other 134 annual statutory returns required of councils. In discovery, we investigated the Children in Need and School Censuses, which require time-intensive error-checking.

Our core user group is Children’s Services analysts, but social workers, leadership and LAC are also beneficiaries. Analysts clean and prepare data on LAC for leadership and the DfE. Our aim is to make cleaning more effective for them (p.19-20). This will also benefit:

  • Social workers, who also clean data (p.18).
  • Leadership, who use this data to make operational, strategic and commissioning decisions about LAC services (p.21).
  • LAC, who need the best support possible and an accurate record of their childhood (p.15).

Our hypotheses evolved in discovery. Before discovery, we knew leadership don’t have timely access to all the data and evidence needed to ensure LAC get the best support. Our original hypothesis was that a better common data model could provide leadership the evidence they need.

Our discovery confirmed the evidence gap (p.94, 103), but revealed a more complex situation with several distinct problems. Of these, improving data quality is the most pressing: it will drive immediate benefits and move us towards fixing the plumbing.

We considered whether the DfE opening their validation portal year-round would solve the problem. However, this only solves part of the problem. It does not make it faster or easier to clean the data. Our tool and reusable patterns aim to solve the full problem.

We’ll improve the analyst user journey through enabling identification of errors year-round and more effective and efficient error cleaning. This should free up analysts to build the evidence leadership need.

Methodology
Calculations, using Green Book guidance, are based on user research and analysis in Manchester/Stockport/Wigan. They’ve been validated with eleven other councils, regional groups (SESLIP, RIGs) and the DfE (which has national oversight), suggesting that Manchester/Stockport/Wigan are comparable in time, costs and processes to other councils (the largest cause of differences is council size, which we factor in).

We describe the key points below. For full details, see our Discovery Benefits Case.

Benefits
Our discovery highlighted three levels of benefits (p.27):

1. Short-term: Analysts and social workers save time cleaning data, freeing up time for analysis and working with families.

An average council spends ~45 days/year cleaning data for LAC statutory returns (higher in larger councils: 110 days in Manchester). Councils doing full year-round cleaning (e.g. Wigan) spend a further ~400 days/year across social workers, analysts and support teams (p.31).

Our discovery showed other statutory returns (134 in total) could benefit, in particular:

  • Children in Need Census (taking ~45 days/year, p.29)
  • School Census (taking ~300 days/year, p.29)

Automation of identification and cleaning could conservatively eliminate 50% of errors, because:

  • 53% are already eliminated by at least one council (p.29)
  • over 50% are of just three common types (p.28).

Putting this together, the average council could save 320 days/year, equivalent to £57,000/council/year (uncashable).

Applying conservative confidence factors to each input (40-95%, based on the GDS Benefits Handbook methodology for assessing data age, relevance, range, quality and consistency) gives a savings estimate of £22,500/council/year (p.40).
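
For transparency, the arithmetic implied by these figures is sketched below. The day rate is inferred from the numbers above rather than quoted, and the individual 40-95% confidence factors are collapsed into one combined factor for illustration; the full inputs are in the Discovery Benefits Case (p.40).

```python
# Illustrative reconstruction only; full inputs and weightings are in the
# Discovery Benefits Case (p.40). The day rate is implied, not quoted directly.
days_saved_per_year = 320                 # average days/council/year saved
unadjusted_saving = 57_000                # GBP/council/year before confidence factors
implied_day_rate = unadjusted_saving / days_saved_per_year
print(f"Implied day rate: ~£{implied_day_rate:.0f}/day")          # ~£178/day

adjusted_saving = 22_500                  # GBP/council/year after confidence factors
combined_confidence_factor = adjusted_saving / unadjusted_saving
print(f"Combined confidence factor: ~{combined_confidence_factor:.0%}")  # ~39%
```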

2. Medium-term: Better-quality data makes analysis and tools more effective, both locally (e.g. council analysis dashboards) and nationally (e.g. the Ofsted Children’s Services Analysis Tool, the DfE Local Authority Interactive Tool and the LSE Unit Cost Calculator).

Without good-quality data we can’t improve LAC services, which currently cost £4bn/year and overspend by £800m/year (2018/19). Research by the What Works Centre conservatively suggests that savings of £200m/year could be achieved by improving entry rates and unit costs to the median – but this needs evidence underpinned by quality data.

3. Long-term: Better LAC services mean better outcomes and therefore cashable savings. Currently outcomes are poor (4x more crime, 5x more exclusions, 40x more homelessness, p.5) and costly for government (~£1bn/year to the MoJ, DWP and HMRC alone, p.16).

Better-quality data also improves government education and social-care policy: significant inaccuracies in current DfE/ONS data (p.65) undermine evidence-led policy.

We can’t currently accurately quantify medium- and long-term benefits – we’ll test these in alpha.

Just considering short-term benefits, total savings, depending on scale, would be (a short worked sketch follows the list):

  • Downside (10 GM councils): £225,000/year
  • Base case (30 councils): £675,000/year
  • National: £3.4m/year
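
The scenario totals follow directly from the confidence-adjusted saving per council; the sketch below reproduces them. The ~151-council national figure is our inference from the £3.4m total, not a number stated above.

```python
# Sketch of the scenario totals: adjusted saving per council x number of councils.
saving_per_council = 22_500   # GBP/year, confidence-adjusted (p.40)

scenarios = {"Downside (10 GM councils)": 10,
             "Base case (30 councils)": 30,
             "National (~151 councils, inferred)": 151}

for name, councils in scenarios.items():
    print(f"{name}: £{saving_per_council * councils:,.0f}/year")
# Gives ~£225,000, ~£675,000 and ~£3.4m/year, matching the figures above.
```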


Costs (details p.35):

  • Discovery, alpha & beta development: £360,000/one-off
  • Live development: £100,000/year
  • Set-up costs (onboarding & service-change): £5,500/council
  • Ongoing costs (support & hosting): £2,000/year/council

Investment case (p.37):

Scenario          Investment    5-year ROI
Downside          £417,000      1.5x
Base              £517,000      5.1x
National scale    £1.2m         12.5x

In discovery, we tried some great new tools, ceremonies, and collaborative and iterative ways of working. These enabled agile working and helped ensure we were always focused on user need, continually learning, and able to pivot when necessary. We’ll build on these in alpha, as well as trying other great approaches we’ve seen from other Local Digital projects.

Tools

We’ll continue to use:

  • Huddle as a shared project collaboration space, making our materials open
  • A public Kanban Trello board so it’s easy for everyone to share the plan
  • Email rather than Slack, as we know from discovery that not all IT departments allow Slack

We’ll use more of:

  • YouTube to livestream our show-and-tells, as it’s easy and well-known
  • Github for our prototypes and guidance
  • Pipeline for project updates and videos.

Ceremonies and approaches
We’ll continue to use an agile approach, working in sprints with daily standups and regular show-and-tells and retrospectives, following Kanban project management.

In discovery, 1-2-4-All and Walking Brainstorms worked well for idea generation, while lean canvases and WWWWWH helped us build a more thorough understanding of potential solutions – we’ll continue these.

In alpha, we’ll try new ceremonies and liberating structures (e.g. from www.liberatingstructures.com, www.sessionslab.com and www.funretrospectives.com), such as the Six Thinking Hats method when designing prototypes, to help us consider the user need and solution from different perspectives.

Team collaboration
We’ll continue to meet remotely for sprint planning at the start of each sprint to set objectives, and to hold in-person show-and-tells at the end of each phase to share findings, with futurespectives to collaboratively plan the next phase. We’ll invite wider networks to our show-and-tells and livestream them to share learnings more widely. This will be particularly valuable for collaborating with our partners across the country.

We’ll continue using retrospectives based on the FLAP, KALM and 3 Ls models, which we found helpful in discovery to identify what worked well and what to change. In alpha, we’ll use a Team Purpose and Culture workshop template at our kick-off meeting to establish how we’ll work together and ensure everyone is aligned on goals; this worked well for Stockport’s Local Digital project with Leeds.

Team structure and governance
Small, close teams with one representative per council worked well as a team structure in discovery – we’ll continue this. During alpha, the project will be a standing item in SLT meetings to ensure oversight and buy-in, and to help build the business case for sustainability.

Support from the LDCU in discovery was very helpful. In particular, training and networking events, help with comms and feedback enabled us to more effectively develop necessary skills, share our findings more widely and get valuable insight and challenge from an ‘outside’ and expert perspective.

Training: In discovery, we attended the LDCU’s 3-day GDS Academy Agile for Teams training. This gave us a much better understanding of agile approaches, enabling us to use Kanban project management and to better ensure our user research was effectively capturing user needs. We’d welcome the opportunity to put further team members through this training, the User Research – Working Level training, and the Introduction to User-Centred Design training.

Community events: We really valued the Local Digital Fund events we attended. The kick-off event in London and the Roadshow in Bradford were great opportunities to meet and learn from other councils signed up to the Declaration. The kick-off prompted us to think through elements of our project planning, and the roadshow showed us approaches other councils had taken in their projects. We’d be keen to attend further community events.

Sharing learnings: In discovery, the LDCU retweeting and commenting on our blogs was very helpful, enabling us to share our findings more widely and get feedback from wider networks. One blog post led to us being contacted by the Children’s Society, whom we met with to discuss our respective pieces of work in Children’s Services, including better use of data and digital. We’d encourage more of this in alpha.

Feedback: Feedback from the LDCU helped ensure we were following agile methodology, working in the open and producing effective outputs. Specifically, feedback from Sam and Rushi on our user research report and benefits case meant we included demographic information in our user personas, explained our user research approach in more detail, more clearly labelled where conservative assumptions had been used in the benefits case, and included what estimates would look like with less conservative assumptions. This helped make our outputs as valuable as possible for other councils. We’ve since had very positive feedback. We’d be keen for feedback throughout alpha.

Project team membership: In alpha, we’d be keen to bring our Local Collaboration Manager more closely into the project team to help steer our approach, maximise learnings between projects and leverage the LDCU’s expertise in public sector digital transformation.