Designing a data monitoring MVP for a program integrity and audit team

Interaction Design, UX Research

2025

Product Designer

I was the primary designer for the Wave Watch MVP, a proactive data monitoring tool

I designed the MVP of a self-service data monitoring and notification experience for a federal oversight organization responsible for a portfolio of more than $1 trillion in taxpayer dollars. I worked with a team of 5 data engineers and full-stack developers to help users gain visibility into potential leads for investigation without needing an expert-level understanding of the available datasets or of data analysis.

Note: Certain aspects of this case study are restricted due to security considerations. The work shown here reflects real constraints, stakeholder collaboration, and design tradeoffs without exposing sensitive information. Some details, including application names and labels, have been anonymized or modified.

Problem

To ensure taxpayer dollars are only used for authorized purposes, the team at this federal organization needed a way to more quickly spot and address spending anomalies through either preventative or corrective actions.

Solution

Wave Watch aggregates datasets and surfaces instances where key data crosses a predetermined threshold, indicating a potential anomaly.

Focus Areas

Government, Data Monitoring or Analysis

Finding a legitimate lead for an auditing project can be exceptionally difficult and time consuming, translating to higher labor costs

This is often due to massive volumes of transactions and siloed data storage, which require continuous, manual, and exhausting data processing and reconciliation. A majority of users at this agency relied on more seasoned data analysts for help synthesizing otherwise straightforward data. Relying on these experienced analysts reduced their bandwidth for more complex data analysis.

Wave Watch increased users' bandwidth by synthesizing and analyzing data for them, effectively reducing labor costs through automation

More seasoned data analysts didn't need to synthesize data for other users. Instead, they could focus on more complex and nuanced data analysis projects.

Auditors and senior leadership didn't need technical experience to pull and synthesize these disparate sources of data. Instead, the system did it for them.

Challenges I encountered

I surfaced risks to adoption, specifically, plans to structure reports in a way that conflicted with users' mental models

The team's initial assumption when I joined was that users research the contractor that received payments on behalf of ineligible individuals. This was a crucial assumption that would shape our development and design time.

After confirming the team had understandably made an assumption, I did some desk research to see whether it matched our users' mental models. Specifically, I reviewed the agency's public audits on this topic, which indicated how each audit topic had been selected.

In most cases, the auditors did not research the individual contractor as our product team assumed. Rather, they researched individual states and the money those states paid to any contractor within them.

In this version, if a single contractor receives over $1,000,000 for any individual recipient in a month, then a watch report is generated.

Based on user research and the audit reports, this doesn't align with users' mental models and could put adoption of our MVP at risk.

In this version, if a single state receives over $1,000,000 for any individual ineligible recipient in a month, then a watch report is generated.

This aligns with how users currently select audit topics and does not require them to manually filter and calculate total transactions paid across multiple contractors. The data is already aggregated to the level they care about when selecting an audit topic.
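To make the difference between the two rules concrete, here is a minimal, illustrative sketch in Python (pandas) of how the aggregation level changes which transactions get flagged. The dataset, column names, and dollar figures are entirely hypothetical stand-ins; the actual system, its data model, and its threshold logic are not shown here for the security reasons noted above.

```python
import pandas as pd

# Hypothetical, simplified transactions; real datasets, column names, and
# eligibility logic are anonymized per the note at the top of this case study.
transactions = pd.DataFrame({
    "state": ["CA", "CA", "TX"],
    "contractor": ["Acme Corp", "Beta LLC", "Acme Corp"],
    "recipient_id": ["R1", "R1", "R2"],
    "recipient_eligible": [False, False, False],
    "month": ["2025-01", "2025-01", "2025-01"],
    "amount": [600_000, 700_000, 400_000],
})

THRESHOLD = 1_000_000
ineligible = transactions[~transactions["recipient_eligible"]]

# Original rule: aggregate per contractor. Payments to the same recipient that
# are split across contractors can stay under the threshold everywhere.
by_contractor = (
    ineligible.groupby(["contractor", "recipient_id", "month"])["amount"]
    .sum()
    .reset_index()
)
contractor_reports = by_contractor[by_contractor["amount"] > THRESHOLD]

# Alternative rule: aggregate per state, matching how auditors select topics.
# The same $1.3M paid for recipient R1 in CA now crosses the threshold.
by_state = (
    ineligible.groupby(["state", "recipient_id", "month"])["amount"]
    .sum()
    .reset_index()
)
state_reports = by_state[by_state["amount"] > THRESHOLD]

print(contractor_reports)  # empty: no single contractor exceeds $1,000,000
print(state_reports)       # flags CA / R1 / 2025-01 at $1,300,000
```

In this toy example, the same payments that stay under the threshold at every individual contractor cross it once they are rolled up to the state level, which is the level auditors already use when selecting an audit topic.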


During the design process, I advocated for reducing adoption risk through lean research, mockups, leadership conversations, and documentation

During low-fidelity design work, I documented this misalignment with users' mental models in team artifacts and presented the issue to team members, including our product management and data engineering teams.

Instead of only communicating the problem, I showed an example of what an improved design could look like. During the usability studies I planned and facilitated before backend development, I used two versions of the report designs to identify our core users' preference: reports built around the contractor versus reports built around the state.

Most preferred the version I initiated: the design built around the individual state rather than the state's contractor.

Impact

While this alternative version of the design was not released, it's ready for implementation should conditions change

Raising this concern was important so that our team could be more intentional in the future about how we structure individual reports. The team became more aware of the levels at which users need information calculated. Just because we could aggregate information at a certain level, such as the contractor level, did not mean it necessarily added value; in fact, it could add undue burden.

Have any questions?


© 2025 by Sarah Coloma
"If it's not working out, it's not the end." - Bob the Drag Queen
