WRCOG is a classic “small but mighty” government team. They support a regional footprint of ~2 million people with 36 full-time staff, and Chris personally wears multiple hats.
Their reality is what most agencies feel right now: rising regulatory complexity, limited funding, and constant pressure to do more without adding headcount.
Chris framed AI as a capacity multiplier that removes “drudgery,” shortens time-to-answer, and frees staff to focus on the work that moves programs forward. The goal isn’t headcount reduction; it’s avoiding unnecessary hires and protecting bandwidth for higher-value work.

Christopher Gray, Deputy Executive Director
Chris explained that most staff reports contain repeatable content, and WRCOG has clear rules for each section, so the output stays consistent, tight, and predictable.
They intentionally reduced report length after learning how elected officials read: Topic → Summary → Discussion.
Only the first few sentences earn attention. The result is a more standardized product that reduces review churn and helps staff spend time where it matters: the discussion and decision logic.
Staff Report Pro Tip: If the first four or five sentences don’t really capture an elected official, they just don’t read the rest of it.
Authority questions come with real pressure, and the cost of getting them wrong is real.
Chris described a moment when the Executive Director was out and the team needed clarity on signing authority. Staff couldn’t recall the exact policy, the resolution, or when it passed. Instead of starting a file scavenger hunt, they used Madison AI to find the exact resolution and answer, then updated their master policy document.
The key point: AI accelerated discovery, but the document itself is what made the decision defensible.
The real win here is “less file-digging, more certainty.” Chris described how AI helps them answer governance and policy questions using authoritative records, then points directly to the language that supports the conclusion. Cited authoritative sources create confidence in decisions.
Chris also emphasized the long-term value of institutional knowledge and memory that lives beyond any one tenured staffer. AI becomes the backstop when those people aren’t available, or eventually leave.
AI doesn’t own the outcome, people do. The safest posture is to treat AI as a finder and a first draft generator, then validate against source documents before anything becomes public-facing or decision-grade.
Kristine reinforced this with a simple, enforceable message: users are responsible for what they publish, regardless of how it was produced. That clarity is what makes adoption scalable without creating risk debt.