Case Study: Turning Up Predictive | Be Better Analytics
Case Study: Alteryx Inspire Series

Turning Up
Predictive.

“I have seen the future and it is very much like the present, only longer.” — Kehlog Albran

The Lifecycle of Model Building

In this use case, we demonstrate the journey from raw organizational data to a production-ready predictive engine. For OnePlus Systems, the business challenge was clear: increase renewal rates without increasing organizational costs. To do this, we had to move beyond intuition and build a model that could target “at-risk” members with surgical precision.

I. Business Understanding

The most important part of the process is the question. You cannot solve a problem if you don’t know the question. Success in this phase is collaborative, requiring a deep dive with experts across business units to identify what we know and what we don’t. During this phase, we categorized stakeholders into three archetypes:

The Advocates: These are your best friends. They understand data and provide the senior-leadership air cover necessary for project survival.
The Enthusiasts: They like the concept of data but haven’t been exposed to the discipline. They often try to do too much at once (e.g., “Let’s use 50 variables”).
The Resistors: Threatened by new ideas, they rely on “the way we’ve always done it.” They can be marginalized only through proven, iterative results.

II. Demystifying the Data

We unified disparate sources—CRM data, demographics, purchase history, and three years of marketing automation clicks and opens. Before modeling, the data required significant “prepping” to ensure accuracy.

(Workflow diagram: Input, Field Summary, Summarize, Forest model, Scoring)

Technical Strategy: R-based models do not tolerate blanks or nulls. We utilized the Field Summary Tool to identify data gaps and the Summarize Tool—the “salt” of Alteryx—to aggregate transaction-level data into a single member-view record.
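The Field Summary and Summarize steps above can be sketched in pandas. This is a hedged analogue, not the actual Alteryx workflow: the column names (`member_id`, `amount`, `clicks`) and the zero-fill imputation are illustrative assumptions, not details from the OnePlus Systems dataset.

```python
import pandas as pd

# Hypothetical transaction-level data; columns are illustrative only.
transactions = pd.DataFrame({
    "member_id": [1, 1, 2, 2, 3],
    "amount":    [50.0, None, 20.0, 30.0, None],
    "clicks":    [3, 1, None, 2, 0],
})

# Field Summary analogue: profile each column for data gaps.
print(transactions.isna().sum())

# R-based models reject blanks/nulls, so impute before modeling
# (zero-fill is an assumption here; the right imputation is case-specific).
transactions = transactions.fillna({"amount": 0.0, "clicks": 0})

# Summarize analogue: aggregate transactions into one record per member.
member_view = transactions.groupby("member_id").agg(
    total_spend=("amount", "sum"),
    total_clicks=("clicks", "sum"),
    n_transactions=("member_id", "size"),
).reset_index()
```

The end result is the same shape Alteryx produces: a single member-view row per member, with no nulls, ready to feed a predictive tool.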

III. Evaluation & The “Smell Test”

We tested our predictions against a 40% validation sample, comparing the model’s predictions with the “truth” of historical data. While multiple models were tested, the Forest Model emerged as the clear winner for its ability to correctly predict “False” (Non-Members) at a 96% rate.

Predictive Algorithm     Overall Accuracy
Forest Model             95.75%
Decision Tree            95.25%
Logistic Step Model      87.14%
Boosted Model            11.64%
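The model bake-off above can be sketched with scikit-learn stand-ins for the Alteryx R tools. The synthetic dataset and the specific estimator classes are assumptions for illustration; only the 40% hold-out mirrors the case study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the member dataset (illustrative only).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold out a 40% validation sample, as in the case study.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.40, random_state=0
)

# Rough analogues of the four Alteryx predictive tools compared above.
models = {
    "Forest Model": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Logistic Model": LogisticRegression(max_iter=1000),
    "Boosted Model": GradientBoostingClassifier(random_state=0),
}

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = model.score(X_val, y_val)  # overall accuracy on hold-out
```

The point is the process, not the numbers: every candidate is trained on the same 60% and judged on the same untouched 40%.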

Beyond statistical accuracy, we applied the “Smell Test.” Does the result make sense intuitively? If it doesn’t, we go back to the model. Partners on the business side often have a deep feel for what is “right,” and aligning the model with their intuition is critical for long-term buy-in.

Variable Importance Ranking
Purchase History      96%
Demographics          82%
Volunteer Activity    45%
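A variable importance ranking like the one above falls out of a trained forest directly. A minimal sketch, assuming scikit-learn as a stand-in and using the three variable names from the chart purely as labels on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Labels borrowed from the chart above; the data itself is synthetic.
feature_names = ["purchase_history", "demographics", "volunteer_activity"]

X, y = make_classification(
    n_samples=500, n_features=3, n_informative=2, n_redundant=1,
    random_state=0,
)

forest = RandomForestClassifier(random_state=0).fit(X, y)

# Rank variables by their contribution to the forest's splits.
ranking = sorted(
    zip(feature_names, forest.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
```

Rankings like this are also the natural artifact to bring to the “Smell Test”: business partners can sanity-check whether the top variables match their intuition.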

IV. Deployment

Remember: building a model does nothing unless it is actually deployed. We moved the model into production to score data in real time, allowing the organization to trigger automated emails for likely renewals and personal calls for at-risk members.
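The routing logic at the end of the scoring pipeline is simple to express. A minimal sketch in Python; the function name and the 0.5 threshold are illustrative assumptions, not details from the deployed system:

```python
def route_member(renewal_probability: float, threshold: float = 0.5) -> str:
    """Route a scored member record: automated email for likely renewals,
    a personal call for at-risk members. Threshold is illustrative."""
    if renewal_probability >= threshold:
        return "automated_email"
    return "personal_call"

# In production, each newly scored record flows through a routing step like this.
actions = [route_member(p) for p in (0.92, 0.31, 0.77)]
```

Keeping the threshold as a parameter lets the business tune how aggressively to spend expensive personal calls versus cheap automated emails.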

Key Takeaways

You don’t have to be a classically trained data scientist to build high-impact models. Anyone can become a “Citizen Data Scientist” with the right tools and a structured process. Don’t be intimidated by the terminology—focus on the insight, the learning, and answering the business question.