Machine Learning in Practice: Different than a Kaggle Competition
Michael Bostwick spent the summer as a Data Science Intern at Spreedly and returns this fall for the second year of his Master’s program in Statistics and Operations Research at the University of North Carolina at Chapel Hill.
As I wrap up my summer internship at Spreedly, I wanted to capture a handful of lessons learned while doing data science at a FinTech startup. Many collegiate-level data science curricula focus on the more academic aspects of machine learning: specific algorithms, the math behind them, and so on. What follows is a prescriptive approach to bridging the divide between learning about machine learning and actually applying it to solve business problems.
To set the stage for the project I worked on this summer: Spreedly is a financial technology startup that provides a PCI Compliant credit card vault that allows companies to simultaneously work with one or many payment gateways. Like any other company operating in the B2B space, Spreedly would like to prioritize potential new customer follow-up in order to increase the trial to subscriber conversion rate. My goal was to use machine learning to build a lead scoring model that could surface the most likely subscribers out of the many companies that create a trial account.
As is often said of data science, considerable time was spent gathering and preparing data before any modeling could begin. Once the data was prepared, building a working machine learning model was relatively quick; by working I mean producing predictions better than random. What is often overlooked, however, is where to go from there. I’d like to offer four lessons I’ve learned, from a combination of trial and error, online resources (like here from Andrew Ng), and the wise counsel of Spreedly’s full-time Data Scientist Shoresh Shafei, for taking a base machine learning model and refining it to achieve a specific business outcome.
The Four Characteristics of Production SAFE Scripts
Sometimes, despite the best of practices and intentions, you find yourself having to run some sideband, perhaps even one-off, script in production. There are lots of reasons this could happen, including:
- Data normalization or cleanup
- Hairy database migration
- Edge case recovery so infrequent it isn’t yet part of the app
The point is, you’re running off the rails a bit. The nature of this task means you probably don’t have much test coverage. How do you know the thing you’re about to unleash on production is safe to run? With more formal code deliverables you have automated tests, well-vetted deployment pipelines, and other concrete protections that aren’t available to your scripts. So how do you gain confidence in the execution of your scripts?
Think SAFE: Status, Automation, Failure, Environment
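To make the four SAFE traits concrete, here is a minimal sketch of what a one-off production script honoring them might look like. This is an illustration only, not Spreedly's actual tooling: the script, its `normalize` fix-up, and the environment names are all hypothetical, and it's written in Python purely for familiarity.

```python
import sys

def normalize(record):
    """Hypothetical per-record fix-up: strip stray whitespace from a name field."""
    return {**record, "name": record["name"].strip()}

def run(records, env, apply_changes=False):
    # Environment: refuse to start unless the caller names the target explicitly.
    if env not in ("staging", "production"):
        raise SystemExit(f"unknown environment: {env!r}")

    changed, failed = 0, 0
    for record in records:
        try:
            fixed = normalize(record)
            if fixed != record:
                if apply_changes:  # Automation: non-interactive, dry-run by default
                    record.update(fixed)
                changed += 1
        except Exception as exc:  # Failure: note it and keep going, don't die mid-run
            failed += 1
            print(f"record {record.get('id')}: {exc}", file=sys.stderr)

    # Status: always report what happened (or, in a dry run, what would have).
    mode = "applied" if apply_changes else "dry-run"
    print(f"[{mode}] {env}: {changed} changed, {failed} failed of {len(records)}")
    return changed, failed
```

Running it once without `apply_changes` gives you a cheap rehearsal: you see the status line and failure count before any data is touched, then re-run with `apply_changes=True` once the numbers look right.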
Consensus is Overrated, Build Clarity Instead
In stark contrast to more authoritarian models of decision making, “building consensus” sounds downright agreeable. What kind of animal wouldn’t want to strive for consensus when making important decisions!? Well, we want to present a framework that isn’t based so much on consensus as on clarity. It’s a subtle but important distinction that can directly affect your culture.
It might seem like folly to try to create a framework for something as soft, squishy, and tactical as making a decision. However, it’s something we find ourselves using quite frequently when supporting our Engineering teams, and we think it’s important to have a consistent intellectual framework around how decisions are made. What better way to enforce consistency than to write it down? Here goes…
Fare Thee Well Conditional Logic - Hello Pattern Matching!
I’ve heard this conversation many times over the last few years.
Question: What are you liking about Elixir?
Answer: Oh, I like …, and …, and I LOVE pattern matching!
Pattern matching is almost always one of the features mentioned in the response, and the reasoning is typically around how much conditional logic pattern matching “eliminates”. It wasn’t until I started using Elixir myself that I understood what that actually meant. I’d say pattern matching allows us to express the conditional aspects of our system in a simpler, more explicit way.
From Riak to Kafka: Part I
In Apache Kafka at Spreedly: Part I – Building a Common Data Substrate, Ryan introduced the place Kafka will take in our infrastructure. In this series I’ll describe the implementation details of how we’re reliably and efficiently producing data from Riak to Kafka.
Apache Kafka at Spreedly: Part I – Building a Common Data Substrate
At Spreedly we’re currently undergoing a rather significant change in how we design our systems and the tooling we use to facilitate that change. One of the main enablers for this effort has been Apache Kafka, a distributed log particularly useful in building stream-based applications. For the past several months we’ve been weaving Kafka into our systems and wanted to take the opportunity to capture some of our thinking around this decision, how we decided to approach the project, and what we learned (and are still in the process of learning).
This is less an Apache Kafka tutorial – there are plenty of those out there already – and more a discussion of why Spreedly chose Kafka vs. other messaging systems like ActiveMQ or RabbitMQ. What specific use-cases did we have and why does Kafka make sense for us? Of course, a conversation like this will necessarily include a recap of various Kafka architectural details, but the intent is not to get weighed down in low-level details. From this, we hope you can apply what we’ve learned to make the most pragmatic choice for your organization.
Performing a Content Audit — A deep dive into Spreedly.com
I came on board in April of 2016. After the onboarding process and an adjustment to the new role, I began preliminary design research. Payments was a new industry to me, and so was Spreedly, and managing the marketing website was now one of my responsibilities. It was time to learn a few things. After speaking with colleagues the path was clear: perform a content audit. And so I did.
If you’ve never heard of a content audit before, or have but don’t know what one is, no biggie; it’s pretty simple. A content audit is literally an audit of your content. What that means can vary, but in most cases it covers how many pages a site has, what keywords it ranks for, usage of h1/h2 tags, how many images there are, and so on. Why you would perform a content audit depends on your role and goals. Marketers may look at keyword performance, content creators may need to develop a content strategy, or designers may be improving the user’s experience.
Since a content audit can be performed by different people for a variety of reasons, I decided to look at the things most beneficial to my role as a designer at Spreedly. I needed to learn the nuts and bolts of the system I was working with and get an understanding of what I would, and could, do moving forward. I didn’t start with any assumptions about what I would find; I kept an open mind since this was to be an exploratory process.
Mocks and Explicit Contracts: In Practice w/ Elixir
Writing tests for your code is easy. Writing good tests is much harder. Now throw in requests to external APIs that can return (or not return at all!) a myriad of different responses, and we’ve just added a whole new layer of possible cases to our tests. When it comes to the web, it’s easy to overlook the complexity of working with an external API. It’s become so second nature that writing a line of code to initiate an HTTP request can feel as casual as any other line of code within your application. However, an HTTP request is not just another line of code.
We recently released the first version of our self-service debugging tool; you can see it live at https://debug.spreedly.com. The goal for the support application was to more clearly display customer transaction data for debugging failed transactions. We decided to build a separate web application layered on top of the Spreedly API to handle the authentication mechanics as well as transaction querying, keeping those concerns separate from our core transactional API. I should also mention that the support application is our first public-facing Elixir application in production!
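The pattern the title alludes to, depending on an explicit contract rather than on a concrete HTTP client, can be sketched outside Elixir too. Below is a small, hypothetical Python analogue (the class and method names are invented for illustration, not Spreedly's actual API surface): the application code depends only on an abstract contract, so tests inject an in-memory implementation instead of stubbing HTTP calls.

```python
from abc import ABC, abstractmethod

class TransactionAPI(ABC):
    """Explicit contract: everything the app needs from the remote API."""
    @abstractmethod
    def get_transaction(self, token: str) -> dict: ...

class HTTPTransactionAPI(TransactionAPI):
    """Real implementation would issue an HTTP request (omitted here)."""
    def get_transaction(self, token: str) -> dict:
        raise NotImplementedError("network call goes here")

class InMemoryTransactionAPI(TransactionAPI):
    """Test double honoring the same contract, with no network involved."""
    def __init__(self, transactions: dict):
        self.transactions = transactions

    def get_transaction(self, token: str) -> dict:
        return self.transactions.get(token, {"state": "not_found"})

def describe_transaction(api: TransactionAPI, token: str) -> str:
    # Application code depends only on the contract, never on HTTP details,
    # so every weird response shape can be exercised deterministically.
    txn = api.get_transaction(token)
    return f"{token}: {txn['state']}"
```

Because the mock implements the same interface the real client does, a test that passes against `InMemoryTransactionAPI` is exercising the exact call surface production code will use, rather than a hand-rolled stub that can silently drift.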
How do I GenStage?
I’m new to Elixir. And Erlang. And OTP’s GenServer. And GenStage. While I’ve still got a beginner’s eye, I’m going to share what I’ve learned with a general audience. So this is a doorknob-simple look at GenStage, with clear examples and I Love Lucy references. Enjoy!
Purposeful Web Design — Designing the Spreedly Support App
As the Lead Designer at Spreedly I am fortunate (and excited) to work on many different types of projects. I was recently involved in designing our new Debug App. It’s a place where users can find answers to common support issues and do some basic debugging on their financial transactions. In this article I share insights into my process and some of the steps that were involved along the way.
I am a big fan of the design methodology behind Jeff Gothelf’s Lean UX. If you haven’t read it, check it out: it’s a short read, easy to understand, and it shaped much of my thinking behind the process that went into designing Spreedly’s self-service debugging app.