GPT-3’s generative AI helped unlock additional capacity for me as the Data Science Content Lead here at Snorkel AI. The Python API for OpenAI’s foundation model let me automate the first draft of summaries and sample tweets for articles recently published on our blog.

This solved a real problem for me; I needed to educate my colleagues on the content we publish, and I didn’t have the bandwidth to do it properly. GPT helped me close that gap, but it’s important to emphasize that I used this tool only for rough drafts; the output from GPT-3—while useful—is not ready for prime time.

In addition to expanding my capacity, building this tool served as a useful lesson in the capabilities and limitations of GPT-3, as you’ll see below.

The problem: internal content distribution

When I joined Snorkel in November, the team asked me to increase the velocity of our content output while maintaining Snorkel’s high standards. Within months we were consistently posting three new pieces of content per week—primarily on our blog.

This introduced a new challenge: how could I most effectively transmit this content to our go-to-market team and prime them to use it?

Our sales personnel like to include our published content in emails and discussions with potential customers. Some also like to post our content on LinkedIn or Twitter. If they don’t have a strong sense of what’s at their disposal, it’s harder for them to do that.

Solution: a GPT-3-powered summarizer

I aimed to solve this problem by building a bot. At a foundational level, I wanted the bot to publish summaries of blog posts to a Slack channel available to everyone within the company. If possible, I also wanted this pipeline to propose possible tweets.

Having run a few experiments in the OpenAI playground, I thought that the GPT-3 large language model could provide a key piece of the solution I was looking for.


Implementation: GPT-3 in a Python app

I created an app that split its code across four distinct tasks:

  1. Finding key information about new posts on the blog.
  2. Getting generative content from GPT-3.
  3. Posting the content to Slack.
  4. Running the pipeline end-to-end.
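The article doesn’t include the app’s source, so here is a minimal sketch of how those four pieces might fit together, with the GPT-3 and Slack stages stubbed out and all names purely illustrative:

```python
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    url: str
    text: str


def find_new_posts(feed, announced):
    """Step 1: keep only posts that haven't been announced yet."""
    return [p for p in feed if p.url not in announced]


def generate_content(post):
    """Step 2: stand-in for the GPT-3 calls (summary and tweet drafts)."""
    return {"summary": f"Summary of {post.title}", "tweets": []}


def post_to_slack(post, content):
    """Step 3: stand-in for the Slack Web API call; returns the message
    text, using Slack's <url|label> link syntax for the title."""
    return f"New post: <{post.url}|{post.title}>\n{content['summary']}"


def run_pipeline(feed, announced):
    """Step 4: run the stages end to end, one Slack message per new post."""
    return [post_to_slack(p, generate_content(p))
            for p in find_new_posts(feed, announced)]
```

In a real deployment, `find_new_posts` would read the blog’s RSS feed or sitemap, and the set of announced URLs would be persisted between runs.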

While I encountered and solved some coding challenges elsewhere, this section will focus on how I used the openai Python library.

The OpenAI library is incredibly easy to use

Upon finding the openai Python library, I expected to build a small suite of helper functions that prepared inputs for several openai methods and objects. I had previously followed a similar development pattern working with other libraries as a data scientist.

Ultimately, I built a single helper function that used a single openai method.

The library handles most of the instructions for GPT-3 in plain text. As a result, developers can perform most interactions with the foundation model API through the openai.Completion.create method. At the time I wrote the code, this method accepted 16 different parameters, but only one of them was required: the prompt.
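As a sketch of what that single helper might look like (the parameter names come from the legacy Completions API; the model name and default values here are assumptions, and the live call is left commented out because it needs an API key):

```python
def build_completion_request(prompt, model="text-davinci-003",
                             max_tokens=256, temperature=0.7):
    """Assemble keyword arguments for openai.Completion.create.

    The prompt carries nearly all of the instructions; everything
    else is an optional tuning knob.
    """
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


# The live call would look like this:
# import openai
# openai.api_key = os.environ["OPENAI_API_KEY"]
# response = openai.Completion.create(**build_completion_request(my_prompt))
# text = response["choices"][0]["text"]
```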

The prompt templates

The real “coding” for the GPT-3 interaction happened within the prompt templates. I created two:

  1. One asked the model to return a summary of a block of text.
  2. One asked GPT-3 to generate tweets for the same block of text.

As anyone who has played in the OpenAI playground will understand, this part took some tinkering.

For the summary, I initially asked GPT to summarize the targeted post in a set number of words. I tried 25, 50, and 100. In every case, GPT ignored my word limit. I changed my approach to ask for N bullet points. The model observed my limit in this case, yielding a much tighter output.
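The article doesn’t quote the exact template, but the bullet-point version of the summary prompt might look something like this (the wording is illustrative):

```python
SUMMARY_TEMPLATE = (
    "Summarize the following blog post in {n} bullet points:\n"
    "\n"
    "{post_text}\n"
    "\n"
    "Summary:"
)


def build_summary_prompt(post_text, n=5):
    """Fill in the summary template. Asking for N bullet points,
    rather than a word count, is what kept the output tight."""
    return SUMMARY_TEMPLATE.format(n=n, post_text=post_text)
```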

Twisting tweets

For the tweet prompt template, I initially asked for four tweets with different tones, such as urgent, clever, and funny. This portion of the pipeline was bumpier in two core ways: the tone of the tweets converged, and their presentation varied.

The output format shifted randomly. Sometimes it would return a list of tweets with the specified tone followed by the text, such as “Funny: With Snowflake and Snorkel AI, you can label your data faster than you can say ‘DataOps’! #DataOps #DataScience.” Other times, it returned the tweets as a numbered or bulleted list with no indication of the intended tone.

I accepted the presentation discrepancy as a quirk of working with GPT. But I encountered a bigger problem; the quality and diversity of the tweets plummeted after a model update. Where we previously had four tweets that legitimately sounded like different voices, we now had four tweets that all sounded “salesy.” Worse, they all sounded like minor variations of each other.

I countered this by specifically instructing GPT to focus each tweet on a different aspect of the text. That approach yielded some success, but not as much as I would have liked.
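Again, the exact wording isn’t shown in the article; a template along these lines captures the “different aspect per tweet” instruction (names and phrasing are illustrative):

```python
TWEET_TEMPLATE = (
    "Write {n} tweets promoting the following blog post. "
    "Focus each tweet on a different aspect of the text, and give "
    "each tweet a different tone (for example: urgent, clever, funny).\n"
    "\n"
    "{post_text}\n"
    "\n"
    "Tweets:"
)


def build_tweet_prompt(post_text, n=4):
    """Fill in the tweet template with the post body."""
    return TWEET_TEMPLATE.format(n=n, post_text=post_text)
```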

Outcome: An 80% solution thanks to GPT-3

The completed code resulted in Slack posts that followed this format:

  • An announcement of the new post.
  • The post’s title as a clickable link that leads to the post.
  • A bulleted summary of the post.
  • A list of potential tweets.

Each of these posts appeared in a Slack channel visible only to me; GPT’s output is not reliable enough to immediately broadcast to the entire company.

GPT’s summaries sometimes emphasize aspects of the piece that should not draw focus. In one case, a bullet point repeated a factual statement about the number of models available on HuggingFace, which had no relevance to the main themes of the post.

Putting the Slack summaries in a private channel gives me a chance to revise before sending them company-wide. I follow a similar approach with tweets. Four tweets become three, two, one, or sometimes none at all.


That might make you wonder why I would want this tool at all. The answer is that it expands my capacity. If I had to write these posts from scratch, I simply wouldn’t. By the nature of working at a dynamic startup, I have a never-ending TODO list, and writing this kind of internal heads-up notice would never make it to the top of that list.

But when this pipeline gives me something that’s 80% done, I’m willing to spend another 3-5 minutes to polish it before sending it off.

GPT in marketing: it saves time

I set out to build a utility to help me craft internal notices of new content, and GPT-3 made that easier than I would have expected. OpenAI’s Python library is a breeze to interact with, and the utility came together quickly. In all, I probably spent about 4 hours building, testing, and deploying. I spent the largest piece of that time tinkering with prompt templates.

But GPT-3 is mercurial. It can be quite powerful and quite useful, but it can also be quite wrong. It’s probably best to treat GPT as a very junior member of your team. Yes, your interns can produce very good, very useful work. But they can also go completely off the rails. That’s why you ensure their work goes to you, and only you, before anyone else sees it.

Learn More

Follow Snorkel AI on LinkedIn, Twitter, and YouTube to be the first to see new posts and videos!