Monday, October 6, 2025

Airtable + GPT: Prototyping a Lightweight RAG System with No-Code Tools


Image by Editor | ChatGPT

 

Introduction

 
Ready for a practical walkthrough with little to no code involved, depending on the approach you choose? This tutorial shows how to tie together two powerful tools, OpenAI's GPT models and the Airtable cloud-based database, to prototype a simple, toy-sized retrieval-augmented generation (RAG) system. The system accepts question-based prompts and uses text data stored in Airtable as the knowledge base to produce grounded answers. If you're unfamiliar with RAG systems, or want a refresher, don't miss this article series on understanding RAG.

 

The Components

 
To follow this tutorial yourself, you will need:

  • An Airtable account with a base created in your workspace.
  • An OpenAI API key (ideally a paid plan for flexibility in model choice).
  • A Pipedream account: an orchestration and automation app that allows experimentation under a free tier (with limits on daily runs).

 

The Retrieval-Augmented Generation Recipe

 
The process of building our RAG system isn't purely linear, and some steps can be taken in different ways. Depending on your level of programming knowledge, you may opt for a code-free or nearly code-free approach, or create the workflow programmatically.

In essence, we'll create an orchestration workflow consisting of three components, using Pipedream:

  1. Trigger: similar to a web service request, this element initiates an action flow that passes through the subsequent components in the workflow. Once deployed, this is where you specify the request, i.e., the user prompt for our prototype RAG system.
  2. Airtable block: establishes a connection to our Airtable base and a specific table so its data can serve as the RAG system's knowledge base. We'll add some text data to it shortly within Airtable.
  3. OpenAI block: connects to OpenAI's GPT-based language models using an API key and passes the user prompt along with the context (the retrieved Airtable data) to the model to obtain a response. (A rough sketch of the data handed between these three blocks follows this list.)
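
Before building anything, it may help to picture what each block hands to the next. The following is purely an illustrative sketch, not workflow code; the test key and the result fields simply mirror how the blocks get configured later in this tutorial.

// Rough data flow through the three blocks (illustrative only):
//   HTTP trigger  ->  Airtable "List Records"  ->  OpenAI GPT

// 1. The trigger receives an HTTP request body carrying the user prompt, e.g.:
const requestBody = { test: "What is the capital of Japan?" };

// 2. The Airtable block returns the table's records; their long-text field holds the knowledge base.

// 3. The OpenAI block builds a grounded prompt from that text, and the workflow returns something like:
const workflowResult = { question: requestBody.test, response: "Tokyo is the capital of Japan." };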

But first, we need to create a new table in our Airtable base containing text data. For this example, I created an empty table with three fields (ID: single line text, Source: single line text, Content: long text), and then imported data from this publicly available small dataset containing text with basic facts about Asian countries. Use the CSV and link options to import the data into the table. More details about creating tables and importing data are in this article.
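
As an aside, if you prefer to add rows programmatically instead of using the CSV import, a minimal sketch against Airtable's Web API could look like the one below. The token, base ID, table name, and sample row are placeholders for illustration only, not values from this tutorial.

// Minimal sketch: create a record in the knowledge base table via Airtable's Web API.
// Assumes Node 18+ (built-in fetch) and a personal access token with write access.
const AIRTABLE_TOKEN = process.env.AIRTABLE_TOKEN; // placeholder: your token
const BASE_ID = "appXXXXXXXXXXXXXX";                // placeholder: your base ID
const TABLE_NAME = "KnowledgeBase";                 // placeholder: your table name

async function addRecord(id, source, content) {
  const res = await fetch(`https://api.airtable.com/v0/${BASE_ID}/${encodeURIComponent(TABLE_NAME)}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${AIRTABLE_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      records: [{ fields: { ID: id, Source: source, Content: content } }],
    }),
  });
  if (!res.ok) throw new Error(`Airtable API error: ${res.status}`);
  return res.json();
}

// Example usage with a made-up row
addRecord("1", "example", "Tokyo is the capital of Japan.").then(console.log);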

Once you have your knowledge base ready in Airtable, let's switch to Pipedream for the rest of the process! Sign up if you haven't done so yet, and log in. There are several ways to initiate a workflow. One of them is by describing what you want to create to Pipedream's AI agent, which is intuitive but may lead to errors needing fixes, as we'll illustrate shortly. Pipedream has two parallel UIs: one for manually building and editing workflows, and one for conversations with its AI agent, called String.

 

Pipedream's AI conversational interface (String)

 

If you opt for letting String build an approximate version of your desired workflow, describe clearly what to build, with a prompt that may look something like:

I'd like to orchestrate a simple toy RAG application that uses an Airtable table containing text, and OpenAI's GPT models (I have a paid API key).

 

Pipedream's language model may return an answer for you to approve (at your own risk), like this:

I'll create a simple RAG (Retrieval-Augmented Generation) agent that connects your Airtable table with OpenAI's GPT models. Here's what I'll build:

Set up a trigger to accept queries (via webhook or manual execution)
Search and retrieve relevant text from your Airtable table based on the query
Use the retrieved text as context for OpenAI GPT to generate an informed response
Return the AI-generated answer that's grounded in your Airtable data
This will give you a basic RAG system where you can ask questions and get answers based on the content stored in your Airtable, enhanced by GPT's reasoning capabilities.

 

For the sake of experimentation and "seeing what happens," approve the suggested course of action to obtain a three-block workflow similar to the one in the image shown above.

Since this isn't the core Pipedream editor UI, switch to it by clicking "Open Pipedream" in the upper right corner. A new tab will open with the actual Pipedream workflow editor.

For the trigger block, a URL is automatically generated with a syntax similar to the one I got for mine: https://eoupscprutt37xx.m.pipedream.net. Click it and, in the settings pane that opens on the right-hand side, ensure the first couple of options are set to "Full HTTP request" and "Return a static response."

For the second block (the Airtable action) there may be a bit of work to do. First, connect to your Airtable base. If you're working in the same browser, this should be easy: sign in to Airtable from the pop-up window that appears after clicking "Connect new account," then follow the on-screen steps to specify the base and table to access:

 

Pipedream workflow editor: connecting to Airtable

 

Here comes the tricky part (and a reason I deliberately left an imperfect prompt earlier when asking the AI agent to build the skeleton workflow): there are several kinds of Airtable actions to choose from, and the exact one we need for a RAG-style retrieval mechanism is "List Records." Chances are, this isn't the action you see in the second block of your workflow. If that's the case, remove it, add a new block in the middle, select "Airtable," and choose "List Records." Then reconnect to your table and test the connection to ensure it works.
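
For reference, the List Records action returns an array of record objects, each exposing your table's columns under a fields key. The sketch below (with made-up values) shows the rough shape; this is why the component code later in this tutorial reads record.fields?.Content.

// Rough shape of the Airtable "List Records" output (values are made up for illustration).
const listRecordsOutput = [
  {
    id: "recXXXXXXXXXXXXXX",
    createdTime: "2025-01-01T00:00:00.000Z",
    fields: {
      ID: "1",
      Source: "example",
      Content: "Tokyo is the capital of Japan.",
    },
  },
];

// The knowledge base text is pulled from fields.Content for each record:
const texts = listRecordsOutput.map(r => r.fields?.Content ?? "");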

This is what a successfully tested connection looks like:

 

Pipedream workflow editor: testing the connection to Airtable

 

Last, set up and configure OpenAI access to GPT. Keep your API key handy. If your third block's secondary label isn't "Generate RAG response," remove the block and replace it with a new OpenAI block of that subtype.

Start by establishing an OpenAI connection using your API key:

 

Establishing the OpenAI connection

 

The user question field should be set to {{ steps.trigger.event.body.test }}, and the knowledge base records (your text "documents" for RAG from Airtable) must be set to {{ steps.list_records.$return_value }}.

You can keep the rest as defaults and test, but you may encounter parsing errors common to these kinds of workflows, prompting you to jump back to String for help and automated fixes from the AI agent. Alternatively, you can directly copy and paste the following into the OpenAI component's code field at the bottom for a robust solution:

import openai from "@pipedream/openai"

export default defineComponent({
  name: "Generate RAG Response",
  description: "Generate a response using OpenAI based on the user question and the Airtable knowledge base content",
  type: "action",
  props: {
    openai,
    model: {
      propDefinition: [
        openai,
        "chatCompletionModelId",
      ],
    },
    question: {
      type: "string",
      label: "User Question",
      description: "The question from the webhook trigger",
      default: "{{ steps.trigger.event.body.test }}",
    },
    knowledgeBaseRecords: {
      type: "any",
      label: "Knowledge Base Records",
      description: "The Airtable records containing the knowledge base content",
      default: "{{ steps.list_records.$return_value }}",
    },
  },
  async run({ $ }) {
    // Extract the user question
    const userQuestion = this.question;

    if (!userQuestion) {
      throw new Error("No question provided from the trigger");
    }

    // Process the Airtable records to extract their content
    const records = this.knowledgeBaseRecords;
    let knowledgeBaseContent = "";

    if (records && Array.isArray(records)) {
      knowledgeBaseContent = records
        .map(record => {
          // Extract the text from fields.Content
          const content = record.fields?.Content;
          return content ? content.trim() : "";
        })
        .filter(content => content.length > 0) // Remove empty content
        .join("\n\n---\n\n"); // Separate different knowledge base entries
    }

    if (!knowledgeBaseContent) {
      throw new Error("No content found in knowledge base records");
    }

    // Create the system prompt with the knowledge base as context
    const systemPrompt = `You are a helpful assistant that answers questions based on the provided knowledge base. Use only the information from the knowledge base below to answer questions. If the information is not available in the knowledge base, please say so.

Knowledge Base:
${knowledgeBaseContent}

Instructions:
- Answer based solely on the provided knowledge base content
- Be accurate and concise
- If the answer is not in the knowledge base, clearly state that the information is not available
- Cite relevant parts of the knowledge base when possible`;

    // Prepare the messages for OpenAI
    const messages = [
      {
        role: "system",
        content: systemPrompt,
      },
      {
        role: "user",
        content: userQuestion,
      },
    ];

    // Call the OpenAI chat completion endpoint
    const response = await this.openai.createChatCompletion({
      $,
      data: {
        model: this.model,
        messages: messages,
        temperature: 0.7,
        max_tokens: 1000,
      },
    });

    const generatedResponse = response.generated_message?.content;

    if (!generatedResponse) {
      throw new Error("Failed to generate a response from OpenAI");
    }

    // Export a summary for user feedback
    $.export("$summary", `Generated RAG response for question: "${userQuestion.substring(0, 50)}${userQuestion.length > 50 ? '...' : ''}"`);

    // Return the generated response
    return {
      question: userQuestion,
      response: generatedResponse,
      model_used: this.model,
      knowledge_base_entries: records ? records.length : 0,
      full_openai_response: response,
    };
  },
})

 

If no errors or warnings appear, you should be ready to test and deploy. Deploy first, and then test by passing a user query like this in the newly opened deployment tab:

 

Testing the deployed workflow with a prompt asking what the capital of Japan is
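
If you'd rather test from a script than from the deployment tab, a minimal sketch like the one below should also work once the workflow is deployed. The URL is a placeholder (use your own trigger URL), and the test body key matches the {{ steps.trigger.event.body.test }} reference configured earlier. Depending on the trigger's response settings (we chose "Return a static response"), the HTTP reply may be a simple acknowledgment rather than the generated answer, which you can always inspect in the workflow's run view as shown next.

// Minimal sketch: send a test question to the deployed Pipedream trigger (Node 18+).
const TRIGGER_URL = "https://your-endpoint.m.pipedream.net"; // placeholder: your trigger URL

async function askRag(question) {
  const res = await fetch(TRIGGER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The "test" key is what the OpenAI step reads via steps.trigger.event.body.test
    body: JSON.stringify({ test: question }),
  });
  return res.text(); // may be a static acknowledgment, depending on the trigger's response settings
}

askRag("What is the capital of Japan?").then(console.log);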

 

If the request is handled and everything runs correctly, scroll down to see the response returned by the GPT model accessed in the last stage of the workflow:

 

GPT model response

 

Well done! This response is grounded in the knowledge base we built in Airtable, so we now have a simple prototype RAG system that combines Airtable and GPT models via Pipedream.

 

Wrapping Up

 
This article showed how to build, with little or no coding, an orchestration workflow to prototype a RAG system that uses an Airtable text database as the knowledge base for retrieval and OpenAI's GPT models for response generation. Pipedream allows defining orchestration workflows programmatically, manually, or aided by its conversational AI agent. Through the author's experience, we succinctly showcased the pros and cons of each approach.
 
 

Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.
