How to streamline data processing with no-code tools

The explosion of data in today’s world represents both an enormous opportunity and a tremendous challenge for modern businesses. As more and more information like health records and stock certificates moves to the digital space, businesses have more valuable data at their disposal than ever. On the other hand, storing and processing an ever-growing stockpile of data isn’t easy, and many businesses are not properly equipped to take full advantage of their data.

These days, a massive amount of data is generated from a variety of disparate sources: customer profiles, order histories, financial records, product and inventory logs, stock market feeds, customer service tickets, service records, and much more. As more and more companies start to build their own software, much of this data is stored in a company database (like PostgreSQL or MySQL), with a few specific types of data stored within business apps like Salesforce or Zendesk.

The problem with this arrangement becomes apparent once a business wants to edit and interact with its data for data processing purposes. A company may need to edit a data record, map one data field to another, or reconcile two separate datasets — typical data processing tasks necessary for business operations. But the data stored within a company’s database is locked away in a silo and inaccessible to those without database access. To resolve this, companies usually resort to one of the following two options:

Option 1: Build a data pipeline from their databases into a business app (like Salesforce) and interact with the data within the business app.

Option 2: Develop custom internal tools capable of interacting with the data in their databases to accomplish specific data processing tasks.

Both of these options lead to a number of complications for businesses later on.

Building a data pipeline incurs engineering costs, as you must dedicate time and effort to building a data connection between systems, choosing which data to transmit, and then replicating database data for use within the business app. Going forward, adding new data fields and maintaining the data pipeline will require ongoing engineering resources. Plus, as you continue to work with and edit the replicated data in your business app, the data may start to fall out of sync with the data in your database. Over time, it can become extremely difficult to maintain a proper source of truth between datasets.

Businesses may instead choose to develop custom tools in-house that are designed to work directly with the data in their databases. This approach gives them a great deal of flexibility and control over how they want to work on data processing tasks. However, the cost of this flexibility and control is that this option almost always requires significant engineering investment, both upfront and on an ongoing basis.

Initially, you’ll need to commit engineers to building internal data processing tools, taking them away from other valuable projects. With engineering resources at a premium these days, usually only the absolute minimum is committed — meaning as little engineering time as possible, and no design or product resources. As a result, the internal tools that ship are often poorly designed, hacked together quickly, and difficult to use.

These problems only compound as time goes on. With the rapid pace of today’s businesses, tools often become outdated as soon as they are built. Internal tools need constant retooling to keep up with new products, a growing user base, and changing ecosystems. Since internal tools tend to accumulate tech debt, they can be even more difficult to maintain; without careful management these tools can quickly turn into a black hole for engineering resources.

What’s more, developing custom internal tools can be risky, as these tools seldom have access controls in place. Most companies lack the engineering resources to include features like roles, permissions, and audit logs when they initially develop the tools; oftentimes these features are deemed unnecessary because few people use the tools to begin with. However, as your company grows and more people need access to these tools, it becomes increasingly difficult and costly to layer access controls on top of existing systems. In many cases, companies never end up adding these controls, leading to all sorts of data abuse issues.

Ultimately, teams end up with inadequate (sometimes even broken) tools that businesses have spent significant resources to develop, simply because there are no great options.

Finding a new way

As software developers ourselves, we spent a lot of time grappling with these same issues at previous companies. For example, many of us were at Harbor and Zenefits, where data processing tools were key instruments, as both companies ingested and processed huge amounts of data from a variety of sources. Our experience in building, maintaining, and evolving these tools led us to realize that while these tools are custom for a given company, a lot of the building blocks for the tools were quite similar — viewing data in databases, using a front-end to update data records, clicking buttons that called internal APIs, etc.

We realized that these shared building blocks could enable a new way of building tools: one that doesn’t require so much engineering effort. Our goal in creating Internal is to give businesses a new way to create custom tools without code. Internal provides a number of benefits for your business:

  1. It frees your engineers to work on other projects, while allowing non-technical teams to take part in creating their own tools.
  2. The tools are far easier to update, in part because Internal automatically syncs with changes to your data sources, and also because engineers are no longer the bottleneck to make even the smallest of changes to internal tools.
  3. Since data isn’t “piped in” and the tools always display and update from the source of truth, there are never data discrepancies.
  4. Granular access controls and complete audit logs are baked into Internal from the start, reducing the risk of data abuse.

Simply connect Internal to your database(s) or other data sources to get started. Internal automatically generates your first tool, the admin console, for teams to view, create, and update records, with baked in access controls. From there, you can mix and match components like tables, forms, and buttons to create custom tools that work exactly how you want them to.

Here are some examples of data processing tools you can build with Internal:

1 — Comparison tool

This tool lets you compare data coming from two different sources in order to reconcile any discrepancies. In this specific case, we are comparing insurance premium records from a carrier against records in our company’s database. The two tables show only those records that have been detected to have mismatched premium amounts. Users can use this tool to correct any inaccurate premiums and then match the records together to reconcile the discrepancy.

In the first table, you can see all of the insurance carrier’s enrollment records. The second table displays corresponding enrollment records that are in your database. Filters at the top allow you to display only records associated with a particular company, carrier or customer. When a user finds a pair of records with mismatched premiums, they can use the “Update” button in-line in each row to update that record with the correct premium value (to match the carrier’s records).

Once updated, users can select a record in the first table with a matching record in the second table and hit the “Match” button, which sets a flag indicating that the pair of records has been successfully matched. Each table is filtered to exclude successfully matched records, so users can quickly work through a reconciliation.
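The reconciliation flow above — detect mismatched premiums, correct the internal value, then flag the pair as matched — can be sketched in a few lines of plain Python. This is an illustrative sketch only: the field names (`enrollment_id`, `premium`, `matched`) are assumptions for the example, not Internal’s actual schema or API.

```python
# Sketch of the comparison tool's reconciliation logic.
# Field names are illustrative assumptions, not Internal's actual schema.

def find_mismatches(carrier_records, db_records):
    """Pair records by enrollment ID and return pairs whose premiums differ,
    skipping pairs already flagged as matched (mirroring the table filters)."""
    db_by_id = {r["enrollment_id"]: r for r in db_records}
    mismatches = []
    for c in carrier_records:
        d = db_by_id.get(c["enrollment_id"])
        if d is not None and not d["matched"] and c["premium"] != d["premium"]:
            mismatches.append((c, d))
    return mismatches

def update_and_match(carrier_rec, db_rec):
    """The "Update" button corrects the internal premium to the carrier's
    value; the "Match" button then flags the pair as reconciled."""
    db_rec["premium"] = carrier_rec["premium"]
    db_rec["matched"] = True

carrier = [{"enrollment_id": 1, "premium": 120.0},
           {"enrollment_id": 2, "premium": 95.0}]
database = [{"enrollment_id": 1, "premium": 120.0, "matched": False},
            {"enrollment_id": 2, "premium": 90.0, "matched": False}]

# A user works through each mismatched pair until none remain.
for c, d in find_mismatches(carrier, database):
    update_and_match(c, d)
```

In the actual tool a human confirms each pair before matching; the loop here simply stands in for that per-row workflow.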

2 — Combining tool

This tool lets you combine data coming from different sources into a single entity. In this particular scenario, we are associating investment holdings from one data source with the customers who own those holdings (stored in a different database or data source).

The first table displays investment holdings not yet associated with any customer. The second table displays a list of customers in your database. Using the global filters up top, you can filter both tables on columns that may be shared between datasets (such as customer name or address). You can then check the customer’s details against the details shown in the investment holdings to confirm that this is the correct owner, and then use the “Map” button to connect that investment holding in the first table to the customer in the second table.
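The mapping step described above — filter both datasets on a shared column, confirm the owner, then connect the holding to the customer — can be sketched as follows. The field names (`owner_name`, `customer_id`, `holding_id`) are hypothetical, chosen only to illustrate the idea.

```python
# Sketch of the combining tool's mapping step.
# Field names are illustrative assumptions, not Internal's actual schema.

def candidate_owners(holding, customers):
    """Analogue of the global filter: narrow the customer list to rows
    sharing a column value (here, a normalized owner name) with the holding."""
    name = holding["owner_name"].strip().lower()
    return [c for c in customers if c["name"].strip().lower() == name]

def map_holding(holding, customer):
    """The "Map" button: associate the holding with the confirmed customer."""
    holding["customer_id"] = customer["customer_id"]

holdings = [{"holding_id": "H-1", "owner_name": "Ada Lovelace",
             "customer_id": None}]
customers = [{"customer_id": 42, "name": "ada lovelace"},
             {"customer_id": 43, "name": "Grace Hopper"}]

# First table: holdings not yet associated with any customer.
unmapped = [h for h in holdings if h["customer_id"] is None]
for h in unmapped:
    matches = candidate_owners(h, customers)
    if len(matches) == 1:  # in the tool, a person confirms details first
        map_holding(h, matches[0])
```

The single-match check stands in for the human confirmation step: the tool leaves ambiguous cases to the user rather than mapping automatically.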

These two tools are just a small taste of what you can build in Internal without any code or engineering effort. See what you can create for your business with a free 14-day trial. Or schedule a demo to discuss how Internal can help your business.

Get started now.
