Many of us are addicted to technology. You might have heard someone exclaim, “I spend way too much time on my phone,” or “Ugh, I need to get off of Instagram.” But you probably haven’t heard someone say, “I’m addicted to APIs!”
APIs are everywhere. A user may interact with a single application or website, but behind the scenes, several APIs have been integrated to make that application work.
In some cases, the use of external APIs may be obvious, such as when a YouTube video is integrated into a news article, or when you click “Pay with PayPal” and are redirected via an API call to a PayPal pop-up to complete your order.
In other cases, we may not be aware of the API calls that occur when we interact with a web page. If I pay for my online order by entering my credit card information instead of using PayPal, the website likely calls a different API to verify that my information is valid. If the payment is confirmed, the payment service sends a response back to the original website, allowing me to complete the purchase. In this case, the API is so seamlessly integrated that the user is often not even aware of it.
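To make that request/response exchange concrete, here is a minimal sketch of what such a payment verification call might look like. The field names and the validation logic are purely illustrative (no real provider's schema), and the provider side is stubbed out locally rather than called over the network:

```python
import json

def build_charge_request(card_number, expiry, amount_cents):
    """Build the JSON payload a site might send to a payment API.
    Field names are hypothetical, not any real provider's schema."""
    return json.dumps({
        "card_number": card_number,
        "expiry": expiry,
        "amount": amount_cents,
        "currency": "USD",
    })

def handle_charge(payload):
    """Stand-in for the payment provider's endpoint: parse the
    request, apply a toy validity check, and return a response."""
    req = json.loads(payload)
    approved = len(req["card_number"]) == 16 and req["amount"] > 0
    return {"status": "approved" if approved else "declined"}

response = handle_charge(build_charge_request("4242424242424242", "12/27", 1999))
```

The website only ever sees the small response object; everything about how the provider validates the card stays hidden behind the API boundary.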
API integrations allow new apps and websites to reuse existing applications and data in order to create new products. A famous example is Uber, which built its product on the Google Maps API.
The organizations that develop APIs also benefit from these integrations. APIs may be monetized directly by charging for usage, or indirectly, by helping organizations to gain new customers or partnerships. When used internally, APIs help to automate and simplify business processes, and to reuse data and processes across a company.
Given the financial value of APIs, almost every company now has an API initiative.
However, when developing these initiatives, an important question is how to efficiently create the data services that will be exposed as your APIs. Often, data first needs to be integrated across many different types of source systems, which can be a complex, time-consuming process. Traditional development methodologies for creating data services can also be complex and time-consuming, as they often require heavy coding.
In this post, I’ll explore how data virtualization can be used to create a data services layer, simplifying and accelerating your API initiatives.
Key Aspects of Data Virtualization
Data virtualization enables you to combine data spread throughout several physical locations into logical business models, without needing to move the data from the underlying sources.
Data virtualization provides several key benefits for an organization:
- The data is combined and transformed in real time, so users or applications that consume the data have access to the latest data without needing to connect directly to the sources.
- The data virtualization layer abstracts the location and format of the original data so that the end user is not exposed to the complexity of the underlying data model. This means that if changes occur in the underlying data, the logical representation for the end user remains the same.
- The logical data sets created in the data virtualization layer can be used regardless of the consumption method, establishing data consistency across the organization.
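The core idea behind these benefits can be sketched in a few lines: a logical view joins and transforms data from the underlying sources at query time, so consumers get current results without touching the sources directly. The sources and the view below are hypothetical stand-ins (a CRM keyed by customer ID and an orders table), not any particular product's API:

```python
# Two "physical" sources with different shapes. In a real deployment
# these would be separate systems (a database, a SaaS API, files...).
crm = {1: {"name": "Acme"}, 2: {"name": "Globex"}}
orders = [
    {"customer_id": 1, "total": 250.0},
    {"customer_id": 1, "total": 100.0},
    {"customer_id": 2, "total": 75.0},
]

def customer_revenue():
    """Logical view: join and aggregate on demand, at query time.
    The data is never copied or moved; each call reflects the
    sources' current state, and callers never see their layout."""
    totals = {}
    for row in orders:
        totals[row["customer_id"]] = totals.get(row["customer_id"], 0.0) + row["total"]
    return [
        {"customer": crm[cid]["name"], "revenue": total}
        for cid, total in sorted(totals.items())
    ]
```

Because the view is computed on each call, appending a new row to `orders` changes the next result immediately, and renaming a column in a source only requires updating the view definition, not every consumer.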
Using Data Virtualization to Create a Data Services Layer
So far, I've described how data virtualization decouples your data from your sources and enables you to build logical data sets. Once the logical views have been defined, the actual data services that will be exposed to your end users must be created.
Using standard development strategies to create data services can be both slow and costly. Data virtualization technologies enable you to create secured data services over the models in your virtual layer by using a simple graphical interface. With just a couple of clicks, these services can be created and deployed using multiple different protocols and supporting all major security and documentation standards.
The logical data sets used to create the data services are the same data sets that are exposed to other users connecting through the data virtualization layer. This enables developers of applications, web portals, or other systems to use the same certified data sets as BI teams, avoiding the problem of developers creating their own non-certified, siloed data sets.
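Conceptually, publishing a logical view as a REST-style data service amounts to mapping a URL path to the view and serializing its result. The sketch below shows that idea in miniature; the registry, paths, and view are all hypothetical (a real data virtualization product generates this plumbing, plus security and documentation, from its graphical interface):

```python
import json

# Hypothetical registry mapping URL paths to published view functions.
views = {}

def publish(path):
    """Decorator that 'publishes' a logical view at a given path."""
    def register(view_fn):
        views[path] = view_fn
        return view_fn
    return register

@publish("/api/v1/customer_revenue")
def customer_revenue():
    # A real service would query the virtual layer here;
    # this returns a fixed sample row for illustration.
    return [{"customer": "Acme", "revenue": 350.0}]

def handle(path):
    """Minimal dispatcher: resolve the path, run the view,
    and return an HTTP-like (status, JSON body) pair."""
    if path not in views:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(views[path]())
```

The key point is that the endpoint and a BI tool connecting through the virtualization layer both execute the same `customer_revenue` view, so every consumer sees the same certified data set.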
The ability to create logical data models with integrated, real-time data, combined with easy-to-use publishing options for data services, enables you to quickly create a data services layer and accelerate your API initiative.
Other Benefits of Data Virtualization
Data virtualization offers many other benefits for your API initiatives, including:
● A centralized security layer
● Data governance capabilities
● Advanced query optimization techniques to efficiently federate queries between sources
● Sophisticated caching features
● Workload management support
In this post, I explored how a data virtualization layer can be used to simplify the creation of a data services layer.
With this unique capability, and many other features, data virtualization can streamline your next API initiative.
Originally published at https://www.datavirtualizationblog.com on July 2, 2020.