Summary: Most enterprises building agentic AI on Gemini Enterprise hit the same wall: the model is ready, but the ERP data behind it isn't. Agents pull from stale exports and siloed systems, fill gaps with assumptions, and deliver outputs teams don't trust. The fix is three things most ERP pipelines were never built to deliver: an automated semantic layer, full business context, and live data. Incorta delivers all three to BigQuery automatically, giving Gemini Enterprise the complete, accurate foundation it needs to power production-ready agentic AI on Oracle, SAP, and Workday data.
You've seen what Gemini Enterprise can do. The reasoning is there. The agentic capabilities are there. But when you connect it to your actual ERP data, the outputs don't hold up - and your team doesn't trust them enough to act.
You're not alone. 91% of AI models experience quality degradation over time because of stale, incomplete, or fragmented data. More than 80% of AI projects fail, and the root cause is almost never the model. It's the data foundation behind it.
Right now, most enterprises feed Gemini summaries, old exports, and siloed data that was built for dashboards, not for AI to reason on. When an agent hits a gap, it fills it with an assumption. The output looks confident. The logic underneath it isn't.
That's why only 2% of organizations have implemented AI agents at scale, even though more than 65% are piloting or exploring. The gap between a Gemini pilot that works in a demo and a production deployment your team relies on comes down to one thing: whether your agents have enough context to reason on your data the way your business actually needs them to.
The enterprises that are getting agentic AI into production on Gemini Enterprise have all solved the same foundational problem. They've given their agents three things most ERP pipelines were never built to deliver:
A semantic layer. Your ERP stores data in tables and transaction codes. Gemini needs to understand what that data means in business terms — what a PO status implies, how a GL account maps to a cost center, which inventory fields signal risk. Without a semantic layer, your agents are working with numbers they can't interpret.
Full business context. A single data point means nothing without the relationships around it. A Gemini agent analyzing inventory needs to see the connections across POs, demand forecasts, supplier lead times, and warehouse capacity. Most pipelines deliver flat, summarized data that strips out this context before it ever reaches BigQuery.
Live data. Agents making decisions on data that's hours or days old aren't making real decisions. In supply chain, that's the difference between catching a stock-out early and writing down millions in SLOB inventory. In finance, it's the difference between a real-time variance explanation and a manual reconciliation that takes your team days.
The challenge is that most enterprises spend 80% of their time building data connectors instead of training agents to handle intelligent workflows. Legacy ERPs weren't designed for real-time AI access. They export CSVs, use outdated APIs, and silo critical business data across dozens of modules.
So teams face an impossible choice: limit their Gemini agents to the data that's easy to access and accept incomplete outputs, or spend months building custom pipelines for every ERP system. Most choose the second option and get stuck — McKinsey reports that only 40% of companies see any enterprise-level impact from their AI initiatives.
This is exactly why Incorta and Google Cloud built a better path.
Incorta delivers all three: an automated semantic layer, full business context, and live ERP data - directly to BigQuery. No custom pipelines. No months of implementation. Your Oracle, SAP, and Workday data arrives in BigQuery complete, accurate, and structured for Gemini Enterprise to reason on.
Your agents get the full picture. Your team gets answers they trust enough to act on. And you go from pilot to production without the bottleneck that stalls everyone else.
That's what 100% context and 100% accuracy actually looks like when you're building on Gemini Enterprise.
We'll be at Google Cloud Next, April 22–24 in Las Vegas, Booth #2811. Come see how Incorta automatically generates the business context Gemini Enterprise needs to power production-ready agentic AI on your ERP data.
Summary: Incorta solves one of the most common blockers to enterprise AI: getting clean, complete, AI-ready data out of complex ERP systems like Oracle, SAP, Workday, and NetSuite. Where custom-built pipelines typically take 18 to 24 months to deliver reliable data and absorb 75 to 80% of ongoing maintenance costs, Incorta compresses time-to-value from years to weeks using prebuilt ERP blueprints and automated data modeling. Incorta preserves full transaction-level granularity, native relational context across ERP objects, and supports near real-time ingestion, giving AI agents the data quality they need to reason, automate, and act. For data and AI leaders evaluating build vs. buy for ERP-driven AI strategies, Incorta eliminates the engineering burden of building and maintaining custom ERP pipelines and converts unpredictable infrastructure spend into a predictable subscription model.
Your AI strategy has a data problem - and your ERP is at the center of it
AI initiatives stall for plenty of reasons. Budgets get approved, use cases get mapped, stakeholder buy-in gets secured - then the data isn't ready. That's the conversation happening in data and IT leadership teams across every industry right now, and it almost always traces back to ERP systems.
Why ERP data is so hard to work with
ERP platforms hold the most valuable operational data in the enterprise: purchase orders, invoices, vendor records, general ledger entries, inventory movements. For AI to reason and automate, it needs this data to be granular, current, and contextually intact.
ERP systems were built to run operations, not to feed AI. Their schemas are sprawling (often tens of thousands of tables), relationships are complex and frequently undocumented, and many platforms include structural quirks like tables without primary keys that complicate replication and modeling. Getting clean, AI-ready data out of an ERP system is a multi-year undertaking for most organizations, and that timeline is incompatible with most AI roadmaps.
Why custom ERP pipeline builds fail
Time to value is longer than expected. Internal ERP data builds typically require 18 to 24 months before the business has reliable, AI-ready data. Every quarter spent building infrastructure is a quarter without AI value.
Total cost of ownership escalates. Custom builds absorb 75 to 80% of ongoing maintenance burden, including patches, updates, and accumulated technical debt. As AI scales, demands on the data layer grow: more volume, faster refresh cycles, more governance requirements. What starts as a bounded engineering project becomes an OPEX-heavy liability.
ERP pipelines require domain expertise, not just engineering. Understanding which tables matter, how business rules are embedded, and how cross-module dependencies behave takes years to develop. Pipelines built without that knowledge look correct and break in production.
Technical debt accumulates into AI risk. McKinsey estimates a 10 to 20% tech debt tax on engineering projects, diverting innovation budgets to remediation. Gartner puts the average annual cost of poor data quality at $12.9 million. Unstable pipelines produce unreliable AI outputs, and unreliable outputs kill adoption.
What AI actually needs from your ERP data
AI has three non-negotiable data requirements.
Granularity. AI models need full transaction-level detail. Pre-aggregated data reduces the signal AI needs to find patterns, anomalies, and causal relationships.
Timeliness. Agentic AI requires near real-time data. Batch pipelines with multi-hour latency make alert-triggering, approval automation, and exception flagging impossible.
Context. ERP data only makes sense relationally. An invoice needs its purchase order, receipt record, vendor history, and GL posting to be meaningful. AI needs that relational structure (PO to Receipt to Invoice to Vendor to GL) preserved, not flattened.
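To make the contrast concrete, here is a small illustrative sketch (the field names are hypothetical) of the difference between a flattened summary row and a record that keeps its relational links intact:

```python
from dataclasses import dataclass

# Illustrative only: hypothetical field names showing a flattened summary row
# versus a record that preserves its relational context.

@dataclass
class Invoice:
    invoice_id: str
    amount: float
    po_number: str     # link to the purchase order
    receipt_id: str    # link to the goods receipt
    vendor_id: str     # link to vendor master / history
    gl_account: str    # link to the GL posting

# Flattened, pre-aggregated view: the links are gone, so an agent cannot check
# the invoice against its PO, receipt, vendor history, or GL posting.
flattened = {"vendor": "ACME", "total_invoiced": 1_250_000.00, "month": "2025-06"}

# Context-preserving view: the same invoice with its relationships intact,
# which is what lets an AI agent reason across PO -> Receipt -> Invoice -> Vendor -> GL.
invoice = Invoice(
    invoice_id="INV-10021",
    amount=48_300.00,
    po_number="PO-7741",
    receipt_id="RCV-5590",
    vendor_id="VND-0032",
    gl_account="5010-Freight",
)
```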
Custom builds routinely compromise on at least one of these. Data gets summarized to reduce complexity. Refresh cycles get extended to manage engineering load. Relationships get lost in transformation.
What a purpose-built platform can change
Purpose-built ERP data platforms address all three requirements by design. They ship with prebuilt blueprints for major ERP systems including Oracle, SAP, Workday, and NetSuite, encoding domain expertise that would otherwise take years to develop internally. They preserve native relationships and business logic without manual reconstruction and support near real-time ingestion.
On the cost side, buying converts unpredictable engineering spend into a predictable subscription. ERP schema changes, performance tuning, security patches, and infrastructure evolution become the vendor's responsibility.
The data on this is clear. IDC reports that 80% of data professional time is still spent on discovery and preparation rather than analytics. Research from IDC and Qlik finds 81% of companies struggle with AI data quality in ways that directly undermine ROI. Gartner predicts over 40% of agentic AI projects will be canceled by 2027 because operational and data foundations were not ready.
Data leaders: build vs buy?
Build vs. buy is no longer a tooling conversation. It is a strategic decision with real timing consequences. AI value does not accrue while you are still building infrastructure. Organizations moving fastest on AI are not the ones with the most sophisticated in-house data engineering. They are the ones operating on a data foundation built for this moment.
Download our whitepaper, The AI Data Foundation Dilemma: Build vs. Buy for ERP-Driven AI Strategies, for the complete five-dimension analysis.
Summary: Learn how Incorta gives finance and operations teams that use Oracle BI tools a path to faster insights, with less IT dependency and the ability to answer granular questions - like line-item audit traces or detailed inventory movements - that would require major semantic layer work in OBIEE or OAC.
What Are Oracle BI Tools?
Oracle BI tools are a suite of software products designed to help organizations collect, analyze, and act on business data. The core of Oracle's BI portfolio has historically been OBIEE (Oracle Business Intelligence Enterprise Edition), an on-premises platform for enterprise reporting, dashboards, and analytics.
Oracle has since expanded the suite, and its current primary offerings include:
Oracle Analytics Cloud (OAC): Oracle's cloud-native analytics platform and the strategic successor to on-premises OBIEE. OAC provides self-service analytics, AI-driven insights, and visualization capabilities in a SaaS model.
Oracle Analytics Server (OAS): The on-premises version of Oracle Analytics for organizations not ready to move to the cloud.
Oracle BI Publisher: A structured reporting and document generation tool for pixel-perfect formatted outputs.
Oracle BI Applications (OBIA): Prebuilt analytics content packages with domain-specific dashboards and KPIs for Finance, HR, Supply Chain, and other functions.
Benefits of Oracle BI Tools for Enterprise Analytics
Deep Oracle ecosystem integration: Oracle BI tools are purpose-built to connect with Oracle ERP, Oracle Fusion Applications, Oracle E-Business Suite, JD Edwards, PeopleSoft, and Oracle Database. For organizations heavily invested in the Oracle stack, this integration reduces complexity and accelerates time to first report.
Governed, consistent metrics: The semantic layer (RPD in OBIEE, data models in OAC) enforces consistent business definitions across all users. This is critical for large enterprises where Finance, Operations, and Sales need to work from the same version of core metrics.
Enterprise-scale architecture: Oracle BI tools are designed for high-volume, multi-user environments. They support row-level security, role-based access controls, and complex, multi-source data environments.
Prebuilt content via OBIA: For Oracle ERP customers, Oracle BI Applications provide a head start — pre-configured data pipelines, subject areas, and dashboards aligned to Oracle's source data models.
Formatted reporting: Oracle BI Publisher handles structured document generation that most visualization tools cannot. Financial close packages, regulatory reports, and audit-ready outputs require this level of layout control.
Limitations to Consider
Despite their strengths, Oracle BI tools come with well-known trade-offs. The semantic layer requires significant upfront investment and ongoing maintenance. Adding new data sources or metrics typically requires IT or BI administrator involvement. Performance on highly granular queries can degrade without proper aggregation design. And for organizations not standardized on Oracle infrastructure, the tools provide limited value compared to more platform-agnostic alternatives.
How Incorta Extends and Modernizes Oracle BI
Incorta is designed as a complement and upgrade path for organizations running Oracle BI tools. While Oracle BI delivers strong governance and Oracle-native integration, Incorta removes the performance and agility constraints that come with traditional BI architectures.
Incorta's Direct Data Mapping technology connects directly to Oracle ERP, Oracle Fusion, and Oracle databases — the same sources Oracle BI tools rely on — and makes that data available for analysis without ETL pipelines or pre-aggregated data models. The result is query performance against full granularity data that Oracle BI's aggregation-dependent architecture cannot match.
For finance and operations teams that use Oracle BI tools today, Incorta offers a path to faster insights, less IT dependency, and the ability to answer granular questions — like line-item audit traces or detailed inventory movements — that would require major semantic layer work in OBIEE or OAC.
Also known as: oracle bi | oracle bi obiee | oracle obi | oracle bi tools | oracle business intelligence tools | oracle business intelligence | What are the benefits of using Oracle BI tools for enterprise analytics?
Summary: Learn how to get started with OBIEE reporting, and how Incorta solves common challenges with OBIEE reporting. Where OBIEE requires pre-built semantic layers and aggregation tables to deliver acceptable performance, Incorta's Direct Data Mapping™ technology queries source data directly — at full granularity — without ETL or pre-aggregation.
What Is OBIEE Reporting?
OBIEE (Oracle Business Intelligence Enterprise Edition) reporting refers to the full range of analytical outputs the platform supports — from interactive dashboards and ad hoc queries to scheduled pixel-perfect reports and threshold-based alerts.
OBIEE reporting is designed for enterprise environments with large, complex data sources. It abstracts database complexity through a semantic layer, giving business users access to data using business terminology rather than raw SQL or table names.
Types of Reports in OBIEE
OBIEE supports several distinct report types, each serving a different use case.
Interactive analyses: Built through Oracle BI Answers, these are user-created analyses that combine data columns, filters, and visualizations. Users choose from tables, bar charts, pie charts, pivot tables, and more. Analyses can be saved and shared, or embedded in dashboards.
Dashboards: Collections of analyses arranged on pages. Dashboards support dynamic filtering through prompts and can be personalized by user or role. They are the most common way business users interact with OBIEE data daily.
BI Publisher reports: Structured, formatted reports with precise layouts. Used for financial statements, invoices, regulatory outputs, and other documents that require consistent formatting. BI Publisher reports can be scheduled and delivered via email, printed, or exported to PDF and Excel.
Alerts via BI Delivers: Condition-based notifications that trigger when data meets defined thresholds. For example, an alert when revenue falls below a target or when a supply chain metric exceeds an acceptable range.
How OBIEE Reporting Works
The foundation of all OBIEE reporting is the Oracle BI Repository (RPD) — the platform's semantic layer. The RPD maps physical database tables to logical business objects, defines metrics and hierarchies, and controls what users can see and query.
When a user builds a report in OBIEE, they are selecting from the business-friendly objects exposed by the RPD. OBIEE translates those selections into SQL queries against the underlying database, applies any necessary aggregations, and returns results to the user interface.
This architecture ensures consistency: every user reporting on 'Revenue' is working from the same definition, regardless of which dashboard or analysis they are using.
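As a rough illustration of the semantic-layer idea (a toy sketch only, not OBIEE's actual RPD format, and all table and column names are hypothetical), a mapping from business terms to physical columns is what lets a user's selection be translated into SQL:

```python
# Toy sketch of a semantic layer: business-friendly names map to physical
# columns, and a user's selection is translated into a SQL string.

LOGICAL_MODEL = {
    "Revenue":      "SUM(f_sales.net_amount)",
    "Order Count":  "COUNT(DISTINCT f_sales.order_id)",
    "Region":       "d_geography.region_name",
    "Fiscal Month": "d_time.fiscal_month",
}

def build_query(metrics: list[str], dimensions: list[str]) -> str:
    """Translate business-term selections into SQL against the physical model."""
    select_cols = [LOGICAL_MODEL[d] for d in dimensions] + [LOGICAL_MODEL[m] for m in metrics]
    group_cols = [LOGICAL_MODEL[d] for d in dimensions]
    return (
        "SELECT " + ", ".join(select_cols)
        + " FROM f_sales JOIN d_geography USING (geo_id) JOIN d_time USING (date_id)"
        + " GROUP BY " + ", ".join(group_cols)
    )

print(build_query(metrics=["Revenue"], dimensions=["Region", "Fiscal Month"]))
```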
Getting Started with OBIEE Reporting
Access the Oracle BI web interface: OBIEE is browser-based. Your organization's BI team will provide the URL and your login credentials.
Understand the catalog: The BI Catalog stores all saved analyses, dashboards, and reports. Familiarize yourself with the folder structure to find existing reports and understand what is already built.
Use Oracle BI Answers for ad hoc analysis: Navigate to New > Analysis. Select a subject area (a business domain defined in the RPD), then drag columns into your analysis. Apply filters, choose a visualization, and save your work.
Build or access dashboards: Dashboards are assembled from saved analyses. If you have edit permissions, you can create new dashboard pages and add analyses, text, and images.
Work with BI Publisher for formatted reports: BI Publisher has a separate interface. Report templates are built using RTF, PDF, or Excel templates, then connected to data models. Most formatted reports in OBIEE are maintained by BI administrators and IT.
Understand the RPD dependency: If you cannot find the data you need for an analysis, it likely means the subject area or metric has not been defined in the RPD. Requesting new data requires involving your BI team to update the semantic layer.
Common Challenges with OBIEE Reporting
OBIEE reporting is powerful but comes with real limitations. The RPD creates a bottleneck: every new analysis or data point that falls outside the existing semantic model requires IT involvement. Performance on large, granular datasets can degrade without proper aggregation tables. And the platform's age means it lacks the modern UX and self-service flexibility that newer BI tools offer.
How Incorta Addresses OBIEE Reporting Limitations
Incorta was built to solve exactly the problems OBIEE reporting creates. Where OBIEE requires pre-built semantic layers and aggregation tables to deliver acceptable performance, Incorta's Direct Data Mapping™ technology queries source data directly — at full granularity — without ETL or pre-aggregation.
For finance and operations teams that rely on OBIEE for reporting, Incorta delivers the same governed, consistent experience with dramatically faster performance and no semantic layer bottleneck. Users can build analyses against live transactional data — drilling from a monthly P&L down to individual journal entries — without waiting for a BI team to update the RPD.
Incorta's native connectors to Oracle ERP, Oracle Fusion, and Oracle databases mean migration from OBIEE reporting to Incorta is straightforward. You keep the data quality and governance your team depends on. You eliminate the performance and agility constraints that slow analysis down.
Also known as: OBIEE reports | OBIEE reporting | OBIEE software | How do I get started with OBIEE reporting?
Summary: What is OBIEE and how does it differ from other BI tools? Learn how Incorta's Direct Data Mapping™ technology allows analysts to query granular source data directly, without building ETL pipelines or pre-aggregating data models.
OBIEE's Place in the BI Landscape
OBIEE (Oracle Business Intelligence Enterprise Edition) is a purpose-built enterprise BI platform developed by Oracle. Unlike lighter-weight tools such as Tableau, Power BI, or Looker, OBIEE was designed from the ground up for large organizations running complex, multi-source Oracle data environments.
Understanding where OBIEE fits — and where it falls short — requires looking at a few key architectural differences.
How OBIEE Differs from Modern BI Tools
Semantic layer vs. direct query: OBIEE centers on a metadata repository (the RPD file) that maps business terms to underlying database objects. This semantic layer gives business users a consistent vocabulary, but it requires significant administrative effort to build and maintain. Modern tools like Tableau and Power BI allow more direct data connections, though they trade off consistency at scale.
On-premises architecture: OBIEE is primarily an on-premises platform. While Oracle offers Oracle Analytics Cloud (OAC) as its cloud successor, many OBIEE deployments remain tied to on-prem infrastructure. Modern BI tools are largely cloud-native or SaaS, making them faster to deploy and easier to scale.
Deep Oracle integration: OBIEE is tightly integrated with Oracle ERP, Oracle Fusion, Oracle Database, and Oracle's suite of applications. For Oracle-heavy environments, this is an advantage. For mixed environments, it can create complexity.
Pre-aggregated data models: OBIEE often relies on pre-built data models and aggregations to deliver acceptable performance. This works well for standardized reporting but limits flexibility for ad hoc, granular analysis.
Self-service limitations: Compared to newer tools, OBIEE's self-service capabilities are more constrained. Building new analyses typically requires involvement from IT or BI administrators to update the semantic layer.
OBIEE vs. Tableau
Tableau prioritizes visual exploration and self-service analytics. It is faster to deploy and more accessible to non-technical users, but it lacks OBIEE's enterprise governance and deep Oracle integration. OBIEE wins on governance; Tableau wins on speed and usability.
OBIEE vs. Power BI
Microsoft Power BI is a cloud-native alternative with strong integration into the Microsoft ecosystem (Azure, Teams, Office 365). For organizations standardized on Microsoft, Power BI offers a more modern experience than OBIEE at a lower cost. However, it does not offer the same native depth with Oracle data sources.
OBIEE vs. Looker
Looker (now part of Google Cloud) takes a code-first approach to data modeling through its LookML framework. It offers strong governance similar to OBIEE's semantic layer but in a more modern, cloud-native package. Looker is better suited for cloud-first organizations; OBIEE is more entrenched in on-premises Oracle environments.
How Incorta Fits Into This Comparison
Incorta takes a different approach than all of these tools. While OBIEE, Tableau, Power BI, and Looker all require some form of data preparation, modeling, or aggregation layer before users can analyze data, Incorta's Direct Data Mapping™ technology allows analysts to query granular source data directly — without building ETL pipelines or pre-aggregating data models.
For organizations running Oracle ERP or Oracle Fusion, Incorta provides native connectors that extract and map transactional data at full granularity. This means finance, supply chain, and operations teams can run complex analyses against live, line-item detail — without the performance trade-offs that come with OBIEE's aggregation-dependent architecture.
The result is faster time to insight, reduced dependency on IT for data preparation, and the flexibility to answer questions OBIEE's data models were never designed for.
Summary: Review Oracle's Business Intelligence Enterprise Edition, who uses it, and how Incorta helps Oracle users work with live, detailed data — without the performance bottlenecks and administrative overhead that come with maintaining OBIEE's semantic layer and aggregation pipelines.
What Is OBIEE?
OBIEE stands for Oracle Business Intelligence Enterprise Edition. It is Oracle's flagship on-premises business intelligence platform, designed to help enterprise organizations collect, analyze, and visualize data from across the business. OBIEE provides interactive dashboards, ad hoc reporting, scheduled reports, mobile analytics, and data visualization tools — all delivered through a web-based interface.
Organizations running Oracle ERP systems, Oracle E-Business Suite, or JD Edwards have historically used OBIEE as their primary BI layer because of its deep integration with Oracle's broader technology stack.
OBIEE Meaning: Breaking Down the Acronym
Oracle: Developed and maintained by Oracle Corporation.
Business Intelligence: Refers to the tools, processes, and technologies used to analyze data and support decision-making.
Enterprise Edition: Indicates the platform is designed for large-scale, enterprise-grade deployments — not a lightweight or departmental solution.
In practice, OBIEE is often used interchangeably with 'Oracle BI,' 'Oracle OBI,' or 'Oracle Business Intelligence' — all referring to the same core platform.
What Does OBIEE Do?
At its core, OBIEE allows business users to query data from enterprise sources and surface insights through dashboards and reports. Key capabilities include:
Interactive dashboards that pull live data from connected sources.
Ad hoc analysis through Oracle BI Answers, which allows users to build their own queries without writing SQL.
Scheduled reporting and delivery via Oracle BI Delivers.
A semantic layer (the Oracle BI Repository or RPD file) that abstracts the underlying database so business users work with familiar business terms, not raw table names.
Who Uses OBIEE?
OBIEE is most common in large enterprises — particularly those running Oracle ERP, Oracle Fusion Applications, or legacy Oracle databases. Finance teams, supply chain teams, and operations leaders have traditionally relied on OBIEE for standardized reporting across complex, multi-system data environments.
OBIEE and Oracle Analytics Cloud (OAC)
Oracle has shifted its investment toward Oracle Analytics Cloud (OAC), the cloud-native successor to on-premises OBIEE. Many organizations are currently evaluating migration paths from OBIEE to OAC or considering third-party modern analytics platforms as part of broader cloud transformation initiatives.
How Incorta Relates to OBIEE
Incorta is a modern data platform built for organizations that have outgrown the limitations of OBIEE and traditional BI tools. Where OBIEE relies on pre-aggregated data models and complex ETL pipelines, Incorta's Direct Data Mapping™ technology eliminates those layers entirely — delivering faster query performance directly against granular source data.
For Oracle customers specifically, Incorta provides native connectors to Oracle ERP, Oracle Fusion, and JD Edwards, making it a natural upgrade path from OBIEE. Finance and operations teams get the familiar ability to work with live, detailed data — without the performance bottlenecks and administrative overhead that come with maintaining OBIEE's semantic layer and aggregation pipelines.
If your organization is evaluating what comes next after OBIEE, Incorta is built to handle exactly that transition.
Also known as: OBIEE meaning | define OBIEE | OBIEE definition | what is OBIEE | Oracle OBIEE | OBIEE Oracle | Oracle Business Intelligence Enterprise Edition OBIEE

This post is part of Incorta's Innovate with Intelligence webinar series, a four-part exploration of agentic AI built for enterprise teams. From design patterns to evaluation to governance, each session tackles a different layer of what it takes to move AI from demo to production. Catch the full series here.
When organizations talk about AI security, the conversation usually centers on the model: Is it hallucinating? Could it be manipulated? What if it says something it shouldn't?
These are real concerns. But in Episode 4 of Incorta's 2026 AI webinar series, we made a more provocative argument: the model is not where your security problem starts - the data is.
Build proper data governance first, and AI safety becomes a much more tractable problem. Skip it, and no amount of model-level safeguarding will save you. Watch the full discussion here or keep reading for our step-by-step guide:
Before getting to solutions, it helps to name the problems clearly. Enterprise AI faces four major categories of risk:
Hallucination and Reliability: LLMs generate statistically plausible output, not verified truth. They can fabricate academic citations that look real, apply logic incorrectly to unfamiliar regulatory frameworks, or confidently produce wrong answers with no indication that anything is amiss. The impact: trust erosion and compliance risk.
The Black Box Problem: Traditional deep learning models lack transparent, traceable reasoning. The same question phrased slightly differently can produce a different answer with no clear explanation. This makes enterprise AI hard to audit, hard to debug, and nearly impossible to certify in high-stakes environments.
Contextual and Common Sense Gaps: Models rely on learned text patterns, not embodied understanding. Subtle rephrasing can trigger overcomplicated or incorrect reasoning. Performance outside tightly scoped workflows is fragile.
Prompt Injection and Security: This is the most acute risk for agentic systems. Models treat all text as potential instructions. They don't inherently distinguish between trusted system prompts and malicious user input. A carefully crafted document in a RAG pipeline can instruct the model to reveal sensitive data. A user who knows the pattern can attempt to override system instructions entirely.
Traditional AI models have a limited attack surface. They can hallucinate or produce toxic output, both of which are addressable with guardrails.
Agentic systems are fundamentally more exposed. Because agents can take actions (calling APIs, querying databases, triggering workflows), the consequences of a security failure aren't just a bad answer. They're a compromised system.
The threat landscape for enterprise AI agents includes prompt injection (malicious instructions embedded in user input or external data), goal hijacking (altering the objective the agent is working toward), tool and API manipulation (exploiting improperly scoped access), data poisoning (corrupting the information the agent retrieves), memory poisoning (tampering with the agent's stored context between sessions), privilege escalation (an agent assigned to the wrong access group gains more than it should), and cascading failure (in multi-agent systems, a compromised agent overwhelms others through excessive requests).
Despite this list, these risks are manageable with the right architecture.
The principle is straightforward. Before you can trust AI output, you need to trust the data it's reasoning over. If an agent has access to everything, it can expose everything. Governance at the data layer is what prevents that.
In Incorta's implementation, this means four interconnected pillars:
One underappreciated governance mechanism is the semantic layer, the intermediate layer between raw technical data and the business user experience.
Rather than exposing database tables directly to an AI agent, the semantic layer presents curated, labeled views. These views can be configured with clear column labels and descriptions in any language, explicit enablement or disablement of AI features per view, and metadata enrichment that helps the model understand business context rather than just schema structure.
This means governance isn't just about what data the agent can access. It's about how well the agent understands what it's looking at. Poorly labeled metadata leads to poor AI output. Well-governed metadata leads to answers that business users can trust.
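As a rough sketch of what that kind of governed metadata might look like (a hypothetical structure, not Incorta's actual configuration format):

```python
# Hypothetical sketch: a curated view exposes labeled, described columns and an
# explicit flag that controls whether AI features may use it.

curated_view = {
    "name": "open_purchase_orders",
    "ai_enabled": True,  # explicit opt-in per view
    "description": "Open POs with supplier and delivery status, refreshed hourly.",
    "columns": {
        "po_number":    {"label": "PO Number",     "description": "Unique purchase order identifier."},
        "supplier":     {"label": "Supplier",      "description": "Vendor name from the supplier master."},
        "promise_date": {"label": "Promise Date",  "description": "Date the supplier committed to deliver."},
        "status":       {"label": "PO Status",     "description": "OPEN, PARTIALLY_RECEIVED, or CLOSED."},
    },
}

def views_exposed_to_agent(views: list[dict]) -> list[dict]:
    """Only AI-enabled views are ever presented to the agent."""
    return [v for v in views if v["ai_enabled"]]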
Governance without visibility is incomplete. A secure AI deployment needs a full audit trail: who asked what, when, how the agent responded, and how users rated the experience.
This serves two purposes. First, it enables compliance. You can demonstrate exactly what the agent did and why. Second, it creates a feedback loop for improvement. Thumb-down signals, repeated questions, and session drop-offs all reveal where the agent is failing users, giving teams a prioritized list of what to fix.
The principle of least privilege, where every user and agent gets only the minimum access necessary to perform their task, applies at every layer. Continuous monitoring ensures that what's true today stays true as the system evolves.
The session's central argument: treating AI security as primarily a model problem is the wrong frame. Models can be tuned, guardrailed, and monitored, but if the underlying data is ungoverned, those efforts are built on sand.
The organizations that will deploy enterprise AI with confidence are the ones that treat data governance as a prerequisite, not an afterthought. Secure the foundation. Define clear access boundaries. Audit continuously. Then trust the AI to operate within those boundaries. Governance isn't something you build after the agent is ready - it's what makes the agent ready.

This post is part of Incorta's Innovate with Intelligence webinar series, a four-part exploration of agentic AI built for enterprise teams. From design patterns to evaluation to governance, each session tackles a different layer of what it takes to move AI from demo to production. Catch the full series here.
There's a moment every AI team knows well. The demo goes perfectly, the agent answers everything correctly, stakeholders are impressed... Then you deploy it.
Users start phrasing questions in unexpected ways. Edge cases appear that nobody thought to test. A prompt tweak that improves one query quietly breaks three others. And suddenly, the team is back to manually checking outputs, hoping nothing slipped through.
In Episode 3 we tackled this problem head-on: how do you move from vibes-based testing to a rigorous, systematic evaluation framework for enterprise AI?
Watch the full episode, or keep reading for our step-by-step approach.
Traditional software is deterministic. You write a test, it passes or fails, and you know exactly why. Evaluation is largely static.
Agentic AI is fundamentally different. The same question, phrased slightly differently, can produce a different answer with no clear explanation. A change that sharpens performance on finance queries might silently break supply chain queries. And you won't know until a user complains, or worse, until a wrong answer causes a real business problem.
The shift required isn't just technical. It's philosophical: stop treating evaluation as a manual chore and start treating it as a governed data workload.
The foundation of any serious evaluation framework is a golden dataset: a curated, governed set of test cases that lives in a structured table, not a loose CSV on someone's laptop.
But the contents of that dataset matter as much as the format. A stratified approach covers three layers:
Vague metrics like "helpfulness" are hard to act on. Instead, measure the technical relationships between data components:
1. Context Recall: Did the agent retrieve the correct rows from the database? If the right data isn't in the context, the answer can't be right, no matter how well the LLM reasons.
2. Faithfulness: Is every claim in the agent's response actually supported by the retrieved data? This is the anti-hallucination check. An agent that sounds confident while making things up is worse than one that admits uncertainty.
3. Answer Relevance: Did the agent answer the specific question asked, or did it just summarize the data broadly and call it done?
Track these three signals independently. When something fails, you'll know immediately whether the problem is in retrieval or in reasoning, which tells you exactly where to fix it.
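A minimal sketch of tracking the three signals independently might look like the following; the simple set-overlap scoring is a stand-in for the embedding- or LLM-based scorers real evaluation frameworks use:

```python
# Minimal sketch: score the three signals separately so a failure points you
# at retrieval or at reasoning, not just at "the answer was bad".

def context_recall(retrieved_rows: set[str], expected_rows: set[str]) -> float:
    """Did retrieval surface the rows the golden answer depends on?"""
    return len(retrieved_rows & expected_rows) / len(expected_rows) if expected_rows else 1.0

def faithfulness(answer_claims: list[str], retrieved_rows: set[str]) -> float:
    """Share of claims in the answer that are supported by retrieved data."""
    supported = sum(1 for claim in answer_claims if claim in retrieved_rows)
    return supported / len(answer_claims) if answer_claims else 1.0

def answer_relevance(answer: str, question_keywords: set[str]) -> float:
    """Crude proxy: did the answer address the terms the question asked about?"""
    hits = sum(1 for kw in question_keywords if kw.lower() in answer.lower())
    return hits / len(question_keywords) if question_keywords else 1.0

# Scoring one golden test case keeps the failure mode visible:
# low recall -> fix retrieval; low faithfulness -> fix grounding/reasoning.
scores = {
    "context_recall":   context_recall({"row_1", "row_2"}, {"row_1", "row_2", "row_3"}),
    "faithfulness":     faithfulness(["row_1", "row_4"], {"row_1", "row_2"}),
    "answer_relevance": answer_relevance("Q3 revenue was up 4%", {"revenue", "Q3"}),
}
print(scores)
```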
You can't have humans review thousands of test outputs every time you update a prompt. The solution: use a highly capable LLM to evaluate your agent's outputs against a strict rubric.
A well-designed judge prompt produces a structured JSON response with both a numerical score and a reasoning string, converting qualitative text into hard integers you can aggregate, average, and graph over time.
Tools like Promptfoo offer out-of-the-box assertions for common metrics (factual accuracy, format validity, keyword presence, LLM-as-judge scoring) without requiring teams to build evaluation logic from scratch.
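Stripped of any particular tool, the judge pattern itself is small. In the sketch below, call_llm is a placeholder for whatever model client you use, and the canned response simply keeps the example runnable:

```python
import json

# Hand-rolled sketch of the LLM-as-judge pattern (tool-agnostic).

JUDGE_PROMPT = """You are a strict evaluator. Given a question, the retrieved data,
and the agent's answer, return ONLY JSON: {{"score": <0-5 integer>, "reasoning": "<one sentence>"}}.

Question: {question}
Retrieved data: {context}
Agent answer: {answer}"""

def call_llm(prompt: str) -> str:
    # Placeholder: a canned response so the sketch runs end to end.
    return '{"score": 4, "reasoning": "Answer is grounded but omits one requested breakdown."}'

def judge(question: str, context: str, answer: str) -> dict:
    raw = call_llm(JUDGE_PROMPT.format(question=question, context=context, answer=answer))
    result = json.loads(raw)                 # structured JSON, not free text
    result["score"] = int(result["score"])   # a hard integer you can aggregate and trend
    return result

print(judge("What was Q3 freight spend?", "freight_spend_q3 = 1.2M", "Q3 freight spend was $1.2M."))
```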
Here's where the architecture pays off. In a traditional setup, testing a prompt change means running evaluations, exporting to CSV, uploading to a BI tool, and waiting for a dashboard refresh. Context switching at every step.
When evaluation infrastructure lives in the same ecosystem as the agent (as Incorta's implementation does, using notebooks and dashboards on shared data), the loop collapses. Tweak a parameter, hit run, see the quality dashboard update instantly.
That speed of iteration is what separates teams that improve quickly from teams that stay stuck.
Evaluation results shouldn't sit in an isolated data table. They should be the pulse of the product: visible, queryable, and directly tied to decisions.
A well-designed performance dashboard answers the questions stakeholders actually care about. Is the new model accurate enough to justify its higher cost? Did the latest prompt change improve performance or introduce regressions? Which specific test cases are failing most frequently, and what does that tell us to fix next?
Critically, this visibility should require zero manual effort. Test suites run automatically, nightly, or triggered by any model deployment or semantic layer change. Results push to the dashboard automatically. The health you see always reflects the current state of the system.
There's a gap that even a rigorous golden dataset can't close: the difference between curated test inputs and real user behavior.
In the lab, inputs are clean and controlled. In production, users are messy, impatient, and unpredictable. Closing this gap requires online observability: watching what actually happens when real users interact with the agent.
The most valuable signals are often implicit. A user who interrupts the agent mid-generation is usually signaling a relevance failure. A user who asks the exact same question twice in a row didn't get what they needed the first time. High query volume paired with low active user count typically means people are getting stuck and retrying.
These silent signals are a rich source of data for improving agent performance. Every real-world interaction, including the ones the agent struggles with, is a high-value data point.
Feed those real-world failures back into your golden dataset. Use them to upgrade the system. This is the evaluation flywheel: production traffic doesn't just expose problems, it actively makes the agent smarter over time.
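A hedged sketch of that flywheel, using hypothetical session-log fields, might mine production traffic for those implicit signals and surface them as golden-dataset candidates:

```python
from collections import Counter

# Illustrative sketch (hypothetical log fields): flag interrupted answers,
# thumbs-down ratings, and repeated questions as golden-dataset candidates.

sessions = [
    {"user": "a", "question": "open POs past promise date",  "interrupted": True,  "thumbs_down": False},
    {"user": "a", "question": "open POs past promise date",  "interrupted": False, "thumbs_down": False},
    {"user": "b", "question": "Q3 freight variance by lane", "interrupted": False, "thumbs_down": True},
]

def golden_candidates(logs: list[dict]) -> list[str]:
    repeats = Counter(f"{s['user']}|{s['question']}" for s in logs)
    candidates = set()
    for s in logs:
        repeated = repeats[f"{s['user']}|{s['question']}"] > 1  # asked again = likely failure
        if repeated or s["interrupted"] or s["thumbs_down"]:
            candidates.add(s["question"])
    return sorted(candidates)

print(golden_candidates(sessions))  # feed these back into the golden dataset
```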
One research finding worth keeping front of mind: once a single agent crosses roughly 45% accuracy on a task, adding more agents to the system often makes overall performance worse. Errors cascade. Coordination overhead grows. System performance slips.
The implication: build simple, well-evaluated agents before layering complexity. Every added layer amplifies both the capability and the failure modes. Evaluation is what keeps you honest about which one is growing faster.
Deploying an AI agent is just the starting point. The teams that succeed in production are the ones that treat evaluation as a first-class engineering discipline: structured test cases, measurable metrics, automated scoring, continuous monitoring, and a feedback loop that turns real user behavior into system improvements.

This post is part of Incorta's Innovate with Intelligence webinar series, a four-part exploration of agentic AI built for enterprise teams. From design patterns to evaluation to governance, each session tackles a different layer of what it takes to move AI from demo to production. Catch the full series here.
The core argument is simple: almost every problem an AI team encounters has been seen before. Design patterns capture the accumulated wisdom of those who've solved these problems already - the best practices, the common pitfalls, the right tool for the right job.
The goal isn't to memorize patterns. It's to recognize which pattern fits the problem in front of you, and then benefit from everything that's already been figured out.
With that framing, here are the four patterns from this session.
Instead of writing one giant, monolithic prompt and hoping the LLM does everything correctly, prompt chaining breaks a complex problem into a sequence of smaller, focused subproblems - each with its own targeted prompt.
The output of each step feeds into the next, creating a pipeline.
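A minimal sketch of a two-step chain (call_llm is a placeholder for any model client) looks like this:

```python
# Prompt chaining sketch: two focused prompts instead of one giant prompt,
# with step 1's output feeding step 2.

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"<model output for: {prompt[:40]}...>"

def extract_entities(document: str) -> str:
    prompt = f"Extract every vendor name and invoice amount from this text:\n{document}"
    return call_llm(prompt)

def validate_entities(extracted: str) -> str:
    prompt = f"Check these extracted values for missing amounts or malformed names:\n{extracted}"
    return call_llm(prompt)

# The pipeline: each step is small, testable, and debuggable on its own.
doc = "Invoice 10021 from ACME Corp for $48,300 ..."
validated = validate_entities(extract_entities(doc))
print(validated)
```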
Why it works:
Where it fits best: Information processing workflows with predefined steps: data extraction, validation pipelines, structured information retrieval from unstructured sources.
Where it falls short: It doesn't handle situations where the agent needs to make dynamic decisions or take different branches based on input. For that, you need the next pattern.
Routing is the pattern for adaptive, dynamic workflows. Rather than following a fixed sequence, the LLM evaluates the input and decides which path, tool, or pipeline to invoke next.
Think of it as the agent reading the situation and choosing the right response - not having the response hardcoded in advance.
Sample use cases:
Routing logic can take different forms: rule-based if/else logic, a secondary LLM that decides the path, or a trained ML classifier.
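A sketch of the simplest variant, rule-based routing with placeholder handlers, shows the shape of the pattern:

```python
# Rule-based routing sketch: the router inspects the input and picks a handler.
# In practice the router could also be a secondary LLM or a trained classifier.

def handle_finance(q: str) -> str: return f"[finance pipeline] {q}"
def handle_supply_chain(q: str) -> str: return f"[supply chain pipeline] {q}"
def handle_general(q: str) -> str: return f"[general Q&A] {q}"

ROUTES = {
    "finance":      (["invoice", "gl", "revenue", "variance"], handle_finance),
    "supply_chain": (["inventory", "po", "supplier", "lead time"], handle_supply_chain),
}

def route(question: str) -> str:
    q = question.lower()
    for _, (keywords, handler) in ROUTES.items():
        if any(kw in q for kw in keywords):
            return handler(question)
    return handle_general(question)

print(route("Why did freight variance spike on invoice 10021?"))
```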
The key insight: routing is what makes an agent feel intelligent and responsive rather than mechanical.
When a workflow contains components that don't depend on each other's outputs, there's no reason to run them sequentially. Parallelization means executing those independent components at the same time, cutting latency significantly.
A simple example: if your pipeline needs to search two data sources and then combine the results, you don't need to search source one, wait, then search source two. Both searches can run simultaneously, and the final step combines their outputs once both are done.
Most agentic frameworks handle the orchestration automatically through asynchronous execution - you kick off the tasks and let the framework manage the timing.
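A minimal asyncio sketch (with placeholder search functions standing in for real data-source calls) shows the idea:

```python
import asyncio

# Parallelization sketch: two independent searches run concurrently,
# and the results are combined once both finish.

async def search_source_one(query: str) -> list[str]:
    await asyncio.sleep(1)  # simulated I/O latency
    return [f"source1 hit for '{query}'"]

async def search_source_two(query: str) -> list[str]:
    await asyncio.sleep(1)
    return [f"source2 hit for '{query}'"]

async def combined_search(query: str) -> list[str]:
    # Both searches run at the same time; total wait is ~1s instead of ~2s.
    results_one, results_two = await asyncio.gather(
        search_source_one(query), search_source_two(query)
    )
    return results_one + results_two

print(asyncio.run(combined_search("open POs past promise date")))
```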
The payoff: Faster pipelines, more responsive agents, better user experience, without changing any of the underlying logic.
Parallelization pairs naturally with chaining and routing to build workflows that are both sophisticated and efficient.
This is arguably the most transformative pattern of the four. Tool use enables LLMs to reach beyond their training data and interact with the outside world - databases, APIs, code execution environments, other agents, even physical systems.
Without tools, an LLM is limited to what it learned during training. With tools, it can:
How it works in practice:
This loop - decide, call, observe, decide - is what gives modern agents their real capability. Tool use isn't an add-on. It's what makes agents useful.
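Stripped to its skeleton, and with the model decision and tools stubbed out as placeholders, the loop looks something like this:

```python
# Tool-use sketch: decide -> call -> observe -> decide. Real agents use a
# model's native function-calling and real APIs; these are placeholders.

TOOLS = {
    "query_database": lambda args: f"rows for {args}",
    "send_alert":     lambda args: f"alert sent: {args}",
}

def decide_next_action(question: str, observations: list[str]) -> dict | None:
    # Placeholder for the LLM's decision; returns a tool call or None when done.
    if not observations:
        return {"tool": "query_database", "args": question}
    return None

def run_agent(question: str) -> list[str]:
    observations: list[str] = []
    while (action := decide_next_action(question, observations)) is not None:
        result = TOOLS[action["tool"]](action["args"])  # call the tool
        observations.append(result)                     # observe, then decide again
    return observations

print(run_agent("How many open POs are past their promise date?"))
```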
After the four patterns, Abd Rahman walked through Retrieval-Augmented Generation (RAG) - one of the most practical techniques for reducing hallucination in production environments.
The core problem RAG solves: LLMs don't know what they don't know. Ask them about your internal HR policy, your proprietary data, or a recent event, and they'll either hallucinate or admit ignorance. RAG fixes this by giving the LLM a trusted knowledge base to draw from at query time.
Offline (setup):
Online (at query time):
We showed a demo of a practical implementation: BI engineers save verified question-and-SQL pairs as reference anchors. When a business user asks a question, the system semantically matches it to the closest reference question - and uses the verified SQL query as the foundation for the answer.
Crucially, it's not a text match. When the same question was asked with a different filter (changing "Oregon" to "Washington"), the system recognized the semantic similarity, reused the reference query, and updated only the relevant filter - leaving everything else intact.
The result: business users get answers grounded in queries that a BI expert already validated, and trust is built in.
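A hedged sketch of the underlying idea (not the demo's actual implementation; difflib similarity stands in for real embedding-based matching, and the filter value is passed in explicitly for simplicity):

```python
import difflib

# Sketch: verified question/SQL pairs act as anchors. A new question is matched
# to the closest anchor, the expert-validated SQL is reused, and only the
# filter value is swapped in.

REFERENCE_PAIRS = [
    {
        "question": "What were total sales in Oregon last quarter?",
        "sql": "SELECT SUM(net_amount) FROM sales WHERE state = '{state}' AND quarter = 'last'",
        "filter_slot": "state",
    },
]

def answer(question: str, filter_value: str) -> str:
    best = max(
        REFERENCE_PAIRS,
        key=lambda p: difflib.SequenceMatcher(None, question.lower(), p["question"].lower()).ratio(),
    )
    # Reuse the validated SQL; only the relevant filter changes.
    return best["sql"].format(**{best["filter_slot"]: filter_value})

print(answer("What were total sales in Washington last quarter?", "Washington"))
```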
One subtle but important point: how you chunk documents significantly affects RAG quality. Blindly splitting at fixed word counts can break semantic context mid-thought — the model ends up with fragments that don't make sense in isolation. Smart chunking strategies (semantic boundaries, paragraph-aware splits) are essential for reliable retrieval.
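A small sketch of paragraph-aware chunking, as opposed to blind fixed-size splitting, might look like this:

```python
# Paragraph-aware chunking sketch: chunks close on paragraph boundaries rather
# than mid-sentence; a paragraph larger than the budget becomes its own chunk.

def paragraph_chunks(text: str, max_words: int = 120) -> list[str]:
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))  # close the chunk at a semantic boundary
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

doc = "Policy overview...\n\nEligibility rules...\n\nException handling..."
for i, chunk in enumerate(paragraph_chunks(doc)):
    print(i, chunk[:40])
```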
These four patterns - prompt chaining, routing, parallelization, and tool use - are the building blocks of every production-grade agentic system. And RAG is the practical answer to one of enterprise AI's most persistent problems: getting an LLM to tell you the truth about your own data.
Everyone is talking about AI right now. Boards and CEOs want to see movement, and CIOs are under pressure to show something concrete.
We recently acquired Layout.dev, an AI application-building platform that’s designed to accelerate how organizations create applications and agentic workflows. For me, this isn’t about reacting to what’s happening right now. It’s about where things are going. Here’s how I think about it.
Business users don’t just want another dashboard. They want to connect to the systems they use every day (SAP, Workday, and ServiceNow) and get answers without waiting months for IT projects and pipelines.
For years, the interface moved from writing SQL to drag-and-drop. The next step is obvious. You type what you need, and the system builds it. You don’t click through screens. You say, “Connect these sources. Show me this. Add a filter. Build this for me.” That’s the direction.
That’s why Layout.dev matters. It changes how you interact with the platform.
But this is also where companies get into trouble.
AI works well when you’re drafting documents or generating code, because a human reviews the output. It breaks down when you try to run the core of the business on it.
If you can’t trust the system you use to close your books, how can you trust an automated system to act on your data?
Closing the books is the simplest test. Can you produce numbers you would report to the government? If the answer is no, adding AI won’t fix it.
AI is an accelerator. It makes you faster. But AI is trusted only if it works on trusted data. If the foundation isn’t solid, you’re just building faster on top of a problem.
If anything, the acquisition reflects a conviction I’ve held for years: AI will not fix a weak data foundation. It will only expose it.
Right now, companies move quickly to build demos and connect a model to some data. In two weeks, they can show something that at first glance looks impressive. That’s not the same as running a business on it.
When we talk about enterprise AI, it’s worth asking a simple question. Say that you’ve invested millions in your data platform: Snowflake, Databricks, Fabric, whatever it is. These are smart systems built by smart engineers. But can you use any of it to close your books?
Closing the books requires numbers that reconcile. Numbers you would submit to the IRS. If your current system can’t reliably do that, then adding AI on top won’t magically fix the underlying problem.
“Good Enough?”
The first problem is the quality of the data.
Most enterprise data environments weren’t designed to operate directly on detailed, highly relational source data at scale. The engines underneath were built decades ago. ETL pipelines exist because those engines can’t handle the raw complexity of enterprise systems directly. So, the data gets simplified just to make the engine run. Once you do that, you’ve already lost part of the context.
When you reshape and aggregate data, you lose detail and eventually start to see mismatches. Sometimes they’re small and might be acceptable. But “good enough” won’t cut it in areas that don’t tolerate approximation like finance, supply chain, or banking.
That’s why many AI initiatives look promising at the surface but then stall when they reach the core of the business. Writing emails, drafting documents, and generating code — those are productive uses. A human still reviews the output, and the error tolerance is manageable.
But when you are reconciling invoices, matching contracts, validating currency rates, or recognizing revenue, the answers must be precise. If the underlying data is incomplete or simplified, the model will produce an answer anyway. If something is missing, it still gives you an answer. That’s a problem in finance.
Building a Foundation
The second problem is control. Large language models are generalists. They won’t understand your company’s specific financial rules or contract clauses. If you expose enterprise data to a model without constraining it, you’re relying on probability where determinism is required.
Our view at Incorta has always been that the foundation should come first. We operate directly on live enterprise data using Direct Data Mapping™. Instead of rebuilding everything into aggregates before it becomes usable, we work with the data as it exists in systems of record, such as SAP, Oracle, Workday, and others, preserving detail and relationships.
When we apply AI, we don’t just hand the question to an LLM and expect it to guess the right tables. Incorta first narrows the data down to the exact tables and columns. Then we ask the model to work inside that. That way, the model is operating within the real structure of your enterprise data, not inventing context.
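As a generic illustration of that scoping idea (not Incorta's internal mechanism; the table names and keyword matching below are hypothetical), the pattern is to select the relevant tables and columns first, then constrain the prompt to that subset:

```python
# Sketch: narrow the schema to the relevant tables/columns, then ask the model
# to work only inside that subset.

SCHEMA = {
    "ap_invoices":     ["invoice_id", "vendor_id", "amount", "currency", "po_number"],
    "purchase_orders": ["po_number", "vendor_id", "agreed_rate", "currency"],
    "gl_postings":     ["journal_id", "account", "amount", "period"],
}

TABLE_KEYWORDS = {
    "ap_invoices":     ["invoice"],
    "purchase_orders": ["purchase order", "agreed rate"],
    "gl_postings":     ["journal", "gl ", "account"],
}

def narrow_schema(question: str, schema: dict) -> dict:
    q = question.lower()
    return {t: schema[t] for t, kws in TABLE_KEYWORDS.items() if any(kw in q for kw in kws)}

def build_prompt(question: str) -> str:
    subset = narrow_schema(question, SCHEMA)
    return (
        "Answer using ONLY these tables and columns:\n"
        + "\n".join(f"- {t}: {', '.join(cols)}" for t, cols in subset.items())
        + f"\n\nQuestion: {question}"
    )

print(build_prompt("Which invoices exceed the agreed rate on their purchase order?"))
```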
This is exactly why Layout.dev matters.
For decades, the interface evolved from SQL to drag-and-drop dashboards. Now the interface is shifting again. Instead of building reports manually, users want to describe what they need: connect these systems, monitor these invoices, flag anomalies, trigger actions, and have the system build it.
Layout.dev gives us that interaction model. It allows business users and developers to define workflows and agents directly on top of live enterprise data.
But here’s the key: the interface can change without compromising the foundation. The easier it becomes to build an agent, the more important it is that the data it acts on is complete and governed.
Look at something simple like invoices. Today, validating an invoice means chasing data across systems (invoices, contracts, rate tables, and currency rules) and manually reconciling the pieces. If you have hundreds of them, it takes a lot of time. An agent can automate that. But if the underlying data doesn’t reconcile across systems, you’re automating error at scale.
AI is a multiplier. It multiplies precision, or it multiplies error. If the foundation is accurate and trusted, AI can increase speed and efficiency. If the foundation is fragmented or simplified beyond recognition, AI will expose those weaknesses more quickly.
So before making AI mandates across the enterprise, ask yourself whether your current data platform produces numbers you would confidently report to the government. If not, that is the place to focus first.
AI will absolutely change how we interact with systems. That’s why we invested in Layout.dev.
To learn more, read our official press release here.

This post is part of Incorta's Innovate with Intelligence webinar series, a four-part exploration of agentic AI built for enterprise teams. From design patterns to evaluation to governance, each session tackles a different layer of what it takes to move AI from demo to production. Catch the full series here.
Here's what the session covered (and why it matters):
Large language models (LLMs) have come a long way from their early days as chatbots. While those first chatbot applications sparked widespread excitement in AI, they came with real limitations - narrow context windows, static training data, and an inability to interact with the world beyond generating text.
The evolution since then has moved through four distinct phases:
This fourth phase - agentic AI at scale - is where things get both powerful and complex. And that complexity is exactly why design patterns matter.
"When we get a new problem, we don't have to come up with a new architecture from scratch. We can just find the closest design pattern, use it, and benefit from all the wisdom of everyone who's applied it before."
Autonomy is the goal. But in the enterprise world - especially in finance, legal, and compliance - unchecked autonomy is a liability.
We frame the Human in the Loop (HITL) pattern simply: use it when the cost of an error is higher than the need for speed.
The pattern works like this:
This transforms the human's role from passive requester to active expert in the loop.
1. The Ambiguity Check: When a request is technically answerable but semantically vague — say, "show me total sales for my favorite product category" — rather than guessing (and potentially hallucinating), the agent surfaces an input prompt asking for clarification. Once the user responds, the workflow resumes with accuracy.
2. The Strategic Dead End: When an agent hits a genuine wall - like being asked for "the longest flight distance" in a retail dataset that contains no flight data - it doesn't crash or fabricate a number. It analyzes the situation, presents the user with strategic options (pivot the query, use a different dataset, cancel the request), and continues from where it stopped once a decision is made.
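A minimal sketch of the ambiguity-check flow, with input() standing in for however the product actually surfaces the clarification prompt:

```python
# Human-in-the-loop sketch: instead of guessing, the agent pauses, asks for
# clarification, and resumes the workflow once the human responds.

def is_ambiguous(request: str) -> str | None:
    # Toy check: phrases the agent cannot resolve from data alone.
    if "favorite" in request.lower():
        return "Which product category should I treat as your favorite?"
    return None

def run_request(request: str) -> str:
    clarification = is_ambiguous(request)
    if clarification:
        answer = input(clarification + " ")  # pause: hand control back to the human
        request = f"{request} (category = {answer})"
    return f"Running query for: {request}"

print(run_request("Show me total sales for my favorite product category"))
```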
The key shift: the agent treats the human as a partner, not just a prompter.
Once trust and safety are handled by HITL, the next challenge is scale. And a single agent, no matter how capable, can't do everything.
Ask one agent to write code, do research, manage a project, and handle exceptions simultaneously - and you'll get confusion and mistakes. A bigger agent isn't the answer - it's a team of specialized ones.
Agent-to-Agent Communication (A2A) is the open standard protocol that makes this teamwork possible. Importantly, it's platform-agnostic: an agent built by one company can collaborate with an agent from a completely different company, regardless of which underlying AI model they use.
1. Discovery: Like looking up a phone book - before an agent asks for help, it queries a registry to find out which other agents are available.
2. Identity: Each agent shares an "agent card" - a structured introduction listing its name, capabilities, and the specific tasks it can perform.
3. Communication: Agents begin assigning tasks to each other and collaborating, either synchronously (for quick tasks) or asynchronously (for heavy, long-running jobs).
4. Security: Every agent interaction in an enterprise context is encrypted and logged, producing a full audit trail of who did what and when - essential for regulated industries.
Under the hood, A2A runs on standard JSON-RPC. A request is a simple, platform-independent message (e.g., "What are the total sales for bikes?"). The response isn't a simple chat reply - it's a structured data artifact that includes the answer and the evidence: the logic used, the SQL generated, and a full record of how the result was produced.
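Purely as an illustration of that shape (the method and field names below are hypothetical, not the official A2A schema):

```python
# Illustrative only: the general shape of a JSON-RPC 2.0 exchange between
# agents. The method name and the fields inside "params"/"result" are
# hypothetical, not the official A2A specification.

request = {
    "jsonrpc": "2.0",
    "id": "42",
    "method": "ask",  # hypothetical method name
    "params": {"message": "What are the total sales for bikes?"},
}

response = {
    "jsonrpc": "2.0",
    "id": "42",
    "result": {
        "answer": "Total bike sales: $4.2M",
        # The artifact carries the evidence, not just the reply.
        "artifact": {
            "sql": "SELECT SUM(net_amount) FROM sales WHERE category = 'Bikes'",
            "reasoning": "Matched 'bikes' to the product category dimension.",
        },
    },
}
```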
This matters for enterprise adoption: you're not locked into a single interface. Because the protocol is standard JSON, any application in your tech stack can programmatically access these agents.
These two patterns complement each other naturally. Human in the Loop ensures that trust and safety are never sacrificed for speed. Agent-to-Agent Communication ensures that complexity doesn't become a bottleneck for scale. Together, they form a foundation for agentic AI that enterprises can actually rely on.
The session's key takeaway: agentic AI is about designing systems that know when to pause, when to ask for help, and how to collaborate effectively.
The supply chain as we know it is dead. Supply chain tech has drastically shifted over decades from basic, reactive dashboards, to command centers with digital twins that help you decide what to do next, to fully autonomous platforms that make (and execute!) decisions on their own.
The December 2025 Gartner® report, "From Insights to Guided Actions: The Visibility Journey Toward Supply Chain Orchestration Platforms", maps out this evolution and what it means for supply chain leaders.
But here's what most leaders miss: none of these advanced capabilities work without the right data foundation. Most organizations simply aren’t ready.
We feel that Gartner® breaks down the evolution of supply chain technology into four major phases:
1. Business Intelligence (Early 2000s): Organizations built traditional data warehouses using mainly structured enterprise data. BI platforms provided insights through descriptive analytics (what happened) and diagnostic analytics (why it happened). While valuable, these systems were retrospective and couldn't enable real-time decision-making.
2. Control Towers (2010s): With the rise of IoT and near-real-time data, companies deployed domain-specific control towers focused on individual functions like logistics and transportation. These tools provided visibility and alerts but remained functionally siloed. They offered "see, understand, act, learn" capabilities within their narrow scope but couldn't support cross-functional decision making.
3. Command Centers (2020s): The evolution toward command centers represented a significant leap. These frameworks connect data from multiple internal and external sources, providing cross-functional insights. Crucially, they leverage a digital supply chain twin to enable simulation, optimization, and response capabilities. However, decisions and execution still remain human-driven.
4. Orchestration Platforms (2030 and beyond): The next frontier: supply chain orchestration platforms that go beyond insights to actually prescribe and perform decisions through existing operational systems. With capabilities like knowledge graphs, generative AI, and agentic AI, these platforms will automate decision-making, rather than just augmenting it.
At the heart of this evolution is the digital supply chain twin - defined by Gartner® as "a digital representation of the physical supply chain that can be used to create plans and make decisions."
And, according to Gartner®, a true digital supply chain twin must include seven essential capabilities:
"A digital supply chain twin is neither a set of data tables in a data warehouse (based on structured data) nor a data lake (based on structured and unstructured data), nor a single data model in an SCP/SCM suite solution."
In other words, you can't fake it. You can't build a true digital twin on top of slow, batch-processed data or fragmented point solutions. You need real-time, granular operational data that's properly associated and continuously refreshed.
Most organizations have accumulated a patchwork of supply chain technologies: ERP systems, warehouse management systems, transportation management systems, planning tools, and various point solutions. Each generates valuable data, but that data typically lives in silos.
Traditional approaches to integration - extracting data from source systems, transforming it, and loading it into a data warehouse or lake - create unavoidable lag. By the time your data is ready for analysis, it no longer reflects current reality.
This lag becomes increasingly problematic as you move up the maturity curve:
Gartner® emphasizes that "it all starts with building the foundation: a consistent data and applications architecture." Without that foundation, investments in advanced analytics, AI, and orchestration capabilities will underdeliver.
Incorta's Direct Data Mapping™ technology delivers exactly what Gartner® identifies as essential - real-time, granular, connected operational data that can power digital twins, AI models, and autonomous decision-making.
Real-time data for real-time decisions: Incorta connects directly to your ERP, WMS, TMS, and other source systems, delivering live data without the lag that traditional ETL creates. This is the "real-time transactions and events from granular data" that Gartner® describes as foundational for digital supply chain twins.
Preserves relationships across your entire supply chain: Unlike traditional approaches that break apart your data during transformation, Incorta maintains the connections between orders, inventory, capacity, suppliers, and customers. This gives you the "correlations and configurations between planning, transaction, and event data" needed for command centers to simulate scenarios and optimize decisions.
AI-ready from day one: Advanced analytics and AI need detailed, granular transactions with full context - not pre-aggregated summaries. Incorta delivers exactly this, enabling the "probability distributions derived from transactional analysis" that Gartner® identifies as critical for digital twins.
Built to scale with your ambitions: Whether you're launching your first control tower or planning for autonomous orchestration, Incorta grows with you. Connect new data sources and expand capabilities without ripping and replacing your infrastructure - positioning you to adopt command centers, digital twins, and eventually autonomous orchestration as these capabilities mature.
If you're evaluating supply chain technology investments, Gartner® offers a useful lens for assessment:
Where are you today? Most organizations are somewhere between early-stage control towers and emerging command center initiatives. Few have truly integrated digital twins that span end-to-end operations.
What's holding you back? Often, the constraint isn't a lack of advanced analytics tools or AI capabilities; it's the quality, speed, and integration of the underlying data. That gap breeds distrust in your data and hesitation to make faster decisions.
What foundation do you need? Can you access real-time operational data? Can you associate that data across functions? Can you enrich it with external signals? If not, advanced tools will struggle to deliver any value.
How do you future-proof? Build a data foundation that supports your current needs while enabling future capabilities. A platform that provides real-time access to all operational data, supports advanced analytics and AI, and can incorporate external data sources will serve you whether you're deploying your first command center or planning for autonomous orchestration.
Preparing today for the supply chain of tomorrow
Supply chain orchestration platforms are an ambitious vision that Gartner® acknowledges is still "a prospective concept with considerable hype in the solution market for the time being."
Autonomous decision making and execution will require technology and organizational change: "Autonomous implementation of supply chain decisions will require reorganizing existing supply chain personas, processes, operating models, and technologies."
But organizations that invest today in modern data architectures that provide real-time, granular, connected operational data will be positioned to adopt advanced capabilities as they mature. Those that continue to build on legacy data infrastructure will find themselves constantly constrained by data lag, silos, and quality issues.
Understand exactly where your supply chain technology fits in the evolving landscape - and how to build the data foundation today you need to win tomorrow.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and MAGIC QUADRANT is a registered trademark of Gartner, Inc. and/or its affiliates and are used herein with permission. All rights reserved.
Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Safety stock is buffer inventory held to protect against demand variability and supply uncertainty. Most companies set safety stock once per quarter using outdated formulas, resulting in $20-50 million in unnecessary inventory for a typical $500 million supply chain. This guide explains how safety stock works, why traditional approaches fail, and how to optimize buffer inventory for today's volatile supply chains.
Safety stock (also called buffer stock or reserve inventory) is extra inventory held beyond expected demand to protect against uncertainty. It serves as insurance against:
Safety stock sits in addition to cycle stock (inventory needed to meet average demand between replenishments) and pipeline stock (inventory in transit).
Example: If you expect to sell 1,000 units during a 2-week lead time, you might hold 1,000 units of cycle stock plus 200 units of safety stock to cover potential demand spikes or delivery delays.
Traditional safety stock formulas account for demand variability and desired service level. The basic formula is:
Safety Stock = Z × σ × √L
Where:
Z = the z-score for your target service level (roughly 1.65 for 95%)
σ = the standard deviation of demand per period
L = the replenishment lead time, in the same periods
When lead time itself is variable, the extended formula accounts for both sources of uncertainty:
Safety Stock = Z × √(L × σd² + d² × σL²)
Where:
L = the average lead time
σd = the standard deviation of demand per period
d = the average demand per period
σL = the standard deviation of lead time
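As a concrete illustration, here is a minimal Python sketch of both formulas. The example numbers are assumed; in practice, the inputs (service level targets, demand variability, lead time variability) should come from current data rather than fixed historical averages.

```python
from math import sqrt
from statistics import NormalDist

def safety_stock_basic(service_level: float, demand_std: float, lead_time: float) -> float:
    """Basic formula: Safety Stock = Z * sigma * sqrt(L).

    demand_std is the standard deviation of demand per period;
    lead_time is expressed in those same periods.
    """
    z = NormalDist().inv_cdf(service_level)  # e.g. 0.95 -> ~1.645
    return z * demand_std * sqrt(lead_time)

def safety_stock_variable_lead_time(
    service_level: float,
    avg_demand: float,
    demand_std: float,
    avg_lead_time: float,
    lead_time_std: float,
) -> float:
    """Extended formula: Z * sqrt(L * sigma_d^2 + d^2 * sigma_L^2)."""
    z = NormalDist().inv_cdf(service_level)
    return z * sqrt(avg_lead_time * demand_std**2 + avg_demand**2 * lead_time_std**2)

# Assumed example: 95% service level, 500 units/week average demand,
# 120 units/week demand variability, 2-week average lead time,
# 0.5-week lead time variability.
print(round(safety_stock_basic(0.95, 120, 2)))                       # ~279 units
print(round(safety_stock_variable_lead_time(0.95, 500, 120, 2, 0.5)))  # ~497 units
```

Note how adding lead time variability nearly doubles the buffer in this example, which is exactly why stale lead time assumptions quietly inflate (or understate) safety stock.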
Carrying more safety stock than necessary creates significant costs:
Industry estimates put inventory carrying costs at 20-30% of inventory value annually, including:

Capital tied up in excess safety stock cannot be deployed on:
Most organizations calculate safety stock using formulas that were designed for stable, predictable supply chains. Today's reality is different.
The issue: Most companies review and update safety stock quarterly or annually. But supply chain conditions change weekly.
The impact: A safety stock level set in January based on Q4 data may be completely wrong by March when demand patterns, supplier performance, and lead times have all shifted.
What's needed: Continuous recalibration based on current conditions, not periodic updates based on historical averages.
The issue: Safety stock formulas require inputs like demand variability and lead time variability. Most organizations calculate these from historical data that's months or years old.
The impact: If supplier reliability has improved (or degraded) in the past 6 months, your safety stock is calibrated to the wrong reality. If demand patterns have shifted due to market changes, your buffer is sized for a world that no longer exists.
What's needed: Real-time inputs that reflect current demand variability, current supplier performance, and current lead time patterns.
The issue: Many organizations apply the same service level target (e.g., 95%) across all SKUs, regardless of margin, strategic importance, or customer impact.
The impact: You over-invest in safety stock for low-margin commodities while under-investing for high-margin strategic products. You treat a $5 SKU the same as a $5,000 SKU.
What's needed: Differentiated service levels based on product value, margin, customer importance, and substitutability.
The issue: After setting safety stock, most organizations don't track whether those levels are actually achieving target service levels.
The impact: You might be carrying $10 million in excess buffer for products that never stockout, while under-buffered products experience chronic availability issues. Without feedback, you can't optimize.
What's needed: Closed-loop monitoring that connects safety stock levels to actual service level outcomes and adjusts accordingly.
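A minimal sketch of what that closed loop could look like, assuming per-SKU service targets and realized fill rates are available; the tolerance and the +/-15% adjustment are illustrative assumptions, and a real policy would be tuned per segment.

```python
from dataclasses import dataclass

@dataclass
class SkuOutcome:
    sku: str
    target_service_level: float   # e.g. 0.95
    realized_fill_rate: float     # observed over the review window
    stockouts: int                # stockout events in the window
    safety_stock_units: int

def review_buffer(outcome: SkuOutcome, tolerance: float = 0.02) -> str:
    """Classify a SKU's buffer based on realized vs. target service level."""
    gap = outcome.realized_fill_rate - outcome.target_service_level
    if gap > tolerance and outcome.stockouts == 0:
        new_level = int(outcome.safety_stock_units * 0.85)
        return f"{outcome.sku}: over-buffered, reduce safety stock toward {new_level} units"
    if gap < -tolerance or outcome.stockouts > 0:
        new_level = int(outcome.safety_stock_units * 1.15)
        return f"{outcome.sku}: under-buffered, raise safety stock toward {new_level} units"
    return f"{outcome.sku}: on target, hold at {outcome.safety_stock_units} units"

for o in [
    SkuOutcome("SKU-100", 0.95, 0.999, 0, 800),
    SkuOutcome("SKU-200", 0.95, 0.91, 3, 150),
]:
    print(review_buffer(o))
```

The point is the feedback loop itself: buffers that never stock out get trimmed, buffers that keep failing get raised, and both decisions are grounded in observed outcomes rather than the original formula inputs.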
Dynamic safety stock optimization continuously adjusts buffer levels based on current conditions rather than static historical formulas.

Dynamic optimization considers:
Demand signals:
Supply signals:
Business context:
Not all products deserve the same safety stock investment. Segment by:

Before optimizing, understand your baseline:
Look for:
Set up systems to track:
Move from manual quarterly reviews to automated recommendations:
Organizations that implement dynamic safety stock optimization typically achieve:

The key insight: most companies can simultaneously reduce safety stock AND improve service levels because current buffers are poorly allocated, not because they need more inventory overall.
Incorta gives supply chain teams the real-time visibility and automated workflows needed to move from static safety stock formulas to dynamic optimization.
A live digital twin of your ERP. Incorta's Direct Data Mapping creates a unified, real-time digital twin of your entire ERP and related systems. Every inventory position, every demand signal, every supplier shipment is visible in its original granularity. You see actual current conditions, not historical averages.
Real-time inputs for safety stock calculations. Instead of calculating demand variability from last year's data, Incorta shows you current demand velocity and variability. Instead of assuming supplier lead times match contracts, you see actual performance. Your safety stock inputs reflect reality.
Dynamic recommendations. Incorta adapts safety stock recommendations to current demand variability, supplier reliability, and forecast accuracy. As conditions change, recommendations auto-adjust. You're not locked into quarterly formulas that don't reflect current reality.
Closed-loop performance monitoring. Connect safety stock levels to actual service level outcomes. See which SKUs are over-buffered (high safety stock, zero stockouts) and which are under-buffered (chronic availability issues). Optimize based on evidence, not assumptions.
Embedded workflows. When safety stock should be optimized, Incorta can trigger workflows that update replenishment rules. Recommendations become action without manual intervention or IT dependency.
The result: teams optimize buffer inventory based on current conditions, freeing working capital while improving service levels.
See how other supply chain leaders are already winning with Incorta here.
Stockouts occur when inventory runs out before replenishment arrives, leading to lost sales, expedited freight costs, and damaged customer relationships. Most companies experience 2-4 significant stockouts per quarter, but struggle to prevent them because they can't see the risk coming or understand root causes after the fact. This guide explains what causes stockouts, how to predict them, and how to build systems that prevent them.
A stockout (also called an out-of-stock or OOS) happens when a product is unavailable for sale or use because inventory has been depleted. Stockouts can occur at any point in the supply chain:
Stockouts are different from backorders. A stockout means the product is simply unavailable. A backorder means the customer can still place an order for future fulfillment.
Stockouts create both direct and indirect costs:


Stockouts typically result from a combination of factors:
Demand forecast errors. Forecasts underestimate actual demand, leading to insufficient inventory. This is especially common during promotions, seasonal peaks, or when new products cannibalize existing SKUs.
Unexpected demand spikes. Viral social media moments, competitor stockouts, weather events, or news coverage can create sudden demand surges that outpace inventory.
Omnichannel complexity. Inventory allocated to one channel (retail stores) may not be visible or available to another channel (e-commerce), creating artificial stockouts.
Supplier delays. Late shipments, quality issues, or capacity constraints at suppliers push back replenishment timing.
Lead time variability. Actual lead times exceed planned lead times, causing inventory to deplete before replenishment arrives.
Transportation disruptions. Port congestion, carrier capacity issues, or logistics failures delay inbound shipments.
Safety stock set too low. Buffer inventory is insufficient to absorb demand or supply variability.
Reorder points miscalculated. Trigger points for replenishment don't account for current lead times or demand velocity.
Inventory visibility gaps. Stock exists somewhere in the network but isn't visible or accessible where it's needed.
Data latency. By the time inventory reports are generated, positions have already changed and stockout risk isn't visible until it's too late.
Most organizations struggle to prevent stockouts for three reasons:
Traditional inventory reporting is backward-looking. Weekly or monthly reports show what happened, not what's about to happen. By the time a stockout appears in a report, it's already occurred.
What's needed: Forward-looking visibility that identifies stockout risk 7-14 days in advance, giving teams time to rebalance inventory, expedite shipments, or adjust demand.
When a stockout happens, teams struggle to answer: Was it a demand spike? A supplier delay? A forecast error? A safety stock miscalculation? Without clear root cause visibility, you can't prevent the next occurrence.
What's needed: Connected data that links inventory positions to demand signals, supplier performance, forecast accuracy, and replenishment timing so root causes are immediately clear.
Even when teams spot emerging risk, acting on it requires manual escalation, cross-functional coordination, and system updates. By the time approvals are secured and orders placed, the stockout has already happened.
What's needed: Embedded workflows that automatically trigger alerts, create emergency orders, or initiate inventory transfers when risk thresholds are crossed.
Effective stockout prevention requires moving from reactive reporting to predictive monitoring:
Track current inventory against current demand velocity (not historical averages) to calculate actual days of supply remaining. When days of supply drops below lead time plus safety stock buffer, risk is emerging.
Watch for demand acceleration that could deplete inventory faster than planned. A 20-30% increase in velocity over 1-2 weeks is a warning sign.
Track actual vs. planned lead times. If a supplier is running 5 days late on average, your reorder points need to account for that.
Configure automated alerts when:
Simulate what happens if demand increases 20%, if a key supplier is delayed 2 weeks, or if a facility goes offline. Identify which SKUs are most vulnerable.
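To show how these signals combine, here is a minimal Python sketch that computes days of supply from current velocity, flags risk against lead time plus buffer, and stress-tests a demand spike. The field names, thresholds, and numbers are illustrative assumptions, not a prescribed model.

```python
from dataclasses import dataclass

@dataclass
class SkuPosition:
    sku: str
    on_hand: int                  # current units on hand
    daily_velocity: float         # current demand rate, units/day (not a historical average)
    planned_lead_time_days: float
    actual_lead_time_days: float  # observed supplier performance
    safety_buffer_days: float

def days_of_supply(p: SkuPosition) -> float:
    return p.on_hand / p.daily_velocity if p.daily_velocity > 0 else float("inf")

def stockout_risk(p: SkuPosition, demand_spike: float = 0.0) -> bool:
    """Flag risk when projected days of supply falls below the effective
    lead time plus safety buffer. demand_spike stress-tests a surge,
    e.g. 0.2 for a 20% increase."""
    effective_lead_time = max(p.planned_lead_time_days, p.actual_lead_time_days)
    projected_velocity = p.daily_velocity * (1 + demand_spike)
    projected_dos = p.on_hand / projected_velocity if projected_velocity > 0 else float("inf")
    return projected_dos < effective_lead_time + p.safety_buffer_days

sku = SkuPosition("SKU-321", on_hand=1200, daily_velocity=60,
                  planned_lead_time_days=10, actual_lead_time_days=15,
                  safety_buffer_days=3)

print(round(days_of_supply(sku), 1))         # 20.0 days at current velocity
print(stockout_risk(sku))                    # False: 20 days clears the 15 + 3 threshold
print(stockout_risk(sku, demand_spike=0.2))  # True: a 20% surge pulls it below the threshold
```

Note that using the observed 15-day lead time instead of the planned 10 days is what makes the spike scenario trip the alert early enough to act.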
Once you can predict stockout risk, prevention requires fast action:
Inventory rebalancing. Transfer stock from facilities with excess to facilities at risk. This requires real-time visibility into inventory positions across all locations.
Expedited replenishment. Pay for faster shipping or air freight to accelerate inbound orders. Most effective when risk is identified early enough to make a difference.
Demand shaping. Reduce promotional activity, adjust pricing, or temporarily limit orders to slow demand while supply catches up.
Substitute products. Offer customers alternative SKUs that are in stock.
Safety stock optimization. Adjust buffer inventory levels based on actual demand variability and lead time performance, not outdated formulas.
Supplier diversification. Qualify backup suppliers for high-risk SKUs to reduce dependency on single sources.
Lead time reduction. Work with suppliers and logistics providers to compress lead times and reduce variability.
Forecast accuracy improvement. Invest in demand sensing capabilities that incorporate real-time signals (POS data, market trends, weather) into forecasts.
Network optimization. Position inventory closer to demand to reduce lead times and improve responsiveness.
Supplier collaboration. Share demand forecasts with key suppliers and implement vendor-managed inventory where appropriate.
Organizations that implement proactive stockout prevention typically achieve:

Incorta gives supply chain teams the real-time visibility and automated workflows needed to see stockout risk before it becomes a stockout.
A live digital twin of your ERP. Incorta's Direct Data Mapping creates a unified, real-time digital twin of your entire ERP and related systems. Every inventory position, every sales transaction, every inbound shipment is visible in its original granularity. You see stock levels and demand velocity across every facility without waiting for batch reports.
Forward-looking risk visibility. Unlike traditional BI that reports what happened last month, Incorta enables forward-looking intelligence. You see stockout risk 10 days ahead and rebalance the network before customers are impacted.
Root cause clarity. When stockout risk emerges, you immediately understand why. Incorta connects inventory data to demand signals, supplier performance, and forecast accuracy so you can see whether the issue is demand-side, supply-side, or a planning gap.
Automated alerts and workflows. When stockout risk crosses your thresholds, Incorta triggers alerts and can initiate action workflows automatically. Stockout risk emerges, and an emergency PO gets created. No manual escalation, no delays.
Unified view across systems. Inventory data lives in your ERP. Demand lives in your planning tool. Supplier performance lives elsewhere. Incorta unifies these sources so you can see the complete picture and act on it.
The result: teams see problems coming, understand root causes, and take action before stockouts damage the business.
See how other supply chain leaders are already winning with Incorta here.
SLOB (slow-moving and obsolete) inventory is stock that moves too slowly relative to demand to justify its carrying costs. For most companies, SLOB represents 20-25% of total inventory value. This guide explains what causes SLOB, how much it costs, and how to identify and reduce it before it damages your bottom line.
SLOB stands for slow-moving and obsolete inventory. It includes:
Common characteristics of SLOB inventory:
For a company with $500 million in total inventory, SLOB typically represents $100-125 million in tied-up capital.
SLOB inventory creates four categories of cost:
Industry estimates put inventory carrying costs at 20-30% of inventory value annually. This includes:
For $125 million in SLOB, carrying costs alone run $25-40 million per year.
When inventory is declared obsolete, companies must write down its value. Typical enterprises write off $15-30 million in inventory annually.
Capital tied up in SLOB cannot be deployed elsewhere. At an 8% cost of capital, $125 million in SLOB represents $10 million in annual opportunity cost.
The longer inventory sits, the less you recover when liquidating:
Inventory Age: Typical Recovery Value
0-90 days: 50-60% of cost
90-180 days: 30-40% of cost
180-270 days: 15-25% of cost
270+ days: Often better as a tax write-off
SLOB forms when companies cannot see or act on inventory problems quickly enough. Root causes include:
Demand changes
Product lifecycle issues
Planning failures
Visibility gaps
Most companies identify SLOB during quarterly reviews, when inventory is already 120+ days old and recovery value has dropped 60-70%. Early identification requires:
Monitor every SKU's velocity relative to historical patterns and current forecasts continuously, not quarterly.
Flag inventory trending toward SLOB based on:
Set triggers for warning signs:
When inventory is flagged, understand why: customer churn, marketing changes, competitive pressure, or forecast error. Context determines the right response.
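The sketch below shows one way these early-warning checks could be expressed in Python. The 60-day aging threshold, 50% velocity-drop trigger, and SKU figures are illustrative assumptions; real thresholds would be tuned per segment and reviewed continuously.

```python
from dataclasses import dataclass

@dataclass
class SkuInventory:
    sku: str
    on_hand: int
    age_days: int                 # days since the oldest units were received
    trailing_velocity: float      # units/week over the last 4 weeks
    historical_velocity: float    # units/week over the prior 52 weeks
    discontinued: bool = False

def slob_risk(inv: SkuInventory,
              age_threshold_days: int = 60,
              velocity_drop: float = 0.5) -> list[str]:
    """Return the reasons a SKU is trending toward SLOB."""
    reasons = []
    if inv.historical_velocity > 0:
        ratio = inv.trailing_velocity / inv.historical_velocity
        if ratio < velocity_drop:
            reasons.append(f"velocity down {round((1 - ratio) * 100)}% vs. history")
    if inv.age_days >= age_threshold_days and inv.trailing_velocity == 0:
        reasons.append(f"no movement in {inv.age_days} days")
    if inv.discontinued and inv.on_hand > 0:
        reasons.append("discontinued product with stock on hand")
    return reasons

for inv in [
    SkuInventory("SKU-A", on_hand=4000, age_days=75, trailing_velocity=20, historical_velocity=90),
    SkuInventory("SKU-B", on_hand=1200, age_days=90, trailing_velocity=0, historical_velocity=15, discontinued=True),
]:
    print(inv.sku, "->", slob_risk(inv) or "healthy")
```

Flagging at 60-90 days rather than during a quarterly review is what preserves the 50-60% recovery window from the table above.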
Once identified, SLOB can be addressed through:
Redistribution: Move excess inventory from low-demand facilities to high-demand locations.
Targeted Discounting: Clear inventory before it hits 180 days while recovery value is still reasonable.
Bundling: Package slow-moving SKUs with high-demand products.
Supplier Negotiation: Renegotiate MOQs to prevent future SLOB formation.
Liquidation: Sell to secondary markets, discount retailers, or liquidators.
Write-Off: When recovery value falls below liquidation costs, write off and dispose.
Organizations that implement proactive SLOB management typically achieve:
Working capital freed: $20-50 million in year one
SLOB formation reduction: 40-60%
Inventory turns: 10-20% improvement
Write-downs: 20-50% reduction
Additional benefits include fewer emergency orders, reduced expedited freight costs, and improved customer service levels.
Effective SLOB management requires:
Live data feeds: ERP, WMS, finance, and demand planning systems feeding a unified view with no aggregation or latency.
Automated calculations: System-generated aging buckets, velocity metrics, forecast accuracy, and risk scores.
Embedded workflows: Automatic creation of liquidation approvals, redistribution work orders, and leadership escalations.
Predictive analytics: Forward-looking intelligence that identifies SKUs trending toward SLOB 6-8 weeks before they get there.
Traditional BI tools, specialized inventory software without clean data inputs, and spreadsheet tracking all fall short because they lack real-time detail, require manual intervention, or can't scale across 50,000+ SKUs.
Incorta gives supply chain teams the real-time visibility and automated workflows needed to catch SLOB before it becomes a write-off.
A live digital twin of your ERP. Incorta's Direct Data Mapping™ creates a unified, real-time digital twin of your entire ERP and related systems. Every inventory position, every sales transaction, every forecast is visible in its original granularity, not aggregated away. You see stock levels, demand patterns, and inventory health across every facility without months of pipeline building.
Live inventory data, not stale reports. Incorta's Direct Data Mapping™ technology pulls data directly from your ERP, WMS, and planning systems without aggregation or delays. You see SKU-level inventory positions, aging, and velocity as they happen, not days or weeks later.
Automated SLOB detection. Incorta continuously monitors inventory against demand signals and flags at-risk stock at 60-90 days instead of 120+. Configurable alerts notify the right people when velocity drops, aging thresholds are crossed, or discontinued products still have inventory on hand.
Root cause visibility. When inventory gets flagged, you can immediately drill into why. Incorta connects inventory data to sales, customer, and forecast data so you understand whether the problem is demand-side, supply-side, or a planning gap.
Embedded workflows. Identification without action doesn't help. Incorta triggers redistribution requests, liquidation approvals, and escalations automatically based on your business rules.
Unified view across facilities. Instead of seeing "50,000 units total," you see exactly where inventory sits, which locations have excess, and which have shortages, so you can rebalance before ordering new stock.
The result: teams catch problems earlier, recover more value, and stop SLOB from forming in the first place.
See how other supply chain leaders are already winning with Incorta here.
Getting the right information at the right time shouldn't feel impossible. Yet for years, that's exactly what it felt like for manufacturing and supply chain teams trying to make sense of their data.
We recently gathered three leaders from different corners of the manufacturing world to talk about what's actually working. Jim Shane from Composites One, Joe Persio from Zeus, and Hamal Solanki from Sooner Pipe share the messy reality of spreadsheets, the frustration of waiting weeks for answers, and the surprising speed at which things can change when you fix your data foundation.
All three companies started from the same place: data everywhere, answers nowhere.
Jim described what many in manufacturing face daily. "We have lots of disparate data sources. We needed a way to thread all of that information together and present it in a way that actually helps people make decisions more quickly."
At Zeus, Joe saw the same pattern. "The company was basically Excel spreadsheets, Apex forms, Oracle reports that were 15 years old. They really didn't allow you to take different parts of your ecosystem and put them together to get a holistic picture."
For Sooner Pipe, Hamal pointed to a critical trust issue. "You look at the summary report, it gives one number. The detailed report gives a different number. Even though the difference wasn't major, anytime an accountant is using it, even a difference of a hundred dollars matters unless you can explain it."
When your team doesn't trust the numbers, they go back to Excel. Every time.
Every manufacturer on the panel brought up inventory. Not as an aside, but as a primary pain point that demanded immediate attention.
Jim's team at Composites One tackled aged and excess inventory that had piled up during supply chain disruptions. "We were really bulking up on inventory. We had a lot of that sitting around, kind of stagnant. We did not have an elegant way of addressing that using our newer ERP and legacy systems."
The solution had to go beyond seeing what inventory they had; it had to give sales teams the insights they needed to act in real time. "We needed a platform to pull it all together, make it easy to digest, and provide the visibility that you just wouldn't have had otherwise. Quickly, concisely, and in a more real-time format."
At Zeus, Joe took inventory optimization even further by combining detailed data with AI. "We were able to do a recipe in our AI platform and push that into our system. Now we have a dashboard for our ops team that shows them every hour: you've got this on the shelf, this just came in or has been sitting, these exactly match, let's ship it out."
That dashboard, which Joe built in 30 minutes with Incorta, sped up inventory optimization by 300%.
The time reclaimed from manual work started to change what questions teams could actually ask.
Joe explained the shift: "Something that used to take three weeks now takes about three clicks. If you think about that in your ecosystem, the ability to take all that away and put it in one place has completely changed the game on how people are going to engage with your data."
The impact goes beyond speed. When answers come quickly, people start asking better questions. When data is trustworthy, teams actually use it to make decisions instead of second-guessing every number.
Jim shared a moment that captures this shift. A user questioned an on-time delivery metric that didn't match what they saw in their ERP. "We can show you where we're getting it from. Well, it turns out the business logic was different in SAP. It's not a matter of one thing's right, one thing's wrong. It's just looking at it in different ways. Now we can prove where everything's coming from, and folks in the user community love the fact that you can prove it to them."
When Hamal's team at Sooner Pipe needed to replace their reporting infrastructure, they faced a choice: spend months and millions upgrading their existing setup, or find a different path forward.
They chose speed. "We were able to convert all our legacy reports that were developed over 12 years within four months."
The data extraction process that used to take two and a half hours now completes in one hour. Reports that once required IT intervention now let different teams look at data in different ways with just a few clicks.
"Before, anytime they needed something different than what they already had, they had to come to IT. The turnaround time was so long that by the time the information came back, it was usually too late," Hamal explained.
Now their sales team, purchasing team, and operations can all view the same data through their own lens without waiting in line for developer time.
The conversation naturally moved to AI, but not in the way you might expect. These aren't theoretical use cases or future possibilities. They're practical applications happening right now.
Hamal's team is building an AI model to predict raw material prices. "In our industry, the raw material we purchase goes through a lot of ups and downs based on drilling activity, oil prices, GDP, inflation. We're going to feed a lot of data into our system and create an AI model that predicts the price of our purchases."
The goal is practical: should the purchasing team do a speculative buy now, or wait because prices are expected to drop?
Jim is working on similar predictive capabilities. "We're looking at bringing in leading economic indicators and drawing correlations between what's going on out there and how that's impacting us within our four walls. Then potentially leveraging generative AI to ask: what's that going to look like in the future?"
But here's what matters: none of these AI initiatives would be possible without getting the data foundation right first.
Joe put it plainly: "My data all lives in one place. So for me, it's very easy to get to my data in the way that I need to get it. We're able to model the data quicker. This is months of effort that I don't have to do."
Joe shared something that gets to the heart of why this matters: "We've made Incorta a household name. You talk to anybody here; that is our platform of choice. We have about 1,200 use cases under our belt now."
When people trust their data, they use it. When they can get answers quickly, they ask more questions.
Jim said it well: "We've got a lot of flexibility, and we can prove where everything's coming from. Folks in the user community love the fact that not only can they see it, but you can prove it to them. I truly believe this is the best source for all this."
None of these companies started with a perfect plan. They started with specific problems: inventory sitting too long, buildings running out of space, reports that didn't match, questions that took weeks to answer. They fixed their data foundation first, and everything else followed from there.
As Joe reminded the group: "Bring the problem, bring what you want to do. I brought it, and everyone's happier because we're partnering and showing value in a different way."
The manufacturing leaders who are winning today aren't the ones with the most sophisticated technology. They're the ones who can get the right information to the right people fast enough to actually matter. Sometimes that's the difference between three weeks and three clicks.
Every boardroom wants AI magic. Every investor demands AI results. Every executive meeting screams for AI adoption and transformation. But here's the uncomfortable truth: You can't conjure AI success from thin air - or from an unstable data foundation.
In a recent conversation with experts from Hudson Logic and TechStone Technology Partners, we pulled back the curtain on enterprise AI readiness and what's really holding organizations back. The diagnosis? Most companies are trying to build AI skyscrapers on a data foundation as stable as quicksand.
When asked what percentage of their executive teams truly believe their data is AI-ready, responses were scattered - but the consensus was: "We're not ready. Not even close."
Leadership sets aggressive AI targets. Investors are hungry for transformation stories. Stakeholders demand use cases and outcomes. Meanwhile, the foundation crumbles beneath everyone's feet. "The pressures from within your organization are really being felt," Kurt Whit, managing partner at Hudson Logic, observed with the weight of someone who's seen this movie before. "Your leadership is pushing. Your investors are expecting. But let's be honest - you really can't fake that AI success."
And yet, organizations keep trying. The result? A collective exercise in wishful thinking, where GenAI pilots run on untrusted data, where Copilot asks questions the organization can't reliably answer, and where the C-suite wonders why their AI investments aren't paying dividends.
Several critical challenges emerged:
Data Silos and Inconsistency
Data lives fragmented across systems - often the result of acquisitions, departmental deployments, and legacy technology decisions - creating limited visibility and making it nearly impossible to generate accurate insights. When asked where their organization's critical data resides, participants pointed to departmental databases, CRMs, and various line-of-business systems scattered throughout the enterprise.
Lack of Integration Across Systems
The biggest challenge cited by participants was the inability to integrate data across disparate systems. This fragmentation prevents organizations from establishing a single source of truth and creates inconsistencies that undermine trust in data-driven insights.
Poor Data Quality
Without proper governance and quality controls, organizations struggle to ensure their data is accurate, complete, and contextual - all essential requirements for AI applications. As one speaker noted, "With AI, if garbage comes in, it's going to fail."
Technology Complexity and Cost
The perceived cost of building a proper data foundation, combined with the complexity of modern technology stacks, creates significant barriers. When asked about the biggest slowdowns to AI adoption, participants overwhelmingly cited the cost of becoming data-ready as their primary concern.
Building an AI-ready data foundation requires addressing three interconnected dimensions:
1. Technology Infrastructure
Organizations need data platforms capable of supporting real-time, governed, and explainable AI pipelines. This includes having proper orchestration tools, infrastructure to support performance at scale, and integration capabilities that can handle multiple source systems seamlessly.
2. Data Governance and Quality
Clear data definitions, ownership structures, and quality controls are non-negotiable. IT should support the platform, but business entities must take ownership of their data and understand how to steward it consistently. This includes implementing controls for transparency, traceability, and compliance across the entire data lifecycle.
3. Unified Business Models
Standardized data structures that transcend applications and departments ensure that HR, finance, and operations all speak the same language. These reusable business models, driven by business needs rather than technical constraints, help avoid silos and create trust across the organization.
The path to AI readiness follows a layered approach:
Raw Data Layer
Bringing data from source systems into a unified platform with complete traceability—knowing where every piece of data originates and how it moves through the pipeline.
Transformation Layer
Creating clear visibility into how data is cleansed, mapped, and transformed as it progresses into curated, certified datasets ready for consumption.
Semantic Layer
Developing business-ready models: fact tables, dimensional models, and denormalized datasets that provide the foundation for both analytics and AI applications.
Agentic Orchestration Layer
Enabling AI agents to understand, through proper tagging and metadata, where data comes from, what processes have been applied, and how it can be trusted for decision-making.
Zach Breimier, senior solution engineer at Incorta, demonstrated how modern data platforms can address these challenges through three core capabilities:
Multi-Source Integration in Days
Rather than the traditional six-to-eight-week timeline for change requests, Incorta's approach enables organizations to harmonize multiple ERPs, CRMs, and other source systems rapidly—often delivering full data foundations in two to eight weeks.
Near Real-Time Performance at Scale
Traditional approaches often rely on overnight batch refreshes or periodic updates that leave business users working with stale data. For roles like supply chain analysts who need current information to make decisions, this delay is unacceptable. Incorta's architecture supports near real-time updates while maintaining query performance even at transactional detail levels.
Full Data Fidelity
By preserving data all the way down to the most granular transactional details—such as subledger entries from finance and accounting or order details from sales and supply chain—organizations can support both detailed operational analysis and higher-level strategic insights without sacrificing either.
One significant advantage of purpose-built data platforms is the availability of pre-built accelerators. These frameworks come with physical models already mapped to popular ERP systems (Oracle Fusion, SAP S/4HANA, NetSuite, Workday), including identified tables, pre-configured joins and relationships, and out-of-the-box semantic layers with analytic-ready models.
These accelerators don't eliminate the need for customization—every organization has unique requirements—but they dramatically reduce the engineering effort required to get started. Custom tables can be added through simple wizards, joins can be configured through drag-and-drop interfaces, and new models can be created without writing SQL or Python.
When asked to rate their organization's current data maturity, most webinar participants placed themselves between six months and two years away from where they need to be. This range reflects the complexity of the challenge but also highlights an important reality: building an AI-ready data foundation is a journey with distinct stages.
Organizations should approach this journey with a clear framework:
The consensus from these experts? AI readiness isn't about rushing to implement the latest large language model or generative AI tool. It's about building a solid foundation of integrated, governed, high-quality data that can support AI applications reliably and at scale.
As organizations face mounting pressure to demonstrate AI value, those that invest in proper data foundations will be positioned to move quickly when opportunities arise. Those that skip this step in pursuit of faster results will likely find themselves rebuilding from scratch after early AI initiatives fail to deliver trusted outcomes.
The good news? With modern data platforms and proven frameworks, organizations can establish these foundations in weeks rather than years - fast enough to meet business demands while thorough enough to support long-term AI ambitions.
For organizations looking to assess their AI readiness or explore data foundation strategies, experts recommend starting with a clear-eyed evaluation of current capabilities, identifying the highest-impact use cases, and selecting technology partners who can deliver both speed and sustainability in data platform implementation. Explore more from Incorta.
In the last year, we saw companies increase their data analytics and customer insights investment by 54%. Yet despite this spending surge, most organizations are making a critical mistake: building AI on top of broken infrastructure.
In a recent conversation on the Martech Podcast, our CMO Noha Rizk sat down with host Benjamin Shapiro to discuss why legacy dashboards and delayed data are holding businesses back - and what it takes to build systems that deliver insights and AI-powered actions in real time.
"The modern data stack is fundamentally broken," Noha explained. "It's fragmented, slow, and costly. Legacy processes require staging and conforming data to outdated models, making reporting backward-looking and often obsolete."
The parallel to the early Internet days is striking. When browsers first appeared, companies built websites like digital brochures: pretty, but static. They had powerful new rails but weren't maximizing what those rails could do. Today's AI moment requires the same fundamental rethinking.
"You can't just slap AI on top of existing infrastructure," Noha said. "You need to ask: will this infrastructure actually help me maximize what I can do with this new technology?"
The biggest obstacle to adopting real-time data infrastructure? The math doesn't compute for many companies. Real-time systems are costly and time-consuming to build, and organizations worry about losing data fidelity along the way. The total cost of ownership becomes prohibitive, and ROI calculations fall apart.
Noha's advice, drawn from her experience at Meta, centers on incrementality: "You have to really think through where in your organization there will be incremental value. Pick the pockets within your company where this will actually give you measurable returns, so that pain of migration becomes worthwhile."
Not every function needs live data. But identifying where it creates significant impact - whether in reducing waste, optimizing logistics, or improving customer engagement - is the homework that separates successful transformations from expensive failures.
Certain verticals see immediate returns from real-time analytics:
Retail & Food Service: One Incorta customer with 3,000 physical branches uses live data to optimize inventory waste. By understanding which items are moving at what times of day, they can push geo-targeted promotions to nearby consumers, maximizing revenue through strategic discounting rather than throwing products away. They can also move inventory between branches based on real-time demand signals.
Manufacturing: Companies track factory floor operations across multiple locations in real time, creating massive cost savings. For manufacturers where every cent counts, this translates to hundreds of thousands of dollars in operational improvements.
Healthcare & Retail: Imagine receiving a notification from your pharmacy that says, "We've seen a 75% increase in Tamiflu sales this week - a flu epidemic in your ZIP code is coming. Get vaccinated." That's the difference between generic marketing and data-driven personalization that actually resonates.
Perhaps the most surprising? Education. As universities become more tech-enabled, they're using live data to track student outcomes and behavior in near real-time, adjusting instruction, curriculum, and grading systems dynamically.
One of the most compelling insights from the conversation centered on creativity versus control. While some argue that unlimited data access creates analysis paralysis, Noha makes a counterpoint: breakthrough moments require space to explore.
"The creative process requires enough space to go down rabbit holes and noodle and get lost a little bit," she explained. "If you close that door and become too rigid, only measuring certain KPIs, you might miss important insights."
She shared a powerful example from Meta's Marketplace product. When COVID hit and person-to-person exchanges stalled, only their deep data access and curiosity revealed that users were trying to find alternate transaction methods. This insight led to subsidized shipping, driving hockey-stick growth.
"You can't design for those moments," Noha said. "You can't put rigid rails around innovation."
The key is balance: maintain your North Star metrics and KPIs while enabling the freedom to dig deeper when trends change and ask why.
Noha's most important takeaway from her time at Meta? Data helps you move faster.
"Bureaucratic processes that slow companies down typically exist to mitigate bad decisions," she explained. "Meta flattened decision-making as much as possible, putting decisions as close to the work and the signal. To enable that, Meta had to be one of the most data-driven companies in the world."
The mantra wasn't just "move fast and break things" - it was move fast with confidence, powered by data fidelity and real-time insights. At Meta's scale, every tweak and fine-tune impacts millions of people. That requires data you can trust, delivered instantly.
Noha shared a controversial take: awareness as a marketing metric needs to die.
"There's no inherent value in just being known," she said. "Someone hearing your name or recognizing you - that's a vanity metric. You need stronger value signals: breakthrough, message comprehension, intent signals."
Awareness matters for top-of-funnel, but its usefulness dies almost as quickly as you measure it. "It's the fruit fly of metrics," as the conversation concluded.
Instead, the focus should be on insights that drive action. "Analytics is nothing without insights," Noha emphasized. "Raw data doesn't mean much. How you make connections between data: that's where value lies. Question the why behind the data constantly."
Looking ahead, Noha sees AI enabling marketing analytics to become laser-focused on hitting the right message to the right person at the right time through hyper-targeted, cross-medium personalization.
"We often contend with having the right message for our ICP, but hitting them at the wrong moments," she explained. "Maybe they're seeing it in the morning when they should see it in the evening, or on LinkedIn when they should see it on TikTok. What AI analytics will enable is getting really, really good at that precise timing."
Think Minority Report's hyper-personalized holographic ads - but actually useful and contextually relevant.
Companies scrambling to add AI capabilities without addressing their broken data infrastructure are building on quicksand. The winners will be those who do the homework, identify where live data creates incremental value, and build systems that can unlock creativity, rather than constrain it.
As Noha put it: "The question isn't whether you can afford to use live data. It's whether you can afford not to."
Listen to the full conversation on the Martech Podcast to hear more about Noha's experience at Meta, her approach to marketing analytics, and why she believes websites as we know them might be dying...
At NoLimits Riyadh, we brought together four visionary leaders to discuss one of the most pressing questions facing organizations today: How do you turn AI investments into real business impact?
The panel, featuring Abdullah Asiri (Founder & CEO, Lucidia), Stefano Bertamini (Board Member, AI966 and former CEO of Al Rajhi Bank), Greg Baxter (Chief Digital and Transformation Officer, HP Inc.), and Hosem Alduriay (CTO, Saudi Az), delivered insights that every business leader can use.
Asiri's company, Lucidia, is helping one of the region's largest e-commerce platforms save over $7 million by replacing 500 seasonal customer support agents with AI-powered solutions. But he was quick to add a crucial caveat: "AI isn't a silver bullet. You can't just throw it at any problem and expect it to magically solve everything."
Greg Baxter echoed this sentiment from HP's perspective, emphasizing that the key metric shouldn't be about AI at all; it should be about the problem you're solving. When HP deployed Microsoft Copilot across 20,000 employees, they saw only a 3% productivity improvement because they hadn't clearly defined the problem they were trying to solve. In contrast, their focused initiatives in software development, content creation, and customer support delivered 20-30% productivity gains.
"The important thing on ROI is focus on the problem and metrics of the problem, not on the technology," Baxter emphasized. This seemingly simple insight explains why MIT reports that 95% of AI projects fail.
Stefano Bertamini was direct about a challenge many organizations face: "A lot of organizations aren't ready because their data isn't ready. It's sitting in different systems. The definitions don't align. The systems don't talk to each other."
AI hasn't solved the data problem: if anything, it's made it more urgent. Organizations that haven't invested in proper data foundations and governance frameworks are finding their AI ambitions blocked at every turn.
Baxter shared HP's approach to organizational change: "You need to have a compelling and ambitious vision for the company in a digital and AI world. Once you've excited them and won over their heart, then it's about training and developing them - winning their heads. Only then do you get their hands involved."
For companies operating across diverse markets, cultural sensitivity isn't optional. Asiri explained how Lucidia builds proprietary models trained on local data to ensure cultural alignment: "Some things are not acceptable to say to customers in this region. Some emojis are not acceptable to share. We need AI that is culturally aware and aligns with local values."
On the governance front, Bertamini outlined the non-negotiables: secure firewalls to protect customer data, clear ethics frameworks to prevent bias, and increasing regulatory transparency requirements. "If someone wrote a newspaper article about how you went about this, would you be happy with that?" he asked - a powerful litmus test for any AI implementation.
The panel's vision for 2030 painted a picture of work fundamentally transformed:
"I think we don't need to have these frequent meetings because our AI agents can meet with our colleague AI agents to have this information sharing and exchange, which is going to free ourselves to do much better things," Asiri predicted.
Baxter offered a sobering counterpoint: "The winners and losers will be determined a lot faster now. The cycle time for companies to learn and adapt will accelerate, and those on the wrong side of the equation will be exposed a lot more rapidly."
The message from these leaders was clear: AI isn't so much about technology as it's about transformation. Success requires clear problem definition, solid data foundations, cultural change management, and unwavering focus on measurable business outcomes.
As organizations race to implement AI, the ones that will win aren't necessarily those with the most advanced technology. They'll be the ones who ask the right questions, solve real problems, and bring their entire organization along for the journey.
Want to learn more about building the data foundation your AI initiatives need? Connect with Incorta today for a 1:1 demo.
According to Gartner research, ERP implementation failure rates range from 55-75%, while McKinsey estimates that more than 70% of all digital transformations fail. For data leaders planning to unlock ERP data for AI initiatives, this is a warning sign about the path ahead.
The future of enterprise AI is tantalizing. Imagine business users asking complex questions in natural language, AI agents automatically resolving invoice discrepancies, and intelligent workflows that adapt in real time. Platforms like Google's Gemini Enterprise and BigQuery are making this vision achievable.
But there's a catch.
Your AI ambitions are only as good as the data foundation beneath them. And if your most critical business data is trapped inside Oracle Fusion, SAP, or across complex disparate systems, you're facing a decision that could make or break your AI strategy: Should you build custom ERP data pipelines from scratch, or invest in a purpose-built platform?
For many data leaders, the instinct to build feels right. You have talented engineers, modern tools, and the confidence that comes from past successes. How hard could it be to extract some data from your ERP?
The answer: much harder than you think. Research shows that ERP implementation and integration projects have failure rates between 55% and 75%, with only 23% of all ERP implementations considered successful. Even more sobering: the average cost overrun for ERP projects is 189%.
What looks like a straightforward data engineering project quickly becomes something far more daunting:
Oracle and SAP databases don't come with user-friendly maps. You're looking at 10,000+ cryptically named, highly normalized tables. Before writing a single line of code, your team needs to become forensic ERP experts, spending months deciphering relationships just to answer basic questions like "What does a complete customer order look like?"
Next comes building custom extraction scripts - APIs with limited functionality, risky direct database connections, and brittle code that breaks whenever your ERP vendor releases an update. What should be a one-time build becomes a permanent firefighting operation.
Raw ERP data in BigQuery is virtually unusable. Those cryptic tables need to be transformed into business-friendly models - star schemas that analysts and AI agents can actually work with. This means writing thousands of lines of SQL, building complex dbt models, and essentially recreating business logic that already exists in your source system.
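For a sense of what that modeling work involves, here is a heavily simplified sketch in Python with pandas. The table and column names are simplified stand-ins rather than real Oracle or SAP schemas, and a production pipeline would do this in SQL or dbt across hundreds of tables; the sketch only shows the basic join-and-denormalize pattern behind a single sales fact.

```python
import pandas as pd

# Toy stand-ins for normalized ERP tables. The names are invented for
# illustration; real ERP schemas spread this across far more tables
# with far less obvious naming.
oe_order_headers = pd.DataFrame({
    "header_id": [101, 102],
    "cust_account_id": [7, 9],
    "ordered_date": ["2025-03-01", "2025-03-02"],
})
oe_order_lines = pd.DataFrame({
    "line_id": [1, 2, 3],
    "header_id": [101, 101, 102],
    "inventory_item_id": [500, 501, 500],
    "ordered_quantity": [10, 4, 2],
    "unit_selling_price": [25.0, 120.0, 25.0],
})
hz_cust_accounts = pd.DataFrame({
    "cust_account_id": [7, 9],
    "account_name": ["Acme Corp", "Globex"],
})

# One small slice of the modeling work: join lines to headers and customers
# into a denormalized sales fact that analysts and AI agents can query in
# business terms, with a derived line_amount measure.
sales_fact = (
    oe_order_lines
    .merge(oe_order_headers, on="header_id")
    .merge(hz_cust_accounts, on="cust_account_id")
    .assign(line_amount=lambda df: df.ordered_quantity * df.unit_selling_price)
    [["ordered_date", "account_name", "inventory_item_id",
      "ordered_quantity", "line_amount"]]
)
print(sales_fact)
```

Multiply this by every fact and dimension your business needs, keep it in sync with ERP patches, and document the business logic behind each join, and the scale of the "build from scratch" commitment becomes clear.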
Your custom solution becomes tribal knowledge, understood by a handful of developers. If and when they leave, you're left with an undocumented black box that no one dares touch.
Here's what the "build from scratch" approach typically delivers:
By the time your custom solution is ready, it's already outdated, difficult to maintain, and nowhere near "agent-ready." According to Precisely's 2025 Data Integrity report, only 12% of organizations report that their data is of sufficient quality and accessibility for effective AI implementation, while 67% still don't completely trust the data they rely on for decisions.
Modern AI agents aren't like traditional BI dashboards. They need fundamentally different data characteristics. According to recent surveys, 70% of business leaders say agentic AI is both strategically vital and market-ready, and 79% of organizations are already adopting AI agents in some capacity.
But here's the catch: 62% of enterprises exploring AI agents lack a clear starting point, largely due to data-related challenges.
AI agents need to see every transaction, every line item, every detail. Summaries and aggregates don't cut it when you're trying to automate invoice dispute resolution or predict supply chain disruptions. According to McKinsey research, AI agents in call centers can automate 60-80% of incoming requests with customer satisfaction scores comparable to or better than current systems.
Day-old data is ancient history. Agents need near real-time information to make intelligent decisions and take automated actions.
Agents must understand relationships—that an invoice connects to a purchase order, which connects to a vendor, which has payment terms, which affects cash flow. This contextual web is what enables truly intelligent automation.
The harsh reality? Most "build from scratch" solutions fail on all three dimensions, delivering stale, summarized data that's unsuitable for powering effective AI agents.
The good news: you don't need to reinvent the wheel. Purpose-built platforms like Incorta exist precisely because the ERP data problem is so complex and so common.
Pre-Built Domain Expertise: Instead of reverse-engineering thousands of tables, you get data blueprints that encapsulate decades of ERP knowledge. Think of it as having a Rosetta Stone for your ERP—instant translation from cryptic tables to business meaning.
Automated Transformation: Rather than building transformation jobs in BigQuery, advanced platforms can model data into clean, query-friendly schemas automatically before it even lands in your data warehouse.
Enterprise Support and Maintenance: Instead of being on the hook for every ERP update, you get a maintained, supported platform that evolves with your source systems. Your team can focus on business value, not pipeline babysitting.
The strategic difference is measured in years versus weeks:
Build from Scratch: Wait 1-2 years before asking your first meaningful question or training your first reliable AI agent. By the time you're ready, market opportunities have passed. Moving from proof-of-concept to production remains challenging—only 30% of AI experiments typically make it to deployment.
Purpose-Built Platform: Deploy a complete, production-ready ERP data foundation in BigQuery within weeks. Start building and testing AI agents this quarter, not next year. Organizations that solve data integration challenges achieve 4x faster AI deployment and 3x higher value capture rates.
This acceleration is critical, especially while you still have organizational momentum and budget. According to Google Cloud's 2025 AI study, 52% of executives say their organizations have deployed AI agents, with 88% of executives planning to increase AI-related budgets due to agentic AI. The competitive pressure is real. The question is whether your data foundation will be ready in time.
The "build versus buy" framing suggests these are equally valid options. They're not.
Building custom ERP data pipelines from scratch is like constructing your own database engine when PostgreSQL exists. It's technically possible, but it's a specialized engineering challenge that diverts resources from your actual competitive differentiators. The data backs this up: 70% of projects exceed original timelines by an average of 45% due to complexity underestimation, and 60% of implementation failures stem from choosing the wrong approach.
Your AI strategy deserves better than to be held hostage by ERP data pipeline construction. The organizations winning with AI aren't the ones who built the most custom pipelines—they're the ones who solved the data foundation problem quickly and moved on to creating business value. Companies implementing proper data operations report 60% faster analytics delivery and 45% fewer data quality incidents.
Your AI ambitions - whether they center on Gemini Enterprise, autonomous workflows, or intelligent automation - require an agent-ready data foundation. The fastest path to that foundation isn't through custom-built ERP pipelines, but through leveraging specialized platforms designed to solve this exact problem.
The statistics paint a clear picture: with failure rates between 55-75% for custom implementations, 189% average cost overruns, and one in three companies planning to allocate over $25 million to AI in 2025, the risk of the "build" approach has never been higher.
Meanwhile, organizations that get their data foundations right are seeing remarkable results: 4x faster AI deployment, 3x higher value capture rates, and 60% faster analytics delivery. The real question isn't "build or buy." It's "How quickly can we get our AI strategy off the ground with production-ready data?"
Choose the path that gets you there fastest, and invest your innovation energy where it actually matters - in the AI applications that will differentiate your business.
You're excited about Google's Gemini Enterprise. Your team has identified game-changing use cases: AI agents that could transform your finance operations, optimize your supply chain, or revolutionize HR workflows. There's just one problem: your critical business data is locked away in complex on-premises systems and sprawling SaaS applications.
Sound familiar?
If you're asking yourself, "How do we get our ERP data into BigQuery so we can actually use this?" you're not alone. And more importantly, you don't have to embark on a year-long data engineering odyssey to get there.
Getting your ERP data ready for Gemini Enterprise can feel like a massive - and expensive - data engineering project. Traditional approaches require:
By the time your data is "AI-ready," your competitive window may have already closed.
This is where Incorta changes the game.
Incorta's Direct Data Mapping™ technology eliminates the need for those complex, time-consuming ETL processes entirely. Instead of spending months (or years) building data pipelines, Incorta gets 100% of your full-fidelity, transactional data wherever you need, in a matter of weeks.
No data loss, and no endless data engineering cycles.
But here's what really matters: the combined value of Incorta and Gemini Enterprise is what the right data foundation enables your AI agents to do.
Incorta provides the "enterprise truth" that grounds your AI agents, preventing hallucinations and enabling accurate, real-time actions. With this foundation, you can build agents that:
Your AI agents become trusted advisors, not experimental novelties - because they're working with complete, accurate, real-time data.
Ready to turn your Gemini Enterprise vision into reality (without the data engineering nightmare)?
Let us show you a 15-minute demo of how we've done this for other customers. You'll see exactly how Incorta can fast-track your journey from data chaos to AI-powered intelligence, because the future of work isn't waiting for your data pipelines to catch up.
A strategic partnership positioned to accelerate Vision 2030's digital transformation goals through comprehensive workforce development
In a landmark announcement that signals Saudi Arabia's commitment to leading the global AI revolution, Incorta has partnered with the Kingdom's Ministry of Communications and Information Technology (MCIT) to launch an unprecedented national upskilling initiative. This first-of-its-kind program, hosted on the National eLearning Center's (NELC) platform, aims to certify 100,000 Saudis in critical AI and data skills, directly supporting the nation's ambitious Vision 2030 goals.
The timing of this announcement couldn't be more strategic. As the world accelerates AI adoption across industries, Saudi Arabia is positioning itself as a regional and global leader by investing in its most valuable asset: its people. The partnership represents more than just another training program—it's a comprehensive workforce transformation initiative designed to secure the Kingdom's competitive advantage in the AI-driven economy.
The scale is remarkable: 100,000 certified professionals across three distinct pillars, each targeting different segments of Saudi society:
The first pillar focuses on students and emerging professionals through university projects, national hackathons, and internship opportunities. By embedding AI and data skills into academic curricula and providing hands-on experience, the program ensures that Saudi's youth enter the workforce already equipped for tomorrow's challenges.
The second pillar addresses the immediate need to reskill existing professionals through online and bespoke certifications. This ensures that Saudi's current workforce can adapt to AI-enhanced roles across critical sectors like energy, logistics, and manufacturing.
Perhaps most importantly, the third pillar targets C-suite leaders through roundtables and global partner-led leadership training. By ensuring executives understand AI's transformative potential, the program creates a top-down culture of innovation and adoption.
The collaboration between Incorta and MCIT isn't just about technology transfer—it's about creating a sustainable ecosystem for AI excellence. Incorta brings proven expertise in data analytics and AI implementation, while MCIT provides the governmental framework and national reach necessary for large-scale transformation.
The involvement of the National eLearning Center (NELC) as the delivery platform adds another layer of significance. As an existing Incorta customer, NELC provides the digital infrastructure to ensure scalability and accessibility for learners across the Kingdom, from Riyadh to remote regions.
Unlike many ambitious announcements that take years to materialize, this initiative begins immediately. Online certifications and university pilot programs are launching right away, with hackathons, specialized courses, and executive programs rolling out throughout 2026.
This rapid deployment reflects both the urgency of the global AI race and Saudi Arabia's readiness to act decisively on its Vision 2030 commitments. The Kingdom isn't just planning for an AI-enabled future—it's actively building it.
The implications extend far beyond Saudi Arabia's borders. This initiative establishes a blueprint for how nations can systematically prepare their workforces for AI transformation at unprecedented scale. Other countries will undoubtedly study and adapt this model as they face similar challenges in preparing their citizens for an AI-driven economy.
For the global tech community, the partnership demonstrates Saudi Arabia's serious commitment to becoming an AI hub, potentially attracting increased investment and collaboration from international partners seeking access to a skilled, AI-ready workforce.
As announced at Incorta's No Limits 2025 event in Riyadh, this partnership represents just the beginning of what promises to be a transformative journey for Saudi Arabia's workforce. The program's success will be measured not just in certifications earned, but in the economic impact of having 100,000 AI and data-skilled professionals driving innovation across the Kingdom's key industries.
The initiative addresses a critical challenge facing economies worldwide: how to ensure human capital keeps pace with technological advancement. By taking a comprehensive, multi-pillar approach that spans from students to executives, Saudi Arabia is positioning itself as a case study in proactive workforce development.
For professionals in Saudi Arabia, the message is clear: the future of work is arriving faster than ever, but so are the opportunities to prepare for it. This partnership between Incorta and MCIT provides a structured, accessible pathway for Saudis to not just adapt to the AI revolution, but to lead it.
As the program scales from immediate online certifications to comprehensive hackathons and executive training, it will undoubtedly serve as a cornerstone of Vision 2030's digital transformation goals—and a model for nations worldwide seeking to thrive in the AI age.

Finance professionals know the drill all too well. A variance appears in Workday Adaptive Planning—say, $120,000 spent instead of the planned $100,000. What follows is a familiar dance of frustration: exiting the planning system, hunting through general ledger balances, downloading AP registers to Excel, building complex spreadsheets, and spending hours validating that numbers actually tie together across systems.
This process, historically so expensive to solve that the industry dubbed it "The Million-Dollar Click," just got dramatically simpler.
Incorta's new drill-through capability for Workday Adaptive Planning, featured during the "Cloud Data Connector and Optimizing Your Data Foundation to Enhance Insights" session, enables finance teams to right-click on any number and instantly access the complete transaction story behind it—journal entries, sub-ledger details, and ERP context—without leaving their workflow.
"For finance teams, drill-through solves the age-old challenge of getting from a variance to its root cause—fast," said Joe Cooper, VP of Alliances at Incorta. "Instead of pulling reports, chasing down extracts, or stitching together Excel models, you get immediate access to the full story behind the number."
The impact extends far beyond individual analyst productivity. Consider that companies like Tyson Foods maintain over 18,000 reports requiring 12,000 person-hours monthly just for data maintenance and root cause analysis. Even eliminating half this burden would free roughly 72,000 hours annually—the equivalent of about 35 full-time employees.
For smaller organizations using Adaptive Planning, the typical monthly reconciliation burden ranges from 500 to over 12,000 hours, with teams spending more time as "data jockeys" than strategic business partners.
What sets this integration apart is its depth. While Workday's initial drill-through implementation was limited to basic GL-to-journal-line queries, Incorta's comprehensive data foundation enables drill-through to operational root causes across multiple systems:
Perhaps most critically, the solution addresses something unmeasurable in traditional ROI calculations: executive confidence. As Incorta's own CFO, Rob Dillon, noted, "It gives a level of comfort to the CFO—you're not lying awake at night wondering if your numbers have total fidelity."
This confidence cascades through the organization: controllers complete month-end closes with greater certainty, analysts focus on analysis rather than data hunting, and auditors receive complete, traceable information faster.
The drill-through capability builds on Incorta's designated Workday Innovation Partnership announced in March 2025, with solutions available through Workday Marketplace for easy customer deployment.
Learn more about our partnership with Workday, or visit Incorta’s page on Workday Marketplace.
The technology represents a fundamental shift from reactive data validation to proactive business analysis. Finance teams can finally redirect thousands of monthly hours from manual reconciliation to strategic activities that drive business value.
For organizations currently managing complex data landscapes under tight reporting deadlines, the question isn't whether they need this capability—it's how quickly they can implement it.
Learn more about Incorta's drill-through capabilities for Workday Adaptive Planning at www.incorta.com.
Enterprise leaders are asking the wrong questions about their data. While boardrooms debate cloud migration strategies and dashboard aesthetics, a fundamental transformation is reshaping how successful organizations make decisions. The companies that recognize this shift early will dominate their markets. Those that don't will find themselves perpetually playing catch-up with competitors who seem to anticipate every market move.
Most enterprises are drowning in data yet starving for actionable insights. Despite decades of investment in business intelligence platforms, data warehouses, and analytics tools, the vast majority of business decisions still rely heavily on executive intuition, incomplete information, and delayed reporting that's obsolete by the time it reaches decision-makers.
Consider this scenario: Your sales team reports that Q3 revenue is tracking 12% below target. Traditional BI systems can tell you this happened. They can show you which regions are underperforming, which products are lagging, and how current numbers compare to historical trends. But they cannot tell you that the decline correlates with a specific competitor's pricing strategy in your top three markets, that customer sentiment surveys indicate quality concerns about a recent product update, and that your most effective response would be a targeted discount program combined with proactive customer outreach—executed within the next 14 days before the quarterly buying cycle closes.
This gap between insight and action represents billions in lost revenue across the enterprise landscape. More critically, it represents a fundamental misunderstanding of what modern data capabilities should deliver.
Decision intelligence represents the next evolution beyond traditional business intelligence. While BI focuses on what happened and why, decision intelligence integrates artificial intelligence, predictive analytics, and automated execution capabilities to answer the most important business question: "What should we do next?"
The transformation from BI to decision intelligence involves several critical shifts:
From Reactive to Proactive: Instead of analyzing past performance, decision intelligence systems continuously monitor business conditions and surface recommendations before problems become critical.
From Siloed to Contextual: Rather than presenting isolated metrics, decision intelligence platforms understand the interconnections between different business functions, market conditions, and operational constraints.
From Human-Dependent to AI-Augmented: While human judgment remains essential for complex strategic decisions, routine operational choices can be automated based on comprehensive data analysis and proven decision frameworks.
From Insight to Action: Perhaps most importantly, decision intelligence closes the loop between discovery and execution, enabling organizations to act on insights immediately rather than waiting for manual processes to implement recommendations.
True decision intelligence requires three foundational capabilities that most organizations lack:
Decision intelligence demands access to live, detailed data from every system that impacts business outcomes. This isn't just about having data warehouse connections—it requires real-time integration with ERP systems, CRM platforms, supply chain management tools, customer service applications, and external data sources - like market feeds and competitive intelligence.
The challenge is that most enterprise data architectures rely on complex ETL processes that introduce delays, data degradation, and maintenance overhead. By the time data flows through traditional pipelines, the business conditions that generated it may have already changed.
Effective decision intelligence platforms eliminate these pipeline dependencies through direct system connectivity that maintains data fidelity while providing immediate access to operational details that drive accurate decision-making.
Raw data without business context is meaningless for decision-making. Decision intelligence systems must understand not just what the numbers are, but what they mean within specific business contexts, how they relate to other metrics, and what actions they suggest.
This requires sophisticated semantic layers that automatically enrich data with business metadata, relationship mapping, and decision logic. The system needs to understand that a 15% increase in customer service tickets might indicate quality issues if it coincides with recent product releases, but could signal successful market expansion if it correlates with new customer acquisition in targeted segments.
Most importantly, this contextual understanding must evolve continuously as business conditions change, learning from outcomes to improve future recommendations.
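As a rough, hypothetical illustration of what that decision logic can look like once it is written down explicitly (the thresholds and signal names below are invented for this example, not part of any particular platform):

```python
from dataclasses import dataclass

@dataclass
class BusinessContext:
    ticket_growth: float          # e.g. 0.15 for a 15% increase in service tickets
    recent_product_release: bool  # did a release ship in the same window?
    new_customer_growth: float    # growth in newly acquired accounts

def interpret_ticket_spike(ctx: BusinessContext) -> str:
    """Turn a raw metric into a business interpretation using surrounding context."""
    if ctx.ticket_growth < 0.10:
        return "normal variation - no action suggested"
    if ctx.recent_product_release:
        return "possible quality issue - route to product and support leadership"
    if ctx.new_customer_growth >= ctx.ticket_growth:
        return "likely driven by customer growth - review onboarding capacity"
    return "unexplained increase - investigate further"

print(interpret_ticket_spike(BusinessContext(0.15, True, 0.02)))
```

The value of a semantic layer is that rules like this live alongside the data and evolve with it, instead of being rediscovered in every spreadsheet.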
The most sophisticated analytics are worthless if they don't translate into business action. Decision intelligence platforms must provide embedded capabilities that enable immediate execution of recommended actions within existing business workflows.
This might involve automatically adjusting pricing in e-commerce systems based on competitive analysis, triggering supply chain adjustments when demand forecasting indicates inventory shortages, or initiating customer retention campaigns when predictive models identify at-risk accounts.
The key is seamless integration with existing business tools rather than requiring users to switch between systems or manually implement recommendations through separate processes.
MIT research indicates that 95% of enterprise AI initiatives fail to deliver meaningful business value. The primary reason isn't technical—it's foundational. Organizations are attempting to implement AI solutions on data infrastructures that lack the fundamental capabilities required for effective decision intelligence. Many enterprises rely on existing data warehouses, where data is aggregated and standardized, or on data lakes that are ungoverned and carry no context about the data they hold.
Common failure patterns include:
Fragmented Data Foundations: AI models trained on incomplete, inconsistent, conformed, or aggregated data produce unreliable recommendations that business users quickly learn to ignore.
Lack of Business Context: Technical teams build sophisticated algorithms that don't understand business logic or the context of the data being used, resulting in recommendations that are mathematically correct but operationally impossible.
Integration Complexity: AI insights that require manual implementation through multiple systems create workflow friction that undermines adoption and execution speed.
Governance and Trust Issues: Without transparent decision logic and clear audit trails, business leaders remain reluctant to rely on AI recommendations for important decisions.
Cultural Resistance: Organizations that haven't established data-driven decision cultures struggle to adopt AI-augmented workflows, regardless of technical capabilities.
Organizations that successfully implement decision intelligence capabilities gain several critical advantages:
Operational Agility: The ability to identify and respond to market changes, operational issues, and competitive threats faster than organizations dependent on traditional reporting cycles.
Resource Optimization: AI-augmented decision-making enables more effective allocation of marketing spend, inventory investment, staffing resources, and capital deployment based on comprehensive data analysis rather than intuition.
Risk Mitigation: Predictive capabilities that identify potential problems before they impact business outcomes, from supply chain disruptions to customer churn to quality issues.
Innovation Acceleration: Data-driven insights that reveal new market opportunities, product development possibilities, and operational efficiencies that might not be apparent through traditional analysis.
Scalable Decision-Making: The ability to maintain decision quality and speed as organizations grow, without proportional increases in management overhead or decision-making bottlenecks.
Implementing effective decision intelligence requires specific technological capabilities that differ significantly from traditional BI architectures:
Real-Time Data Integration: Direct connectivity to source systems that eliminates ETL delays while maintaining full data fidelity and automatic adaptation to system changes.
Self-Learning Semantic Layers: Automated discovery and maintenance of business context, relationships, and decision logic that evolves with organizational changes.
Conversational AI Interfaces: Natural language query capabilities that enable business users to explore data and receive recommendations without technical training.
Embedded Automation: Workflow integration that enables immediate action on insights within existing business tools and processes.
Transparent Decision Logic: Clear audit trails and explainable AI that enable business leaders to understand and trust automated recommendations.
The shift to decision intelligence represents both an opportunity and an imperative. Organizations that recognize this transformation early can gain significant competitive advantages by making faster, more accurate decisions based on comprehensive data analysis.
However, this transition requires more than technology implementation—it demands fundamental changes in how organizations approach decision-making, data governance, and business process design.
The critical question for enterprise leaders is not whether to pursue decision intelligence capabilities, but how quickly they can implement the foundational technologies and organizational changes required to compete effectively in a data-driven marketplace.
For organizations still dependent on traditional BI approaches, the window for strategic advantage is closing rapidly. The companies that successfully bridge the gap between data insights and business action will define the competitive landscape for the next decade.
The future belongs to organizations that can ask their data what to do next—and then act on the answer immediately. Everything else is just reporting on what already happened while competitors shape what happens next.
Decision intelligence represents a fundamental shift in how organizations leverage data for competitive advantage. The question is not whether this transformation will happen, but whether your organization will lead it or be disrupted by it.
As organizations accumulate massive volumes of data across CRMs, ERPs, and countless other systems, the ability to effectively manage and extract meaningful insights from this information has become a strategic imperative. The Medallion architecture, championed by Databricks, offers a structured approach to this challenge by organizing data into distinct layers: Bronze, Silver, and Gold. But how does Incorta, with its unique capabilities, fit into this powerful framework? Let's explore.
The Medallion architecture is designed to progressively refine data, ensure quality, and provide a clean, business-ready view for analysis. Incorta's architecture, with its focus on speed, Direct Data Mapping™, and schema-on-query, complements this approach beautifully.

The Bronze layer is your data's untouched sanctuary. It's where raw, unvalidated data lands in the Incorta Data Lake, preserving its original format. Think of it as a historical record, capturing every change (including updates and deletions) through Change Data Capture (CDC) logic using our connectors. Your data lake can also be integrated into Incorta through remote tables, giving you unified access to all your data sources.
What you can achieve in Incorta’s Bronze Layer:
The Silver layer is where the magic of data preparation begins. This layer is source-aligned, meaning it's still close to the original data but undergoes crucial validation and transformation. It's your staging area, where data quality is prioritized.
What You Can Achieve in Incorta's Silver Layer:
The Gold layer is your business-aligned layer, meticulously modeled to serve specific business needs. Think of it as your "Business Views" or “AI-ready data” for critical processes like Order-to-Cash, Procure-to-Pay, or AI use cases.
Incorta Offers Three Powerful Options for Gold Layer Modeling:

By integrating Incorta into the Medallion architecture, organizations can create a powerful and efficient data pipeline. Incorta's ability to directly ingest and rapidly query large datasets, combined with its flexible modeling capabilities, accelerates data transformation from raw to refined.
From the raw intake of the Bronze layer, through the meticulous cleaning and staging of the Silver layer, to the business-ready insights of the Gold layer, Incorta empowers you to build a true data powerhouse. This approach not only ensures data quality and governance but also provides the agility needed to respond quickly to evolving business demands, transforming your data into a strategic asset.
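For readers who want a more concrete picture, here is a generic, Spark-style sketch of the Bronze-to-Gold flow described above. It is intentionally simplified and uses hypothetical table names and rules; it is not Incorta-specific syntax, just an illustration of how each layer progressively refines the one beneath it:

```python
# A generic PySpark-style sketch of the Bronze -> Silver -> Gold flow.
# Table names and rules are hypothetical; this is not Incorta-specific syntax.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_sketch").getOrCreate()

# Bronze: raw, unvalidated records exactly as they arrived from the source
bronze_orders = spark.read.parquet("/lake/bronze/erp_orders")

# Silver: source-aligned but validated, de-duplicated, and typed
silver_orders = (
    bronze_orders
    .dropDuplicates(["order_id"])
    .filter(F.col("order_amount").isNotNull())
    .withColumn("order_date", F.to_date("order_date"))
)

# Gold: business-aligned view, e.g. a slice of an Order-to-Cash model
gold_revenue_by_month = (
    silver_orders
    .groupBy(F.date_trunc("month", "order_date").alias("month"))
    .agg(F.sum("order_amount").alias("revenue"))
)
gold_revenue_by_month.write.mode("overwrite").parquet("/lake/gold/revenue_by_month")
```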
The enterprise data landscape stands at an inflection point. After decades of investment in data infrastructure, most businesses still struggle to transform raw information into meaningful insights that drive faster, smarter decisions. Meanwhile, the promise of AI remains largely unfulfilled, with MIT research showing that 95% of enterprise AI initiatives fail to deliver meaningful business value.
The modern data stack is fundamentally flawed. Organizations have assembled complex, fragmented architectures of stitched-together tools that make extracting intelligence from core applications like ERP and CRM systems slow, expensive, and limiting. The result? Reporting that's backward-looking, narrowly scoped, and often obsolete by the time it reaches decision-makers.
More critically, as enterprises rush to implement AI solutions, they're attempting to bolt artificial intelligence onto these broken foundations. Without proper context, semantic understanding, and real-time data access, AI initiatives inevitably fall short of their transformative potential.
Incorta's new vision represents a fundamental shift from passive analytics to proactive decision intelligence. Decision intelligence is a systematic, data-driven approach to making decisions by integrating insights from artificial intelligence, analytics, business rules, and process automation. It moves organizations away from intuition-based choices toward decisions backed by trusted historical data, augmented with advanced technologies like machine learning and AI.
This transformation encompasses several key aspects:
Our vision is built upon three foundational pillars that address the complete decision intelligence lifecycle:
Unlike traditional approaches that require complex ETL processes, Incorta provides:
Our platform delivers:
The final pillar enables:
While competitors scramble to rebuild their architectures through acquisitions and mergers—creating new patchworks of disparate tools—Incorta already possesses the critical capabilities needed for the AI era:
Direct Data Mapping™ Technology: Our proprietary approach eliminates pipeline dependencies while maintaining full data fidelity without loss through aggregation. This provides the live, detailed data that AI systems require for accurate decision-making.
Deep Source System Expertise: Years of specialization with complex systems like Oracle, SAP, and Workday give us unmatched connectivity to where enterprise data actually lives.
Built-in Semantic Layer: Our platform delivers the contextual intelligence necessary for successful agentic workflow implementation—something competitors are still struggling to achieve.
Predictable Cost Structure: Transparent pricing that enables organizations to scale without fear of runaway expenses.
Looking ahead, Incorta envisions "Nexus"—Incorta’s intelligent platform that will let enterprises seamlessly connect to their live, detailed data using natural language. Nexus represents the evolution from traditional business intelligence to agentic, conversational data interaction that can understand business context, recommend actions, and execute decisions while maintaining human oversight for governance, trust, and security.
This vision delivers measurable transformational benefits across multiple dimensions:
Operational Transformation:
Strategic Business Value:
Our announcement comes at a pivotal moment. Gartner predicts that by 2026, 50% of organizations will demand unified, composable analytics platforms, and by 2028, fragmented data management markets will converge into a single ecosystem enabled by data fabric and generative AI.
While the market undergoes this transformation, Incorta stands ready to lead. Our decade of innovation with Direct Data Mapping™ technology positions us uniquely to deliver what enterprises desperately need: a unified platform that closes the gap between curiosity and action.
As Osama announced at No Limits, we're issuing a specific call to action for the data engineering community and business technical teams to join us in building the future of decision intelligence.
For Data Engineers and Architects: We invite you to explore how Direct Data Mapping™ can eliminate the pipeline maintenance that consumes so much of your time. Partner with us to build proof-of-concept AI agents that demonstrate real-time data access without ETL complexity. Share your integration challenges with legacy systems—let's solve them together using our adaptive connectivity framework.
For Business Intelligence Teams: Help us define what conversational AI interfaces should look like in your specific industry context. Beta test Nexus capabilities and provide feedback on how natural language queries can better serve your business users' needs.
For IT Leadership: Engage with us on pilot programs that demonstrate cost consolidation opportunities. Let's quantify together how unifying fragmented data stacks can reduce both technical debt and operational expenses while accelerating AI initiative success rates.
For System Integrators and Partners: Join our partner enablement programs to learn how to position decision intelligence solutions that deliver faster time-to-value than traditional modern data stack implementations. Come help us build the future: develop smart agents on the Incorta Nexus Platform and contribute to a marketplace of agents that will help accelerate our customers' businesses.
The era of passive analytics is ending. The age of proactive, AI-driven decision intelligence has begun. Organizations that embrace this transformation will gain unprecedented competitive advantages, while those that cling to fragmented, outdated approaches will struggle to keep pace.
Stay tuned for the full session on demand, coming soon.

A shocking new MIT study reveals that 95% of enterprise generative AI pilots are failing to deliver meaningful business impact. Despite massive investments in AI initiatives, the vast majority of companies are seeing their ambitious AI projects stall at the pilot stage, delivering little to no measurable return on investment.
But here's what the research reveals: it's not the AI models that are the problem—it's the data foundation.
MIT's comprehensive research, based on 150 executive interviews and analysis of 300 public AI deployments, exposes a critical insight that most organizations are missing. As lead researcher Aditya Challapally explains, "Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don't learn from or adapt to workflows."
The core issue isn't regulation or model performance—it's flawed enterprise integration and the inability of AI systems to access, learn from, and adapt to real-time organizational data.
Traditional enterprise data architectures create fundamental barriers to AI success:
Disconnected Data Silos: AI models can't learn from fragmented data spread across multiple systems, databases, and applications.
Stale Data: Most enterprise data warehouses provide historical snapshots, not the real-time insights that modern AI systems need to adapt and improve.
Complex Integration: Generic AI tools can't seamlessly integrate with existing workflows because they lack deep, contextual understanding of your business processes and data.
Resource Misallocation: MIT found that over half of AI budgets go to sales and marketing tools, while the biggest ROI opportunities lie in back-office automation—areas that require deep data integration.
Incorta is the data foundation that makes AI successful. Here's how we solve the core problems that cause 95% of AI initiatives to fail:
Unlike traditional data warehouses that provide static snapshots, Incorta delivers real-time access to all your enterprise data. Your AI and ML models can:
Incorta eliminates the data silos that cripple AI initiatives by creating a unified data layer that:
Where generic AI tools fail at enterprise integration, Incorta excels by:
MIT's research shows the biggest AI returns come from back-office automation—exactly where Incorta's real-time data foundation delivers maximum impact:
The 5% of companies succeeding with AI share common characteristics that Incorta directly enables:
Deep Integration: Successful AI isn't bolted on—it's built into the fabric of the organization through proper data foundation.
Real-Time Learning: AI systems that can adapt and learn from current business conditions deliver measurable results.
Workflow Integration: Rather than generic tools, successful AI is deeply integrated into specific business processes.
Data-Driven Decision Making: AI success requires access to comprehensive, real-time business intelligence.
Don't become part of the 95% failure statistic. The difference between AI pilots that stall and AI implementations that scale is having the right data foundation from the start.
Incorta provides the real-time data infrastructure that transforms AI from experimental to essential. With direct access to live business data, your AI initiatives can:
The MIT research is clear: AI success isn't about the models—it's about the data foundation. Make sure your AI initiatives have the real-time data access they need to succeed.
Ready to build AI on a foundation that delivers results? Discover how Incorta's real-time data platform transforms AI pilots into production successes. Contact us today to learn more.
There are very few people in a company who see across the entire organization — typically your CEO, CFO, and COO. They have a seat at the table in every conversation, and they are constantly making critical decisions.
These leaders can make decisions in one of two ways: based on gut instinct or based on data. Everyone knows that making decisions with data is the right approach, but it’s never really that simple.
Making decisions based on data is hard when the data isn’t timely or accurate — which is often the case with data that goes through a lot of transformation. This is a big reason why leaders often follow their gut instincts. Most want data, they just can’t get what they need.
That’s why I decided to join Incorta as CFO. Incorta solves a big business problem that I have been wrestling with for over a decade: Pulling together accurate data from all the different sources inside and outside of the company, in real time, to support data-driven decision making.

In my previous role we wanted to do holistic customer 360s. The last thing I wanted was for a salesperson to get blindsided by a customer who was unhappy for some reason.
Our vision was to make sure that when anyone in our company talked to a customer, they could see how that customer had been interacting with all facets of the organization. That meant marrying disparate data sets in order to get a visualization of the entire relationship. We started with payment history and some Salesforce information. Then, we brought in CSAT scores, customer cases out of Jira, and other qualitative information.
We eventually built a comprehensive dashboard, but to do it, we had to bolt together a whole bunch of different solutions. It took about six months to get the first cut of data, and the better part of a year to fully realize our vision. Once it was done, it took a full-time employee on the sales ops team — a data analyst who knew Salesforce and was also a Tableau Ninja — to maintain it day to day.
If I had known about Incorta then, I would have gone in a different direction.
With Incorta, you can bring high-fidelity, easily auditable information together in real time because you don’t have to go through all the typical heavy-duty transformation processes. We could have had fewer tools in our stack, and the time to get it up and running would have been much faster. Administration could have been done in the finance organization, rather than devoting analyst headcount to it.
For a CFO, the simplicity of a short audit trail is a big deal. Any time you're pulling data, especially financial data, you have to be able to trace that back to its ultimate source because every time it passes through another set of hands, there’s an opportunity for error.
Not only might that lead you to make the wrong decision, but errors can lead to material financial misstatement. There are significant risks associated with that, especially for public companies — and their leaders.
Taking data transformation out of the analytics process has a direct impact on improving data accuracy. There's a cost benefit as well because auditors only need to focus on validating the original source data. They don't have to attempt tracking it back through all the transformations.
Once again, Incorta can help. It excels anywhere you have to drill into multiple systems to get information, and that makes it a powerful tool for finance teams — especially those leading complex organizations.
For most of my career, I’ve worked in companies where the information required for day-to-day financial operations is all inside of an ERP system that is integrated with a planning tool, a human resources tool, and a CRM. When I’ve done acquisitions, the first order of business has always been to bring the acquired company onto my ERP system.
But there are a lot of organizations where you can’t achieve that level of integration. For example, if you’re a multibillion-dollar multi-national acquiring other multibillion-dollar multi-nationals, ripping and replacing systems can be too big and costly a challenge.
The same is true when you acquire companies that operate in a substantially different way. For example, a software company acquires a hardware manufacturer. The manufacturer is going to have warehouse management and inventory management integrations that are unique to their operations. ERP conversion in that situation is very challenging.
In these types of scenarios, Incorta can be used to pull together the information from multiple ERPs for financial reporting so you have total visibility into all your data. You know the data is accurate because it's not going through a huge, multi-step transformation process — it’s a real time view that’s 100% identical to the source, so you can move quickly and make decisions with confidence.
Incorta is the perfect tool for CFOs and other leaders who need visibility across everything — and need to be able to trust what they see. This platform is a total game changer for the office of finance and I am proud to be a part of the journey.
Primient stands as a leading force in sustainable manufacturing, transforming crops into premium plant-based ingredients for global giants like Coca-Cola and Pepsi. With over 1,800 employees across operations in the U.S. Midwest and Brazil, this renewable innovation company had mastered the art of sustainable production—but their data landscape told a different story.
Following their separation from their parent company, Primient inherited a digital nightmare that threatened their growth ambitions. Despite sitting on a goldmine of SAP data, critical insights remained locked away across hundreds of legacy systems and custom BW reports.
The reality:
"We were drowning in spreadsheets and shared drives," recalls Tom Kirkham, CIO at Primient. "It was impossible to be consistent, and impossible to make real-time decisions."
The company's forecast-driven operating model demanded precision and speed—qualities that their fragmented data infrastructure simply couldn't deliver. Excel workarounds had evolved from temporary fixes into a complex labyrinth of manual processes.
Primient's transformation began with a clear vision: establish one version of the truth that could power intelligent decision-making across the entire organization.
While other vendor evaluations dragged on for months with minimal results, Incorta delivered immediate impact. In just four days, Kirkham had his entire SAP test environment running live in Incorta—making it "the easiest selection I've ever made."
The timing couldn't have been better. Just as Primient was evaluating enterprise planning solutions, Workday and Incorta announced their strategic partnership. This alignment of technology partners created an integrated finance stack that would prove transformative.
Incorta's direct data platform approach meant no complex ETL processes or data modeling delays. The solution delivered:
"The great thing about Incorta is that we get detailed data—down to each transaction—without having to deal with SAP's complexity," explains Anastas Harizanov, Director of FP&A. "You just log into Incorta and get exactly what you need."
The impact was immediate and measurable across multiple dimensions:
Finance Evolution: "Incorta allowed us to put the 'A' back in FP&A," notes Harizanov. The team shifted from data collectors to strategic business partners, with 70% more time for analysis and insights.
Cross-Functional Alignment: With everyone working from the same data source, reconciliation debates dropped by 80%. Finance, supply chain, and operations teams now collaborate from shared insights rather than competing spreadsheets.
Operational Insights: Cost accountants, plant managers, and operations teams gained unprecedented visibility into granular performance metrics, enabling proactive decision-making at every level.
During their first annual planning cycle with the new system, operations managers caught a costing error that would have previously gone unnoticed—demonstrating the power of accurate, real-time data to surface insights that manual processes simply missed.
"We found a costing error during AOP because the system flagged an SKU overestimation," Harizanov explains. "That's the power of accurate, real-time data."
Perhaps most importantly, Primient's clean, centralized data foundation positions them for the next wave of innovation. The company is already leveraging AI tools like Microsoft Copilot for predictive forecasting and plans to expand machine learning capabilities across demand planning and scenario modeling.
"AI will essentially become our analyst," explains Kirkham. "We're building for scale without the need for massive headcount, and Incorta gave us the foundation to use it effectively."
Today, Primient operates with the agility and precision their forecast-driven business model demands. Financial scenario modeling that once took weeks now happens in near real-time. The company can instantly "financialize" supply chain scenarios, running complex what-if analyses in minutes rather than months.
For a commodities-based business where corn prices change every thirty seconds, this real-time analytical capability represents a fundamental competitive advantage.
Primient's journey demonstrates that even the most complex SAP environments can become engines of insight and growth. By establishing a modern data foundation first, they've not only solved immediate pain points but created a platform for continuous innovation.
For manufacturers drowning in spreadsheets and legacy complexity, Primient offers a clear roadmap: invest in the right data infrastructure, partner with proven technology leaders, and transform your organization's relationship with data from obstacle to advantage.
The result? A sustainable, scalable foundation for growth that turns data chaos into competitive intelligence—proving that the deeper you dig into your data, the more value you can create.
"Incorta gave us SAP-level detail without SAP-level complexity."
— Anastas Harizanov, Director of FP&A, Primient
Watch the full discussion on demand here.

Workday is a powerful cloud-based ERP and HCM platform, but extracting and analyzing its data presents unique challenges. Unlike traditional databases, where you can query tables directly, Workday operates on a complex object model accessible only through APIs. This complexity often leads to prolonged data projects (12-24 months), segmented views of business data, and significant manual effort.
Incorta’s Workday Data Applications address these challenges head-on, helping organizations deploy analytics faster, while maintaining high-fidelity data and Workday’s security framework.
Workday’s data is structured as nested business objects (e.g., an Employee object contains Address, which itself has sub-objects like City or PostalCode). Extracting this data requires navigating API calls, not SQL queries.
Workday lacks native bulk extraction tools. Traditional methods like SOAP/WQL APIs are slow and have a 1-million-row limit per call, forcing workarounds for large datasets.
Combining Workday data with external sources (e.g., Salesforce, Excel) is cumbersome. Workday’s Prism integration tool is limited in transformation capabilities.
Data extraction requires precise prompts/filters (e.g., CompanyID, Year). Missing these triggers API errors, adding complexity.

Incorta leverages the CData Workday Connector to bridge Workday’s APIs with Incorta’s analytics engine. The connector supports:

For initial full loads, split extractions by time ranges (e.g., Year IN (2023, 2024)) or business dimensions (e.g., CompanyID). Merge datasets in Incorta post-extraction.
WQL automatically flattens XML/JSON columns (e.g., Journal Lines), while SOAP requires manual parsing in Incorta.
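To illustrate what that manual parsing involves on the SOAP path, here is a small, hypothetical sketch. The field names are invented to mirror the nested shape described above (a journal header containing journal lines), not actual Workday response fields:

```python
# Hypothetical, simplified Workday-style response: a journal with nested lines.
# Field names are illustrative only; real SOAP payloads are XML and far larger.
journal = {
    "JournalID": "J-1001",
    "EntryDate": "2023-01-15",
    "Company": {"CompanyID": 1001, "Name": "US Operations"},
    "JournalLines": [
        {"LineNumber": 1, "Account": "4000", "Amount": 1200.00},
        {"LineNumber": 2, "Account": "5000", "Amount": -1200.00},
    ],
}

def flatten_journal(doc: dict) -> list[dict]:
    """Explode nested journal lines into flat rows - the work WQL does for you."""
    header = {
        "JournalID": doc["JournalID"],
        "EntryDate": doc["EntryDate"],
        "CompanyID": doc["Company"]["CompanyID"],
    }
    return [{**header, **line} for line in doc["JournalLines"]]

for row in flatten_journal(journal):
    print(row)
```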
Each Workday data source requires specific prompts (e.g., CompanyID). The CData connector allows filtering during extraction:
```sql
SELECT *
FROM Workday_Journal
WHERE EntryDate >= '2023-01-01' AND CompanyID IN (1001, 1002)
```
Incorta’s pre-built Data Apps for Workday Adaptive Planning accelerate analytics for:
Pegasus combined Workday Financials with logistics data in Incorta to:

Incorta’s Data Apps for Workday turn a historically complex integration into a streamlined analytics pipeline. By combining CData’s robust connector with Incorta’s no-code transformations, organizations can unlock their Workday data in days, with no compromises on fidelity or security.
To learn more:
At Incorta, we believe data should have no limits: no delays, no barriers, no compromises. That’s why we created NoLimits, the premier global event where business leaders, data innovators, and technology pioneers come together to transform raw data into real-world impact.
This event spans from San Francisco to Saudi Arabia, united by one mission: breaking through data challenges to drive faster, smarter decisions. Whether you’re in finance, healthcare, retail, or government, attending NoLimits gives you the strategies, tools, and connections to:
Last year, we proved what’s possible when data moves at the speed of business. This year, we’re going even further. Will you join us?
Last year, NoLimits Riyadh brought together the brightest minds in data and analytics to tackle the biggest data challenges they face. The message was clear: data is powerful, but only if you can use it. Let’s take a look at some of our standout sessions:
As a global insurance leader, GIG faced a critical problem: legacy systems couldn’t keep up with regulatory demands. This was resulting in costly delays and regulatory penalties.
Manual processes and third-party errors made compliance a high-risk, high-cost nightmare.
Incorta transformed GIG’s data chaos into clarity with:
Now, GIG's teams simply change filters: no manual extraction, no errors, and no penalties. The result is faster, smarter, risk-free compliance.
As Hassan Abdulrahman (IT Senior Architect Manager) stated:
“We extract millions of records in milliseconds—no reshaping, no delays.”
Neoleap, a leading financial services provider, struggled with slow, fragmented data access across teams. Without a unified system, detecting fraud was reactive - costing time, money, and security.
Incorta transformed Neoleap’s operations with instant data access & AI-powered fraud detection:
With Incorta, Neoleap identified fraud patterns faster and sent automated alerts in real-time.
Most importantly, Incorta adapted to Neoleap’s existing systems and processes - without costly overhauls or disruptive changes.
Saudi Arabia’s Tourism Development Fund (TDF) fuels the Kingdom’s ambitious vision - helping private investors to build the future of tourism. But with data trapped in disconnected databases (MySQL, MongoDB, Excel) and teams wasting 2+ hours daily on manual SQL queries, agility was impossible. Manual data chaos in a fast-growing sector meant:
Incorta replaced chaos with clarity by:
With Incorta, TDF’s team could finally focus on strategy, not spreadsheets, for faster decisions and stronger tourism growth.
The future of data is here. Are you ready? Join us on September 4th.
🔗 Watch more from last year’s sessions on-demand here.

For over a decade, enterprise teams chased the vision of business intelligence delivering "decision-making at the speed of thought." But the reality fell short—clunky data pipelines, abandoned dashboards, and analysts buried in manual prep work became the norm.
By investing deeply in AI agents, companies aim to bridge the persistent gap between raw data and real-world decisions. These agents operate quietly in the background, empowering business users to ask natural questions and get meaningful answers instantly. The impact could be transformative: freeing analysts from routine tasks and reshaping their roles into strategic drivers of innovation across industries.
AI agents are one of the hottest topics in tech right now. But what exactly are they? Are they just a repackaging of existing AI concepts, or is there something fundamentally new happening?
In this blog, we’ll break down:
AI agents are defined as software systems that use AI to pursue goals and complete tasks. They exhibit reasoning, planning, and memory, along with a level of autonomy that lets them make decisions, learn, and adapt.
For example:


You begin with an LLM and a prompt.
Next, you apply RAG (Retrieval-Augmented Generation) to pull relevant facts from data sources—via search—and insert them into the prompt.
Then, using a capability called function calling, the LLM can decide which APIs to invoke, generate the correct arguments, and execute them.
With MCP (Model Context Protocol), you standardize how models connect to tools, data, and external systems—making these interactions more modular, reusable, and interoperable.
On top of that, you introduce a reasoning loop: checking whether the user’s intent is understood, validating the plan, verifying tool choices, and reviewing results before returning a response.
Finally, you scale this up to multiple agents—each specialized for specific tools, domains, or processes—that collaborate with one another.
Each of these steps could be a full college course in itself, full of techniques, trade-offs, and subtleties. Every new variation or pattern represents a developer or researcher’s attempt to produce higher-quality results—more reliably, more efficiently, and at lower cost.
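To make the overall shape of that loop more concrete, here is a heavily simplified, hypothetical sketch. The model call and retrieval step are stubbed out and the tool is invented for the example; a real implementation would plug in an actual LLM API, a retrieval index, and an MCP-style tool registry:

```python
import json

# --- Stubs standing in for real components --------------------------------
def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned tool request here."""
    return json.dumps({"tool": "get_open_invoices", "args": {"vendor": "Acme"}})

def retrieve_context(question: str) -> str:
    """Placeholder retrieval step (RAG): look up facts relevant to the question."""
    return "Vendor Acme has 2 open invoices as of yesterday."

TOOLS = {  # function calling: tools the model is allowed to invoke
    "get_open_invoices": lambda vendor: [{"invoice": "INV-7", "vendor": vendor}],
}

# --- One pass of the agent loop --------------------------------------------
def run_agent(question: str) -> str:
    context = retrieve_context(question)                      # RAG
    plan = json.loads(call_llm(f"{context}\n\nUser: {question}"))
    if plan["tool"] not in TOOLS:                              # validation step
        return "I could not find a safe way to answer that."
    result = TOOLS[plan["tool"]](**plan["args"])               # execute the tool call
    return f"Found {len(result)} matching records: {result}"

print(run_agent("Which invoices from Acme are still open?"))
```

Even this toy version shows the moving parts: grounding the prompt with retrieved context, letting the model choose a tool, and validating that choice before executing it.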
To summarize:
According to Gartner, the top deployments are (in order):
Agents are stochastic (non-deterministic), making them hard to test.
Key Evaluation Metrics:
Capability – Can it do the task?
Reliability – Can it do it consistently?
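One way to make those two metrics concrete is to run the same task many times and score both whether the agent ever succeeds and how often it does. A rough, hypothetical sketch (the agent run is simulated here):

```python
import random

def run_agent_once(task: str) -> bool:
    """Stand-in for a real agent run; returns True when the task was completed."""
    return random.random() < 0.8  # pretend the agent succeeds ~80% of the time

def evaluate(task: str, trials: int = 20) -> tuple[bool, float]:
    results = [run_agent_once(task) for _ in range(trials)]
    capability = any(results)              # can it do the task at all?
    reliability = sum(results) / trials    # how consistently does it succeed?
    return capability, reliability

cap, rel = evaluate("resolve a duplicate invoice")
print(f"capable: {cap}, reliability: {rel:.0%}")
```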
Current agents forget past interactions. Research is now focusing on:
What do you think? Are AI agents overhyped, or is this the next big shift in AI? Let us know in the comments.
As announced today, Incorta has been recognized as a Niche Player for the fourth consecutive year in the 2025 Gartner® Magic Quadrant™ for Analytics and Business Intelligence. Our platform continues to push the boundaries of innovation, delivering cutting-edge capabilities that empower customers to unlock the full potential of their complex operational data.
"Incorta is a lakehouse-native unified data and analytics platform tailored for analytics on live, operational data across multiple enterprise applications. Our unique approach integrates complex data from multiple systems of record to deliver harmonized, business ready analytics in your systems of intelligence." said Ashwin Warrier, Incorta's Head of Product.
We believe this year's Magic Quadrant™ recognition underscores that Incorta's vision and strategy are headed in the right direction as we focus on making complex data from operational systems even easier to access, delivering real-time updates, and helping our customers deliver AI- and analytics-ready data.
Since our 2024 Magic Quadrant recognition, Incorta has pursued and implemented transformative capabilities that redefine speed and productivity for business users, analysts, data engineers, and IT teams alike:
Google Cloud Agent-to-Agent Protocol: Incorta is among the first partners backing Agent2Agent (A2A)—a new open protocol from Google Cloud that enables AI agents to securely communicate, coordinate, and collaborate, regardless of vendor or platform. This groundbreaking partnership positions Incorta at the forefront of multi-agent AI collaboration.
Intelligent Accounts Payable Agent: We've launched an AI-powered accounts payable agent for Google Cloud's Agentspace that combines natural language querying, ERP data integration, and intelligent automation specifically designed for enterprise finance teams. This solution demonstrates our commitment to bringing AI directly to critical business processes.
Incorta continues to expand strategic partnerships that bring best-of-breed joint solutions to our customers. Notable partnerships launched since mid-2024 include:
● Google Cloud Integration: A powerful joint solution that simplifies Oracle ERP data complexity for BigQuery customers, enabling seamless access to live, transaction-level financial data
● Workday Adaptive Planning Integration: Enhanced planning capabilities that integrate seamlessly with Workday environments, announced in May 2024
We feel our continued recognition as a Niche Player reflects the unique value we deliver to customers through three core differentiators that define our approach:
Rapid Implementation Without Compromise: Through our Direct Data Mapping™ (DDM) technology, we eliminate traditional ETL bottlenecks by integrating and harmonizing data from multiple sources without transformations, preserving original data fidelity. This streamlined approach enables customers to create business-ready datasets and generate insights in record time.
Real-Time Operational Intelligence: With incremental data refreshes delivered in micro batches at scale, Incorta provides the low-latency data availability that modern businesses demand. This capability proves essential for time-critical operations like inventory optimization and financial period close processes.
Industry-Ready Analytics Solutions: Our comprehensive packaged applications for enterprise systems like Oracle, Salesforce, SAP, and Workday come equipped with prebuilt business schemas and analytic dashboards that embody industry best practices, enabling rapid deployment for complex use cases across finance, supply chain, and human resources.
Before implementing Incorta, Kito Crosby struggled with silos of fragmented data from multiple business units, each with its own systems. Manually produced reports were based on inconsistent data that never seemed to align, were often outdated, and sometimes reached contradictory conclusions. The team was constantly trying to piece together different versions of the truth from various sources. Now, after fully harmonizing their reporting through Incorta, Johnson K. Lai, Chief Information Officer (CIO) of Kito Crosby, provides his executive team, as well as the broader organization, with more automated, timely, and accurate reports. Johnson shared his excitement with the team, saying, "The first thing I saw today when I woke up was an Incorta revenue report waiting for me to review!"
Incorta continues to redefine data analytics by enabling companies of all sizes to capitalize on the true potential of their complex, operational data. Our pre-built solutions, powered by AI and enhanced through strategic partnerships, accelerate analytics development while delivering faster time to value.
As we reflect on four consecutive years of Gartner Magic Quadrant recognition, we remain committed to transforming decision-making processes and driving substantial business growth for our customers through innovative operational analytics capabilities.
Get your own copy of the report here.
Gartner, Magic Quadrant for Analytics and Business Intelligence Platforms, Anirudh Ganeshan, Edgar Macari, et al., 16 June 2025
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and MAGIC QUADRANT is a registered trademark of Gartner, Inc. and/or its affiliates and are used herein with permission. All rights reserved.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Data teams know the pain all too well: waiting hours for pipelines to run, rebuilding schemas for simple questions, and debugging brittle transformation jobs at 2 AM. ETL (Extract, Transform, Load) was once the gold standard, but today it’s a growing source of frustration—delaying insights, eating budgets, and stifling agility.
The good news? You don’t need to eliminate ETL entirely. The problem isn’t moving data—it’s how we transform it.
ETL forces teams to wait. Extracting data, reshaping it, and loading it into warehouses can take days—while the business moves in minutes. By the time reports are ready, decisions have already been made in the dark.
The shift: Modern approaches like Direct Data Mapping® keep extraction and loading but skip the slow, pre-analysis transformations. Data stays in its raw state, so it’s ready for analysis immediately after loading—no delays.
ETL demands predefined data models. Need to ask a new question? Prepare to rebuild pipelines, reconfigure joins, and beg IT for resources. This rigidity kills exploration and forces analysts to work with stale, pre-filtered data.
The fix: Instead of forcing data into star schemas, tools like Incorta’s Direct Data Mapping® analyze data in its natural state. Business users explore freely without waiting for IT to remodel everything.
ETL pipelines are fragile. A small change in one transformation can break dependencies across the entire system. Teams waste more time fixing pipelines than delivering value.
The solution: Reduce transformation complexity. By minimizing post-load reshaping, you cut the maintenance burden while keeping the critical "extract" and "load" steps intact.
ETL isn’t inherently broken—it’s the transform step that creates most of the pain. Reshaping data into analytical models:
The answer isn’t to abandon ETL but to rethink transformation. With approaches like Incorta's Direct Data Mapping®, you:
You don’t need a rip-and-replace project. Start by:
ETL doesn’t have to hurt. By focusing on the real bottleneck—unnecessary transformation—you keep what works (reliable data movement) while ditching what doesn’t (slow, rigid reshaping). The result? Faster insights, happier teams, and a data stack that finally keeps up with the business.
Ready to ease the pain? See how Direct Data Mapping® can work for you.
In today’s data-driven business landscape, organizations rely on seamless access to real-time analytics for decision-making. Oracle E-Business Suite (EBS) remains a cornerstone for enterprise resource planning (ERP), but its data is often locked in complex, transactional systems that are difficult to integrate with modern business intelligence (BI) tools and cloud data warehouses like Google BigQuery.
Extracting, transforming, and loading (ETL) data from Oracle EBS into BI platforms or BigQuery presents numerous challenges—ranging from performance bottlenecks to schema incompatibilities. Many businesses struggle with slow reporting, high IT overhead, and inefficient data pipelines when attempting to modernize their analytics stack.
This in-depth guide explores:
By the end, you’ll understand the best strategies to overcome these obstacles and achieve faster, more cost-effective analytics.
Oracle EBS is a robust ERP system, but its architecture was designed for transactional processing—not real-time analytics. As businesses adopt cloud-based BI tools (Tableau, Power BI, Looker) and data warehouses (BigQuery, Snowflake), they face several pain points:
Modern enterprises demand:
✔ Real-time data access (not batch updates)
✔ Self-service analytics (without heavy IT dependency)
✔ Scalable, cost-effective cloud storage (vs. expensive Oracle licensing)
Google BigQuery, with its serverless architecture and pay-as-you-go pricing, has become a preferred destination for Oracle data. However, migrating from EBS to BigQuery is far from simple.
Oracle EBS contains thousands of tables with intricate relationships. For example:
Impact on BI Tools:
Most BI tools require near real-time data, but Oracle EBS:
While BigQuery offers scalability and cost savings, moving Oracle EBS data presents unique challenges:
Oracle Data Type | BigQuery Equivalent | Migration Challenge
NUMBER(38) | NUMERIC / BIGNUMERIC | Precision handling
VARCHAR2 | STRING | Character set differences
CLOB | STRING (limited to 2MB) | Large object handling
DATE/TIMESTAMP | DATETIME/TIMESTAMP | Timezone conversions
Solution Requirement: A robust data type mapping and transformation layer is needed.
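As a rough illustration of what such a mapping layer does, the sketch below encodes the table above as a lookup and applies it to column definitions. The names and fallback behavior are assumptions, not a production-grade mapper:

```python
# Illustrative Oracle -> BigQuery type mapping, following the table above.
# Simplified sketch only; a production mapper handles many more cases.

ORACLE_TO_BIGQUERY = {
    "NUMBER": "NUMERIC",       # fall back to BIGNUMERIC when precision demands it
    "VARCHAR2": "STRING",
    "CLOB": "STRING",          # watch large objects
    "DATE": "DATETIME",
    "TIMESTAMP": "TIMESTAMP",  # normalize timezones during extraction
}

def map_column(name: str, oracle_type: str) -> dict:
    base_type = oracle_type.split("(")[0].upper()  # "NUMBER(38)" -> "NUMBER"
    return {"name": name, "type": ORACLE_TO_BIGQUERY.get(base_type, "STRING")}

schema = [
    map_column("invoice_id", "NUMBER(38)"),
    map_column("vendor_name", "VARCHAR2(240)"),
    map_column("creation_date", "DATE"),
]
print(schema)
```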
Traditional ETL approaches (Informatica, Talend, SSIS) require:
✔ Custom SQL scripts for extraction
✔ Staging tables for transformation
✔ Scheduled jobs for incremental loads
Problems:
Ensuring data consistency post-migration requires:
✔ Row-count matching (source vs. target)
✔ Data sampling checks (spot-validate key tables)
✔ Reconciliation reports (financial data integrity)
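A row-count check is the simplest of these to automate. The hedged sketch below compares counts between an Oracle source table and its BigQuery target using the standard python-oracledb and google-cloud-bigquery clients; connection details and table names are placeholders:

```python
# Sketch: row-count validation between an Oracle source table and its
# BigQuery target. Connection details and table names are placeholders.
import oracledb                    # pip install oracledb
from google.cloud import bigquery  # pip install google-cloud-bigquery

def oracle_count(table: str) -> int:
    conn = oracledb.connect(user="apps", password="***", dsn="ebs-host/EBSDB")
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

def bigquery_count(table: str) -> int:
    client = bigquery.Client()
    rows = client.query(f"SELECT COUNT(*) AS n FROM `{table}`").result()
    return next(iter(rows)).n

src = oracle_count("AP.AP_INVOICES_ALL")
tgt = bigquery_count("my_project.finance.ap_invoices_all")
print(f"source={src} target={tgt} match={src == tgt}")
```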
Unlike traditional ETL, Incorta’s Direct Data Mapping technology:
Incorta provides out-of-the-box accelerators for:
Benefit: Cuts migration time from months to weeks.
Incorta automatically:
✔ Maps Oracle tables to optimized BigQuery datasets.
✔ Handles partitioning and clustering for cost efficiency.
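Incorta handles this automatically, but for illustration, the snippet below shows what a date-partitioned, clustered target table looks like when created directly with the BigQuery Python client; the project, dataset, and column names are assumptions:

```python
# For illustration only: a date-partitioned, clustered BigQuery target table.
# Incorta manages this automatically; all names below are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
table = bigquery.Table(
    "my_project.finance.gl_journal_lines",
    schema=[
        bigquery.SchemaField("journal_id", "NUMERIC"),
        bigquery.SchemaField("gl_account", "STRING"),
        bigquery.SchemaField("amount", "NUMERIC"),
        bigquery.SchemaField("effective_date", "DATE"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(field="effective_date")  # prune scans by date
table.clustering_fields = ["gl_account"]  # cheaper filtered queries on account
client.create_table(table, exists_ok=True)
```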
Challenge:
Solution:
Results:
✔ $2M/year saved in Oracle licensing.
✔ Real-time inventory analytics in BigQuery.
Method | Pros | Cons
Full Refresh | Simple to implement | High initial load time
Incremental (CDC) | Low latency | Complex setup
Incorta Direct Mapping | No ETL, real-time | Requires Incorta license
Migrating from Oracle EBS to modern BI tools and Google BigQuery is complex—but necessary for real-time, scalable analytics. Traditional ETL approaches are slow, expensive, and fragile, while Incorta’s Direct Data Mapping offers a faster, more efficient alternative.
Key Takeaways:
By leveraging Incorta, businesses can unlock the full potential of their Oracle EBS data in Google BigQuery—reducing costs, improving agility, and enabling true self-service analytics. Learn more.
In today’s data-driven enterprises, real-time insights across diverse systems are crucial. Yet integrating Oracle Cloud Applications with Google BigQuery at scale means dealing with inconsistent schemas, brittle pipelines, and long development cycles. Incorta Connect was built to address these exact challenges—offering a high-performance, schema-aware, and fully observable data delivery platform.
This blog highlights how Incorta Connect streamlines end-to-end data integration from Oracle Cloud Applications to Google BigQuery and why it outperforms traditional ELT frameworks for this use case.
Incorta offers native connectors for Oracle Cloud Applications, supporting both:
While BICC is ideal for extracting standard data sets, it has limitations when certain PVOs don’t capture all relevant fields. To address these gaps, Incorta Connect’s BIP-based integration lets users run custom SQL queries directly on Oracle source tables. This eliminates the need to manually create and manage BIP reports and empowers teams with ad hoc data access.
Incorta Connect not only extracts and loads data but also understands data relationships, hierarchies, and metadata from Oracle Cloud Applications. It enables seamless data unification by connecting Oracle data with other enterprise sources in a centralized semantic layer.
Unlike legacy ETL pipelines that require modeling data in rigid physical schemas, Incorta Connect allows users to define business-friendly metrics and dimensions in a low/no-code interface in the semantic layer. Think hundreds of reusable star schemas configured as logical views over raw or joined data, which can then be pushed to BigQuery or any data warehouse or lakehouse of your choice.
This gives users the flexibility to iterate on business definitions without refactoring pipelines or materializing intermediate datasets unnecessarily. The result: a faster, leaner, and more scalable pipeline that minimizes duplication and accelerates analytics and AI delivery to the business.
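Incorta Connect defines these logical views in its own low/no-code interface, but the underlying idea, a business-friendly view over raw tables that users query instead of the physical schema, can be illustrated with a plain BigQuery view. The names below are assumptions:

```python
# Purely illustrative: a business-friendly logical view over raw tables,
# expressed as a BigQuery view. Project, dataset, and column names are assumptions.
from google.cloud import bigquery

client = bigquery.Client()
view_sql = """
CREATE OR REPLACE VIEW `my_project.semantic.open_po_amount_by_supplier` AS
SELECT s.supplier_name,
       SUM(l.quantity * l.unit_price) AS open_po_amount
FROM `my_project.raw.po_lines` AS l
JOIN `my_project.raw.suppliers` AS s USING (supplier_id)
WHERE l.status = 'OPEN'
GROUP BY s.supplier_name
"""
client.query(view_sql).result()  # analysts query the view, not the raw tables
```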
Once data is prepared within Incorta, it can be delivered to Google BigQuery using a medallion architecture, supporting:
Incorta Connect uses a Parquet and Delta Lake–compatible architecture, optimizing integration with data warehouses and lakehouses. It supports high-throughput data delivery—up to one billion rows per minute, with built-in support for:

Incorta Connect provides robust data orchestration and monitoring capabilities, including:
These tools allow users to trace every step of the pipeline—from Oracle Cloud to BigQuery—helping quickly identify issues, validate transformations, and ensure data consistency and reliability.

In summary, Incorta Connect provides a fundamentally different approach from traditional ETL pipelines. It combines native Oracle integration, semantic-layer modeling, high-throughput data delivery, and built-in observability into a single, cohesive platform. If you're building a modern data stack that integrates Oracle Cloud and BigQuery, Incorta Connect enables you to do it faster, smarter, and with fewer moving parts.
Your business is racing toward AI-driven insights, real-time analytics, and automated decision-making. But there’s a problem. Your data is stuck.
Trapped in siloed ERP systems. Bottlenecked by slow, brittle ETL pipelines. Refreshing only once a day (if you’re lucky). The reality is: If your data isn’t accessible, integrated, and near real-time today, you’re not set up for success tomorrow.
Yet most companies are still being held back by:
- Legacy ETL tools that take months to deploy.
- Batch-based refreshes that leave decisions lagging.
- Sky-high TCO from complex modeling and maintenance.
Picture this: Your finance team needs a consolidated view of revenue across 5 systems. Your operations group is waiting 48 hours for inventory updates. Your analysts spend 70% of their time stitching together reports instead of generating insights.
This isn't just inefficient—it's costing you opportunities.
Every day, businesses struggle with:
The reality? If you can't access ALL your data—live, in detail, across every source—you're making decisions in the dark.
Fortune 500 leaders from Broadcom to Shutterfly trust Incorta to deliver live, detailed, analytic-ready data from any source - now with modular flexibility. Start solving your core data problems today (getting your data live and where it needs to go), then add AI and advanced analytics when you're ready.
Instant Access to Live Data (No More Waiting)
Automated Harmonization (No More Manual Stitching)
Flexible Delivery to Any Destination
Modular Scalability (Start Now, Scale Smart)
Incorta Connect gives you what you need today—with a clear path to more.
✔ Oracle/SAP-centric organizations drowning in migration complexity.
✔ Teams with Snowflake/Databricks tired of waiting for fresh data.
✔ AI/ML leaders who need real-time ERP feeds for agents.
Bottom line: If your data isn’t fast, flexible, and fully integrated, nothing else matters.
Case Study: Fortune 500 CPG Company
Problem:
See the full announcement here:
Learn more about how it works:
Organizations rely on efficient data delivery platforms to extract insights, drive decisions, and maintain a competitive edge. However, the Total Cost of Ownership (TCO) of these platforms goes far beyond initial licensing fees—it includes infrastructure, maintenance, personnel, and hidden inefficiencies.
In this article, we'll explore:
TCO encompasses all direct and indirect costs associated with deploying and maintaining a data analytics solution over its lifecycle. For a data delivery platform, this includes:
Traditional data platforms (like legacy data warehouses or BI tools) often have high TCO due to complex data modeling, excessive ETL processes, and reliance on IT teams for every change.
Incorta’s Direct Data Mapping architecture eliminates many of the costly inefficiencies found in traditional platforms. Here’s how:
When evaluating a data delivery platform, TCO is a critical factor. Traditional solutions burden organizations with hidden costs—slow queries, complex pipelines, and excessive cloud bills.
Incorta’s Direct Data Mapping eliminates these inefficiencies, delivering:
✅ Lower infrastructure costs (less storage/compute needed)
✅ Reduced labor costs (minimal ETL and IT dependency)
✅ Faster insights (real-time analytics without delays)
For businesses looking to cut costs while accelerating analytics, Incorta provides a proven, high-performance alternative to legacy systems.
Want to see how much you could save? See Incorta in action.
Picture this: It's 8:45 AM on Monday. Your CEO needs the Q2 sales report for an investor meeting at 9:30. Your data team is frantically:
This isn't a nightmare - it's reality for most. The culprit? Outdated approaches to data ingestion and data access that turn analytics into an obstacle course. Let's expose the six biggest hurdles:
Why It Sucks:
Traditional data ingestion forces you onto a hamster wheel of:
The Irony: You spend 80% of your data engineering time maintaining pipelines that deliver diminishing returns.
The Leap: Modern solutions use Direct Data Mapping™ to skip the transformation treadmill entirely, ingesting raw data in its natural state while maintaining full analytical power.
(We show you how to break up with traditional delivery here, if you want to bookmark for later...)
Why It Sucks:
Even after data arrives, getting to it requires:
The Irony: Your "self-service" tools require more support than the systems they replaced.
The Leap: True data access means business users querying live data with plain language questions, not waiting for pre-approved datasets.
Why It Sucks:
Connecting new sources means:
The Irony: Your "single source of truth" requires 17 different copies of customer data.
The Leap: Smart data integration automatically adapts to schema changes and preserves business context across systems.
Why It Sucks:
Running reports feels like:
The Irony: Your "cloud data warehouse" is slower than the Excel spreadsheets it replaced.
The Leap: Modern engines deliver sub-second responses on raw data without pre-aggregation.
Why It Sucks:
Meeting governance requirements means:
The Irony: Your security measures make the data too slow to be useful.
The Leap: Next-gen systems apply security at speed, enforcing policies without performance penalties.
What if you could:
That's exactly what leading enterprises achieve with Incorta:
The organizations winning today aren't those with the most data - they're the ones who've removed all friction between data and decisions. They've replaced:
Your choice is simple: Keep struggling with the same old hurdles, or take one leap to clear them all.
Ready to stop tripping over your data? See how modern enterprises are flying over these hurdles.
Migrating data from Oracle to Google BigQuery can unlock powerful analytics capabilities, but the journey isn’t always smooth. Many organizations struggle with data ingestion, data access, and ensuring live data availability when moving from Oracle’s on-premises or cloud databases to BigQuery’s serverless, scalable environment.
Let's explore the top three challenges Oracle users encounter when transferring data to BigQuery—and how modern solutions like Incorta can streamline data delivery for faster, more accurate insights.
Moving large volumes of data from Oracle to BigQuery requires efficient data ingestion processes. Traditional ETL (Extract, Transform, Load) tools often struggle with:
Many organizations rely on manual scripting or third-party connectors, which slow down data delivery and create bottlenecks.
Without real-time or near-real-time data ingestion, businesses miss out on timely insights, leading to outdated reporting and slower decision-making.
Oracle databases often contain mission-critical data that must remain secure yet accessible. When moving to BigQuery, users face:
Many teams resort to custom pipelines or partial migrations, leading to incomplete or siloed data in BigQuery.
Poor data access and integration lead to fragmented analytics, where teams can’t trust the consistency or completeness of their reports.
Businesses increasingly demand live data in BigQuery for real-time dashboards and AI-driven insights. However, syncing Oracle data continuously is difficult because:
Without a seamless sync mechanism, BigQuery users work with stale data, reducing the value of their analytics investments.
Delayed data updates mean missed opportunities—whether in customer personalization, supply chain optimization, or financial forecasting.
While traditional ETL and manual migrations create friction, Incorta provides a seamless solution for data ingestion, data access, and live data synchronization from Oracle to Google BigQuery.
✔ Direct Data Mapping – Incorta eliminates complex transformations by directly reading Oracle data structures.
✔ Real-Time Data Delivery – With built-in CDC (Change Data Capture), Incorta ensures live data flows into BigQuery without delays.
✔ Multisource Data Integration – Easily combine Oracle data with other sources (ERP, CRM, etc.) for unified analytics.
✔ No Intermediate Staging – Unlike traditional ETL, Incorta loads raw data at speed, reducing latency.
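Incorta's CDC is built in, but to illustrate the general idea of keeping BigQuery in step with Oracle, here is a simplified watermark-based incremental sync. Table names, columns, and connection details are placeholders, and a real pipeline would merge rather than append to avoid duplicate rows:

```python
# Simplified watermark-based incremental sync (illustrative only; Incorta's
# CDC is built in). Table names, columns, and credentials are placeholders,
# and a real pipeline would MERGE rather than append to avoid duplicates.
import oracledb
from google.cloud import bigquery

bq = bigquery.Client()
TARGET = "my_project.oracle_mirror.ap_invoices_all"

def last_watermark() -> str:
    rows = bq.query(f"SELECT MAX(last_update_date) AS wm FROM `{TARGET}`").result()
    wm = next(iter(rows)).wm
    return wm.strftime("%Y-%m-%d %H:%M:%S") if wm else "1970-01-01 00:00:00"

def fetch_changes(since: str) -> list[dict]:
    conn = oracledb.connect(user="apps", password="***", dsn="ebs-host/EBSDB")
    with conn.cursor() as cur:
        cur.execute(
            "SELECT invoice_id, invoice_amount, last_update_date "
            "FROM AP.AP_INVOICES_ALL "
            "WHERE last_update_date > TO_DATE(:since, 'YYYY-MM-DD HH24:MI:SS')",
            since=since,
        )
        cols = [d[0].lower() for d in cur.description]
        return [dict(zip(cols, row)) for row in cur.fetchall()]

changed = fetch_changes(last_watermark())
for row in changed:
    row["last_update_date"] = row["last_update_date"].isoformat()  # JSON-safe
if changed:
    bq.load_table_from_json(changed, TARGET).result()  # append new/changed rows
print(f"synced {len(changed)} changed rows")
```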
Migrating from Oracle to Google BigQuery doesn’t have to mean slow data ingestion, restricted data access, or stale live data. By leveraging Incorta’s next-gen data pipeline automation, businesses can overcome these challenges and unlock faster, more reliable insights in BigQuery.
Ready to streamline your Oracle-to-BigQuery data delivery? See Incorta in action.

AI is changing what's possible in every industry with advanced business analytics - driving efficiency, sharper decisions, and a stronger competitive edge. Incorta helps you get there, applying AI across data workflows, validation, analysis, and ML development so every team spends less time on prep and more time on what matters most.
Incorta Data Studio uses AI to automate the creation and management of data pipelines. AI models learn from existing data workflows, suggest optimizations, and build new, more efficient workflows - handling larger data volumes without the manual overhead of setting up and maintaining integrations.
This means faster, more reliable data integration from across your source systems, and less time spent on pipeline maintenance so your team can focus on higher-value work.
Data cleaning and validation is another area where Data Studio delivers immediate value. Rather than relying on manual audits, it automatically checks data for accuracy, completeness, and consistency by learning from historical data to surface patterns, anomalies, and errors that human reviewers might miss. Specific capabilities include:
The result is data that's accurate, consistent, and trustworthy - which matters when that data is powering decisions, forecasts, and automated workflows.
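Data Studio learns these checks from historical data, but for a flavor of what completeness and consistency validation means in practice, here is a tiny hand-written sketch using pandas; the column names are assumptions:

```python
# A tiny hand-written flavor of completeness and consistency checks.
# Incorta Data Studio automates and learns these; column names are assumptions.
import pandas as pd

orders = pd.DataFrame({
    "order_id":  [1001, 1002, 1003, 1003],
    "amount":    [250.0, None, 99.0, 99.0],
    "ship_date": ["2025-01-05", "2025-01-06", None, None],
})

checks = {
    "missing_amounts":  int(orders["amount"].isna().sum()),          # completeness
    "duplicate_orders": int(orders["order_id"].duplicated().sum()),  # consistency
    "negative_amounts": int((orders["amount"] < 0).sum()),           # validity
}
print(checks)
```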
Incorta Co-Pilot uses AI to analyze large datasets and surface insights that would be difficult to identify manually. It predicts trends, runs complex what-if analyses, and generates reports and visualizations automatically — freeing analysts from repetitive tasks so they can focus on strategic work.
Every answer is grounded in your real data and scored for accuracy, so the insights your team acts on are ones they can actually trust.
For data engineers and machine learning teams, Incorta Business Notebooks supports code generation, optimization, and testing. It suggests code improvements, detects potential bugs, and automates aspects of testing and quality assurance.
This translates to faster development cycles, better software quality, and a reduced workload on data science and ML teams - giving them more time for the complex, creative work of model creation, testing, and deployment.
Learn more at incorta.com.
The manufacturing industry has always been driven by efficiency, innovation, and process optimization. But as the sector faces new challenges and opportunities, the conversation is shifting. While technology remains a key enabler of transformation, it’s clear that people, culture, and processes must evolve alongside it. In a recent panel discussion, industry experts Ebrahim Alareqi (Principal Machine Learning Engineer, Incorta), Fabien Duboeuf (Industry Lead, Manufacturing, Google Cloud), and Jan Griffiths (Founder, Gravitas Detroit) shared their perspectives on the forces shaping the future of manufacturing—particularly in the context of AI, data, and cultural change.
Adopting emerging technologies like AI or machine learning is futile if the underlying processes and organizational culture remain outdated. As Jan emphasized, “There’s no point in applying technology to an outdated process.” The real value of innovation lies in rethinking how we operate before embracing new tools. For manufacturers, this means fundamentally reworking the way they design, manufacture, and deliver products. Without a cultural shift that prioritizes adaptability and forward-thinking, even the most advanced technologies will fall short.
Fabien echoed this sentiment, highlighting that the real challenge often lies in aligning people and processes with technological advancements. “You can’t just implement technology without aligning it with your business priorities,” he explained. Success starts with a clear understanding of why change is necessary and how it aligns with broader business goals.
One of the most significant obstacles for manufacturers, especially in industries like automotive, is the reliance on legacy systems. While these systems are critical to operations, they are often rigid and disconnected, making it difficult to harness the full potential of new technologies. Integrating these systems to enable seamless data sharing remains a pervasive challenge.
Ebrahim shared his experience from his time at Volvo Cars, where integrating legacy systems was a significant hurdle. “For a long time, we focused on the algorithms, but the real challenge was getting all the systems integrated so that the data could flow seamlessly,” he explained. Without a unified data infrastructure, manufacturers struggle to leverage new technologies effectively, hindering their ability to innovate and compete.
GenAI has emerged as a game-changer in the manufacturing world. Fabien Duboeuf discussed how AI can help solve the problem of disparate data by enabling manufacturers to quickly access insights—even from legacy systems. The goal, he explained, is to create an ecosystem where employees can retrieve the information they need without being bogged down by the complexities of underlying systems.
AI should not be seen in isolation. Instead, it must be integrated into the broader manufacturing ecosystem to bridge gaps in the supply chain, streamline operations, and optimize customer experiences. “The future of manufacturing lies in combining cutting-edge technology with a culture of innovation that allows companies to be more agile and responsive,” Jan added.
AI and data analytics push manufacturers to explore “what-if” scenarios, which can predict and respond to disruptions before they occur. Fabien posed the question, “What if there’s a strike in Mexico or a regulatory change in Europe?” Simulating different scenarios in real time is a game-changer, allowing manufacturers to make faster, more informed decisions and stay ahead of the curve in an increasingly volatile environment.
While technology is powerful, its success depends on the trust of the people who use it. Ebrahim emphasized that one of the biggest challenges with new technologies, especially AI, is overcoming skepticism among employees. “We have to make sure that employees can rely on the data they’re using,” he explained. Ensuring that data is clean, accessible, and trustworthy is critical to fostering this trust.
Whether it’s making decisions on the shop floor or negotiating with suppliers, having accurate, real-time data is essential. For many manufacturers, this means breaking down data silos and integrating information across systems to create a single source of truth. Only then can organizations fully leverage the potential of their data and technology investments.
A common theme in the discussion was the importance of starting small when implementing new technologies. Focusing on small, high-impact projects and gradually scaling them up ensures that new technology is tested and trusted before being rolled out across the organization.
Fabien agreed, stressing the importance of defining clear business priorities before deploying technology. “You have to start with a vision and then scale quickly,” he advised. By aligning technology deployments with business goals, manufacturers can ensure that their investments deliver meaningful results.
The future of manufacturing will require organizations to embrace complexity and change. As Fabien warned, “If you don’t embrace complexity, your competition will.” Digital-native companies, unburdened by legacy systems, are setting new standards for agility and innovation. For traditional manufacturers, this presents both a challenge and an opportunity. By understanding the complexities of modern manufacturing and using technology to their advantage, manufacturers can unlock new levels of efficiency and competitiveness. The key lies in combining cutting-edge tools with a culture that embraces change and innovation.
The future of manufacturing will not be defined by technology alone. It will be shaped by the ability of organizations to adapt their culture, processes, and people to the demands of the modern world. By embracing AI, leveraging real-time data, and starting with small but impactful projects, manufacturers can position themselves for success in the years ahead.
Want to learn more? Check out the full discussion on demand.

The advent of Generative AI (GenAI) platforms has significantly lowered the barrier to entry for business users looking to adopt AI technologies within their operations. These platforms have made it remarkably easier for non-technical users to leverage the power of AI. By providing access to pre-built models, customizable templates, intuitive interfaces, and drag-and-drop functionalities, GenAI platforms enable business professionals to generate insights, automate tasks, and create content without the need for deep programming knowledge or AI expertise.
This democratization of AI technology means that businesses across industries can now explore innovative solutions and optimize their processes more efficiently than ever before. As a result, GenAI is not just a tool for data scientists and AI researchers but has become an accessible and powerful asset for business users. The broader adoption of AI across the corporate landscape is driving transformative changes in how businesses operate and compete. This emerging technology heralds a new age of human efficiency and effectiveness, with its full potential and societal impact still being unraveled.

Figure 1: Cloud AI platforms and foundational models.
Public fascination with GenAI has grown through its applications in creative tasks, such as drafting humorous content or visualizing imaginative scenarios like astronauts riding horses. However, for business leaders working with enterprise data, the emphasis is on employing GenAI in economically beneficial ways.
To effectively implement GenAI, businesses are turning to advanced AI platforms that offer comprehensive control over GenAI foundational models and their derivatives. These platforms, shown in Figure 1, and the foundational models they host provide the necessary tools to extract insights from enterprise data, encompassing various levels of engagement:
These comprehensive GenAI levels are becoming an integral part of the corporate strategy. To fully leverage these technologies, leaders must incorporate GenAI into existing workflows, transforming them into an embedded part of customer and employee experiences.
Reflecting on the evolution from initial Large Language Model (LLM) applications to the current shift towards versatile, platform-centric models, it's evident that GenAI is becoming more sophisticated and integral to business strategies. As businesses continue to explore the potential of GenAI, it's clear that its integration into existing organizational structures is not just a technological upgrade but a strategic transformation. This transformation requires a dedicated approach, a willingness to experiment, and visionary leadership to ensure that GenAI's potential is fully realized in solving real-world problems and driving innovation.
As organizations navigate this journey, the ones that can quickly adapt and effectively integrate GenAI into their strategies will be the ones to gain a significant competitive advantage in the rapidly evolving business landscape.
Learn how Incorta continues to push the boundaries of what's possible in the realm of analytics and data intelligence in our recent webinar. Watch here.
“You’re only as good as your last data set.” This phrase kicked off Incorta’s NoLimits event in a powerful keynote from Sol Rashidi and alchemized into a rallying cry throughout the day. Combining GenAI with live operational data unlocks new dimensions of data analysis, automation, and insights, leading to unparalleled operational efficiency - but how can you ensure that your data is ready for the power of GenAI?
The potential of GenAI is only as strong as the data foundations upon which it relies. We gathered some of the most brilliant industry leaders together for this event to learn how the pros are fortifying their data foundation to achieve real outcomes, and setting up their organizations for future success with analytics and GenAI.
If you missed these inspiring sessions in person, we’ve recapped our key takeaways and highlights, including leading analysts, GenAI pioneers, inspiring customer success stories, and our highly-anticipated launch of Incorta’s first Operational GenAI offering. Let’s dive in.


We were joined by our very own Incorta customers - showing us how they revolutionized day-to-day data processes, gained unparalleled insights, and realized success from building a strong data foundation. Some of our highlights include:
This event brought the groundbreaking launch of Incorta's new Operational GenAI. Access to live, trusted, and detailed data is vital to achieving the potential of GenAI, and this new offering from Incorta delivers best-in-class access to live, detailed operational data at scale - a solution customized for each customer at a fraction of the cost of alternatives. This new offering:
This event showcased the power of solutions developed by Incorta’s new alliances with Vectara and aiXplain - built with leading, state-of-the-art technologies, including Retrieval Augmented Generation (RAG) from Vectara and model serving and fine-tuning from aiXplain.

“Through our partnerships with Vectara and aiXplain, we now provide the first fully managed GenAI infrastructure that combines enterprise operational data with unstructured documents and data,” announced Osama Elkady, CEO and Co-founder of Incorta. “This enables customers, with the click of a button, to access an unlimited and private GenAI solution on top of the best data foundation for GenAI.”
See the full press release here for more, and check back soon for access to sessions on-demand!
Tourism Development Fund (TDF) was established in June 2020 by royal decree with a pivotal mission: to enable the private sector and investors—both local and international—to drive the growth of Saudi Arabia's tourism sector. The Fund supports projects of all sizes, from mega projects worth billions of Saudi Riyals to micro businesses starting at just ten thousand Saudi Riyals. TDF's vision is clear: unlock the capital of the private sector and position it as the leader in the Kingdom's growing tourism industry.
From its inception, TDF recognized the crucial role of technology in achieving its goals. The Fund's technology strategy has always been centered around establishing a robust foundation for data. Understanding the evolving demands of a new market, identifying market shifts, and serving diverse market segments required a sophisticated approach to data management.
A key component of TDF's strategy was implementing a user-friendly tool that could provide dynamic, easy-to-use access to data for both technical and non-technical users. The aim was to lay the groundwork for big data analytics, ensuring that all decisions were based on actionable data insights. After extensive research and benchmarking various tools, TDF chose Incorta as its data solution, a decision that has proven to be highly effective.
Implementing Incorta into an existing comprehensive technology landscape within TDF presented several challenges, particularly in terms of reporting. The Fund's tourism investment portal, a microservice-based architecture, required integrating data from multiple databases with varying formats, structures, and connectivity. Previously, the team had to run complex SQL queries, collect data from different databases, and manually aggregate the information. This process was time-consuming, taking approximately two hours per person each day to prepare the reports.
The manual data aggregation not only consumed valuable time but also impeded the team's ability to focus on building new technologies. The need to handle a wide range of data sources, including structured data from databases like MySQL and unstructured data from Excel sheets and MongoDB, added to the complexity.
The introduction of Incorta transformed TDF's data management processes. By integrating Incorta, TDF was able to save an impressive 400 hours that were previously spent on manual report generation. The platform enabled instant connections to 20 data sources and supported 43 schemas, providing real-time data operations that significantly enhanced the Fund's efficiency.
Incorta's ability to handle diverse data sources and facilitate real-time analytics allowed TDF to meet its data needs with ease. The platform's user-friendly interface ensured that even those without prior training could quickly become proficient in using it. TDF's data infrastructure now supports seamless data aggregation and reporting, freeing up the team to focus on strategic initiatives.

Beyond internal efficiency, Incorta's integration also enabled TDF to expose APIs, making Incorta a crucial backend component for their data operations. This capability has extended TDF's reach, allowing the Fund to offer valuable data services and insights through API integrations.
The successful implementation of Incorta has revolutionized TDF's approach to data management, enabling the Fund to achieve remarkable operational efficiency and support the growth of Saudi Arabia's tourism sector. By leveraging advanced data solutions, TDF has positioned itself as a leader in the market, capable of responding to dynamic market needs and driving data-driven decision making.
TDF's journey underscores the importance of adopting innovative data technologies to unlock new opportunities and overcome challenges in a competitive landscape. As TDF continues to expand its capabilities, the Fund remains committed to fostering a data-centric culture that supports sustainable growth and development in the tourism industry.
About the Author:
Nasser Alhimeidi is the Associate Director of Applications Development at the Tourism Development Fund (TDF). With a Computer Science degree from Concordia University in Montreal, Canada, he brings a wealth of knowledge and expertise in applications development and management. In his current role at TDF, Nasser leads a team responsible for innovating and optimizing TDF’s data strategy to support the Fund's mission to drive growth in Saudi Arabia's tourism sector.
FP&A is about far more than simply collecting numbers for a budget. FP&A is a strategy team, foundational to helping drive a unified vision across an organization. Fundamentals in process and access to the foundational business data are the key differentiators in success.
The value of your FP&A investments lies in how they extend your ability to make data accessible to drive answers to daily business questions (analytics), and in how they reduce the demand on personnel resources to get that information. There is no single feature or function that drives success. FP&A is successful when it helps the broader enterprise:
The value of any platform or tool is found in how it simplifies these processes. Until you do this well, nothing truly works - not reporting, not ML. The demand for strategic, data-driven insights is outpacing traditional, tactical FP&A practices.
This transformation is exemplified by the partnership between Incorta and Workday Adaptive Planning, which is redefining the FP&A landscape.
FP&A leaders are at the forefront of this evolution, tasked with enhancing the accuracy of financial forecasts and budgets. To achieve this, they need comprehensive and detailed insights into the financial and operational data that drives their organizations. These insights empower FP&A teams to analyze data effectively, inform stakeholders, and take precise actions. However, the complexity lies in accessing complete, business-ready data from diverse sources in a timely and cost-effective manner.
Challenges facing FP&A leaders today include:
It's important to note that the data accessibility challenge is pervasive. A 2022 global Workday survey revealed that only 12% of organizations have fully accessible data for those who need it. This scarcity of accessible data further underscores the urgency for a transformative solution.
Workday Adaptive Planning with Incorta provides a groundbreaking yet simple solution to these challenges. Incorta offers a purpose-built operational data lake for the finance and accounting team. Workday Adaptive Planning and Incorta seamlessly connect to form a complete solutions platform. With the combined solution, FP&A teams gain the ability to take control of their financial planning and analysis.
With Incorta and Workday Adaptive Planning, FP&A teams accomplish:
In the pursuit of heightened forecast accuracy, FP&A leaders are now equipped with a comprehensive and detailed view of financial and operational data. Incorta and Workday Adaptive Planning eliminate the challenges of accessing complete, decision-ready data from disparate sources.

If you were at Google Next in Las Vegas last week, you likely found yourself inundated with conversations about Generative AI, with the term seemingly emblazoned on every booth and echoed in countless educational sessions. The buzz surrounding Generative AI is palpable, promising transformative potential across industries. However, amid the excitement, a crucial question emerges: Is your organization's data infrastructure primed for the GenAI revolution?
For many, the answer is uncertain and daunting. Fortunately, amidst the whirlwind of possibilities, Incorta is focused on making GenAI a success in your business, offering clarity and direction on how to ready your organization's data for the impending wave of GenAI innovations.
At the heart of any GenAI endeavor lies the necessity for robust data access. Without access to comprehensive and diverse datasets, the efficacy of Generative AI models remains constrained. Incorta, with its operational lakehouse, provides live, detailed access to complex, multi-source ERP data, including Oracle EBS and Fusion data, directly integrated into Google BigQuery.
Founded by Oracle experts, Incorta's operational lakehouse represents a paradigm shift in data accessibility, bridging the gap between disparate data sources and empowering organizations to harness the full potential of Generative AI. Only Incorta lets you train your models on all your detailed operational data, laying a solid foundation for your GenAI journey and actually making your models useful.
As the GenAI revolution gathers momentum, the importance of robust data infrastructure cannot be overstated. With Incorta leading the charge, organizations can navigate the complexities of the GenAI landscape with ease, embracing the future with optimism and readiness.
Want to learn more about Incorta and Google’s solution for GenAI? Join us April 24th for a webinar where we’ll discuss the challenges organizations face in integrating GenAI across their enterprises and solutions for establishing a strong data foundation.
The modern retail customer journey is more complex than ever - and the era of digital transformation has inundated retail and CPG suppliers with data from countless sources. Harnessing this customer data is key to understanding nuanced customer behavior and ensuring supply chain resilience - but access to live, fresh data for effective, timely decision-making remains a constant challenge.
Understanding the challenges CPG and Retail leaders face day in and day out, Incorta hosted a discussion with industry experts from Skechers, Shutterfly, and James Avery. In the discussion we learned how these pros revolutionized their retail data landscape using Incorta - accessing 100% of their data from multiple sources, unleashing instant access for lightning-fast answers from any device, and driving mission-critical decision making with confidence and ease. Watch the full discussion here, or read our key takeaways.
Retailers & CPG suppliers run into similar challenges when it comes to their data:
Unfortunately, data projects meant to solve these challenges are often stymied by lengthy implementation times and time-consuming manual tasks. Data quality and trust are often roadblocks as well - with organizations needing a single source of truth to ensure their daily business decisions are based on accurate, fresh data.
"Incorta has revolutionized how we manage our data, particularly with complex metrics like 'days on hand' for inventory. This has enabled us to place purchase orders at optimal times and manage our stock levels effectively, which is crucial for our highly seasonal business. The ability for non-technical professionals to handle end-to-end data processes has been a game-changer for us."
Shutterfly’s Director of Supply Chain Management, Rachel McCutcheon, needed a way to effectively manage their inventory in a highly seasonal business environment. Incorta’s flexibility in handling ERP system data enabled the creation of complex metrics like “days on hand” for inventory management - which was previously difficult to achieve.
This metric helps them manage inventory levels more effectively by considering their highly seasonal business patterns. Using Incorta, they were able to:
"With Incorta, we were able to streamline our traditional BI processes significantly. What used to take months now takes weeks, allowing us to react much faster to market trends and internal business needs. This speed, coupled with improved data quality and trust, has empowered our teams to make better, data-driven decisions independently."
Manish Agarwal, VP of Data and Analytics at Skechers, found that traditional BI processes were painfully slow - often taking months to deliver actionable insights. With Incorta, Skechers reduced this timeframe to mere weeks: enhancing their ability to make timely decisions, and empowering their BI and analytics team with more control over data processing and visualization. They’ve now seen:
"Incorta has been instrumental in providing a complete data management and analytical solution for us. It has significantly reduced our reliance on the IT team for generating reports, allowing our business users to self-serve their data needs. This capability has led to quicker decision-making processes and better utilization of our data."
James Avery’s Data Services Integration Manager, Sandeep Vittalam, needed a way to provide critical data to executives at a self-serve level: getting the right data to the right people at the right time was a must. Letting business users self-serve their reporting needs reduced the backlog of IT requests and allowed users to independently generate insights relevant to their department needs. Incorta’s self-serve capabilities led to:
By easily enabling integration of complex, diverse data from multiple ERPs and source systems for a comprehensive view, reducing data delivery times, and enabling self-service, Incorta empowers businesses to become more agile and data-driven. Whether managing inventory, streamlining BI processes, or enhancing executive decision-making, Incorta helps retail organizations stay ahead of the curve in a competitive landscape. For more information, schedule a 1:1 demo today.
No one wants to miss out on the business benefits AI brings to the table: predictive analytics and forecasts, intelligent scenario planning, natural language chat interfaces, and so much more. PwC found that 73% of US companies have already adopted AI in at least some areas of their business.
But AI solutions are only as good as the data that fuels them. What’s the use of a forecast based on old data, when so much changes in an instant? Yet, traditional data management solutions require hours of manual labor (and often multiple tools) to extract and unify data across sources, including operational sources like ERPs.
Let’s take a look at how to solve three challenges posed by traditional data management solutions when it comes to fueling AI innovation.
You’d be hard-pressed to find an organization out there with a single source of enterprise data. Most organizations have different operational source systems for each business unit or region they operate in, including those inherited through acquisitions. Beyond your data sources, you may also have multiple data management solutions – from data warehouses to data lakes – that each have their own formats and challenges.
It's not the number or variety of data sources that’s the problem – that’s just reality. The issue is bringing all those sources together. If your data isn’t unified, your AI solutions won’t be able to provide a holistic view across your organization, instead delivering answers that are only informed by pieces of the larger whole. And if you aggregate this data to too high of a level, you may miss critical details from the source systems that you need for later analysis.
You need a way to easily bring together data from each of your sources. Better yet, you need a solution that provides both aggregate and transaction-level data, so you and your AI tools can drill down as far as you need to for the most accurate analysis.
When it comes to aggregation, it takes hundreds of hours of manual labor to move data from one system to another. Your team ends up bogged down extracting data from the various source systems, then shaping, aggregating, and transforming that data so that it can be used by your other business applications. And the more your data is manipulated, the more room there is for human error to creep in. An AI solution using inaccurate data can generate false assumptions that can become costly for your business.
Not to mention that all that effort required means that your AI solutions won’t be getting live data. Instead, data could lag by weeks, sometimes even months, meaning your predictions and forecasts can’t respond in real time as the situations change and new variables enter the picture. Think of a manufacturer who has to quickly respond to an event like a late delivery from a vendor. They need fresh data to decide how best to move forward.
A solution that not only automates data ingestion and integration – freeing up your staff’s time for more value-added work – but also is able to do so in real time ensures your models are running off the latest, most accurate information, without the added burden on your team.
There’s obviously a cost component to all of those hours of manual labor. But the true cost of integrating all of these data sources using traditional means is even higher. From various data prep tools to pricey consultants, the dollar amount for unifying your data to use in AI solutions can skyrocket. Every organization has a threshold when resources (dollars and people) can no longer keep up with analytics needs. This often leads to stagnant and piecemeal data supporting your strategic models.
Turning to a complete solution for data ingestion, transformation, and analysis can save you money in the long run, as well as reduce operational overhead. Your internal decision-makers can become empowered to find better answers faster without the need to bring in external consultants or spend unbudgeted time and money, transforming the decision-making process.
The scope of these challenges does not have to be overwhelming. AI solutions require a modern approach to working with operational data. Incorta is a complete operational ingestion and integration platform that helps you power AI solutions with live, accurate data while reducing cost and manual effort. Using its proprietary Direct Data Mapping® functionality, Incorta creates a digital twin of your operational data from multiple ERPs and other sources, so your data reflects what is happening in your business in real time.
By integrating Incorta with Google Cloud solutions like Google BigQuery and the Google Cloud Cortex Framework, you can use this live data to make more informed decisions, fuel AI models for more accurate predictions and answers, and save money by reducing manual effort.
Want to learn more about how Incorta and Google Cloud solutions help unify data, while reducing manual effort and saving costs? Read our eBook, “Empower AI with live data,” and check out our integration on the Google Cloud Marketplace.
As announced today, Incorta is a Niche Player for the third consecutive year in the 2024 Gartner® Magic Quadrant™ for Analytics and Business Intelligence. Incorta's operational lakehouse continues to innovate on its critical capabilities, providing customers with a cutting-edge platform.
“Incorta is an operational lakehouse, ideal for self-service analytics on live, operational data across multiple enterprise applications,” said Ashwin Warrier, Incorta’s Head of Product. “We believe our unique approach to integrating complex data from multiple sources and flexible delivery options gives customers across industries the ability to drive smarter insights, enhance their analytics, and foster innovation for sustainable business growth."
We believe the write-up included in this year's Magic Quadrant™ reflects that Incorta customers continue to applaud the platform's reporting capabilities - not only its usability for business users but also the ability to drill down into transaction-level detail - and the openness of the platform, which allows customers to use Incorta, Microsoft Excel, and leading ABI platforms as their front-end reporting system of choice.
Incorta’s pre-built solutions accelerate the development of analytics, reporting, and AI. With these applications, customers can get up and running quickly and see faster time to value.
Since the release of the 2023 Gartner® Analytics and Business Intelligence Magic Quadrant™, Incorta has pursued and implemented game-changing capabilities that redefine the speed and productivity of business users, analysts, data engineers, and IT alike.
While Incorta continues to innovate on its platform, allowing customers to reimagine workflows, take advantage of AI, and increase productivity, it is also expanding its partnerships to bring customers joint solutions built on best-of-breed technologies that augment, accelerate, and enhance their analytics and reporting. This past year, Incorta, an open data delivery platform that simplifies and speeds users' access to live, transaction-level financial data, announced a joint solution with Google Cloud that unravels the complexity of Oracle ERP data for BigQuery customers. As a Workday, Inc. partner, Incorta also announced an integration with Workday Adaptive Planning in May 2024.
Incorta continues to redefine data analytics by enabling companies of all sizes to capitalize on the true potential of their complex, operational data, transforming their decision-making processes and driving substantial business growth.
“Incorta stands out as an easy-to-implement, easy-to-use analytics engine. The breadth of connections to complex ERPs such as SAP, legacy systems, and front-end applications enabled PHC Group to modernize their BI and reporting while maintaining the prior years of work and data,” said Zach Juneau, Business Intelligence and Data Analytics Lead, PHC Group. He stated simply, “Incorta allows us to do more with less. Less technical expertise to implement, build, and maintain, less cost to maintain, and less time to implement, train, and adopt. Incorta offered us the simplicity and agility to modernize our infrastructure and democratize critical data to all users.”
To access the 2024 Gartner® Magic Quadrant™ for Analytics and Business Intelligence, click here.
Want to learn more? Register for a weekly Solution Spotlight demo to learn about Incorta.
Gartner®, “Magic Quadrant™ for Analytics and Business Intelligence Platforms,” 20 June 2024.
Gartner® is a registered trademark and service mark and Magic Quadrant™ is a registered trademark of Gartner®, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.
Gartner® does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner® research publications consist of the opinions of Gartner®’s research organization and should not be construed as statements of fact. Gartner® disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Through 2025, at least 30% of Generative AI projects will be abandoned after proof of concept due to poor data quality, inadequate risk controls, escalating costs, or unclear business value. While generative AI holds immense transformative potential, scaling it effectively comes with significant challenges. CTOs and roles responsible for the deployment of GenAI projects must prioritize business value, focus on AI literacy, nurture cross-functional collaboration, and stress continuous learning in order to successfully deploy and scale these projects.
Gartner analyst Arun Chandrasekaran shared his latest research at an Incorta-hosted event: addressing the current state of GenAI, its key use cases/technology landscape, and emerging best practices to safely deploy it in the enterprise. Read our key highlights below, or access the full report.
Arun’s research showed that around six out of ten customers have deployed generative AI in either pilot or production environments: a significant increase from March 2023 when only about two out of ten had done so. Customer service, software development, and marketing roles all show a significant uptick in interest in GenAI tools to boost productivity. However, many enterprise clients abandon generative AI projects after the pilot stage due to four main hurdles:
Arun’s research covers a detailed 10-step strategy along with emerging best practices to overcome these hurdles.
By implementing these best practices, CTOs can navigate the complexities of scaling GenAI, ensuring it drives business value while managing risks effectively. Access the full report below to learn more about the 10 Best Practices for Scaling Generative AI Across the Enterprise:

The Transformative Journey of Walaa Insurance:
Walaa Insurance, a prominent player in the Middle East insurance industry, faced significant challenges in managing and utilizing its vast data assets. The company’s ambition to align with Saudi Arabia’s Vision 2030 and to meet increasing market demands required a robust data management strategy. Ms. Suad Alayed, a data analyst from Walaa Insurance, emphasized that instead of viewing these challenges as obstacles, they were seen as requirements for growth and transformation.
Central to this transformation project was a data management strategy and a single platform that could support many use cases for the company today and in the future. Walaa reviewed many solutions and concluded that the adoption of Incorta was critical to the success of this transformation project. Incorta, a leading operational lakehouse, played a pivotal role in addressing what Ms. Alayed referred to as the “Triple S” requirements: Speed, Simplicity, and Scalability.
In today's fast-paced business environment, real-time data access is crucial for timely decision-making. Ms. Alayed highlighted that Incorta enabled Walaa Insurance to accelerate data access and analysis. The platform's high-speed analytics ensured that teams had real-time access to the data they needed to draw critical insights quickly, facilitating more informed decisions.
Managing large and complex datasets can often lead to cumbersome processes and delays. Incorta's user-friendly platform simplified these complexities, making it easier for business users to create their own dashboards and reports without heavy reliance on the IT department. This empowerment of business users streamlined operations and improved efficiency across various departments.
Handling large volumes of diverse data sources posed a significant challenge for Walaa Insurance. Incorta’s scalable architecture allowed the company to manage and seamlessly integrate multiple data sources. Whether dealing with legacy systems or external data, Incorta provided a unified and simplified data environment that could handle the growing data needs of the organization.

Ms. Alayed explained that Incorta became the backbone of Walaa Insurance’s data infrastructure, providing real-time data capabilities and advanced visualizations. The platform's integration into the company’s operations spanned departments, from medical and finance to claims and motor services. Each department benefited from live data availability and enhanced reporting capabilities, leading to more efficient and effective operations.
Before adopting Incorta, Walaa Insurance faced the challenge of managing multiple data sources and ensuring real-time data availability. With Incorta, the company successfully overcame these challenges, handling over 200 gigabytes of data across five different data sources and producing more than 50 reports. The platform’s ability to give every business user on-demand access to live data was a game-changer, significantly enhancing the company's data management capabilities.
Ms. Alayed reiterated the significant impact of Incorta on Walaa Insurance’s transformation project. The platform not only streamlined the company's data analytics processes but also played a crucial role in driving operational excellence and achieving strategic objectives. The success story of Walaa Insurance serves as a testament to the power of streamlined data management and the potential of advanced data analytics tools in fostering business growth and innovation.
As Walaa Insurance continues to leverage Incorta for its data needs, the company is well-positioned to navigate the challenges of the competitive insurance industry and capitalize on new opportunities. This journey underscores the importance of embracing modern data analytics solutions to achieve remarkable business outcomes.
Watch Ms. Alayed's on-demand session to discover how Incorta empowered Walaa Insurance to streamline data analytics, enabling faster, data-driven decisions and achieving unparalleled operational efficiency.
About the Author:
Suad Alayed is a data analyst at Walaa Insurance with a Master’s degree in Computer Science and extensive experience in data analysis and management. Her expertise in leveraging data analytics tools has been instrumental in transforming Walaa Insurance's data strategy and operational efficiency.
According to a recent MIT Tech Review report, all 600 CIOs surveyed stated they are increasing their investment in AI—71% are planning to build their own custom Large Language Models (LLMs) or other GenAI models. Despite this, many organizations struggle to successfully implement their GenAI initiatives because they lack access to live, up-to-date data for their models.
Successfully deploying GenAI models to any area of your organization requires a strong data foundation - with access to fresh, detailed data while preserving security and compliance. For Incorta's first Operational GenAI offering, Nexus, we've joined forces with our strategic partner Vectara to integrate their state-of-the-art Retrieval-Augmented Generation (RAG) capabilities - helping organizations mobilize their GenAI initiatives with greater accuracy than ever.
Access to live, detailed, operational data is critical when deploying GenAI initiatives. However, unstructured data poses a challenge for companies aiming to leverage GenAI - the sheer volume and diversity of this data can make it difficult to glean any actionable insights.
When applying GenAI, language models must prioritize specific organizational data for reasoning tasks. This is where Retrieval-Augmented Generation (RAG), powering Incorta Nexus, significantly boosts the accuracy, relevance, and reliability of AI-generated responses.
Vectara's trusted Gen AI platform allows organizations to rapidly create AI assistants and agents grounded in their data, documents, and knowledge. Vectara's serverless RAG also provides the critical trust and control capabilities required for enterprise adoption.
By leveraging high-quality, consistent, real-time structured data from Incorta for model training and performance, alongside Vectara's RAG capabilities for access to relevant unstructured data, organizations can enrich their AI's contextual understanding and response generation for better accuracy.
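To illustrate the RAG pattern in general terms, the sketch below retrieves the most relevant snippets for a question and assembles them into a grounded prompt for a language model. The corpus, embedding model, and prompt format are hypothetical placeholders for illustration; this is not the Vectara or Incorta Nexus API.

```python
# A minimal, generic RAG sketch: retrieve the most relevant documents for a question,
# then pass them to a language model as grounding context.
from sentence_transformers import SentenceTransformer, util

# Hypothetical unstructured snippets a finance team might index.
corpus = [
    "Operating income margin fell 3 points in Q2, driven by higher contractor spend.",
    "Competitor X reported flat margins in its latest quarterly filing.",
    "Headcount in services grew 12% quarter over quarter.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = embedder.encode(corpus, convert_to_tensor=True)

question = "Why did our operating margin decline?"
hits = util.semantic_search(
    embedder.encode(question, convert_to_tensor=True), corpus_emb, top_k=2
)[0]

# Build a grounded prompt from the top-ranked snippets.
context = "\n".join(corpus[h["corpus_id"]] for h in hits)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# The prompt would then be sent to a language model, which generates a grounded answer.
print(prompt)
```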

Using a RAG model, Incorta Nexus is elevated with:
Vectara's RAG approach combines the precision of retrieval with the flexibility of generation, resulting in a powerful tool for various applications that require accurate and contextually relevant information retrieval and response generation. This is particularly useful for cases like:
As a CFO, you are responsible for your company's profitability margin. If there's any noticeable decline in the operating income margin, it's up to you to determine if this decline is attributed to internal inefficiencies or external market factors.
Traditionally, this analysis involves extensive, time-consuming, and error-prone manual work, including data extraction from various operational systems, manual calculations, and scouring available competitors' financial reports.
Using Incorta Nexus, CFOs can leverage the platform's ability to integrate and analyze structured and unstructured data in real-time. Here's how a CFO would utilize Incorta Nexus to tackle this challenge:
With comprehensive insights provided instantly by Incorta Nexus, CFOs can confidently present the findings to the board, highlighting the cause of margin decrease and providing strategic insights - such as reviewing compensation policies and controlling hiring practices - to improve profitability margins.
With Incorta Nexus, the possibilities are truly limitless. Through our strategic partnership with Vectara, our customers not only unlock new levels of efficiency and agility but also fortify the foundation for future growth and innovation in operational GenAI initiatives.
See Vectara's entire session at Incorta's NoLimits event on-demand now.
Let’s start by imagining a scenario: you’re late for a critical meeting and need to choose between a 1960s car and a brand-new 2024 model to get you there. Naturally, you’d pick the modern car, right? This analogy perfectly encapsulates the transformation experienced at GIG. GIG, formerly AXA, is among the largest and most diversified insurance groups in the Middle East and North Africa. One of the top three players in the GCC and present in 13 markets, GIG is one of the top 10 most valuable insurance companies according to Forbes Middle East and was named the 2021 General Insurance Company of the Year by MIIA.
GIG’s previous data architecture was like the old car - slow, exhausting, and full of constraints. Generating reports required an extensive process that often involved writing thousands of lines of code and relying heavily on external vendors, who weren’t always available when answers were needed.
In contrast, GIG’s new architecture with Incorta is like the 2024 car—fast, reliable, and efficient. With the integration of Incorta into its data management and reporting processes, GIG has showcased innovation and efficiency. This analytics transformation addressed complex data challenges and resulted in significant time savings, cost reductions, and exceptional value generation.
Incorta has allowed GIG to revolutionize their data management, providing quick access to accurate, real-time data without the lengthy delays they previously faced. The data management system is now capable of generating complex reports in just minutes, a task that once took days.

By integrating Incorta, GIG has significantly improved their data architecture. They can now generate reports in minutes, handle complex data from multiple sources, and create customized dashboards that cater to the unique needs of each department. Some key benefits GIG has experienced include:
One of the most significant results garnered from the implementation of Incorta is the transformation of GIG’s regulatory reporting process. Previously, preparing a quarterly report for the insurance authority took an average of 6.5 days. With Incorta, this process now takes just minutes, saving GIG almost a month’s worth of work each year. This dramatic improvement has freed up valuable time and resources, allowing GIG to focus on more strategic initiatives.
Integrating Incorta has been a game-changer for GIG. They moved from a cumbersome, outdated data infrastructure to a modern, efficient system that meets their growing business needs. This transformation has not only improved their operational efficiency but also positioned GIG as a leader in the financial services industry. Watch an on-demand webinar to learn about GIG’s transformation journey and how it used Incorta to beat the data delay and deliver faster data value for financial services.
Saudi Arabia’s journey towards becoming a digitally advanced nation involves significant efforts in both the public and private sectors. The National Data Management Office (NDMO) has been instrumental in creating a standardized approach to data management across the kingdom. Drawing inspiration from international standards, the NDMO has developed a comprehensive framework that treats data as a national asset, ensuring it is discoverable, actionable, and well-maintained.
Key principles such as data protection by design, open data by default, and the promotion of ethical data use are foundational to this framework. These principles aim to foster a culture of transparency and trust, where data is a shared responsibility and is utilized for the common good. This approach not only enhances the effectiveness of decision-making processes but also ensures that the vast amounts of data collected are used to drive meaningful outcomes.
Incorta’s platform has played a pivotal role in transforming data management practices. The integration of Incorta across enterprises large and small, as well as government entities across industries, has brought about significant improvements, including:
The NDMO's comprehensive framework for data management, consisting of 15 domains and over 190 specifications, is designed to ensure that data governance is effective and that data assets are maximized for value. This framework covers various aspects of data management, including data governance, data accessibility, data protection, and data classification.
By implementing these standards, organizations in Saudi Arabia can create a robust data management strategy that aligns with national goals and supports the broader vision of digital transformation. The integration of Incorta further advances this strategy. Incorta enables organizations to harness the full potential of their data and achieve compliance with regulatory requirements.

Earlier this year, at the No Limits event in Riyadh, hosted by Incorta, Ahmed Tarek, Director of Sales Engineering, and Ahmed Moawad, Customer Success Manager, delivered an insightful presentation on the National Data Management Office (NDMO). The session highlighted the transformative impact of advanced data solutions on reshaping the data landscape, aligning with Saudi Arabia's vision for a digitally empowered future.
One of the key messages from the No Limits event was the importance of innovation and trust in data management. With the rapid advancements in technology, it is crucial to continually adapt and improve data management practices to stay ahead of the curve. The event underscored the need for a learning culture that fosters continuous development and the adoption of world-class standards in data management.
Trust, both between government entities and the public, is paramount in ensuring the success of data initiatives. High-quality data that is well-managed and protected instills confidence in the data ecosystem, enabling better collaboration and more informed decision-making.
The No Limits event highlighted the transformative impact of Incorta on data management practices in Saudi Arabia. By integrating advanced data solutions and adhering to the principles set forth by the NDMO, organizations can drive significant value from their data assets and contribute to the country’s vision of a digitally empowered future.
According to a recent MIT Tech Review report, all 600 CIOs surveyed stated they are increasing their investment in AI—71% are planning to build their own custom Large Language Models (LLMs) or other GenAI models. Despite this, many organizations struggle to successfully implement their GenAI initiatives because they lack access to live, up-to-date data for their models.
Successfully deploying GenAI models to any area of your organization requires a strong data foundation—access to fresh, detailed data while preserving security and compliance. For Incorta's first Operational GenAI offering, Nexus, we've joined forces with partner aiXplain to integrate their industry-leading model serving and fine-tuning capabilities - helping customers easily mobilize their GenAI initiatives.
For operational GenAI systems, efficient model serving ensures that these models can be deployed reliably at scale. Fine-tuning enables continuous improvement and adaptation, allowing the models to stay up-to-date in dynamic environments. With aiXplain, you can simplify your AI workflows, generate predictions, and gain insights on structured and unstructured data within Incorta's secure environment, ensuring scalability and governance.
Model Serving: Once a GenAI model is trained, it needs to be deployed into production environments where it can generate outputs in real-time or on demand. Efficient and scalable model serving ensures the model can handle varying workloads and provide quick responses.
Fine-tuning: GenAI models require continuous improvement and adaptation to evolving data. Fine-tuning involves adjusting the model's parameters or architecture based on new data or feedback. This iterative process helps maintain the model's relevance and accuracy over time.
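To make those two concepts concrete, here is a minimal, hedged sketch: a serving endpoint that answers requests on demand, with fine-tuning treated as a separate, periodic job. The framework, model name, and endpoint path are illustrative assumptions, not the aiXplain or Incorta Nexus APIs.

```python
# Minimal sketch of model serving (real-time responses) vs. fine-tuning (offline refresh).
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
# Hypothetical model; in practice this would point at the latest fine-tuned checkpoint.
generator = pipeline("text-generation", model="distilgpt2")

class Prompt(BaseModel):
    text: str

@app.post("/generate")
def generate(prompt: Prompt):
    # Model serving: produce an output in real time for each incoming request.
    out = generator(prompt.text, max_new_tokens=64)
    return {"completion": out[0]["generated_text"]}

# Fine-tuning would run as a separate, scheduled job: new operational data updates the
# model weights, and the serving layer is then pointed at the refreshed checkpoint.
```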
aiXplain's model serving capabilities ensure that Incorta's GenAI models can be scaled, allowing them to handle large volumes of requests from users or other systems. This scalability is crucial for ensuring consistent performance and responsiveness, especially in environments with high demand - providing:
Leveraging model serving and fine-tuning capabilities from aiXplain enhances Incorta Nexus: enabling scalable deployment, real-time insights, continuous improvement, and adaptability to changing data and business conditions.
As a CFO, you are responsible for your company's profitability margin. If there's any noticeable decline in the operating income margin, it's up to you to determine if this decline is attributed to internal inefficiencies or external market factors.
Traditionally, this analysis involves extensive, time-consuming, and error-prone manual work, including data extraction from various operational systems, manual calculations, and scouring available competitors' financial reports.
Using Incorta Nexus, CFOs can leverage the platform's ability to integrate and analyze structured and unstructured data in real-time:
With comprehensive insights provided instantly by Incorta Nexus, CFOs can confidently present the findings to the board, highlighting the cause of margin decrease and providing strategic insights - such as reviewing compensation policies and controlling hiring practices - to improve profitability margins.
With Incorta Nexus, the possibilities are truly limitless. Through our partnership with aiXplain, our customers not only unlock new levels of efficiency and agility but also fortify the foundation for future growth and innovation in operational GenAI initiatives.
To see this demo live & aiXplain's full session at Incorta NoLimits, watch here on-demand.
"If you don't have the latest, most up-to-date data - especially today - you're going to make misinformed decisions." - Howard Dresner
The ability to access and utilize data efficiently isn’t a luxury anymore—it’s a necessity. Whether you're an executive, IT professional, or BI specialist, you need access to live data for timely insights and better decision-making. Leveraging self-service business intelligence (BI) and AI capabilities on top of live, detailed operational data gives business users the data they need when they need it.
The latest 2024 Wisdom of Crowds® Self-Service BI Market Study from Dresner explores the current and future trends in self-service BI, focusing on governance, collaboration, genAI features, implementation success rates, and leading vendor comparisons.
During our virtual discussion with Howard Dresner, founder of Dresner Advisory Services, we dug into democratizing data - and how self-service BI (Business Intelligence) can empower users across organizations to make better, faster decisions.
"Information democracy" refers to delivering timely, relevant insights to all organizational stakeholders, ensuring everyone has the data they need to perform their roles effectively. Howard Dresner, Chief Research Officer at Dresner Advisory Services, LLC and a significant figure in the data and analytics space for over 35 years, emphasized the importance of this concept - explaining that the goal is to provide every stakeholder with the insights necessary to make informed decisions, aligning their actions with the organization's overall mission.
Despite the clear benefits of information democracy, many organizations struggle to achieve it. A poll conducted during the discussion revealed that only 13% of participants felt that most of their employees had access to the data needed to make critical business decisions.
This gap underscores the ongoing challenges in data accessibility and the need for improved data infrastructure and governance. Achieving widespread data access can prove difficult, largely due to:
Self-service BI is critical to democratizing data - enabling users to generate their own reports and insights without relying heavily on IT departments. However, the success of self-service BI varies widely among organizations. Factors such as user data fluency, the complexity of data systems, and the availability of tools all influence the effectiveness of self-service BI implementations. To overcome these challenges, Howard recommended that organizations should consider the following best practices:
Achieving information democracy requires combining the right technology, data governance, and a commitment to fostering a data-driven culture. By ensuring all stakeholders have access to timely and relevant insights, organizations can improve decision-making and ultimately drive better business outcomes.
See where Incorta stacks up with leading vendors in the 2024 Wisdom of Crowds® Self-Service BI Market Study from Dresner.
As a CFO, you are responsible for your company's profitability margin. If the operating income margin noticeably declines, you must determine whether this decline is attributed to internal inefficiencies or external market factors. Traditionally, this analysis involves extensive, time-consuming, and error-prone manual work, including data extraction from various operational systems, manual calculations, and scouring available competitors' financial reports.
Using Incorta Nexus, CFOs can leverage the platform's ability to integrate and analyze structured and unstructured data in real-time. Watch our live demo below, or keep reading for our step-by-step breakdown.
With comprehensive insights provided instantly by Incorta Nexus, CFOs can confidently present the findings to the board, highlighting the cause of margin decrease and providing strategic insights - such as reviewing compensation policies and controlling hiring practices - to improve profitability margins.
With Incorta Nexus, the possibilities are truly limitless. Customers not only unlock new levels of efficiency and agility but also fortify the foundation for future growth and innovation in future operational GenAI initiatives.
"With Incorta, we unified data from multiple ERPs in just six months, transforming our financial reporting and analytics capabilities. Accessing and analyzing live operational data in real time has been a game-changer, helping us make confident, data-driven decisions. Incorta’s no-ETL approach has really revolutionized our analytics journey.”
PlayPower, a global leader in commercial playgrounds and recreational structures, faced significant challenges stemming from organic growth and multiple acquisitions. The company had inherited a fragmented ecosystem of ERP and transactional systems, and gaining a unified view of data across its various brands and business units proved difficult. The lack of consolidated data made informed, timely decision-making challenging, impacting overall efficiency and agility.
PlayPower faced the challenge of integrating data from various sources, including 15 different ERP systems across 11 brands, each with its own unique configurations and accounting standards. This fragmentation created a complex environment where harmonizing data for accurate reporting and analytics was cumbersome and time-consuming. The company needed a solution that could unify these disparate data sources and provide real-time insights without disrupting existing operations.
PlayPower chose Incorta as its analytics platform, leveraging its no-ETL approach to streamline data ingestion, transformation, and harmonization. The implementation began with a focus on revenue analytics, allowing PlayPower to consolidate and standardize key business metrics across different brands and systems. This phased approach was crucial in gaining the confidence of stakeholders and showcasing the platform's capabilities.
Incorta's platform was used to build and deploy various analytics solutions, including sales, revenue, procurement, and financial analytics. The platform's ability to ingest data directly from multiple sources, harmonize it, and make it available for advanced analytics and machine learning was instrumental in achieving PlayPower's goals.
PlayPower now had a single pane of glass to view and manage its data - enabling fast, agile decision-making. The time to deploy analytics solutions was also drastically reduced. For instance, the financial analytics implementation, which involved data from 15 sources, was completed in under six months: a fraction of the time it would have taken with traditional ETL tools.
Real-time insights into financial health were now available daily rather than only at the end of the month. The success of the initial implementations allowed PlayPower to quickly scale analytics capabilities to other areas of the business, such as operations and profitability analytics. PlayPower transformed a fragmented data landscape into a unified, agile system, empowering them to better serve their customers and drive long-term business success.
To learn more, watch the full discussion on demand here.
neoleap, a subsidiary of Alrajhi bank—the largest bank in Saudi Arabia—is a rising fintech company providing a wide range of payment solutions. With over 430 employees and more than four million users globally, neoleap faced significant challenges in managing and utilizing its extensive data resources. The company recognized the need for a robust data management solution to support its ambitious growth and compliance goals.
As a fintech company regulated by the Saudi Arabian Monetary Authority (SAMA), neoleap had to navigate complex reporting and compliance requirements. Traditional methods of data aggregation and reporting were time-consuming and inefficient, leading to significant operational bottlenecks and leaving larger margins for error. Challenges included:
To tackle these challenges, neoleap turned to Incorta’s operational lakehouse, which provides customers with analytics on live, detailed operational data at scale. Incorta's user-friendly interface and powerful data processing capabilities enabled neoleap to streamline and automate its data governance and analytics processes.
neoleap’s journey with Incorta highlights the transformative power of effective data management and governance. By leveraging Incorta’s capabilities, neoleap overcame significant data challenges, streamlined operations, and enhanced its ability to detect and prevent fraud. The company’s commitment to data-driven decision-making and continuous innovation positions it for continued success in the competitive fintech landscape.
neoleap's experience underscores the importance of a robust data governance framework in achieving operational excellence and compliance. As neoleap continues to grow, its partnership with Incorta will remain a cornerstone of its strategy to harness the power of data for business success.

Workday Rising brought together business leaders and IT professionals from around the globe. Packed with pivotal announcements, insightful keynotes, and forward-thinking discussions, it served as a platform for Workday to unveil its latest innovations and partnerships aimed at shaping the future of work.
Among the announcements was Workday Extend—a suite of application development capabilities designed to help businesses do more with their data. Workday Extend provides tools like XpressO, a proprietary programming language for managing business logic and data within the Workday object model, and REST, SOAP, and custom APIs for sourcing external data.
While these capabilities are powerful, Workday Extend is about extending what is within the Workday platform. The goal of Extend is to enable businesses to develop custom applications that leverage Workday's platform. A critical component of doing this successfully, however, is a data foundation that feeds the larger Workday platform and supports these advanced capabilities.
This is where Incorta comes in. As a Workday Adaptive Planning partner, Incorta seamlessly integrates with Workday to provide the best data foundation. Incorta's advanced data connectors allow organizations to access detailed data from various sources like POS, CRM, GL, inventory, and operational systems.
With Incorta, users can easily:

Incorta serves as the bridge between Workday Extend's powerful application development capabilities and the need for a comprehensive data platform. This joint solution enhances the user experience by allowing non-technical users to analyze data and make informed decisions.
With Incorta and Workday Extend, you can help your organization:
Workday Extend is a significant step forward in helping organizations do more with their data, but its effectiveness is limited without a solid data foundation. Incorta bridges this gap, providing the necessary infrastructure for advanced data analytics and reporting.
Ready to unlock the full potential of your data? Contact us today to learn how Incorta can transform your Workday Extend experience. Learn more by requesting a demo here.
The way we leverage data isn't just evolving—it's transforming at lightning speed, and companies that make their decisions with live data are already miles ahead of the competition.
The future belongs to those who can harness the power of live data to anticipate market trends and customer needs, untangle operational inefficiencies, and make quick decisions across departments.
Unfortunately, over 40% of organizations cite disparate analytics tools, data sources, and poor data quality as their biggest challenges. How is your organization accessing - and leveraging - live, detailed, operational data to ensure that crucial decisions are based on accurate information?
Stale data doesn't just slow you down; it actively hinders your ability to compete. In an environment where speed and accuracy are paramount, relying on outdated information results in:
In today's business landscape, minutes matter. Acting on data just minutes old versus a day old can mean the difference between seizing a market opportunity and losing it to a competitor.
Having access to live, detailed data fosters a data-driven culture within an organization from the top down. When employees at all levels can access the data they need, they are empowered to make decisions based on facts rather than intuition. This shift towards data-driven decision-making can lead to more innovative solutions and a more agile organization.
Gate City Bank is a prime example of how democratizing data can transform an organization. By providing access to fresh, detailed data, they have empowered their teams to make faster, more informed decisions. With a single source of truth, every department can trust that the data guiding their strategies is accurate and up-to-date.
(Read their full story here!)
Fueling AI with Live Data: The integration of live, operational, and detailed data is also crucial for organizations looking to implement AI and ML solutions. AI thrives on accurate, up-to-the-minute information to provide meaningful insights and predictions. Whether it's optimizing supply chains, enhancing customer interactions, or improving financial forecasts, live data is the fuel that powers AI-driven innovation.
With Incorta, businesses can access 100% of their data live, enabling proactive decision-making rather than reactive responses. By accessing live, detailed, operational data, you gain a clear competitive edge in any industry.
To learn more, get a demo here.
As businesses modernize their data strategies, Oracle E-Business Suite (EBS) users transitioning to Oracle Cloud Applications seek to optimize how they leverage data. Ensuring data is accessible and actionable is critical for driving smarter decisions—and integrating with tools like Google BigQuery enables more accurate, advanced insights.
The Oracle-Google Cloud partnership combines Oracle’s database services with Google Cloud’s AI and infrastructure, allowing users to:
While Oracle Cloud Applications offer native reporting tools (OTBI, BI Publisher, Financial Reporting Studio, Smart View), they have limitations:
Incorta streamlines data extraction, transformation, and migration between Oracle Cloud Applications and Google BigQuery, delivering live, decision-ready data with minimal effort.
Watch our on-demand webinar to learn how Incorta helps you:
Businesses migrate to the cloud for flexibility, scalability, and cost efficiency. Shifting on-premises applications to cloud SaaS cuts infrastructure and maintenance costs, reduces risks, and simplifies updates while making data management and analysis faster and more accessible. Cloud platforms also offer advanced tools like AI and machine learning to automate insights, enhance analytics, and get more value from data than ever before.
As businesses migrate to the cloud to enhance analytics, Oracle E-Business Suite (EBS) users transitioning to Oracle Cloud Applications aim to modernize the ways they leverage their data. Ensuring that data is accessible and actionable is key to driving smarter business decisions - and integrating with advanced tools like Google BigQuery helps users unlock more accurate insights from their data.
The recent partnership between Oracle and Google Cloud combines Oracle's database services with Google Cloud's AI and advanced infrastructure, allowing users to “continue to benefit from Oracle’s database and enterprise application offerings while seamlessly complemented with Google Cloud offerings and AI solutions such as Gemini and Vertex AI to modernize and accelerate cloud migrations.”
But while Oracle Cloud Applications include some native reporting tools (OTBI, BI Publisher, Financial Reporting Studio, and Smart View), these tools are limited to accessing data held in Oracle Fusion - and can’t offer a complete solution to those looking to access data from multiple complex data sources. Additionally, there’s no native, seamless integration for moving data between Oracle Cloud Applications and Google BigQuery:
Incorta acts as a bridge between Oracle Cloud Applications and Google BigQuery by simplifying data extraction, transformation, and migration processes, delivering live, decision-ready data into BigQuery for advanced analytics with minimal effort and complexity.
Oracle Cloud Applications data can be complex and often requires specialized tools for extraction. Incorta provides direct connectors to Oracle Cloud Applications, allowing users to extract data without the need for extensive data transformation or custom-built extraction logic. Incorta’s ability to natively connect to Oracle Cloud and handle complex data structures makes it easier and more efficient than manual methods.
Incorta allows for real-time data ingestion from Oracle Cloud Applications. Users can get fresh, up-to-date data without the typical delays that occur with batch ETL processes, which can be especially beneficial for use cases requiring current data insights in Google BigQuery for real-time analytics.
Before moving data to BigQuery, Incorta can handle any necessary data transformation within its platform. Its Direct Data Mapping reduces the need for complex ETL processes, allowing data to be formatted and cleaned for easier loading into Google BigQuery. This ensures that the data is analytics-ready when it reaches BigQuery.
Typically, moving data from systems like Oracle Cloud and BigQuery involves complex ETL pipelines. Incorta eliminates much of this complexity by providing no-code or low-code data pipelines, making it easier for business users and technical teams alike to set up and maintain the data flow from Oracle to BigQuery.
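For illustration only, here is a minimal sketch of the final load step into BigQuery using Google's official Python client, assuming the data has already been extracted and shaped upstream (for example, by Incorta). The project, dataset, and table names are hypothetical; this is not Incorta's internal pipeline code.

```python
# Load analytics-ready rows into a BigQuery table with the official client library.
import pandas as pd
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Example: order-to-cash metrics already cleaned and formatted upstream.
df = pd.DataFrame({
    "order_id": [1001, 1002],
    "invoice_amount": [2500.00, 1820.50],
    "status": ["BOOKED", "SHIPPED"],
})

table_id = "my-project.erp_analytics.order_to_cash"  # hypothetical destination table
job = client.load_table_from_dataframe(df, table_id)
job.result()  # wait for the load job to finish
print(f"Loaded {job.output_rows} rows into {table_id}")
```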
Watch this on-demand webinar to learn more about how Incorta helps you:
At Incorta, we believe in nurturing talent and empowering the next generation of innovators. Our summer intern program is designed to provide hands-on experience and allow interns to showcase what they’ve learned through interactive demo presentations. Let's take a look at this year's Incorta Summer Intern demos:
Interns Faris Mohamed and Marwan Essam presented their design to streamline controls for customers, improve job metadata and resource management, and enhance all aspects of our job operations.
Abdullah Fouad from the Extraction Team showcased his work integrating multiple cloud storage solutions, including GCS, ADLS, Box, AWS S3, and Oracle Cloud Applications.
Hanya Khaled’s demo showed how to leverage AI to provide dashboard insights, reduce ticket resolution time, and significantly improve overall customer satisfaction.
Mohamed Elsherif and Ahmed Rustom illustrated how their project generates and optimizes complex formulas, making data analysis easier and more insightful for our customers.
Sarah Soliman highlighted her learnings with the Cloud SRE Team, showing the importance of learning opportunities in operations and troubleshooting.
Mohamed Saeed presented how to make Spark error messages more understandable to improve the customer experience and overall productivity.
Farah Hossam presented her innovative strategies to enhance DevOps efficiency, focusing on improving build findability and pipeline scheduling.
Esraa Majed's presentation focused on her project that visualizes schema load jobs, making troubleshooting simpler and more intuitive.
Ziad Samer detailed his project that aims to provide centralized control and data discovery capabilities across Databricks workspaces.
Somaia Fahkr and Abdelrahman Elsherif's project focused on enhancing Kyuubi's capabilities by integrating it with our Unity Catalog.
Zakaria Kortam's demo included showing off visually appealing components developed using the Component SDK to improve data visualization and user experience.
These impressive demos showcase how our interns have utilized their skills and out-of-the-box thinking to contribute to Incorta's growth - and their own promising futures! To learn more about joining our intern program, please email internship@incorta.com.

As a branch of AI, machine learning (ML) utilizes algorithms and statistical models to perform specific tasks without explicit programming instructions. This allows machines to learn from data, identify patterns, and make predictions. In our latest webinar, we explored machine learning concepts, fundamental principles, and real-world use cases - illustrating how customers have leveraged machine learning with Incorta to drive significant business outcomes.
Machine learning is different from traditional programming, as it enables computers to learn from data and make decisions or predictions without being explicitly programmed. Traditional programming involves a set of instructions executed to achieve a specific outcome - like calculating the square root of a number, or the break-even point in economics. In contrast, machine learning can predict customer behavior and recognize objects in an image.
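As a concrete, generic illustration of that difference, the sketch below computes a break-even point with an explicit formula, then lets a simple model learn a relationship from example data instead. The numbers and the retention scenario are made up for illustration and are not tied to Incorta's platform.

```python
# Traditional programming vs. machine learning, side by side.
import math
from sklearn.linear_model import LinearRegression

# Traditional programming: the rule is written explicitly.
def break_even(fixed_costs: float, price: float, variable_cost: float) -> float:
    return fixed_costs / (price - variable_cost)

print(break_even(10_000, 25, 15))   # 1000.0 units
print(math.sqrt(81))                # 9.0

# Machine learning: the rule is learned from examples instead of being written by hand.
X = [[1], [2], [3], [4]]            # e.g., months of customer tenure
y = [0.9, 0.7, 0.4, 0.2]            # e.g., observed retention rate
model = LinearRegression().fit(X, y)
print(model.predict([[5]]))         # predicted retention for month 5
```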
Understanding machine learning begins with its fundamental concepts:
However, not every customer use case can be solved with ML. In some cases, an ML implementation may not be the best solution and can even cause more complications than it seeks to solve.
Underfitting or overfitting a model can be detrimental to the results generated by ML. Underfitting happens when a machine learning model oversimplifies the data and fails to capture enough information about the relationships within it. Overfitting, on the other hand, happens when the model is overly sensitive to the data, leading to an over-analysis of the patterns. The best machine learning method should be interpretable, simple, accurate, fast, and scalable.
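A small, generic example of how underfitting and overfitting show up in practice is to compare training and validation scores as model complexity grows. This is illustrative scikit-learn code on synthetic data, not Incorta's ML tooling.

```python
# Compare train vs. validation accuracy for models of increasing complexity.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for depth in (1, 5, None):  # too simple, reasonable, unconstrained
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train={model.score(X_train, y_train):.2f}, "
          f"val={model.score(X_val, y_val):.2f}")

# Low scores on both sets suggest underfitting; a near-perfect training score with a
# much lower validation score suggests overfitting.
```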
Incorta's ML lifecycle complements traditional data pipelines by providing a complete solution for data preparation, feature engineering, model training, prediction, monitoring, and model deployment.
Incorta offers features like Data Studio that simplify data preparation for less technical users and ML professionals alike. It also provides access to large language models trained by industry-leading companies - requiring no setup, as these models are fully hosted and managed by Incorta.
The Incorta Machine Learning Life Cycle involves several key stages:
Incorta's ML model registry is a central repository for storing, managing, and tracking the lifecycle of machine learning models. It allows for easy storage, import, version tracking, and model deployment, enhancing ML operations' performance.
While machine learning may seem intricate and complex, platforms like Incorta make it more accessible and beneficial to businesses across various sectors.
Watch the full discussion here to learn more, and stay tuned for more in our GenAI webinar series.
Better business decisions hinge on rapid access to data. But data ingestion and analysis often pose challenges for organizations reliant on Oracle Cloud Applications. Incorta bridges this gap with powerful, scalable connectors – transforming how businesses extract, transform, and load data into their analytics ecosystem. Let's explore:
Despite the existing capabilities of Oracle Cloud ERP, organizations frequently run into the following challenges while attempting to extract and analyze data:

Why it Matters:
BICC is the go-to connector for large-scale data extraction, delivering performance at scale. By automating this process, Incorta minimizes manual intervention and accelerates data readiness for analytics.

Why it Matters:
For custom data sets and ad-hoc reporting needs, Incorta’s BIP connector ensures no data is left behind, providing greater flexibility in Oracle Cloud data extraction.
Why Incorta?
Incorta’s Oracle Cloud connectors are redefining the data ingestion landscape, offering unparalleled speed, flexibility, and efficiency. Whether you’re migrating to the cloud, enhancing reporting capabilities, or scaling analytics – Incorta’s innovative approach ensures that Oracle Cloud data becomes a strategic advantage.
– Chris Bergren, Director of Enterprise Business Systems (EBS) at PAR Systems
PAR Systems is a leader in intelligent manufacturing automation, specializing in custom solutions across various industries - including aerospace, life sciences, and nuclear. With over 60 years of experience and more than 8,000 successful projects worldwide, PAR focuses on enhancing efficiency, quality, and safety in manufacturing processes.
But with a wealth of transactional data stuck across custom legacy ERP systems, they faced a common hurdle internally: accessing and analyzing that data efficiently.
Colossal amounts of critical data lived within custom, legacy ERP platforms: making it complicated for teams to access, understand, or use it. Business units were creating singular reports using different data sets, creating discrepancies and delaying impactful decisions. PAR Systems needed an integrated solution that could break down these silos and empower every team member with access to, and understanding of, data relevant to their department.
Incorta quickly proved transformative for PAR Systems - integrating all data from their complex business systems into one simple, unified view. Rollout focused on enabling key power users across their core departments:
With Incorta, PAR Systems unearthed powerful data buried across their legacy systems. Teams could now access the same data while building their own analytics dashboards. Incorta also empowered PAR business users to become self-reliant in their data usage - transforming day-to-day operations and creating a cultural shift toward data democratization.
By identifying and empowering power users across the organization, PAR not only improved operational efficiency and decision-making, but also cultivated a data-driven culture that encourages collaboration and innovation.
A major win for PAR Systems was a drastic 4% increase in collections from their Accounts Receivable department. With Incorta, the team could see sub-ledger details in an easy-to-use dashboard - eliminating the reliance on data stewards.
Traditionally, gross margin analysis might take anywhere from a few days to a week to complete, especially if it requires extensive manual processes or data manipulation. With Incorta, PAR systems can perform gross margin analysis instantly - unlocking fast, usable insights and freeing up teams' time for more strategic tasks.
Incorta's implementation at PAR Systems has revolutionized their approach to data - unlocking easy access to all of their data across legacy systems. Adoption and usability were fast and simple - without the need for deep technical know-how, teams were quickly able to build, test, and share their own analytics. PAR Systems not only improved operational efficiency and decision-making, but sparked a data-driven culture that encourages ongoing collaboration and innovation.
As East and Gulf Coast ports brace for a potential strike, the U.S. faces a significant supply chain disruption. Negotiations have stalled, and the repercussions could be severe—inventory shortages, skyrocketing grocery store costs, and container backlogs that could take months to clear.
Attacks have additionally stalled shipping through the Red Sea, and the Panama Canal is still operating below capacity due to drought conditions: the combination of these events creates a "perfect storm" that could choke off major arteries of global trade.
The consequences of these disruptions could cripple businesses that rely on these ports for imports and exports. In today's interconnected world, supply chains are susceptible to constant disruption - from strikes and geopolitical conflicts to government trade tariffs and natural disasters. Access to live, detailed operational data is the key to navigating these challenges efficiently and effectively.
Supply chains are complex networks that require constant monitoring and adjustment. Traditional methods of managing these networks rely heavily on periodic data updates - which can delay decision-making. This is an especially risky approach in situations like port strikes, where every minute counts.
Whether it's rerouting shipments, adjusting inventory levels, or communicating with stakeholders, access to live operational data facilitates more informed decision-making that can mitigate the impact of disruptions.
Supply chains are dynamic by nature, influenced by fluctuating market conditions, consumer demands, and unforeseen events. For example, if a port strike causes delays, companies need to quickly identify alternative routes or adjust their logistics strategies to maintain smooth operations.
In the face of mounting challenges, Incorta helps supply chain organizations by providing easy access to live, detailed operational data. This enables companies to quickly identify and respond to bottlenecks, adjust inventory, and optimize logistics.
With live operational data, companies can analyze the ripple effects of disruptions on different supply chain nodes. This helps leadership make informed decisions around resource allocation, contingency plans, and alternative suppliers.
Disruptions at critical junctions, like port operations, can have far-reaching effects. With access to live data from Incorta, organizations can minimize disruptions caused by port strikes and other unforeseen logistical challenges by rerouting shipments or reallocating resources.
While live data is crucial, building a data foundation for predictive AI is just as important for a future-proof supply chain. Moving from reactive to proactive is a milestone, but predictive capabilities unlock strategic decision-making. Discover how AI tools like GenAI can keep you ahead in our ebook: AI - Mission Critical for Supply Chain-Centric Organizations.
Access to live data is critical for making informed, agile decisions in any business environment. That’s why we’re excited to announce that Incorta is pivotal in Google Cloud’s new integration of its Cortex Framework with Oracle E-Business Suite (EBS).
The Google Cloud Cortex Framework provides a foundation of solution accelerators to help businesses quickly deploy Google Cloud’s advanced data analytics capabilities. Now, with its latest integration into Oracle EBS, businesses using this widely trusted ERP solution can use Google Cloud's powerful data tools to streamline operations and fuel business insights.
However, integrating ERP data into advanced analytics platforms has traditionally been challenging due to its complexity and volume. That’s where Incorta comes in.
Incorta’s advanced data analytics platform simplifies and accelerates access to complex data, particularly from enterprise systems like Oracle EBS. Our platform lets organizations tap into the live, detailed operational data that drives critical business decisions.
With Incorta’s Direct Data Mapping, data can be ingested, enriched, and visualized in minutes—without needing complex transformations or time-consuming data modeling. This empowers organizations to quickly get actionable insights from their Oracle EBS data, enabling faster decision-making across key business functions like finance, supply chain, and operations.
Getting more value from your Oracle EBS data
For businesses leveraging the new Google Cloud Cortex Framework integration, Incorta’s capabilities make getting value from their Oracle EBS data easier than ever. Whether it’s driving efficiencies in supply chain management, improving financial reporting, or optimizing operational workflows, Incorta provides a seamless and scalable way to access and analyze critical data in near real time.
With Incorta as a critical part of this integration, users can expect:
As businesses rely on data to drive decision-making, accessing that data instantly and in detail is becoming more critical than ever. The integration of Google Cloud’s Cortex Framework with Oracle EBS and Incorta’s unmatched ability to unlock and accelerate operational data is a game-changer for organizations looking to optimize their processes, improve performance, and maintain a competitive edge.
Watch as we simplify extracting operational source data for order-to-cash metrics with Incorta and Google's BigQuery and Cortex platform. Incorta mirrors your Oracle EBS environment to ensure your data's integrity. Learn how your data is deduplicated, processed, and sent to BigQuery, ready for direct use in Looker or for feeding into the Cortex framework.
Learn more about how Incorta transforms data analytics for Oracle EBS users on our partner page, and read Google's full announcement here.
Plus, see our other integrations with Google that are helping users unlock real value from their data.
BI platforms are setting the bar high when it comes to harvesting insights from vast amounts of data.
But what if business intelligence could go one step further? Enter agentic AI—where AI agents not only process data, but also engage in autonomous reasoning and provide proactive insights. This capability offers immense potential, coupled with unfamiliar challenges. Businesses need to learn how to effectively use AI agents while ensuring their behavior remains controlled and policy-driven.
In our recent webinar with aiXplain and TDWI, experts Fern Halper, Vice President and Senior Director of TDWI Research for Advanced Analytics; Ebrahim Alarecki, Principal Machine Learning Engineer at Incorta; and Nur Hamdan, Product Lead at aiXplain, explored how organizations are transitioning from traditional data analytics to more autonomous, agent-driven systems: trailblazing an exhilarating new era of business intelligence.
While AI encompasses several methodologies like machine learning, neural networks, and natural language processing (NLP), organizations are keen to experiment across diverse GenAI use cases: such as predicting churn, understanding customer behavior, product recommendations, and even fraud detection.
Research shows organizations using sophisticated AI tools are more likely to derive significant top or bottom-line benefits than those using basic analytics.
The transition from simply collecting and analyzing data to integrating AI into applications marks an exciting shift toward developing solutions that not only interpret data, but also act on it.
Accelerating this shift is Agentic AI. This refers to intelligent systems designed to act autonomously and proactively, making decisions and taking actions based on data without needing constant human input (like traditional AI models would).
AI agents can "sense" their environment (through data inputs), "reason" about it (using AI models), and "act" by executing tasks or making decisions to achieve specific goals. This means these systems can process large amounts of data, engage in independent reasoning, and provide insights without relying on human input.
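As a rough, generic sketch of that sense-reason-act loop, the code below polls a data snapshot, applies a simple decision rule in place of a real reasoning model, and takes an action. The data source, rule, and action are hypothetical placeholders, not the Incorta or aiXplain APIs.

```python
# Minimal sense-reason-act loop to make the agent pattern concrete.
import time

def sense() -> dict:
    # In a real deployment this would pull live operational data (e.g., open POs,
    # inventory levels) from the unified data layer.
    return {"item": "SKU-123", "on_hand": 40, "reorder_point": 50}

def reason(state: dict) -> str:
    # A real agent would call an LLM or other model here; a simple rule stands in.
    return "reorder" if state["on_hand"] < state["reorder_point"] else "no_action"

def act(decision: str, state: dict) -> None:
    if decision == "reorder":
        print(f"Raising replenishment request for {state['item']}")

if __name__ == "__main__":
    for _ in range(3):          # in production this loop would run continuously
        snapshot = sense()
        act(reason(snapshot), snapshot)
        time.sleep(1)
```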
When asked “Is your organization thinking about agentic AI?”, 25% of respondents said it was the first they had heard of it, while others were already planning or experimenting with it.
As Nur Hamdan explained, developing agentic AI applications involves several steps, including defining the desired outcomes and selecting the appropriate AI models and tools. The architecture of these applications typically includes a combination of large language models (LLMs) and specialized agents that work collaboratively to achieve specific tasks. Integrating data retrieval systems and governance frameworks further enhances the effectiveness and security of these applications.
However - even with these steps in place, using agentic AI hinges on two key elements that must be implemented before AI can be fully deployed.
As with any AI, the phrase “garbage in, garbage out” applies. Agentic AI needs a reliable, accessible, strong data foundation to function effectively, and live data to make accurate decisions. Companies often have data siloed across multiple data sources (ERP systems like SAP, or Oracle), which must be unified and accessible to be usable by AI agents.
"For an agent to make a decision, it has to be timely - it can’t operate on outdated information. It should be relevant, and that’s where Incorta comes in - providing live, refreshed data to support these agents."
Incorta provides direct access to live, operational data from any source system - ensuring AI agents have relevant, timely information to work with. Incorta's platform allows data to be mapped and consolidated in real time across various systems, providing a unified data environment for AI agents to consume.
As AI systems become more autonomous, their actions must be understandable and explainable. aiXplain lets users see how decisions are made based on data and algorithms, promoting an accountable and understandable approach to using AI agents.
aiXplain’s focus on transparency and reliability reduces the risks of errors, biases, and incorrect outputs as organizations experiment more with Agentic AI.
The combination of Incorta's data foundation and aiXplain's advanced AI capabilities offers organizations a powerful platform for deploying agentic AI applications.
Incorta ensures that businesses have access to live, operational data in real-time, integrating data from multiple sources such as ERP systems and transactional platforms. This real-time data is crucial for AI agents, allowing them to make timely and accurate decisions. aiXplain complements this by providing the tools to build and manage AI agents that can autonomously process this data, reason about it, and take action.
Together, Incorta and aiXplain enable businesses to shift from traditional analytics to sophisticated, AI-driven decision-making. With Incorta's unified data platform and aiXplain’s focus on transparency, accountability, and safeguarding against errors like "hallucinations," organizations can confidently develop and deploy agentic AI solutions.
As AI evolves, we must evolve alongside it - committing to continually redefine our data strategies to keep pace. In this next frontier of business intelligence, AI isn’t just a tool for analysis—it’s an autonomous agent that can drive innovation and deliver answers at an unprecedented speed and scale.
With platforms like Incorta and aiXplain, businesses can seamlessly integrate real-time data and autonomous AI agents into their operations, unlocking new opportunities for proactive insights and agile responses. By ensuring a solid data foundation and robust governance frameworks, organizations are better equipped to navigate the complexities of AI, reduce risks, and stay ahead of market demands.
Embracing agentic AI is not just about adopting new technology—it’s about transforming how data is used to drive smarter, faster, and more informed decisions across every facet of business. The future is here, and with the right tools, we’re ready!

Are you seeing ROI from the investments you've made in GenAI?
GenAI isn’t just experimental - it’s a business imperative. Organizations, particularly those in consumer-facing industries, are increasingly exploring its potential to transform operations, personalize customer experiences, and optimize workflows.
However, despite the widespread acknowledgment of its capabilities, many business leaders are struggling to answer a critical question: Where is the ROI?
If you feel stuck trying to turn your GenAI initiatives into tangible results, you’re not alone.
Leaders recognize the potential of GenAI but remain uncertain about how to deploy it strategically. According to EY’s Reimagining Industry Futures survey, only 35% of ICT executives are actively investing in GenAI, while 33% are planning investments during the year. Additionally, nearly 68% of consumer companies are building proofs of concept, yet none have deployed GenAI for enterprise-wide, business-critical processes.
This leaves companies in a catch-22. The consensus is clear—GenAI has the potential to transform industries. Yet a significant percentage of leaders (36%) express confusion about where and how it can create meaningful value. Similarly, 35% of respondents cite concerns about governance and oversight, hindering progress in adoption at scale.
Some of the greatest barriers to seeing GenAI’s value boil down to exactly this: uncertainty about where and how it creates meaningful value, and unresolved concerns about governance and oversight.
With this high level of uncertainty, it isn’t surprising that many businesses are still stuck in the pilot phase, hesitant to commit resources to large-scale implementation.
But stagnation comes with its own risks. Businesses that delay in fully utilizing GenAI run the risk of falling behind competitors who are actively weaving AI into their core business operations.
It’s not simply about implementing GenAI—it’s about deploying it in a way that drives real and measurable results. To determine how GenAI can create value, ask these essential strategic questions:
For sectors like retail, GenAI can handle customer service automation. But is it more critical to improve the front-end customer experience, or to optimize operational back-end systems such as supply chain efficiency?
Consider whether you’re using GenAI to hyper-personalize experiences for existing customers or to attract a new demographic altogether.
Is success measured by cost reductions, revenue growth, operational efficiency, or some combination of the above? Being deliberate about success metrics ensures that the AI implementation is guided toward clear, measurable goals.
A significant reason many enterprises fail to extract value from GenAI is a poor data foundation. Even the most advanced AI models falter without seamless access to clean, connected, and live data.
This is where modern data solutions like Incorta come into play. By integrating and analyzing data in real time from multiple sources, Incorta provides businesses with the full visibility they need. A unified data infrastructure ensures that GenAI apps aren’t just "smart," but also "actionable," reliably delivering insights that matter.
An optimized, data-driven infrastructure is essential to deriving maximum ROI from GenAI investments. Incorta’s platform provides the necessary foundation to move GenAI deployments from early-stage pilots to enterprise-level innovation.
Now, it’s time to mobilize your AI. Here are some practical applications where GenAI can yield immediate returns for your business:
GenAI-powered chatbots and support tools go far beyond answering FAQs. They can offer personalized shopping recommendations, handle complex customer queries, and even analyze sentiment to predict customer satisfaction trends. This translates to reduced response times, happier customers, and lower operational costs.
Demand planning, inventory optimization, and supplier relationship management are ripe for GenAI disruption. By analyzing historical trends alongside real-time data, AI models can predict supply needs with precision, mitigating risks and optimizing inventory management.
GenAI can generate dynamic product descriptions, optimize A/B tests, create personalized campaigns, and even predict user intent. This improves ad targeting, reducing marketing costs while boosting engagement.
Generative tools can improve workflows, from automating repetitive tasks to offering on-demand training resources for employees. This addresses retention challenges and boosts productivity.
AI tools empowered with advanced analytics can help businesses make quicker, smarter decisions powered by data. From spotting product trends to evaluating competitive benchmarks, these tools bring a layer of agility to the decision-making process that manual analytics can’t match.
By aligning these GenAI applications with your business priorities, ROI becomes not only achievable but scalable.
AI integrations aren’t going away - to start seeing a return, it’s fundamental to begin from a solid data foundation. Every successful GenAI strategy rests on two core pillars: a unified, real-time data foundation and clearly defined, measurable goals for the value GenAI should deliver.
With a solution-oriented approach powered by tools like Incorta, businesses are better positioned to deploy GenAI and start realizing its value today—not five years from now.
The question isn’t whether GenAI has potential—it’s whether leaders are ready to harness it effectively. The quicker your organization can build clarity and confidence in its AI investments, the faster you’ll see a tangible return.
Are you ready to bridge the gap from potential to performance? Learn more about how Incorta provides the data access you need for your next GenAI breakthrough.
Inauguration Day next week comes with sticker shock. The president-elect has promised to implement 25% tariffs on Mexico and Canada, as well as an additional 10% tariff on goods made in China.
The manufacturing sector stands at a critical juncture - tariff increases not only raise consumer costs but are poised to dramatically disrupt global supply chains. What can you do to prepare?
Tariffs on imported raw materials and components can raise production expenses dramatically. A 25% tariff on Mexican and Canadian goods and a 10% tariff on Chinese imports would significantly impact manufacturers relying on these sources.
Higher production costs result in increased prices for end products, which can reduce consumer demand and affect sales. The Consumer Technology Association predicts significant price hikes for consumer electronics if tariffs are enacted.
Manufacturers may face challenges sourcing materials, leading to delays and potential shortages. The automotive industry, for example, is considering relocating production to mitigate tariff impacts.
Other countries may impose their own tariffs in response, further complicating international trade and potentially reducing market access for U.S. manufacturers. Canada, for example, has broadly criticized these proposed U.S. tariffs, indicating potential retaliatory measures.
While we have yet to see what these sweeping promises hold, they've already ignited economic uncertainty as manufacturers prepare for higher production costs, supply chain disruptions, increased prices passed to consumers, and retaliatory trade measures.
To mitigate the effects of these looming tariff hikes, manufacturers and supply chain managers are taking preventative measures such as:
By identifying alternative suppliers and regions, manufacturers can reduce their reliance on any single source. This approach helps mitigate the risks associated with tariffs and ensures a more resilient supply chain.
Streamlining processes, minimizing waste, and improving logistics can help manufacturers offset the dramatically increased costs associated with tariffs. Implementing lean manufacturing principles and investing in supply chain optimization tools can help significantly improve efficiency and cost savings.
Advanced data analytics, AI, and machine learning can help manufacturers gain insights into their supply chains, predict potential disruptions, and optimize operations. Manufacturers can enhance their decision-making capabilities and maintain a competitive edge by investing in better, faster access to their data across disparate source systems.
With a highlighted need for adaptability and foresight in manufacturing strategies, Incorta helps manufacturers home in on where to cut costs while bracing for potential tariff increases:
With Incorta, manufacturers have easy, fast access to live, detailed operational data from any source - without relying on complex ETL processes. By having quick access to accurate data, manufacturers can make more informed decisions about sourcing, production, and pricing - letting them quickly adapt to supply chain disruptions.
Incorta makes it easy to find cost-saving opportunities: giving you an overhead view of your data, from any source, in a unified view. With visibility into details such as procurement costs, inventory levels, and supplier performance, manufacturers can better manage suppliers and renegotiate contracts to offset the impact of higher tariff costs.
See around corners with better forecasting and scenario planning. Incorta’s advanced analytics let you simulate the potential impact of tariff hikes on cost structures and profitability - so you can plan alternative sourcing strategies, adjust production schedules, and optimize pricing strategies (a simplified sketch of this kind of scenario arithmetic appears just below).
Access to live, detailed data means manufacturers can uncover inefficiencies in their operations, from production processes to distribution networks. By automating insights and streamlining operations, manufacturers can reduce waste and cut costs in areas like labor, energy, and logistics, helping to offset rising costs due to tariffs.
Incorta supports financial planning and analysis (FP&A) by providing finance teams with up-to-date, granular data on costs and margins. With this visibility, manufacturers can more accurately forecast the financial impact of tariff hikes, make adjustments to their budgets, and ensure they are controlling costs effectively.
In times of economic uncertainty, quick and reliable reporting is crucial. Incorta allows manufacturers to monitor key metrics and business performance in real-time, so they can respond rapidly to changing market conditions, including tariff changes, and take proactive measures.
By leveraging Incorta’s live data capabilities, manufacturers can stay agile, reduce costs, and make data-driven decisions that help them mitigate the financial impact of potential tariff hikes.
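To make the scenario-planning point above more concrete, here is a deliberately simple sketch of the underlying arithmetic in Python. It is not Incorta functionality, and the sourcing mix, spend figures, and tariff rates are made-up numbers; it only shows how a sourcing profile and a set of proposed tariff rates combine into a cost impact that can be compared across scenarios.

```python
# Illustrative only: a toy tariff scenario model, not Incorta functionality.
# Component spend, sourcing mix, and tariff rates below are hypothetical.

# Annual component spend by country of origin (USD)
spend_by_origin = {
    "Mexico": 4_000_000,
    "Canada": 1_500_000,
    "China": 2_500_000,
    "USA": 2_000_000,
}

# One scenario: 25% tariffs on Mexico and Canada, 10% on China
tariff_scenario = {"Mexico": 0.25, "Canada": 0.25, "China": 0.10, "USA": 0.0}


def added_tariff_cost(spend: dict, tariffs: dict) -> dict:
    """Extra annual cost each origin contributes under the tariff scenario."""
    return {origin: amount * tariffs.get(origin, 0.0)
            for origin, amount in spend.items()}


impact = added_tariff_cost(spend_by_origin, tariff_scenario)
total_spend = sum(spend_by_origin.values())
total_impact = sum(impact.values())

for origin, cost in impact.items():
    print(f"{origin}: +${cost:,.0f}")
print(f"Total added cost: ${total_impact:,.0f} "
      f"({total_impact / total_spend:.1%} of component spend)")
```

Swapping in a different set of rates, or a rebalanced sourcing mix, gives a quick side-by-side view of how much each scenario would add to component spend.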
How have other manufacturing leaders navigated the unexpected by getting full access to their data from any source with Incorta?
PAR Systems, a leader in intelligent manufacturing automation, faced challenges accessing and analyzing critical data stored in legacy ERP systems. By adopting Incorta's data analytics platform, PAR Systems improved data access, enhanced supply chain management, and fostered a data-driven culture within the organization.
Nortek struggled with disconnected data systems that affected their supply chain agility.
By integrating Incorta, Nortek achieved faster integration of acquisitions, reduced total cost of ownership, and improved their pricing strategy. These improvements allowed Nortek to respond more effectively to market demands and maintain a competitive advantage.
The future of trade policies remains uncertain, but manufacturers can take proactive steps to stay agile and competitive. By investing in technology, diversifying sourcing, and optimizing supply chain efficiency, manufacturers can better prepare for the uncertainty on the horizon.
Learn more about what manufacturing leaders are doing today to future-proof.