How CFOs Can Get Transaction Data Faster by Going Direct

CFOs today are being asked to make more and better decisions, faster, in a world where chaos and uncertainty reign. To do that, they need better data and faster access to it.

Typically, CFOs and offices of finance get their data through a complex pipeline: a multi-tool, multi-step architecture built around a star schema and a data lake or warehouse. Along the way, data passes through many hands and systems that take it through several levels of curation, aggregation, and modeling before it reaches a finance professional.

The purpose of an EPM, ERP, or any other transactional system of record is to collect and preserve the integrity of all transactions and transaction-level details across an organization. The complex pipeline described above cannot deliver the speed, transparency, and level of detail CFOs require today, which means CFOs must trade the integrity of their source system data for “best we can do” data from their data engineering teams. Until now, no company or technology vendor has designed a platform or pipeline that addresses this challenge.

What’s needed is a new architecture, purpose-built to preserve the integrity of source system data while delivering that data in full fidelity to the finance professional’s fingertips.

What does this architecture look like? Its defining characteristic is that a single platform owns data extraction, curation, modeling, and delivery. Furthermore, the data in the analytics platform is a digital twin of your ERP or source system data. It is not flattened or transformed into star schemas, which means it is application data in a near-raw state.
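As a rough illustration of what “digital twin” replication means in practice, here is a minimal sketch that copies source tables as-is into a local analytical store. The table names, the DuckDB target, and the connection string are assumptions made for the example, not Incorta’s actual implementation or API:

```python
# Illustrative sketch only: schema-preserving replication of ERP tables.
# Table names and connection details are hypothetical.
import pandas as pd
import sqlalchemy
import duckdb

SOURCE_TABLES = ["gl_journal_lines", "ap_invoices", "po_lines"]  # hypothetical ERP tables

source = sqlalchemy.create_engine("oracle+oracledb://user:pass@erp-host/ERP")  # placeholder DSN
twin = duckdb.connect("finance_twin.db")

for table in SOURCE_TABLES:
    # Copy each table as-is: same columns, same transaction-level grain,
    # no star-schema flattening and no pre-aggregation. A production
    # loader would read increments via a last-updated column instead.
    df = pd.read_sql_table(table, source)
    twin.execute(f"CREATE OR REPLACE TABLE {table} AS SELECT * FROM df")
```

Because nothing is aggregated away on the way in, every downstream question can still reach the original transaction lines.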

Instead of specifying business requirements and waiting for the data to be delivered, users can directly access and analyze data that is 100% identical to the source. This offers major benefits for the CFO and the office of finance, while sharply reducing traditional data engineering effort:

  • Ready access to hot data. Hot data is data that needs to be accessed frequently and is most valuable at the moment of collection. Move it through the traditional complex pipeline and its value for decision making cools with every handoff and hour of delay.
  • Ability to answer unlimited questions. The nature of analysis is that answered questions generate new ones. One of the great shortcomings of the traditional architecture is that when a new question or business requirement arises, you can’t drill down into the data because the transactional details have been aggregated away. You essentially have to break your data models and rebuild them. If you don’t have time for that, you either make the decision with your gut or have an analyst go track down the data. The problem with the latter is that it introduces data governance and quality issues, because the work happens outside the productionized data model. With Incorta, you have full access to detailed, trusted, and governed data. You don’t need a new data model when you have a new question or business requirement.
  • More data for more people. With the traditional architecture, every step the data passes through has to try to match the security of the source system. Since it’s impossible to replicate all of the security settings, some of the data has to be excluded from the analytics system. Because Incorta replicates your ERP or source system data, it inherits the row- and column-level security parameters from those systems (see the sketch after this list). Ultimately, that allows you to give more people access to more data.
  • Easier error resolution. With this architecture, when there is an error in a report (and there will always be errors in reports), there are only two places it can be: the source system or the finance analytics hub. In the traditional architecture, the data or calculations could be wrong in any part of the pipeline, so narrowing the search to two places saves the office of finance a significant amount of time hunting down and correcting errors.
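To make the security-inheritance point concrete, here is a minimal sketch, assuming the source system’s row-level rules have been replicated into a security_rules table of per-user predicates. The function, table, and predicate format are hypothetical, not Incorta’s API:

```python
# Illustrative sketch only: applying row-level security rules that were
# replicated from the source system. The security_rules table and its
# predicate format are hypothetical.
import duckdb

twin = duckdb.connect("finance_twin.db")

def query_as(user: str, sql: str):
    """Run a query with the source system's row-level filter applied."""
    # Look up the predicate replicated from the ERP's security setup,
    # e.g. "ledger_id IN (101, 204)" for a regional controller.
    row = twin.execute(
        "SELECT predicate FROM security_rules WHERE username = ?", [user]
    ).fetchone()
    predicate = row[0] if row else "FALSE"  # no rule means no rows
    # Wrap the user's query so the inherited filter always applies.
    return twin.execute(f"SELECT * FROM ({sql}) WHERE {predicate}").df()

# The same query returns only the rows each user is entitled to see:
# query_as("jdoe", "SELECT * FROM gl_journal_lines")
```

Because the rules travel with the data, access can be broadened without re-engineering security at every pipeline stage.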

CFOs today are trying to re-architect their systems to get closer to the data. But as long as they’re working with a warehouse- and star-schema-driven pipeline architecture that involves a number of technologies and teams, each step actually takes them farther away from the original data in both time and fidelity.

Unfortunately, everyone has accepted that this is the way it is, because it has been the paradigm for decades. As a result, teams try to find ways around it using RPA and self-service analytics tools. Up until Incorta, there hasn’t been another way to model application data.

Incorta’s architecture doesn’t rely on multiple steps and multiple technologies. Incorta owns the connection to, and ingestion of, the ERP or source system data, which reduces complexity and delivery time.

We also own the model generation for the business user. The way the technology works, there is simply one data model, assembled on the fly at query time by Incorta’s proprietary Direct Data Mapping technology. There is no pipeline bringing the data to the business. We simply bring the business to the data.
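Direct Data Mapping itself is proprietary, so the sketch below is only a conceptual approximation of what “assembling the model at query time” means: joins are resolved against the near-raw tables when a question is asked, rather than pre-baked into a star schema. All table and column names are hypothetical:

```python
# Conceptual sketch only: joins resolved at query time over near-raw
# tables, instead of a pre-aggregated star schema. Names are hypothetical.
import duckdb

twin = duckdb.connect("finance_twin.db")

# The grain stays at the transaction line, so a new question is just a
# new query, not a new data model.
result = twin.execute("""
    SELECT s.region,
           a.account_name,
           SUM(j.amount) AS total_spend
    FROM gl_journal_lines AS j
    JOIN accounts  AS a ON a.account_id  = j.account_id
    JOIN suppliers AS s ON s.supplier_id = j.supplier_id
    GROUP BY s.region, a.account_name
    ORDER BY total_spend DESC
""").df()

# Drilling down reuses the same data with one more column, no remodeling:
#   ... GROUP BY s.region, a.account_name, j.journal_id
```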

The biggest challenge in analytics today is that a data warehouse architecture requires you to predefine what data you want to see and how you want to see it.

That still works for things like quarterly financials or monthly treasury reports that don’t change much. But as CFOs become more involved in day-to-day decision making, there are more and more use cases where you can’t define your business requirements up front because the world is changing too fast.

These use cases require full, real-time access to data. The faster you can see and analyze it, the more value you can derive from it, and the more positive the impact of your decisions on the business.

Ready to see for yourself? Spin up a free trial and see how Incorta works for you today.
