Top Business Intelligence Trends 2023

Co-founder & CTO at 10 Senses

Are you ready to take your data analytics game to the next level? As we gear up for 2023, the Business Intelligence (BI) landscape is rapidly evolving. New trends and technologies are emerging that promise to transform the way we leverage data for decision-making, while established trends are either gaining momentum or falling out of favor.

In this article, I’ll take you on a deep dive into the top Business Intelligence trends that are shaping the landscape. From fostering a data-driven culture and bolstering data security, through data storytelling and saving money by consolidating and optimizing the data stack, to – last but not least – cutting-edge augmented analytics and explainable AI (XAI), we’ll explore the key developments that are poised to revolutionize the future of BI.

But that’s not all – we’ll also uncover some exciting challenges and opportunities that are just around the corner. So fasten your seatbelt and join me on a journey through the latest trends in BI for 2023. You might find a few of my choices controversial, especially the ones I’ve flagged as trends in decline. Let’s dive in!

Table of Contents

Trend 1. Data Governance / Data Literacy / Data-Driven Culture

First up on our list of top BI trends for 2023 is the growing importance of data governance, data literacy, and a data-driven culture. Theoretically, you could treat these as three separate trends; however, I’ve grouped them together for a reason. They are closely interconnected, and in my experience, when an organization tries to implement just one without considering the other two, the attempt is usually less successful than when the synergy of all three is allowed to deliver greater impact.

One of the biggest challenges companies face today is managing the sheer volume and complexity of data available to the data team (e.g. data scientists, data engineers, ML engineers) and making it available to business users. Data governance is crucial to ensure that data is accurate, reliable, and compliant with industry regulations. At the same time, you need to create a data-driven culture that promotes data literacy, so business users can understand and make decisions based on data insights. Without these elements in place, you risk making inaccurate or uninformed decisions that negatively impact the business.

This trend is supported by a recent survey, “Top Business Intelligence Trends 2023: What 1,823 BI Professionals Really Think.”

Take a look at this interactive visualization from BARC that shows the evolution of Business Intelligence trends. In this snapshot, I’ve highlighted the upward trends from 2019 to 2023. While my selection of top trends may differ from common views on this subject, it’s important to note (and I think you would agree) that pure statistical results do not always paint a full picture.

Importance of Business Intelligence Trends (Timeline) – Upward trends 2019-2023

So why should you care about this trend? Let’s dive in and discover what implementing those principles can potentially do for you:

  • Improve decision-making: Effective data governance and a data-driven culture lead to better-informed decisions based on data insights.
  • Build competitive advantage: Developing data literacy skills and using data-driven decision-making can be a way to gain competitive edge.
  • Future-proof business: Investing in data governance, data literacy, and a data-driven culture can help you stay ahead of the competition and adapt to changes in the market.
  • Increase ROI: Improving data governance and promoting a data-driven culture can help to reduce costs, optimize resources, and increase return on investment.
  • Drive business success: By identifying new opportunities, improving customer experience, and increasing operational efficiency, data-driven decision-making can drive business success.
  • Build trust: A strong data governance framework can help build trust with customers, employees, and partners by ensuring that their data is protected and used ethically.
  • Reduce risk: Effective data governance helps reduce risk by ensuring data is accurate, consistent, and compliant with industry regulations.
  • Promote innovation: A data-driven culture fosters collaboration and communication across departments, promoting innovation and creativity.
  • Ensure scalability: Proper data governance practices ensure that data is scalable and can be managed effectively as a company grows.
  • Enable agility: A data-driven culture enables a more agile approach and ability to respond quickly to changes in the market or industry.

If you are still not convinced, consider the landscape of data privacy and security regulations. I will focus on the EU, the US, and Canada, as those are the regions we monitor most closely in our team. It’s hard to imagine ensuring compliance and security without prioritizing strong data governance practices, data literacy, and a data-driven culture.

Let’s look at the regulations connected with data and analytics in the EU, the US, and Canada. Some of them are already in force and some are planned for the near future:

EU:

  • The General Data Protection Regulation (GDPR) has been in effect since May 2018 and sets the rules for the collection, use, and protection of personal data in the EU.
  • The Data Governance Act (DGA) entered into force in June 2022 to establish a framework for the governance of data in the EU.
  • In October 2022, the Digital Services Act (DSA) and the Digital Markets Act (DMA) were adopted to modernize the rules governing digital services. Both entered into force in November 2022.
  • The Artificial Intelligence Act (AI Act) has been proposed to establish a framework for the development and deployment of AI in the EU.

US:

  • The California Consumer Privacy Act (CCPA) went into effect in January 2020 and sets out the rules for the collection, use, and protection of personal data in California.
  • The California Privacy Rights Act (CPRA) took effect on January 1, 2023, and will become fully enforceable on July 1, 2023. It works as an addendum to the CCPA and significantly expands it.
  • The Consumer Data Privacy Act (CDPA) went into effect in Virginia on January 1, 2023, and establishes comprehensive rules for the collection, use, and sharing of personal data.
  • The Algorithmic Accountability Act (AAA) is in the works to regulate the impact of AI on privacy, security, and fairness.

Canada:

  • The Personal Information Protection and Electronic Documents Act (PIPEDA) has been in effect since 2001 and sets out the rules for the collection, use, and protection of personal data in Canada.
  • The Canadian government has proposed the Consumer Privacy Protection Act (CPPA) to modernize the rules governing the collection, use, and disclosure of personal information by businesses, but it has not yet been adopted.

As you can see, data governance, data literacy, and a data-driven culture are essential components of a modern data strategy, especially in the face of increasing data privacy and security regulations. And they are more than just compliance buzzwords; there are other benefits too. With robust data governance practices in place, your organization can better understand how data is collected, processed, and stored, while minimizing the risks associated with data privacy and security. A data-literate workforce helps ensure that data is used effectively and in compliance with regulatory requirements. At the same time, a data-driven culture can drive innovation and business value, further benefiting your organization. By embracing these principles, organizations can position themselves for success in a rapidly evolving and complex regulatory environment, while also reaping the benefits of data-driven insights.

As a short intermission: Given how often I am mentioning Data Security in this context, you might be wondering if it has made it to my list of the BI trends. It certainly has. It is on the list as a main trend below. You just need to keep reading… or scrolling.

So what does implementing data governance, data literacy, and a data-driven culture mean in practice? What should you focus on?

  • Develop and implement data governance policies, practices, and technologies to ensure the quality and accuracy of data as well as allow effective data management.
  • Foster a data-driven culture by providing training and education to employees on how to understand, analyze, and use data to make informed decisions.
  • Establish clear data ownership and accountability throughout the organization to ensure that data is accurate, consistent, and compliant with industry regulations.
  • Encourage collaboration and communication across different departments to ensure that data is used effectively and efficiently across the organization.
  • Promote a data-driven decision-making approach by creating a culture that values and utilizes data to assess success or failure of initiatives.
  • Identify and prioritize key data needs and use cases to ensure that data is being used effectively and efficiently to support business goals and objectives.
  • Establish KPIs and metrics to track the effectiveness of data initiatives.
  • Invest in technologies and tools that can help automate and streamline data governance and data literacy processes.
  • Ensure that employees have access to the right data and the right tools to make informed decisions.
  • Continuously monitor and improve processes to ensure that they are effective and efficient in supporting the organization’s goals and objectives.
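As a tiny illustration of what “investing in technologies that automate governance processes” can look like in practice, here is a hedged Python sketch of rule-based data classification – one small building block of data governance. The column names, labels, and regex rules are all invented for illustration; commercial data catalogs perform this kind of tagging at far greater scale and sophistication.

```python
# Minimal sketch: tag columns by sensitivity so access policies can be applied.
# Rules and labels are hypothetical examples, not an industry standard.
import re

RULES = [
    ("PII", re.compile(r"(email|phone|ssn|name|address)", re.I)),
    ("Financial", re.compile(r"(salary|revenue|invoice|iban)", re.I)),
]

def classify(column_name: str) -> str:
    """Return the first sensitivity label whose pattern matches the column name."""
    for label, pattern in RULES:
        if pattern.search(column_name):
            return label
    return "Public"

columns = ["customer_email", "invoice_total", "page_views"]
print({c: classify(c) for c in columns})
```

Even a toy classifier like this makes the governance idea tangible: once columns carry labels, masking rules and access controls can key off the labels instead of individual column names.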

Setting goals for effective data governance, promoting data literacy, and fostering a data-driven culture are important steps in building a modern data strategy. However, achieving these goals requires using the right tools for the job. That’s why we’ve compiled a list of essential tools and technologies to support your organization’s efforts in these areas. Here it is:

  • Business Intelligence (BI) and Analytics Platforms: BI tools and analytics platforms like Power BI, Tableau, and Qlik can help organizations derive insights from their data and create meaningful visualizations and reports to support decision-making by business leaders.
  • Data Governance Tools: tools like Collibra, Informatica, Alation, and Microsoft Purview can help organizations with data discovery, and data management, ensure data quality, and comply with industry regulations.
  • Data Catalogs: Data catalogs like Alation, Atlan, Collibra, Microsoft Purview can help organizations discover and classify data assets from various data sources, making it easier for employees to find and use data.
  • Data Literacy Tools: tools like DataCamp and Coursera can help employees develop their data literacy skills through online courses and training programs. They can be used to boost the skills of your data scientists and data engineers, as well as employees who don’t necessarily work with data on a daily basis.
  • Master Data Management (MDM) Tools: MDM tools like Informatica MDM and IBM InfoSphere can help organizations manage master data across the enterprise, ensuring consistency and accuracy.
  • Data Quality Tools: Data quality tools like Talend Data Quality, Informatica Data Quality, and Trifacta can help organizations identify and correct data quality issues that impact decision-making.
  • Data Lineage Tools: Data lineage tools like MANTA and Octopai can help organizations trace the movement of data from its source to its destination and provide a complete record of how data has been transformed and used throughout its lifecycle, enabling root cause analysis and impact analysis. Many data governance tools, such as Microsoft Purview, also implement this important functionality quite well.
  • Collaborative Workspaces: Collaborative workspaces like Microsoft Teams and Slack can help organizations promote a data-driven culture by facilitating communication, collaboration and secure data sharing across teams and departments.

Feeling overwhelmed by the range of tools and technologies listed above? You’re not alone, and what’s more, you’re in very good company. The good news is that our second trend addresses this challenge. With the ever-growing data stack, consolidating and optimizing resources has become more critical than ever to maximize cost efficiency and streamline operations. This trend promises to be a game-changer for organizations looking to make the most of their data and drive business success. Will it deliver? We will see. But it’s a trend we should not overlook.

Trend 2. Consolidation of the Data Stack

For the last several years, the data stack has been growing and becoming increasingly convoluted. Many organizations have accumulated a patchwork of disparate tools and systems to manage their data, resulting in a complex and inefficient infrastructure. Now, as we move into 2023, it’s no surprise that Consolidation of the Data Stack is emerging as a trend that promises to improve cost efficiency and streamline operations. Optimizing data infrastructure and reducing the number of tools and systems used to manage data can help your company achieve a more consistent and reliable view of data, ensuring data quality and integrity. Consolidating the data stack can also simplify the process of navigating and integrating data from various sources, benefiting data engineers, data scientists, and other members of the data team. This simplification can facilitate better collaboration with business users, ultimately leading to improved decision-making and driving business success. On the other hand, if the consolidation process is poorly executed or if the opposite happens (meaning bloating of the data stack), it could lead to failure or at least a business hiccup.

Let’s again look at why you should care about this trend and what the benefits can be:

  • Reduced complexity: By consolidating the data stack, you can reduce the complexity of the infrastructure, making it easier to manage and maintain.
  • Improved data quality: With a more streamlined data stack, you can more easily ensure data quality and integrity, leading to better decision-making.
  • Increased productivity: Consolidating the data stack can help reduce redundancy and waste, leading to improved productivity and cost savings.
  • Better collaboration: With a more consistent and reliable view of data, it’s easier for teams to collaborate and share insights across the organization.
  • Agility and adaptability: Consolidation of the Data Stack can enable a more agile and responsive approach to changing business needs, helping to stay ahead of the curve.
  • Competitive advantage: By streamlining operations and making the most of your data, you can gain an edge over competitors – or at least not fall behind.
  • Innovation and growth: Consolidation of the Data Stack is a key enabler of innovation and growth, as it helps to make the most of available data assets to drive business success.

How can you actually achieve that? Here are some possible strategies to consider:

  • Adopt a single, integrated data platform: One approach to consolidation is to adopt a single, integrated data platform that can handle all aspects of the data stack, from data storage and processing to visualization and reporting. This can help eliminate redundancy and streamline operations, leading to improved cost efficiency and productivity.
  • Standardize on a common data architecture: Another approach is to standardize on a common data architecture that can be used across the organization. This can help ensure consistency and reliability of data, making it easier to manage and maintain the data stack. Everyone in the data team will benefit, be it data scientists, data engineers or ML engineers.
  • Reduce reliance on legacy systems: Many companies have accumulated a patchwork of legacy systems over time, which can make the data stack more complex and difficult to manage. By reducing reliance on these legacy systems and consolidating around more modern, cloud-based platforms, you can achieve a more streamlined and cost-effective data stack.
  • Adopt best practices for data governance: Effective data governance is key to successful consolidation of the data stack. And that’s how we link this second trend to the first one. On a lighter note, if you’ve ever played any fighting games this starts to look like a beginning of an unstoppable combo: “Data Governance + Consolidation of the Data Stack”. If you’re wondering about the details, then look to the first trend in this article.
  • Balance data virtualization and data storage: Data virtualization is a technology that enables information from different data sources to be accessed and integrated in real-time, without the need for physical data movement. This can be a useful approach to consolidation, as it allows companies to reduce reliance on physical data storage and processing, leading to improved agility and cost efficiency. Keep in mind that sometimes actually moving and storing data using a common data architecture might be a better choice.
  • Use data lineage and impact analysis: Data lineage and impact analysis can help companies better understand the flow of data through the organization and the impact of changes to the data stack. This can help ensure data quality and integrity, and can be a useful tool for managing the consolidation of the data stack.
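The last point deserves a concrete illustration. Lineage is, at its core, a directed graph of data assets, and impact analysis is a graph traversal over it. Below is a minimal Python sketch under that assumption; the table names and edges are invented for the example, and real lineage tools (MANTA, Octopai, Microsoft Purview, etc.) extract this metadata automatically rather than having you hand-write it.

```python
# Hypothetical lineage graph: each key feeds the tables listed in its value
# (source -> downstream consumers). Names are illustrative only.
from collections import deque

lineage = {
    "crm.customers": ["staging.customers"],
    "staging.customers": ["marts.customer_360", "marts.churn_features"],
    "marts.customer_360": ["dashboards.sales_overview"],
    "marts.churn_features": ["ml.churn_model"],
}

def downstream_impact(source: str) -> set[str]:
    """Return every asset affected by a change to `source` (breadth-first walk)."""
    impacted, queue = set(), deque([source])
    while queue:
        node = queue.popleft()
        for child in lineage.get(node, []):
            if child not in impacted:
                impacted.add(child)
                queue.append(child)
    return impacted

print(sorted(downstream_impact("staging.customers")))
```

Before consolidating or retiring a system, a traversal like this answers the key question: which reports and models break if this table changes?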

Looking to streamline your organization’s data stack and improve cost efficiency? It’s time to take a closer look at the powerful tools and technologies available to help you achieve this goal:

  • Cloud-based data platforms: Microsoft Azure, Amazon Web Services (AWS), Google Cloud Platform, Snowflake are all examples of cloud-based data platforms that can provide a flexible and scalable infrastructure for managing data in a consolidated manner.
  • Data integration tools: Microsoft Azure Data Factory, AWS Glue, Talend, Informatica, SnapLogic, are examples of data integration tools that can help companies integrate data from multiple sources into a single, consolidated view.
  • Master data management (MDM) tools: Informatica MDM, Talend MDM, Profisee, and CluedIn are examples of MDM tools that can help companies manage master data, such as customer and product data, in a consolidated and consistent manner. Profisee and CluedIn, for example, can be integrated with Microsoft’s technology stack.
  • Data governance tools: Microsoft Purview, Collibra, Alation, and Informatica Axon are examples of tools that can help companies establish and enforce data policies and procedures, as well as ensure data quality and integrity.
  • Data virtualization tools: Denodo and Red Hat JBoss Data Virtualization are examples of data virtualization tools that can help companies access and integrate data from multiple sources in real-time, without the need for physical data movement. Azure SQL Managed Instance, for example, also offers data virtualization features.
  • Data lineage and impact analysis tools: Tools such as Informatica’s Enterprise Data Catalog, Collibra Lineage, and Microsoft Purview can help companies understand the flow of data through the organization and the impact of changes to the data stack. They can be useful for managing the consolidation of the data stack and ensuring data quality and integrity.
  • Separate Semantic Layer: This actually takes us to the next trend so I will present a bit more detail in the next section of this article.

It’s worth mentioning that cost savings and improved productivity are one thing – and in my view, they will be the main driving force behind the growth of this trend. However, there is another side to Consolidation of the Data Stack. As already pointed out, it can enable a more agile and responsive approach to changing business needs. By reducing the number of tools and systems used to manage data, you can streamline operations and make it easier to adapt to new data sources and technologies. This helps you stay ahead of the curve, enabling you to quickly identify and act on opportunities as they arise. Whether you’re a small startup or a large enterprise, Consolidation of the Data Stack, in my view, deserves a high place on this year’s list of business intelligence trends. This is something to watch. Additionally, it promises to be a key enabler of innovation and growth in the years to come.

Trend 3. Separate Semantic Layer

Traditional BI applications often tightly couple data modeling and visualization, making it challenging to maintain and update the application over time. Just imagine constantly having to rewrite visualizations because the underlying data model changed. It is a frustrating and time-consuming process that hinders the agility of organizations in a fast-paced business environment. Last year, the semantic layer approach finally started to catch up with other trends. For example, dbt introduced a feature called the Semantic Layer; at first it was available as a public preview, and now it has made it onto the official feature list. Google has been pushing for the integration of Looker semantic models into its other products, and Microsoft introduced Datamarts as a preview feature in Power BI. It’s just the beginning, but definitely a step in the right direction. I am very curious how the semantic layer will evolve and what adoption rate we will see this year.

AI generated abstract visualization of a semantic layer.
Definitely more like an abstract illustration than actual visualization. Not that surprising since
prompt engineering was quite limited on this one.

To shed some light on the subject, the Separate Semantic Layer approach decouples the data model and visualization layers, allowing changes to be made to the data model without affecting the visualization layer. This promises to make it easier to add new data sources, modify existing data models, and update visualizations without disrupting the overall application. By separating the two layers, data can be pre-processed and optimized before being fed into the visualization layer, reducing its workload and improving performance. Additionally, this approach promises greater flexibility in how data is visualized and presented: different visualizations can be created from the same data model. Finally, maintaining a separate semantic layer might make it easier to manage and update the application over time, reducing the risk of errors or data inconsistencies.

Now, let’s summarize the benefits:

  • Easier maintenance and updating of the application over time
  • Quick response to changing business needs
  • Improved performance through pre-processing and optimizing data
  • Greater flexibility in how data is visualized and presented
  • Reduced workload on the visualization layer
  • Easier management and updating of the application over time
  • Reduced risk of errors or data inconsistencies
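To make the decoupling concrete, here is a deliberately tiny Python sketch of the core idea: metric definitions live in one shared place, and any “visualization layer” consumes them. Every name here is hypothetical – real implementations use e.g. dbt’s Semantic Layer or LookML rather than Python dictionaries – but the separation of concerns is the same.

```python
# The "semantic layer": business metrics defined once, independent of any BI tool.
# Metric names, expressions, and the row schema are invented for illustration.
METRICS = {
    "revenue": {"expr": lambda row: row["price"] * row["qty"], "agg": sum},
    "order_count": {"expr": lambda row: 1, "agg": sum},
}

def compute_metric(name: str, rows: list[dict]) -> float:
    """Evaluate a named metric over raw rows using its shared definition."""
    metric = METRICS[name]
    return metric["agg"](metric["expr"](r) for r in rows)

# Two different "visualization layers" reuse the exact same definition:
orders = [{"price": 10.0, "qty": 3}, {"price": 5.0, "qty": 2}]
print(compute_metric("revenue", orders))      # a KPI card might show this
print(compute_metric("order_count", orders))  # a trend chart might plot this
```

If the definition of “revenue” changes (say, to exclude refunds), only the metric entry is edited – every dashboard consuming it updates consistently, which is exactly the maintenance benefit listed above.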

Since a fully separate Semantic Layer approach is still in its infancy it is important to approach the implementation process carefully to ensure success. The Semantic Layer is the point where the data team and the business team come together, so it’s crucial to remember that driving adoption and communication between teams is key. Here are some useful strategies to consider when implementing a Separate Semantic Layer:

  • Define the scope and requirements with laser focus: Before implementing a Separate Semantic Layer, it’s crucial to carefully define the scope and requirements of the project. This includes identifying the critical data sources, the types of data that matter most, and the most relevant visualization tools.
  • Choose the right data modeling tool and make it work for you: There’s no one-size-fits-all data modeling tool for creating a Separate Semantic Layer. It’s important to choose a tool that is well-suited to the organization’s unique needs and that provides the required functionality.
  • Standardize data definitions and naming conventions, but do it in a smart way: In order to maintain consistency and avoid confusion, it’s important to standardize data definitions and naming conventions across the organization. However, to be effective, the standardization process must be smart, agile, and focused on what matters most.
  • Implement a rigorous testing and validation process that will inspire confidence: Testing and validation are crucial steps in ensuring the accuracy and reliability of the data model. The process must be rigorous, thorough, and designed to inspire confidence in stakeholders.
  • Provide appropriate training and documentation: To ensure the success of the implementation, it’s vital to provide appropriate training and documentation to end-users and stakeholders. This includes providing inspiring training on the data model, visualization tools, and best practices for using the Separate Semantic Layer.

If you’re wondering what tools to use to implement a Semantic Layer in BI and analytics applications, here are a few examples. Keep in mind that it’s still the beginning of the journey towards a true, independent semantic layer:

  • dbt (Data Build Tool): dbt is an open-source data transformation tool that enables data analysts and engineers to transform and model data with SQL. dbt provides a range of features for implementing a Semantic Layer, including support for data source integration, data transformation and modeling, and visualization integration.
  • Power BI Datamart: Power BI Datamart is a feature in the Power BI platform that allows users to create a central, unified view of their organization’s data. The Datamart can act as a kind of Semantic Layer, allowing users to define data models and visualizations using a drag-and-drop interface, and also supports advanced modeling features such as calculated columns and measures with DAX.
  • Looker: Looker is a cloud-based BI platform that provides a semantic layer as part of its data modeling and visualization tools. Looker’s semantic layer includes a modeling language called LookML, which allows users to define data models and describe dimensions, aggregates, calculations, and data relationships.
  • Tableau Prep: Tableau Prep is a data preparation tool that provides a visual interface for data modeling and transformation. Tableau Prep includes a semantic layer called the logical layer. As far as I know, it does not allow you to define metrics, though, which I would say is a serious drawback.
  • Apache Superset: Apache Superset is an open-source BI platform that provides a lightweight semantic layer as part of its data modeling and visualization tools, allowing users to define metrics and calculated columns on top of their datasets using SQL.

The Separate Semantic Layer approach is gaining traction in the Business Intelligence (BI) and analytics space. It promises to improve the agility, performance, and efficiency of BI and analytics applications by decoupling the data modeling and visualization layers. While it’s not an entirely new concept, it is still relatively underdeveloped. The fully separate Semantic Layer approach remains in its infancy; last year we saw a range of preview features from companies such as dbt and Microsoft that showcased its potential, giving users the ability to create metrics in a separate layer decoupled from the BI report. Recent additions to this space definitely make this trend one to watch. As this third entry on the list of Business Intelligence trends continues to evolve, I expect further innovation and advancements in the tools for implementing the Separate Semantic Layer.

Trend 4. Data Security

The rising importance of data security is a trend that is shaping the way organizations approach their data management and protection strategies. With the increasing amount of sensitive data being collected, processed, and stored by organizations, it is becoming more crucial to protect this data against potential threats such as cyber-attacks, data breaches, and insider threats. The need for robust data security measures has only intensified as more employees work remotely, leading to new vulnerabilities and attack vectors.

In response, organizations are investing in advanced data security technologies and implementing more comprehensive security frameworks. This includes the use of encryption, access controls, and threat monitoring, as well as compliance with data privacy regulations such as GDPR and CCPA that I have already mentioned above. The rising importance of data security is not just a matter of protecting data against external threats, but also of maintaining the trust of customers and stakeholders. Organizations that can demonstrate strong data security practices are more likely to be trusted with sensitive data, and in turn, are better positioned to thrive in today’s data-driven business environment.

Data tools are evolving rapidly to meet the increasing demand for data security and compliance. Here are a few examples:

  • Data encryption: Data encryption tools are becoming more widely available, allowing organizations to protect sensitive data by encrypting it at rest and in transit. Some data tools also offer end-to-end encryption to ensure that data remains protected even if it is intercepted in transit.
  • Data governance platforms: It seems that data governance tools make it into almost every BI trend. In any case, data governance platforms are becoming more advanced, providing organizations with a central platform for managing data access, usage, and quality. These platforms can help ensure that data is used in compliance with regulatory requirements and organizational policies.
  • Data masking and anonymization: Data masking and anonymization tools are becoming more sophisticated, allowing organizations to protect sensitive data by masking or removing personally identifiable information (PII). This can help ensure that sensitive data is not exposed in the event of a breach or unauthorized access.
  • Compliance reporting: Data tools are increasingly offering built-in compliance reporting features, allowing organizations to generate reports on data access, usage, and compliance. These reports can help organizations demonstrate compliance with regulatory requirements and internal policies.
  • Automated data classification: Automated data classification tools are becoming more widely available, allowing organizations to automatically classify data based on its sensitivity or criticality. This can help organizations ensure that sensitive data is given the appropriate level of protection and security.
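For a flavor of what masking and anonymization tools do under the hood, here is a hedged Python sketch of two simple techniques: masking an email while keeping the domain useful for analytics, and salted-hash pseudonymization. The field names and salt are illustrative only; production tools add key management, format preservation, reversibility controls, and auditing on top of ideas like these.

```python
# Toy de-identification sketch using only the standard library.
import hashlib

SALT = b"rotate-me-and-store-securely"  # hypothetical; never hard-code in production

def mask_email(email: str) -> str:
    """Keep the domain for aggregate analytics, hide the local part."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

def pseudonymize(value: str) -> str:
    """Deterministic pseudonym: same input -> same token, not directly reversible."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

record = {"email": "jane.doe@example.com", "customer_id": "C-1042"}
safe = {
    "email": mask_email(record["email"]),
    "customer_id": pseudonymize(record["customer_id"]),
}
print(safe["email"])  # j***@example.com
```

Because the pseudonym is deterministic, the same customer still joins across tables, so analytics keep working even though the raw identifier never leaves the secure zone.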

Given the features outlined above, data governance tools are well-suited to support data security by providing a centralized platform for managing data access, usage, and quality. Below are examples of such tools, along with their key data security capabilities:

  • Collibra is a data governance platform that provides a central point of control for managing data assets. It includes features for data classification, access control, data lineage, and policy management to ensure data is used in compliance with regulatory requirements and organizational policies.
  • Informatica offers a selection of tools that enable organizations to manage data quality, metadata, and access controls. Its Data Security solution provides features such as data masking, encryption, and access controls to protect sensitive data.
  • Alation is a data catalog and governance platform that helps organizations understand their data assets and ensure data is used in compliance with policies and regulations. It includes features such as data lineage, data classification, and access controls.
  • Microsoft Purview is a data governance platform that provides features such as data discovery, cataloging, data lineage and access control to support data security. It enables organizations to discover and understand their data assets, and ensure that sensitive data is protected through features such as data masking and encryption. Purview also provides data classification and labeling capabilities, helping organizations to manage compliance with regulations and internal policies. Its built-in access controls allow organizations to control access to sensitive data and ensure that data is used in compliance with policies and regulations.
  • Apache Atlas is an open-source data governance and metadata management platform. It provides features such as data classification, data lineage, and access controls to help organizations manage data assets and ensure compliance with regulations.

It’s fascinating to see how the importance of data security is highlighting the critical connection between data governance and data literacy. As organizations face a growing number of data security regulations worldwide, they must prioritize strong data governance practices to protect sensitive data while minimizing risk. Data literacy among employees is an essential ingredient to ensure that data is used effectively and in compliance with regulatory requirements. This applies not only to the data team (data scientists, data engineers, and so on) but also to the final consumers of business intelligence and data analytics reports. That’s why these interconnected aspects of data strategy made it to the top of the business intelligence trends list.

Trend 5. Continuous integration / Continuous delivery (CI/CD) and automated testing for BI

Although continuous integration/continuous delivery (CI/CD) and automated testing are well-established in software development, they have been slower to gain traction in the business intelligence (BI) and analytics space. As data-driven decision-making becomes increasingly important, fast and reliable access to up-to-date data is critical. CI/CD for BI allows you to quickly build, test, and deploy BI solutions with the latest and most accurate data. Automated testing for BI solutions is also gaining popularity, reducing the risk of data inaccuracies and ensuring solutions are working as expected. Together, CI/CD and automated testing for BI are game-changers, driving faster time to value and improving data-driven decision-making in organizations.

CI/CD and automated testing for BI can offer significant benefits in terms of speed, reliability, and accuracy. Automating the build, testing, and deployment process for BI solutions helps you quickly iterate and deliver solutions, reducing time to market and improving data-driven decision-making quality. Automated testing ensures solutions work as expected and reduces the risk of data inaccuracies, so developers can focus on innovation and value creation. As data-driven decision-making grows more important, it’s clear that CI/CD and automated testing for BI will become increasingly vital to help organizations keep up with the pace of business and maximize their data’s value.

Let’s take a closer look at the benefits of CI/CD and automated testing in the Business Intelligence space:

  • Faster time-to-market for BI solutions
  • Improved quality of data-driven decision-making
  • Rapid iteration and delivery of BI solutions
  • Reduction in risk of data inaccuracies
  • Improved accuracy and reliability of BI solutions
  • Better utilization of developer time for innovation and value creation
  • Reduction in manual testing efforts and potential human errors
  • Reduction in overall development and maintenance costs of BI solutions

The so far slow adoption of CI/CD and automated testing in BI is probably due in part to the historical lack of integration between BI and software development processes. Unlike software development, BI has traditionally been treated as a standalone process, separate from the rest of the organization’s technology infrastructure. BI solutions are typically developed and deployed manually, with limited automated testing, and are often managed by a team that is separate from the rest of the organization’s technology team. This separation has made it challenging to apply software development best practices like CI/CD and automated testing to BI.

Another factor contributing to the slow adoption of CI/CD and automated testing in BI might be the complexity of BI solutions themselves. BI solutions often involve a large number of data sources, data transformations, and data models, which can make it challenging to build and deploy solutions in a rapid and automated manner. Additionally, the need for data accuracy and reliability in BI solutions can make it challenging to implement automated testing, as the testing of BI solutions often involves complex data comparisons and rule-based testing.

However, as the importance of data-driven decision-making continues to grow, the need for fast and reliable access to up-to-date data is becoming more critical.

One example is Power BI. Historically, and unfortunately still to this day, Power BI datasets and reports are built as Power BI Desktop .pbix files that are saved together with the underlying data. This has made it virtually impossible to use standard CI/CD tools based on git version control. However, there is now light at the end of the tunnel, as Microsoft recently introduced deployment pipelines, which enable CI/CD for Power BI Service. It’s important to note that deployment pipelines are currently only available with the Premium per user license or with Premium capacity. Nevertheless, this addition represents a significant step towards a more automated and integrated BI solution development and deployment process.

If we look at more standard approaches taken from software engineering, we have more options for CI/CD and automated testing:

  • Jenkins: an open-source automation server that can be used to set up continuous integration and continuous delivery pipelines for BI solutions
  • Azure DevOps: a cloud-based service that provides tools for creating and managing continuous integration and delivery pipelines for BI solutions
  • GitLab: a web-based Git repository manager that provides continuous integration and delivery capabilities for BI solutions
  • Apache Airflow: a platform to programmatically author, schedule, and monitor workflows for BI solutions
  • Selenium: an open-source automated testing tool that can be used to test the functionality of BI solutions. Theoretically possible, but in practice extremely hard due to the complex DOM structure generated by web-based BI reporting tools
  • JMeter: an open-source performance testing tool that can be used to test the performance and scalability of BI solutions
  • pytest: a testing framework for Python that can be used for automated testing of BI solutions
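To make this less abstract, here is a hypothetical pytest-style sketch of automated data tests for a BI dataset. The `fetch_*` helpers are placeholders I invented for real queries against the source system and the report’s dataset (SQL, a semantic-layer API, or an export); the checks themselves are typical examples of the rule-based, data-comparison testing such solutions need: matching row counts, no missing keys, and reconciling totals within a tolerance.

```python
# Hypothetical example: the fetch_* helpers and the hard-coded rows stand in
# for real queries against the source system and the BI report's dataset.

def fetch_source_rows():
    # Placeholder for a query against the source system
    return [{"order_id": 1, "revenue": 100.0}, {"order_id": 2, "revenue": 250.5}]

def fetch_report_rows():
    # Placeholder for a query against the dataset behind the BI report
    return [{"order_id": 1, "revenue": 100.0}, {"order_id": 2, "revenue": 250.5}]

def test_row_counts_match():
    # The report should not silently drop or duplicate rows
    assert len(fetch_source_rows()) == len(fetch_report_rows())

def test_no_missing_keys():
    # Every report row should carry its business key
    assert all(r["order_id"] is not None for r in fetch_report_rows())

def test_revenue_reconciles():
    # Aggregates in the report should reconcile with the source,
    # allowing a small tolerance for floating-point rounding
    source_total = sum(r["revenue"] for r in fetch_source_rows())
    report_total = sum(r["revenue"] for r in fetch_report_rows())
    assert abs(source_total - report_total) < 0.01
```

Wired into a Jenkins, Azure DevOps, or GitLab pipeline, tests like these run on every change to the BI solution and fail the build before an inaccurate dataset ever reaches business users.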

Curiously, in the fast-paced world of business intelligence and analytics, the adoption of Continuous Integration and Continuous Delivery (CI/CD) and automated testing has been relatively slow. However, the benefits of adopting these practices are becoming increasingly apparent. With the recent introduction of CI/CD capabilities in Power BI Service and the continued rise of the Infrastructure as Code (IaC) paradigm, we can expect to see more organizations embracing this trend in the coming years.

This concludes my list of top 5 business intelligence trends for 2023. Of course, this list is based mostly on my opinion, and you don’t have to agree with the selection. Let’s now move to a few notable mentions of business intelligence trends that are on the rise, and a few that are in decline. I think you’ll find at least a few of them controversial.

Notable mentions

As I’ve already mentioned a few times the world of business intelligence and analytics is constantly evolving, and with it, so are the trends that shape how organizations leverage data to drive success. Here are some of the most notable trends that are currently on the rise:

Data Storytelling

This involves the use of data and visualizations to tell a compelling and persuasive story. Historically, it was the data scientist’s job to present data analytics results in the most communicative way. But data storytelling is important for all organizations because it simplifies complex information and engages a wider audience. It helps bridge the gap between data and decision-making by creating a shared understanding of the data across the organization. By telling a compelling story, organizations can inspire action and drive better decision-making for improved business outcomes. It can also build a data-driven culture where data is valued and its insights are integrated into decision-making.

BI-generated Alerts / Automations

This trend involves the use of automated alerts and notifications to inform business users of important trends or changes in their data. It fits in with the larger trend of hyperautomation, which involves using advanced technologies like AI and machine learning to automate business processes. Automated alerts and notifications help organizations quickly identify important trends or changes in their data and take action. This improves operational efficiency and responsiveness, and can give organizations a competitive edge in today’s fast-paced business environment. BI-generated alerts and automations are an important step towards achieving the benefits of hyperautomation in the BI and analytics space.
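As a minimal sketch of what such alerting can look like, here is a hypothetical Python rule engine. The `AlertRule` structure and the metric names are invented for illustration; a real implementation would evaluate rules on each data refresh in the BI platform and push the resulting messages to a notification channel such as e-mail, Teams, or Slack.

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    metric: str       # name of the KPI to watch
    threshold: float  # boundary value that triggers the alert
    direction: str    # "above" or "below" the threshold

def evaluate_alerts(metrics: dict, rules: list) -> list:
    """Return a human-readable alert message for every rule that fires.

    `metrics` maps KPI names to their latest values; rules whose metric
    is absent from the snapshot are skipped rather than treated as firing.
    """
    alerts = []
    for rule in rules:
        value = metrics.get(rule.metric)
        if value is None:
            continue
        fired = value > rule.threshold if rule.direction == "above" else value < rule.threshold
        if fired:
            alerts.append(f"{rule.metric} is {value}, {rule.direction} threshold {rule.threshold}")
    return alerts
```

For example, with rules like `AlertRule("daily_revenue", 10000, "below")` and `AlertRule("error_rate", 0.05, "above")`, a metrics snapshot of `{"daily_revenue": 8200, "error_rate": 0.01}` would fire only the revenue alert.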

Embedded Analytics

This trend involves the integration of analytics and business intelligence capabilities directly into applications used by internal or external customers. It addresses the growing demand for seamless and context-driven access to data insights. Traditional BI solutions require users to switch between different applications and interfaces to access data, which can be time-consuming and inefficient. With embedded analytics, data insights are presented in the context of the application or workflow where they are needed. This lets users make data-driven decisions without leaving their primary application. Embedded analytics improves the user experience and increases the adoption of analytics and business intelligence capabilities by a wider audience, including operational staff or customers who may not have the technical expertise or time to navigate a separate BI tool.

Augmented Analytics

This is an exciting trend that promises to revolutionize the way we analyze data. However, in my opinion its progress might be slower than anticipated, and its full potential may not be realized yet. For example, Power BI has offered a Q&A visual for quite some time, which allows users to create visuals using natural language. While this is helpful, creating a good data visualization often involves an iterative process similar to scientific research, rather than a straightforward set of instructions. The impact of pre-trained Large Language Models (LLMs) in natural language processing (NLP) and tools like ChatGPT on the future of augmented analytics remains to be seen. Prompt engineering is a new discipline that may accelerate this trend in the near future. However, it’s possible that we may not see a sharp rise in augmented analytics in 2023.

Explainable AI (XAI)

Finally, XAI is another exciting trend, making AI algorithms and models more transparent and interpretable. This is particularly relevant as AI continues to become more pervasive in business and society. By building trust in AI models and improving decision-making, XAI can help to ensure that AI is used in an ethical and transparent way that protects the rights and interests of individuals and society as a whole. In fact, explainability is one of the key factors being considered in the proposed AI Act in the European Union, which aims to set ethical standards for the development and deployment of AI technologies. As organizations increasingly rely on AI and other advanced technologies, XAI can help to promote greater trust and adoption of AI by making it more interpretable and accountable.
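To ground the idea, here is a small, self-contained sketch of one simple model-agnostic XAI technique: permutation importance. The intuition is to shuffle one feature column at a time and measure how much the model’s score drops; a large drop means the model relied heavily on that feature. The pure-Python implementation below is illustrative only, not production code; mature libraries offer far richer explanation methods.

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic permutation importance.

    `model` is any callable mapping a feature row to a prediction,
    `metric` scores predictions against `y` (higher = better).
    Returns one importance per feature: the average score drop
    observed when that feature's column is randomly shuffled.
    """
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    importances = []
    n_features = len(X[0])
    for j in range(n_features):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            score = metric(y, [model(row) for row in X_perm])
            drops.append(baseline - score)
        importances.append(sum(drops) / n_repeats)
    return importances
```

With a toy model that predicts purely from the first feature (`model = lambda row: row[0]`), the second feature’s importance comes out as exactly zero, making the explanation easy to sanity-check. That kind of transparency, applied to real models, is what XAI aims to bring to business decision-makers.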

I will finish with a few trends that have been discussed for several years now but, in my opinion, are currently in decline:

Once again, I present a snapshot of the evolution of Business Intelligence trends from 2019 to 2023 according to BARC survey respondents, this time with the downward trends selected.

Importance of Business Intelligence Trends (Timeline) – Downward trends 2019-2023

Self-Service BI

This trend of democratizing data analysis aims to empower business users to create their own reports, visualizations, and dashboards with user-friendly interfaces. I know that putting this in the declining category might be controversial, as almost every BI tool is marketed with the promise of Self-Service BI. And if you look at BARC’s survey, the scores for this trend are also quite high. However, in my experience, more data-mature organizations tend to rely on specialized teams of data engineers and BI developers to build both the data and visualization layers. Then, end-users can drill down and query the data further. Don’t get me wrong, I am a strong believer in data democratization and data literacy, however not necessarily in Self-Service BI. To me, this trend feels more like a dream than an achievable reality. I think the suppliers of BI solutions are beginning to notice this as well. Trends like embedded analytics, CI/CD, and the Semantic Layer offer more flexibility to BI developers, instead of limiting their options to a closed set of use cases that fit into a UI. Of course, some forms of self-service analytics can be valuable and empowering for employees to use data. However, if left unchecked, it can lead to data silos, inconsistencies, and potentially chaotic data environments. Ultimately, organizations must balance the need for self-service with the need for centralization, governance, and data quality.

Data Preparation by Business Users

This trend involves the use of tools and techniques that enable business users to prepare their own data for analysis. Similarly to self-service BI, while still a valuable approach, this trend is on the decline as organizations look to centralize and automate data preparation to improve data quality and governance.

Mobile BI

Mobile BI is the delivery of BI and analytics capabilities to mobile devices, but it has seen a decline in recent years. Despite the potential for powerful insights on the go, organizations have shifted their focus to delivering analytics directly within the tools and applications that users already use on a daily basis. While still useful in certain contexts, the ease of sharing a few KPIs via email or messenger with a screenshot has made it less of a sought-after feature. Additionally, the need for larger screens to drill down into data can limit the utility of mobile BI. Of course, it’s nice to be able to access KPIs and charts in an easily digestible form on a mobile phone, but it just does not add enough value to be a focus point.

As we look ahead to the future of Business Intelligence, it’s clear that the times are… interesting. With an array of emerging trends and technologies, new paths and avenues are being forged. Established trends are gaining even greater importance, while some previously prominent contenders are losing traction. It will be fascinating to watch these trends unfold and see how they impact the world of BI and analytics. While we’ve explored some of the key trends that are shaping the industry, it’s important to remember that these are just my opinions, and I might be proven wrong. One thing’s for sure – the world of Business Intelligence is always evolving, and it’s up to us to stay ahead of the curve and embrace the changes that lie ahead.

Are you curious to learn more…

…then don’t miss out on our upcoming virtual conference “Business Intelligence Trends 2023”! This carefully curated event is designed to help you stay ahead of the curve and explore the latest tools and techniques for driving business success and saving money through data-driven decision-making. Join us to hear from top industry leaders and experts as they share their insights and expand on the most relevant BI trends for 2023, with a special focus on key value drivers for Power BI. Secure your spot and join us at:

Talk to our expert

Are you looking for expert skills for your next data project?

Or maybe you need seasoned data scientists to extract value from data?

Fill out the contact form and we will respond as soon as possible.