The data-driven organization, digital transformation, and the data economy are all hot topics in board rooms. They all imply that the role of data is changing. Organizations want to use data more widely, more effectively, and more efficiently: a real data disruption. However, data disruption clearly raises the bar for developing IT systems, because it leads to more complex systems involving, for example, AI, sensor technology, and real-time analytics. Luckily, so much advanced technology is available that we can build almost anything the business needs. So the technology is ready, but is IT itself ready? You would think so. IT has more than fifty years of experience with data modeling, data architectures, data strategies, data warehouses, and databases. Yet if we look at our track record for developing more traditional IT systems, we have to conclude that some of our projects have not been delivered on time or within budget, and some have been cancelled altogether. So how will IT perform if the complexity increases dramatically? What can we learn from all the experience we have gained? What do we have to change in order to participate in this increasingly data-driven economy, in which digital transformation is the magic word for everyone? This keynote addresses these questions and discusses recommendations on how IT specialists and IT management need to change to be able to deal with the actual data disruption: the IT disruption.
Jaap-Willem Verheij van Wijk will open the session with an introduction to Qlik and then share customer insights, including EWALS Cargo Care, who have continuously innovated with data to stay ahead of the curve. See how they have solved the latest data challenges in logistics and how they automate the data warehouse lifecycle. Jaap will also share how AEGON have solved their manual data stream issues and can now integrate multiple data sources with a small team that supports the agility of their business demands.
APG is the largest pension provider in the Netherlands and sees data as a crucial asset for current and future business operations. The government is increasingly withdrawing from providing adequate retirement provisions (raising the state retirement age, reducing pension accrual), so insight into the personal situation of participants and offering them actionable perspectives is crucial. APG also wants to be a leader both as a pension administrator and as an investor.
All this has led to data being earmarked as a strategic asset. But how do you get from that ambition to execution along the various axes of technology, capability, culture and organization? This presentation tells the integral story of the journey made over the past period: what went well, what didn't, what did we learn from it, and where are we now?
The development of an appropriate architecture, the building of the right knowledge and skills, the combination with modern ways of working, and the challenges of collaborating across business units will all be discussed:
Ensembling is one of the hottest techniques in today’s predictive analytics competitions. Every recent winner of Kaggle.com and KDD competitions has used an ensemble technique, including famous algorithms such as XGBoost and Random Forest.
Are these competition victories paving the way for widespread organizational adoption of these techniques? This session will provide a detailed overview of ensemble models and their origins, and show why they are so effective. We will explain the building blocks of virtually all ensemble techniques, including bagging and boosting.
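For readers new to the terminology, here is a minimal sketch (not part of the session material) that contrasts a single decision tree with bagged, random-forest, and boosted ensembles, using scikit-learn and a built-in example dataset:

```python
# Minimal sketch, not from the session: comparing a single decision tree with
# three common ensemble techniques on a built-in scikit-learn dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "single decision tree": DecisionTreeClassifier(random_state=0),
    "bagging (100 trees)": BaggingClassifier(n_estimators=100, random_state=0),
    "random forest (100 trees)": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

# Cross-validated accuracy; the ensembles typically outperform the single tree.
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```

The design difference in a nutshell: bagging trains many trees independently on resampled data and averages them to reduce variance, while boosting trains trees sequentially, each one correcting the errors of its predecessors.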
What You Will Learn:
The digital future: think big, think highly distributed data, think in ecosystems. What integration architecture is needed to play an important role in a digital world of ecosystems with FinTech companies and other banks? Do Enterprise Data Warehouses still have a role in this landscape? That is what this presentation is about. Piethein will also cover:
Self-service analytics has been the holy grail of data analytics leaders for the past two decades. Although analytical tools have improved significantly, it is notoriously difficult to achieve the promise of self-service analytics. This session will explain how to empower business users to create their own reports and models without creating data chaos. Specifically, it examines seven factors for leading a successful BI program: right roles, right processes, right tools, right organization, right architecture, right governance, and right leadership. Ultimately, it will show how to build a self-sustaining analytical culture that balances speed and standards, agility and architecture, and self-service and governance.
You will learn:
With data increasingly being seen as a critical corporate asset, more organisations are embracing the concepts and practices of Data Governance. As a result, Data Governance is today one of the hottest topics in data management. It focuses both on how governance-driven change can enable companies to gain better leverage from their data through enhanced Business Intelligence, Data Analytics and so on, and on how to design and enforce the controls needed to remain compliant with increasingly stringent laws and regulations, such as the GDPR.
Despite this rapidly growing focus, many Data Governance initiatives fail to meet their goals, with only around one in five fully achieving expectations. Why is the failure rate so high? There are many factors, but one key reason is that implementing Data Governance without aligning it with a defined enterprise and data architecture is fraught with dangers. Linking Architecture with data accountability, a core principle of Data Governance, is essential.
This session will outline why Data Governance and Architecture should be connected, how to make it happen, and what part Business Intelligence and Data Warehousing play in defining a robust and sustainable Governance programme.
This talk will cover:
Cloud data warehousing helps to meet the challenges of legacy data warehouses that struggle to keep up with growing data volumes, changing service level expectations, and the need to integrate structured warehouse data with unstructured data in a data lake. Cloud data warehousing provides many benefits, but cloud migration isn’t fast or easy. Migrating an existing data warehouse to the cloud is a complex process of moving schema, data, and ETL. The complexity increases when architectural modernization, restructuring of database schemas, or rebuilding of data pipelines is needed.
This session provides an overview of the benefits, techniques, and challenges of migrating an existing data warehouse to the cloud. We will discuss the pros and cons of cloud migration, explore the dynamics of migration decision making, and look at migration pragmatics within the framework of a step-by-step migration approach. The tips and techniques described here will help you to make informed decisions about cloud migration and address the full scope of migration planning.
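To make the “schema, data, and ETL” scope concrete, below is a minimal, hypothetical sketch of staging the first two of those for a cloud bulk load; the database file, table names, and file-based staging approach are illustrative assumptions, not a migration method prescribed by the session:

```python
# Hypothetical sketch: export schema (DDL) and data from a local source database
# into files that a cloud warehouse bulk loader could ingest. All names are made up.
import csv
import sqlite3

SOURCE_DB = "legacy_dwh.db"              # assumed on-premises warehouse extract
TABLES = ["dim_customer", "fact_sales"]  # assumed table names

conn = sqlite3.connect(SOURCE_DB)

# 1. Export table definitions so they can be reviewed and adapted to the target SQL dialect.
placeholders = ",".join("?" for _ in TABLES)
with open("schema.sql", "w") as ddl:
    for (stmt,) in conn.execute(
        f"SELECT sql FROM sqlite_master WHERE type = 'table' AND name IN ({placeholders})",
        TABLES,
    ):
        ddl.write(stmt + ";\n")

# 2. Export data to flat files for a staged bulk load into the cloud warehouse.
for table in TABLES:
    cursor = conn.execute(f"SELECT * FROM {table}")
    with open(f"{table}.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow([col[0] for col in cursor.description])
        writer.writerows(cursor)

conn.close()
```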
You Will Learn:
Data vault, ensemble logical modeling, data virtualization and cloud are known to every BI or data warehouse specialist. But the big question is how you can use them together to develop real-life systems and make optimum use of the power and possibilities of each component. This session explains how they can all be used together efficiently. Key to this is the new concept of “data routes”. Within a data and analytics architecture, data routes serve as the fuel for the virtual data presentation layer that end users access for all their data needs.
The concept proposes a data-oriented way of processing that rests on the aforementioned concepts: data vault, ensemble modeling and data virtualization. This decouples data from technology and shifts the emphasis to the characteristics of the data and the requirements set by the use cases. The result is offered as a virtual (semantic) data layer to a broad group of data users. With the help of data virtualization, a virtual data collection is built as a virtual data portal for data users.
Unifying the collection of data is necessary in order to ultimately make better decisions. Without uniformity, a decision culture is built on shifting sand and gut feeling. In our contemporary information landscape, we see a growing need to integrate more and more: cloud services and storage, new sources of information, and API-led developments. This makes Data Management more relevant than ever. Which aspects are essential for today’s information landscape?
Data Virtualization, Data Quality, Reference Data Management, Master Data Management, and Metadata Management, as parts of Data Management, enable organizations to coordinate the different data silos and improve their decisions.
Our central question is: “How can TIBCO support digital transformation initiatives in this?” Data is the foundation for operational excellence, customer intimacy, and business reinvention. TIBCO’s Unify portfolio is the cornerstone of a data-driven initiative for Operations, Data Governance, and Analytics.
Over the past five years, cloud database systems have truly broken through. The cloud makes it possible to convert upfront capital investments into operational costs, so that you only pay for the capacity you actually use and never have to worry about capacity problems. On top of that, cloud database systems take work off your hands, in the sense that the administration of the database systems and the underlying hardware lies with the cloud provider. In times of staff shortages, that is another important factor behind the success of cloud database systems, one that often outweighs the potential drawbacks around lock-in and concerns about privacy and security.
But once the decision has been made to move the database to the cloud, which one should you choose? There are already a great many cloud systems. Amazon offers Aurora, Redshift, Neptune and Athena, among others. Microsoft has SQL Server and Cosmos DB. Google offers BigQuery, among others. And new companies specializing in cloud services have emerged, such as Snowflake and Databricks.
To better understand the similarities and differences between all these new cloud systems, Peter Boncz will look at what is under the hood of these systems. The various alternatives are dissected technically and compared with each other.
Some of the topics that will be covered:
This session will expose analytic practitioners, data scientists, and those looking to get started in predictive analytics to the critical importance of properly preparing data in advance of model building. The instructor will present the critical role of feature engineering, explaining both what it is and how to do it effectively.
Emphasis will be given to those tasks that must be overseen by the modeler – and cannot be performed without the context of a specific modeling project. Data is carefully “crafted” by the modeler to improve the ability of modeling algorithms to find patterns of interest.
Data preparation is often associated with cleaning and formatting the data. While important, these tasks will not be our focus. Rather, the focus is on how the human modeler creates a dataset that is uniquely suited to the business problem.
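As a small, hypothetical illustration of such crafting (not taken from the course material), the pandas sketch below derives recency, frequency, and monetary-value features per customer from raw order rows, the kind of features one might craft for a churn model:

```python
# Hypothetical sketch: hand-crafted features derived from raw order rows with pandas.
import pandas as pd

# Raw data: one row per order (made-up example).
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "order_date": pd.to_datetime(
        ["2024-01-03", "2024-02-14", "2024-01-20", "2024-03-02", "2024-03-28"]
    ),
    "amount": [120.0, 80.0, 35.0, 60.0, 45.0],
})

# Features crafted with the business problem in mind: recency, frequency,
# and monetary value per customer, computed relative to a snapshot date.
snapshot = pd.Timestamp("2024-04-01")
features = orders.groupby("customer_id").agg(
    days_since_last_order=("order_date", lambda d: (snapshot - d.max()).days),
    order_count=("order_date", "count"),
    avg_order_amount=("amount", "mean"),
).reset_index()

print(features)
```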
You will learn:
With the democratization of analytical capabilities and wider access to data, questions arise about governance and about the regulatory and ethical compliance of data usage. Locking all data down is not the answer, as we would lose too much value.
The top-down, waterfall-like models of Data Governance 1.0 are not well suited to these new paradigms. The presentation focuses on the steps you need to take to get sustainable and compliant value out of your big data through (Self-Service) Analytics.
You will learn:
When it comes to data analytics, you don’t want to know “how the sausage is made.” The state of most data analytics pipelines is deplorable. There are too many steps; too little automation and orchestration; minimal reuse of code and data; and a lack of coordination between stakeholders in business, IT, and operations. The result is poor quality data delivered too late to meet business needs.
DataOps is an emerging approach for building data pipelines and solutions. This session will explore trends in DataOps adoption, challenges that organizations face in implementing DataOps, and best practices in building modern data pipelines. It will examine how leading-edge organizations are using DataOps to increase agility, reduce cycle times, and minimize data defects, giving developers and business users greater confidence in analytic output.
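As one small illustration of the automation DataOps promotes, the sketch below shows an automated data-quality gate that a pipeline could run before publishing a table; the table, columns, and checks are hypothetical and not taken from the session:

```python
# Hypothetical sketch: a data-quality gate run as an automated pipeline step.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list:
    """Return a list of data-quality violations; an empty list means the data passes."""
    problems = []
    if df["order_id"].duplicated().any():
        problems.append("duplicate order_id values")
    if (df["amount"] <= 0).any():
        problems.append("non-positive order amounts")
    if df["customer_id"].isna().any():
        problems.append("missing customer_id values")
    return problems

if __name__ == "__main__":
    orders = pd.DataFrame({
        "order_id": [1, 2, 2],
        "customer_id": [10, None, 12],
        "amount": [99.0, -5.0, 20.0],
    })
    issues = validate_orders(orders)
    if issues:
        # In a real pipeline the orchestrator would fail this step and alert the team.
        raise SystemExit("Data quality check failed: " + "; ".join(issues))
    print("Data quality check passed")
```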
You will learn:
In the highly complex world of semiconductor manufacturing, vast amounts of highly varied data are generated every day. ASML, the world-leading manufacturer of machines for the production of semiconductors (chips), is implementing a central data lake to capture this data and make it accessible for reporting and analytics in a central environment. The data lake environment also includes an analytics lab for detailed exploration of data. Managing all this rapidly changing data imposes some very challenging requirements. In this session, real-life examples of how ASML approaches these challenges are presented.
Many IT systems are more than twenty years old and have undergone numerous changes over time. Unfortunately, they can no longer cope with the ever-increasing growth in data usage in terms of scalability and speed. In addition, they have become inflexible, which means that implementing new reports and performing analyses has become very time-consuming. In short, the data architecture can no longer keep up with the current “speed of business change”. As a result, many organizations have decided that it is time for a new, future-proof data architecture. However, this is easier said than done. After all, you don’t design a new data architecture every day. In this session, ten essential guidelines for designing modern data architectures are discussed. These guidelines are based on hands-on experiences with designing and implementing many new data architectures.
Limited time?
Can you only attend one day? It is possible to attend only the first or only the second conference day, as well as the full conference. The presentations by our speakers have been selected in such a way that they can stand on their own. This enables you to attend the second conference day even if you did not attend the first (or the other way around).
View the Adept Events calendar
“Longer sessions created room for more depth and dialogue. That is what I appreciate about this summit.”
“Inspiring summit with excellent speakers, covering the topics well and from different angles. Organization and venue: very good!”
“Inspiring and well-organized conference. Present-day topics with many practical guidelines, best practices and do's and don'ts regarding information architecture such as big data, data lakes, data virtualisation and a logical data warehouse.”
“A fun event and you learn a lot!”
“As a BI Consultant I feel inspired to recommend this conference to everyone looking for practical tools to implement a long term BI Customer Service.”
“Very good, as usual!”