YOU CAN ATTEND DAY 1, DAY 2, OR BOTH! WITH WORKSHOPS OF YOUR CHOICE.
It is no longer about convincing management of the value of using data to create business value. The real question has shifted to ensuring that this value is delivered in a sustainable way. Too many organizations still fail to actually extract value from their data initiatives. What are the key elements that must be put in place to guarantee success? How do you move from a technology-centric to an integrated data strategy? How do we improve stakeholders' data literacy and ensure that data products can be used effectively? With clockwork regularity, new concepts such as data fabric and data mesh are introduced, and the question remains to what extent they offer new solutions or introduce new problems.
You will learn:
With the arrival of the new pension contract in the Netherlands, APG faces a challenge: how will the pension rights of millions of participants be converted to the new pension scheme? Data management plays a key role in this transition. Arjen Bouman offers a look behind the scenes of this enormous operation and shares the experience and lessons learned he gained along the way.
The meaning of data is receiving more and more attention. Data lineage, the traceability of data to its meaning and the purpose for which it is used, is increasingly a critical success factor. In addition, the ever-growing variety of data makes it necessary to get a grip on the individual data sources. The scarcity of data specialists makes it necessary to handle data more intelligently and to make the available knowledge explicit. The introduction of a distributed data architecture provides the final push to get the "information house in order".
The processing of data is therefore not only a logistical challenge; it also requires a reliable approach to charting the meaning of data, one that goes beyond the traditional description of the data warehouse structure: a semantic approach is required.
This semantic approach takes the problem space as the starting point for the description: the domain the data is about. From a precise analysis and model of the domain follows a traceable translation and relationship to the model of the data itself in the solution space. The result can be seen as a knowledge graph: a network of connected (linked) data, including the definitions of that data and its lineage to the foundations of that data in laws and regulations, compliance guidelines and business definitions.
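As a minimal illustration of the linked-data idea sketched above (our own sketch, not necessarily the approach presented in this session), the following Python snippet uses the rdflib library to record a business concept, its human-readable definition, and its lineage to a hypothetical legal source as triples in a small knowledge graph:

    from rdflib import Graph, Literal, Namespace, RDF, RDFS, URIRef

    # Hypothetical namespace for an organization's business vocabulary.
    BIZ = Namespace("https://example.org/vocab/")
    g = Graph()

    # A business concept with an explicit, human-readable definition...
    g.add((BIZ.PensionRight, RDF.type, RDFS.Class))
    g.add((BIZ.PensionRight, RDFS.label, Literal("Pension right", lang="en")))
    g.add((BIZ.PensionRight, RDFS.comment,
           Literal("An entitlement of a participant to future pension benefits.")))

    # ...and lineage to its legal foundation (hypothetical source URI).
    g.add((BIZ.PensionRight, RDFS.isDefinedBy,
           URIRef("https://example.org/regulation/pension-act#art-1")))

    # Query: which concepts are grounded in which sources?
    for concept, source in g.subject_objects(RDFS.isDefinedBy):
        print(f"{concept} is defined by {source}")

Because definitions and lineage live in the same graph as the data model itself, questions such as "which concepts lack a legal grounding?" become simple graph queries.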
Such an approach is not only relevant to the data warehouse: the result is an explicit, unambiguous record of the knowledge about the relevant data in an organization. Marco Brattinga takes you into the world of enterprise semantic data management along the following topics:
As data scientists, our impact on the world around us grows significantly every day. But what concrete steps can we take to become responsible data scientists? In this session you will be introduced to responsible data science and to how ethics can be integrated into technology. Tanja Ubert and Gabriella Obispa present their vision of how we should incorporate 'responsibility' into our work with data. Which questions should we ask, and which responsibility do we, as specialists, have to take when we collect, use and deploy data solutions in our world?
Data architectures are becoming increasingly complex due to the need to serve many purposes: multiple personas, ranging from operational data users to data scientists, need access to a variety of managed, governed data and demand real-time, self-service reporting and analytics. Applying principles while designing data architectures will help simplify the development and usage of those architectures by developers and end users. We apply the following principles:
In this session we will show how Connected Data Group and 2150 Datavault Builder work together in designing the simplified architecture by focusing on data modelling with Data Vault and automating the data engineering process with Datavault Builder.
During this session you will learn:
How do you reap the full benefits of Data Vault? Data Vault is the modeling approach for becoming agile in data warehousing, and it is especially powerful when the technical implementation is abstracted away through automation. Datavault Builder combines its Data Vault driven data warehouse approach with a standardized development process that makes it possible to scale and allocate development resources flexibly.
Quickly develop your own Data Warehouse. Rely on the visual element of Datavault Builder to facilitate the collaboration between business users and IT for fully accepted and sustainable project outcomes. Immediately lay the foundation for new reports or integrate new sources of data in an agile way. Deliver new requirements and features with fully automated deployment. Agile Data Warehouse development and CI/CD become a reality.
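For readers new to the pattern, the sketch below shows the Data Vault core in miniature: business keys in hubs, relationships in links and descriptive history in satellites. This is our own illustration with hypothetical table names, using Python's built-in sqlite3 module for self-containment; it is not code generated by Datavault Builder:

    import sqlite3

    # A minimal Data Vault core: hub (business key), satellite (history),
    # link (relationship). Table and column names are hypothetical.
    ddl = """
    CREATE TABLE hub_customer (
        customer_hk   TEXT PRIMARY KEY,   -- hash of the business key
        customer_bk   TEXT NOT NULL,      -- the business key itself
        load_dts      TEXT NOT NULL,
        record_source TEXT NOT NULL
    );
    CREATE TABLE sat_customer_details (
        customer_hk   TEXT NOT NULL REFERENCES hub_customer(customer_hk),
        load_dts      TEXT NOT NULL,      -- a new row per change: full history
        name          TEXT,
        city          TEXT,
        record_source TEXT NOT NULL,
        PRIMARY KEY (customer_hk, load_dts)
    );
    CREATE TABLE link_customer_order (
        link_hk       TEXT PRIMARY KEY,
        customer_hk   TEXT NOT NULL REFERENCES hub_customer(customer_hk),
        order_hk      TEXT NOT NULL,      -- would reference a hub_order
        load_dts      TEXT NOT NULL,
        record_source TEXT NOT NULL
    );
    """
    conn = sqlite3.connect(":memory:")
    conn.executescript(ddl)
    print("Data Vault skeleton created")

Because every structure follows the same small set of patterns, tools such as Datavault Builder can generate and deploy them automatically rather than by hand coding.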
Having the right data in the right place at the right time, and with the right quality, is important for supporting business decisions, optimization, automation and feeding AI models. Just as in software development, you want to deliver new, high-quality functionality quickly. You do not want to make new data, new insights and new AI models available to users on a monthly schedule, but as soon as they are ready. That is what DataOps can accomplish in theory. In practice, however, organizations run into quite a few challenges that make it considerably harder to put the DataOps process into effect, for example how to deal with development sandboxes and representative test data across systems.
In this session, Niels Naglé and Vincent Goris show what DataOps is and why it is not simply DevOps for data. They discuss the unique challenges, solutions to those challenges, and their lessons learned.
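One of the challenges mentioned above, representative test data, is often addressed by generating synthetic records that mimic the shape of production data without copying real personal data into a sandbox. A minimal sketch in pure Python with hypothetical field names (our illustration, not the speakers' solution):

    import random

    # Hypothetical: generate synthetic customers that preserve the shape
    # of production data, so sandboxes never contain real personal data.
    CITIES = ["Amsterdam", "Rotterdam", "Utrecht", "Eindhoven"]

    def synthetic_customer(customer_id: int) -> dict:
        return {
            "id": customer_id,
            "name": f"customer_{customer_id:05d}",   # pseudonymized name
            "city": random.choice(CITIES),           # plausible value domain
            "balance": round(random.uniform(0, 10_000), 2),
        }

    sandbox = [synthetic_customer(i) for i in range(1, 6)]
    for row in sandbox:
        print(row)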
[Video intro] We have all seen studies indicating the enormous amounts of data created on this planet every day. A large part of that data, however, is not new but copied data. In existing data architectures, such as data warehouses, a great deal of data is copied, but modern architectures such as the data lake and the data hub also rest entirely on copying data. This unbridled copying must be reduced. We rarely stop to think about it, but copying data has many disadvantages, including higher data latency, complex data synchronization, more complex data security and data privacy, higher development and maintenance costs, and degraded data quality. It is time to apply the data minimization principle when setting up new data architectures. This means striving to minimize copied data. In other words, users get more access to original data, and we move from data-by-delivery to data-on-demand. The latter mirrors what happened in the film world: from picking up videos at a store to video-on-demand. In short, data minimization means we are going to 'Netflix' our data.
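The difference between data-by-delivery and data-on-demand can be made concrete with a toy example: a physical copy goes stale the moment the source changes, while a virtual view is evaluated against the original at query time. A minimal single-database sketch (real data-on-demand architectures use data virtualization across systems; this is only an analogy):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE source_orders (id INTEGER, amount REAL)")
    conn.execute("INSERT INTO source_orders VALUES (1, 10.0), (2, 20.0)")

    # Data-by-delivery: a physical copy that must be kept in sync forever.
    conn.execute("CREATE TABLE copy_orders AS SELECT * FROM source_orders")

    # Data-on-demand: a view evaluated against the original at query time.
    conn.execute("CREATE VIEW ondemand_orders AS SELECT * FROM source_orders")

    conn.execute("INSERT INTO source_orders VALUES (3, 30.0)")
    print(conn.execute("SELECT COUNT(*) FROM copy_orders").fetchone())      # (2,) stale
    print(conn.execute("SELECT COUNT(*) FROM ondemand_orders").fetchone())  # (3,) current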
[Video introduction] The data warehouse is over thirty years old. The data lake just turned ten. So, is it time for something new? In fact, two new patterns have recently emerged—data fabric and data mesh—promising to revolutionise the delivery of BI and analytics.
Data fabric focuses on the automation of data delivery and discovery using artificial intelligence and active metadata. Data mesh has a very novel take on today’s problems, suggesting we must take a domain driven approach to development to eliminate centralised bottlenecks. Each approach has its supporters and detractors, but who is right? More importantly, should you be planning to replace your existing systems with one or the other?
In this session, Dr. Barry Devlin will explore what data fabric and mesh are, what they offer, and how they differ. We will compare them to existing patterns, such as data warehouse and data lake, data hub and even data lakehouse, using the Digital Information Systems Architecture (DISA) as a base. This will allow us to clearly see their strengths and weaknesses and understand when and how you might choose to move to one or the other.
What You Will Learn:
In this interactive session Lawrence Corr shares his thoughts and experiences on using visual collaboration platforms such as Miro and MURAL for gathering BI data requirements remotely with BEAM (Business Event Analysis and Modeling) for designing star schemas. Learn how visual thinking, narrative, a simple script with 7Ws and lots of real and digital Post-it™ notes can get your stakeholders thinking dimensionally and capturing their own data requirements with agility, in person and at a distance.
Attendees will have the opportunity to vote visually on a virtual whiteboard and should have their smartphones ready to send Lawrence some digital notes to play the ‘7W game’ using the Post-it app.
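For orientation, BEAM's 7Ws (who, what, when, where, how many, why, how) map naturally onto star-schema components: each W becomes a dimension, and "how many" becomes the measures on the fact table. A hypothetical mapping in Python for a "customer orders product" business event (our illustration of the idea, not Lawrence Corr's material):

    # Hypothetical mapping of BEAM's 7Ws for one business event
    # onto star-schema components.
    business_event = "Customer orders Product"
    seven_ws = {
        "who":      "dim_customer",
        "what":     "dim_product",
        "when":     "dim_date",
        "where":    "dim_store",
        "how":      "dim_channel",
        "why":      "dim_promotion",
        "how many": "fact_orders.quantity (measure)",
    }
    print(business_event)
    for w, target in seven_ws.items():
        print(f"{w:>8} -> {target}")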
This session will cover:
[Video introduction] Developing a machine learning strategy designed to maximize business value in the age of Deep Learning
Deep Learning is so dominant in some discussions of AI and machine learning that many organizations feel that they need to try to keep up with the latest trends. But does it offer the best path for your organization? What is this technology all about and why should both executives and practitioners understand its history?
All business leaders know that they have to embrace analytics or be left behind. However, technology changes so rapidly that it is difficult to know who to hire, which technologies to embrace, and how to proceed. The truth is that traditional machine learning techniques are a better fit for most organizations than chasing after the latest trends. Still, the hyped techniques are popular for a reason, so leaders with a responsibility for analytics need to have a high-level understanding of them.
Learning objectives
[Video introduction] Companies rely on modern cloud data architectures to transform their organizations into the agile analytics-driven cultures needed to be competitive and resilient. The modern cloud reference architecture applies data architecture principles into cloud platforms with current database and analytics technologies. However, many organizations quickly get in over their head without a carefully prioritized and actionable roadmap aligned with business initiatives and priorities. Building such a roadmap follows a step-by-step process that produces a valuable communication tool for everyone to deliver together.
This session will cover the four significant steps to align the data strategy and roadmap with the business. We'll start with translating business strategy into data and analytics strategies with the Enterprise Analytics Capabilities Framework. This is followed by a logical modern cloud reference data architecture that can leverage agile architecture techniques for implementation as a modern data infrastructure on any cloud, hybrid or multi-cloud environment. This will provide the basis for drilling deeper into architecture patterns and developing proficiency with DataOps and MLOps.
This session will cover:
Do you want to generate more value out of your data with less effort and cost?
This presentation will help you to reduce your time to market and increase your development efficiency. Erik discusses projects he has been involved in and explains how he was able to accelerate and streamline them using WhereScape. His main focus will be on a Data Vault 2.0 implementation he was involved in at a large bank.
WhereScape Data Automation software accelerates the design, build, documentation and management of complex data ecosystems. It automates repetitive manual tasks such as hand coding and enables developers to produce architectures in a fraction of the time, without human error.
[Video introduction] The role of data in business processes has never been more critical. But as we develop new technologies and new skills it feels like we meet new dilemmas at every turn. Concerns about governance and compliance seem to conflict with demands for agility and collaboration. The expanding scope of the data we work with brings new ethical concerns to light.
So, are we doomed to a constant struggle for control of our data assets? I don’t think so. In this session, I’ll sketch out a provocative, but hopefully useful idea – that we have confused ownership and accountability, governance and compliance, openness and collaboration. We’ll look at some potentially new approaches, which aim to resolve some of the complex puzzles of enterprise data.
[Video introduction] We have all heard “This is the golden age of data” and “Data is the new oil” but that does not necessarily mean your senior executives are anxious to participate in Conceptual Data Modelling / Concept Modelling. The speaker recently had an interesting exception to the reluctance of senior executives to participate in data modelling. Led by the Chief Strategy Officer, a group of C-level executives and other senior leaders at a mid-size financial institution asked Alec to facilitate three days of Concept Modelling sessions.
Fundamentally, a Concept Model is all about improving communication among various stakeholders, but the communication often gets lost – in the clouds, in the weeds, or somewhere off to the side. This is bad enough in any modelling session, but is completely unacceptable when working at the C-level. Drawing on forty years of successful consulting and modelling experience, this presentation will illustrate core techniques and necessary behaviors to keep even your senior executives involved and engaged.
Key points in the presentation include:
[Video introduction] Regression, decision trees, neural networks—along with many other supervised learning techniques—provide powerful predictive insights. Once built, the models can produce key indicators to optimize the allocation of organizational resources.
New users of these established techniques are often impressed with how easy it all seems. Modeling software for building these models is widely available, yet it often produces disappointing results. Many users fail to recognize that the real problem was poor problem definition, and conclude instead that the data was not capable of better performance.
The deployment phase includes proper model interpretation and looking for clues that the model will perform well on unseen data. Although the predictive power of these machine learning models can be very impressive, there is no benefit unless they inform value-focused actions. Models must be deployed in an automated fashion to continually support decision-making for lasting impact. The instructor will show how to interpret supervised models with an eye toward decisioning automation.
The seminar
In this half-day seminar, Keith McCormick will overview the two most important and foundational techniques in supervised machine learning, and explain why 70-80% or more of everyday problems faced in established industries can be addressed with one particular machine learning strategy. The focus will be on highly practical techniques for maximizing your results, whether you are brand new to predictive analytics or you have made some attempts but been disappointed in the results so far. Veteran users of these techniques will also benefit, because a comparison will be made between these traditional techniques and some features of newer techniques. We will see that, while tempting, the newer techniques are rarely the best fit except in a handful of niche application areas that many organizations will not face (at least not in the short term). Participants will leave with specific ideas to apply to their current and future projects.
Learning Objectives
Who is it for?
Course Description
1. How to choose the best machine learning strategy
2. Decision Trees: Still the best choice for many everyday challenges
3. Introducing the CART decision tree
4. Additional Supervised Techniques
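As a small taste of the CART-style decision trees introduced in part 3 of the outline above, here is a minimal sketch using scikit-learn's DecisionTreeClassifier on the bundled Iris dataset. The library choice is our assumption; the seminar does not prescribe specific tooling:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Load a small bundled dataset and hold out data to check generalization.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    # scikit-learn's DecisionTreeClassifier is a CART-style implementation.
    tree = DecisionTreeClassifier(max_depth=3, random_state=42)
    tree.fit(X_train, y_train)

    # Interpretability is the point: print the learned decision rules.
    print(export_text(tree, feature_names=load_iris().feature_names))
    print("held-out accuracy:", tree.score(X_test, y_test))

The printed rules make the model's reasoning auditable, which is exactly the interpretability property that keeps decision trees competitive for everyday business problems.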
[Video introduction] By the end of this workshop your team will have a sound understanding of how data and analytics can expand, enhance and strengthen your business and your relationships with clients. You’ll have some practical guidelines for strategy, messaging and design which can get you started on your own analytics journey.
Learning objectives
Course Description
1. Introduction: Data as a resource, analytics as a differentiator
We believe that data without analytics is a wasted resource; analytics without action is a wasted effort. We review the value of data to software companies and the potential for analytics as a new line of business.
2. Case studies
Real-world examples of software companies who have developed analytic products and services using a gameplan methodology.
3. Three simple models to get you started
Although there are many ways in which you can leverage data as a resource and analytics as an offering, we have found three to be relatively easy and effective to start with. We’ll review the components and technologies of each, with some guidelines for success and pitfalls to avoid.
4. Communities of practice and tools of choice
When you introduce analytics as a line of business, users and their social interactions, whether in the office or online, will be critical to your success. We show how communities of practice develop around the tools we choose – and we describe how to ensure your tool is chosen.
5. Governance and privacy
In any discussion of data and analytics today, concerns about privacy and compliance always come to the surface. We'll introduce the subject with enough detail for you to take the first, important, practical steps to being well governed for today's regulatory environment.
6. Narratives and gameplans
These are simple tools for mapping and aligning strategy. Although simple, they offer subtle and effective capabilities for planning features and releases and for aligning teams such as marketing and management around a vision.
Who’s it for?
[Video introduction] Whether you call it a conceptual data model, a domain map, a business object model, or even a “thing model,” a concept model is invaluable to process and architecture initiatives. Why? Because processes, capabilities, and solutions act on “things” – Settle Claim, Register Unit, Resolve Service Issue, and so on. Those things are usually “entities” or “objects” in the concept model, and clarity on “what is one of these things?” contributes immensely to clarity on what the corresponding processes are.
After introducing methods to get people, even C-level executives, engaged in concept modelling, we’ll introduce and get practice with guidelines to ensure proper naming and definition of entities/concepts/business objects. We’ll also see that success depends on recognising that a concept model is a description of a business, not a description of a database. Another key – don’t call it a data model!
Drawing on almost forty years of successful modelling, on projects of every size and type, this session introduces proven techniques backed up with current, real-life examples.
Topics include:
[Video introduction] Adopting the DataOps methodology helps agile teams deliver data and analytics faster and in a more manageable way in modern data infrastructures and ecosystems. DataOps is critical for companies to become resilient in data and analytics delivery in a volatile and uncertain global business environment. Going beyond DevOps for continuous deployments, DataOps leverages principles from other disciplines to evolve data engineering and management.
Companies need data and analytics more than ever to be agile and competitive in today’s fast-changing environment. DataOps can be an enterprise-wide initiative or an independent agile delivery team working to improve how they deliver data analytics for their customer. Gaining traction takes time and ongoing support.
This seminar will cover:
Course Description
1. Understanding why we need to change
2. Making DataOps Work
The 7 key concepts to focus on for DataOps
The 2 key processes to focus on for DataOps
3. Managing DataOps: defining Metrics and Maturity Models
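One concrete DataOps practice (our illustration, not an official part of the seminar outline above) is treating data quality checks as automated tests that gate every deployment, just as unit tests gate software releases. A minimal sketch in Python with hypothetical table and rule names:

    import sqlite3

    # Hypothetical staging table with a couple of automated quality gates,
    # the kind of check a DataOps pipeline would run on every deployment.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(1, 19.99), (2, 5.00), (3, 42.50)])

    def check_not_null(conn, table, column):
        # Fail the pipeline if any value in the column is NULL.
        n = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL").fetchone()[0]
        assert n == 0, f"{table}.{column}: {n} NULL values"

    def check_positive(conn, table, column):
        # Fail the pipeline if any value violates the business rule amount > 0.
        n = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {column} <= 0").fetchone()[0]
        assert n == 0, f"{table}.{column}: {n} non-positive values"

    check_not_null(conn, "orders", "amount")
    check_positive(conn, "orders", "amount")
    print("all data quality gates passed")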
Short on time? Attend a single day!
Do you have only one day to attend the DW&BI Summit? Pick from the topics and choose the day that suits you best. The topics have been chosen to stand on their own, so it is also possible to attend only the first day of the conference, or to attend day two without having attended day one. Conference participants also retain access to the video recordings of their chosen day for several months afterwards, so if you have to miss a session, no harm done.
See the agenda at Adept Events
“The longer sessions offered the opportunity for depth and dialogue. That is what I like about this summit.”
“Inspiring summit with good speakers who illuminated the topics nicely from different angles. Organization and venue: excellent!!”
“Inspiring and well-organized conference. Topical subjects with many practical guidelines, tools and do's and don'ts on information architecture, such as Big Data, Data Lakes, data virtualization and the logical data warehouse.”
“A fun and instructive event!”
“As a BI Consultant I feel inspired to recommend this conference to everyone looking for practical tools to implement a long term BI Customer Service.”
“Excellent once again!”