Every time we venture into something new, we can be both excited and nervous at the same time. This is especially true when dealing with something in our career for the first time. We are eager to make it a success by putting all of our energy into it. They say that "knowledge is power", and the likelihood of success in a new venture is directly proportional to how informed our day-to-day decisions are. If you are reading this tutorial, you are keen to take on your first Business Intelligence project and make it a success. So let's get started and understand what we are going to study.
To understand a project of any nature, one needs to understand the typical architectures and best practices for that type of project. We will start with the Business Intelligence (BI) architecture to build our knowledge of the key elements of a BI project. Once you join a project, you will be assigned responsibilities that you need to understand well enough to analyze your work and estimate how soon you can deliver it, which for many people is the most unnerving part of the job: you don't want to ask for too much or too little time.
In a BI project, data is the currency and analytics is the merchandise. Without the right type, amount, and shape of data, one cannot get the right kind of analytics. In this tutorial, we will look at different types of data sources as well as relational data modeling. In any BI solution architecture, data makes many hops from one layer of the solution to another. During this movement from source to destination, the shape of the data keeps transforming. This movement and transformation of data is typically known as Extract, Transform and Load (ETL). We will look at typical ETL needs and associated topics such as data staging and master data management.
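The extract, transform, and load steps described above can be sketched in miniature. This is an illustrative toy only; the field names, cleaning rules, and in-memory "warehouse" below are hypothetical assumptions, not part of any specific BI product.

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: read rows from a source system (here, an in-memory CSV)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: clean and reshape the data on its way to the destination."""
    return [
        {"customer": r["customer"].strip().title(),
         "amount": round(float(r["amount"]), 2)}
        for r in rows
        if r["amount"]  # drop rows with a missing amount
    ]

def load(rows: list[dict], target: list) -> None:
    """Load: append the cleaned rows to the destination (a plain list
    stands in for a staging or warehouse table)."""
    target.extend(rows)

# Hypothetical source data: note the messy name and the missing amount.
raw = "customer,amount\n alice ,10.504\nbob,\ncarol,7.2\n"
warehouse: list[dict] = []
load(transform(extract(raw)), warehouse)
print(warehouse)
```

In a real project each step is usually handled by a dedicated ETL tool rather than hand-written functions, but the shape of the pipeline, source to transformation to destination, stays the same.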
Once we gather a huge volume of data from a variety of sources, we need to store it in a way that supports efficient retrieval. In simple terms, it is similar to how books are organized in a library for easy retrieval. We will discuss data warehousing and how to develop a data model for high performance.
When huge volumes of data at the scale of GBs / TBs are required for a project, we need an eagle-eye view of the data that goes beyond the reach of an Online Transaction Processing (OLTP) database. For example, say a national bank processes 10 million transactions per day, and the CEO needs to know the average amount of these transactions for the past five years. Such high-volume computations are better suited to an Online Analytical Processing (OLAP) database. To meet these needs, we need to understand data marts, cubes, OLAP, and dimensional modeling.
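The bank example hints at why OLAP systems pre-aggregate data: rather than scanning billions of individual transactions, a cube can keep per-day (sum, count) pairs, from which the five-year average falls out cheaply. A toy sketch with hypothetical sample figures:

```python
# Hypothetical per-day aggregates, as an OLAP cube might store them:
# (date, total_amount, transaction_count). Real cubes hold years of these,
# but the arithmetic is the same.
daily_aggregates = [
    ("2023-01-01", 500_000.0, 10_000),
    ("2023-01-02", 750_000.0, 12_500),
    ("2023-01-03", 250_000.0,  2_500),
]

# The overall average needs only the stored sums and counts,
# never the raw transactions themselves.
total = sum(day[1] for day in daily_aggregates)
count = sum(day[2] for day in daily_aggregates)
average = total / count  # average transaction amount over the period
print(average)
```

Averaging the daily averages would be wrong when days have different transaction counts; keeping sums and counts separately is what makes the aggregate composable across any date range.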
Finally, we reach the stage where we are ready to report on the computed values in a visually appealing way, i.e. reporting and analytics. We need to understand the different kinds of reports and the available analytics options.
Throughout this tutorial, we will associate each topic with the corresponding tools and technologies from the Microsoft Business Intelligence stack, with example use cases showing how to implement the solution.
The outline for this tutorial is: