
How To Build Embedded Dashboards With Panoply and Luzmo

January 14, 2021

Tuana Çelik

Want to create stunning charts and graphs without the pains of data modeling? Here's how to set up embedded BI hassle-free with Panoply and Luzmo.

A common pain point for anyone who needs a layer of analytics and visualizations on top of their accumulated data is getting that data ready to use. It’s perfectly normal to arrive at this stage only to notice that your data is spread across different places, doesn’t always match up, and generally looks messy. Luckily, there are plenty of tools to help you unify and clean your data in a meaningful way, so you can leverage it to its full potential. Luzmo has connectors to a number of data sources that make this possible. One of them is Panoply.

In this blog post, I’ll walk through a simple demo of how you can use Panoply for this. But before we get into that, let me explain why you might want to start thinking about your data before you start on the visualization step.

Why start with your data model first?

Creating a performant data model before adding your visualization layer lets you make sure your model is built in such a way that it is:

  1. Reusable and easy for your designers to build dashboards and visualizations with
  2. Decoupled from your analytics and visualization layer. By moving complex JOINs and data cleaning steps out of your data visualization tool, you reduce complexity and let your designers focus on building dashboards
  3. Performant for your specific dashboard needs
  4. Lastly, and possibly most importantly, a single source of truth for your data, which ensures consistency by avoiding error-prone duplicates

Ultimately, when there’s a need for an analytics/visualization layer, you most likely already have the data you want to analyze somewhere. Before you start creating dashboards with this data, it’s a good idea to think about how it might need to be re-modeled to achieve the insights you require. This is not an easy step. In most cases, it requires a deeper technical understanding of the existing data, and an idea of how it should be re-modeled to meet the requirements.

A Common Scenario

For example, a common scenario is one where different sets of databases are managed, or have been accumulated, by different teams for various reasons. At some point you realize you need to bring these databases together in order to achieve the analytics and visualizations you need. You may also need to clean the data, making sure there are no errors or duplicates, and meet a number of other requirements depending on your use case.

When you come to the stage of thinking about these requirements, a good practice is to make sure you have a single source of truth. This means you refer to one source, rather than risking multiple versions of the same data, which can cause clashes and confusion. A data warehouse tool such as Panoply is a great way to achieve this. You can also have a look at this article on the importance of a single source of truth, by Panoply!

How to do data modeling?

In this blog, I’ll show you an example of how I achieved a simple level of data modeling on distributed data sources with Panoply. I won’t be going into extreme complexity (there are many levels, types, and layers to what ‘data modeling’ can be), but it should give you an idea of what you can achieve with Panoply before getting into the data visualization step. I’ll walk through some of the most common problems you may have with messy data. Finally, I’ll show you how to connect data from Panoply to Luzmo to use in your dashboards.

The Data

Let’s imagine it’s the holiday season (which it was just recently). It’s the time of year when home deliveries for gifts are probably at their highest, and probably also the time with the most delays. Let’s say we work at a logistics company that is arranging the deliveries (or, ya know, we’re Santa). There’s a team (of elves) that has info on people and their addresses. Another keeps track of each package’s status (whether it’s delivered or not). A third team has info on each package’s expected delivery date, and when and where it was sent from.

This year, we had the genius idea to build an analytics layer so that we can see where the delays were (if any), how many deliveries we had, and between which countries. The teams collectively have all the information we need. However, the data is dispersed across different tables, so we need to combine them in some way. Not only that, but one team has decided to use MongoDB, another PostgreSQL, and the third is just using Google Sheets. The data here is purely for demonstration purposes! For example, you probably wouldn’t choose Google Sheets for the type of data described below, but let’s say you did for argument’s sake. Let’s have a look at what these tables might look like.

Receiver Information in MongoDB
Example of a MongoDB dataset structure
Package Information in Google Sheets
Example of a Google Sheets dataset structure
Package Status Information in PostgreSQL
Example of a PostgreSQL dataset structure

The Relation

The first thing you might note here is that some fields relate to each other across the different tables. For example, the id fields in the Package Status and Package Information tables, and the receiver_id field in Package Information and id in Receiver Information. This works if an agreement was reached when these tables were created, so that these ids refer to the same thing in each. If you will need this type of identification of your data, it’s worth thinking about it early on. In the example here, receiver_id in the Package Information table refers to the person with the same id in the Receiver Information table.
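To make the id relationships concrete, here is a minimal sketch of one record from each team’s table. The id links match the description above; all other field names are assumptions for illustration, not the real schemas:

```python
# Hypothetical sample records showing how the ids relate across the three
# teams' tables (field names other than the ids are assumptions).

receiver = {              # Receiver Information (MongoDB)
    "id": 42,
    "name": "Alice",
    "country": "China",
}

package = {               # Package Information (Google Sheets)
    "id": 1001,
    "receiver_id": 42,    # points at receiver["id"]
    "origin_country": "Belgium",
}

status = {                # Package Status Information (PostgreSQL)
    "id": 1001,           # same id as package["id"]
    "status": "DELIVERED",
}

# Following the two links gives the full picture for one delivery:
assert package["receiver_id"] == receiver["id"]
assert status["id"] == package["id"]
print(package["origin_country"], "->", receiver["country"], status["status"])
```

Because the ids agree across tables, one lookup per link is enough to reconstruct a full delivery record.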

Schema of the relationships between the datasets from different data sources

How to set up a data model in Panoply

Collect the Data

The first thing to do is collect your data into Panoply. In the Data Sources tab, you will see an option to Add Data Source.

Connecting your databases or APIs in Panoply

If we, for example, pick MongoDB as our data source, we will be asked to fill in some information about our database and select which Collection we want to collect data from. In our case, the team collecting data on receivers uses MongoDB, so this will look like the following.

Setting up a data connection in Panoply

In Panoply, you can collect data from a number of sources in much the same way, so we’ll do the same for our data in Postgres and Google Sheets. Your Panoply data warehouse will be continuously updated with the data you choose to collect, so you don’t have to worry about consistency between Panoply and your databases. It gives you a single place to manage all of your distributed data.

Once you’ve defined all of your data sources in Panoply, you will be able to see them in your Tables tab. In our case, we’ve collected data from 3 different sources.

Example of data tables in Panoply

Furthermore, you can preview the data, make sure the data types for the fields are what you intend them to be, and edit them if required.

Edit metadata of your data tables in Panoply

Join Tables

So far, I’ve talked about how you can use Panoply as a single source for data that is all over the place (literally). But what I haven’t talked about is possibly the most interesting part of this tool: how it lets you easily relate these datasets to one another to create views, with an astonishingly simple interface. Our aim here is to create a logistics dashboard that shows us the delays for package deliveries over the holiday period. We want to achieve the following dashboard.

Granted, the dashboard paints a pretty grim picture for the holiday season delivery statuses (we used Mockaroo to mock the data, and this was what the roll of the dice determined). Although given my own luck with gift arrivals for my family this year, this wouldn’t surprise me in reality. Back to the dashboard: we want to see the distribution of late deliveries per country of destination, and we want to see the origin and destination of packages. When filtered on China as the destination, the dashboard looks like the following, and tells us that most of China’s deliveries came from within the country.

Example of a filtered Luzmo dashboard built on top of Panoply's data warehouse

To achieve this, we can answer two simple questions and create joins based on their answers.

What are the relations between the existing tables?

We need to be able to tell where a package originated, where it’s headed, and what its latest known status is. The Package Information table contains the origin of the package and the id of the receiver. With the receiver’s id, we can check the Receiver Information table and find out where the package is headed, as this table contains the receiver’s address. Finally, we can do the same with the package id and check its status in the Package Status Information table.

What data does the dashboard need?

The dashboard we want to create definitely needs country names, but street numbers and postcodes are probably unnecessary. Similarly, we can disregard the names of receivers too!

Now, we can use the Panoply Workbench to create a new View by creating Joins with these tables.

Create the SQL Query

A join can be achieved by running a simple SQL query, telling it which fields we want to include and which fields are equivalent to each other across the tables we want to join. In this case, our SQL query looks like the following, and is saved as ‘delivery_status_view’:

An important thing to note here: while the first join is a ONE to ONE join, the second is a ONE to MANY join. We have implemented our view using (INNER) JOINs, but the view you create in your workbench can use whichever type of JOIN you need.

This will result in a new entry in our Tables tab.

Overview of data tables, containing a new table for the join we created

How to connect Panoply to Luzmo

The only thing left to do now is connect this view as your data source in Luzmo's embedded analytics platform. Luckily, Luzmo has a connector for Panoply, which means you simply select Panoply as your New Dataset and pick the view you created.

Importing a Panoply dataset into Luzmo

Enhance datasets in Luzmo

Of course, once you have imported data into Luzmo, you can enhance the dataset for the specific needs of the dashboard. There are multiple options available, such as creating derived columns or linking two datasets once the data is in Luzmo.

Derived Columns

In the example above, the dataset has the sending times, last update times, and statuses of the packages. For the dashboard to include ‘LATE’, ‘ON TIME’ and ‘ON WAY’ statuses, we create a derived column in the dataset editor. For this, we use ‘Add Formula’ and define the condition and the column name.
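The exact formula depends on your field names, but the condition behind such a derived column looks roughly like the following. This is a plain-Python sketch of the logic, not Luzmo’s formula syntax, and the field names and rules are assumptions:

```python
from datetime import date

def delivery_status(status, expected_delivery, last_update):
    """Classify a package the way the derived column would.
    Field names and exact rules are hypothetical."""
    if status != "DELIVERED":
        # Still in transit: it becomes LATE once the expected date has passed.
        return "LATE" if date.today() > expected_delivery else "ON WAY"
    # Delivered: LATE if the final update came after the expected date.
    return "LATE" if last_update > expected_delivery else "ON TIME"

print(delivery_status("DELIVERED", date(2020, 12, 24), date(2020, 12, 20)))
# ON TIME
```

In Luzmo, you would express the same condition in the formula editor and give the resulting column a name.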

Creating derived columns in Luzmo

This derived column can now be used in dashboards like any other field. We can use it to build the stacked column charts that display the distribution of delivery statuses by country.


Linking Datasets

As mentioned above, linking datasets is also possible in Luzmo. One important thing to note is that links are available for MANY to ONE relations (LEFT JOIN). So, for more complex relations between datasets, it’s a good idea to model them at an earlier stage. For more information on these links, you can have a look at our ‘Important when linking datasets’ Academy article.

An example of how we might use this functionality in our demo is to include the topography of countries, so that we can use a map to display the number of deliveries per country. The dataset as it stands does not have the topography information that a Choropleth map needs. But there is another dataset available in Luzmo called ‘World Countries’ which we can link our dataset to. Keep in mind that the columns we link should be equivalent to each other: the value from the column on the left (Master Table) is used to look up the correct row on the right (Lookup Table). In this case, we can link the ‘country’ field to the ‘Name’ field.
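Conceptually, such a link behaves like a LEFT JOIN lookup from the master table into the lookup table. A minimal sketch of those semantics (the data is made up, and the real link is configured in Luzmo’s UI, not in code):

```python
# Lookup table ('World Countries'): one row per country name.
world_countries = {
    "China":   {"topography": "<china-geometry>"},
    "Belgium": {"topography": "<belgium-geometry>"},
}

# Master table rows: many deliveries can point at the same country.
deliveries = [
    {"package_id": 1001, "country": "China"},
    {"package_id": 1002, "country": "China"},
    {"package_id": 1003, "country": "Atlantis"},  # no match -> None, like a LEFT JOIN
]

# MANY to ONE: each master row picks up at most one lookup row.
linked = [
    {**row, "topography": world_countries.get(row["country"])}
    for row in deliveries
]
print([r["topography"] is not None for r in linked])  # [True, True, False]
```

Rows without a matching lookup value keep going through with an empty topography, which is exactly the LEFT JOIN behavior the link provides.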

Linking 2 datasets in Luzmo

Now, we can use a Choropleth map to see how many deliveries each country received by using the ‘Topography’ field for Geography.


And that's it! You can now build any dashboard or charts on top of your data model in Panoply, using Luzmo's drag and drop dashboard editor. Have fun!

Resources

Build your first embedded dashboard in less than 15 min

Experience the power of Luzmo. Talk to our product experts for a guided demo or get your hands dirty with a free 10-day trial.
