There was a time when the data analyst on the team was the person who listened intently to a manager's business hypotheses. Shortly thereafter, the analyst set out on an exciting adventure: data hunted down in Excel sheets, requested via email, or queried with SQL. The analyst explored the data for weak signals, and one by one a PowerPoint filled up with charts and action titles to steer decisions or shape a business strategy. The BI analyst was at the forefront of digital transformation and had a major influence on how data shaped decision-making.
But technological advancement quickly overtook the role of the data analyst. Companies want to go faster, bigger, and better. Data, and consequently data infrastructure, keeps growing, and so do the demands on technical skills: DevOps, DataOps, ML engineer, data engineer, full-stack data scientist. PowerPoint reports have been replaced by dashboards, and the BI analyst role has become less clearly defined.
In the worst case, the BI analyst becomes a second-class data citizen in the enterprise. Where a BI analyst once analyzed data in Excel or MySQL and then created a report, the role shrinks to visualizing data, troubleshooting the dashboard, and briefing the IT team. Interactions with business stakeholders become theoretical instead of pragmatic and fast. It is the end of the quick, iterative quest to find truth in data: pipelines are folded into lengthy sprints and long feedback loops, and building them is now the job of engineers rather than self-taught "MacGyver" business analysts. The BI analyst is left out, and the engineering department is left alone, overloaded with tasks.
One symptom of this change is stale data and unmaintained dashboards. Mikkel Dengsøe sums it up in his article:
We need a new understanding of the BI analyst role to get back to those old strengths. BI analysts need to be able to build pipelines and test metrics and visualizations quickly and iteratively, despite the complexity of the Modern Data Stack. The BI analyst should spend more time on analysis than on fixing dashboard problems. We need an intermediate layer…
For exactly this reason, Matti and I have just released an open-source tool that allows BI analysts to build advanced data pipelines through an abstraction layer. BI analysts can assemble workflows from data blocks (data sources, powered by Airbyte under the hood), transformation blocks (dbt under the hood), and data science blocks, and connect them to a dashboard. Ultimately this means a BI analyst can iterate on metrics quickly and professionalize them (as dbt models under the hood) in a way that an engineer can always extend.
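To make the block idea more concrete, here is a purely conceptual Python sketch of what composing such a pipeline could look like. The class and function names are hypothetical and do not reflect Kuwala's actual API; in the real tool a data block would trigger an Airbyte sync and a transformation block would compile to a dbt model.

```python
# Conceptual sketch of a block-based pipeline (hypothetical API, not Kuwala's actual interface).
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Block:
    """A single step: takes a dict of named tables, returns an updated dict."""
    name: str
    run: Callable[[Dict[str, list]], Dict[str, list]]


@dataclass
class Pipeline:
    """A linear chain of blocks, executed in order."""
    blocks: List[Block] = field(default_factory=list)

    def add(self, block: Block) -> "Pipeline":
        self.blocks.append(block)
        return self

    def execute(self) -> Dict[str, list]:
        data: Dict[str, list] = {}
        for block in self.blocks:
            data = block.run(data)
        return data


# Data block: stands in for an Airbyte-powered source; here it just returns rows.
source = Block(
    "orders_source",
    lambda _: {"orders": [{"id": 1, "amount": 120}, {"id": 2, "amount": 80}]},
)

# Transformation block: stands in for a dbt model; here it aggregates in Python.
revenue = Block(
    "total_revenue",
    lambda data: {**data, "total_revenue": [{"value": sum(o["amount"] for o in data["orders"])}]},
)

result = Pipeline().add(source).add(revenue).execute()
print(result["total_revenue"])  # [{'value': 200}]
```

The point of the abstraction is exactly this wiring: the analyst connects blocks, while the heavy lifting stays in the underlying tools.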
This creates a clean codebase under the hood: dbt projects, for example, are created and built automatically. Our idea is to build a tool for the data analytics space similar to what Webflow is for web designers. An experienced engineer can customize the generated dbt models or create new ones and make them available on the Kuwala canvas for no-coders. Did I mention we are open source? 😅 It would be awesome to work together with you on a PR. And if you are a business: yes, we are still looking for design customers! We don't bite!
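As a rough illustration of the "generated, but still editable" idea, the sketch below writes a canvas-defined metric out as a plain dbt-style model file. The file layout, function name, and project path are assumptions for illustration only, not Kuwala's actual code generation.

```python
# Illustrative sketch: turning a canvas-defined metric into a dbt-style model file on disk,
# so an engineer can open, review, and extend it later. Paths and names are hypothetical.
from pathlib import Path
import textwrap


def write_dbt_model(project_dir: str, model_name: str, select_sql: str) -> Path:
    """Write a SQL model file into the project's models/ directory."""
    models_dir = Path(project_dir) / "models"
    models_dir.mkdir(parents=True, exist_ok=True)

    model_path = models_dir / f"{model_name}.sql"
    model_path.write_text(textwrap.dedent(select_sql).strip() + "\n")
    return model_path


# Example: a metric defined on the canvas becomes a versionable SQL file.
path = write_dbt_model(
    "kuwala_dbt_project",
    "total_revenue",
    """
    select sum(amount) as total_revenue
    from {{ ref('orders') }}
    """,
)
print(f"Generated model at {path}")
```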
Back to the topic: this flexibility is necessary because every data project is individual, grows over time, and needs to be customizable.
Kuwala currently covers the following points:
You can easily get Kuwala running on your local machine via GitHub, here!
We are now looking for more people to use it and to build out separate parts so it can grow as a community. We are all set. Are you? Start hacking! Send us your issues! Start contributing! And if something doesn't work, join our Slack community and we will help you 🚀