Start-ups and scale-ups often prioritise quick decisions to maintain their competitive edge, which can lead to shortcuts in data analysis or overreliance on intuition. The impact is often immediate because hasty decisions based on incomplete or improperly analysed data can result in missed opportunities or strategic missteps.
This is particularly true when data is fragmented across silos. Teams simply cannot access or integrate information efficiently. This forces tech leaders to either wait for data consolidation (slowing down the process) or make quick decisions based on incomplete data, sacrificing rigour (accuracy).
This article will address these two primary challenges and offer actionable solutions, while also tackling the other three major problems in data-driven decision-making. However, this is not a normal day at work. In this case, things could hardly be worse: we are operating in a high-pressure scenario where the company is on the brink of financial ruin and you, as a technology leader, have inherited a chaotic environment with poor data processes. The goal is to quickly impose enough order to enable survival, even if perfection is impossible.
In any given scenario, the challenges are the same:
But, in our situation, we can’t use the familiar approach or deploy the common strategies. We need to step up our game.
Start-ups and scale-ups often adopt multiple tools and platforms quickly, leading to fragmented data spread across various systems (CRM, ERP, marketing tools, etc.). Integrating this data into a cohesive system is complex and resource-intensive. This is especially true if you fail to a) invest in data integration platforms, and/or b) develop a unified data architecture early on.
In all honesty, a tech leader’s hands are often tied, either by budgetary constraints or by late arrival. Consequently, disconnected data sources hinder holistic insights and create inefficiencies in decision-making, and you can’t exactly correct, on short notice, what has been done wrong from the start.
How to solve this problem?
When traditional mitigation strategies are not viable, you can still take alternative, resource-efficient steps. These approaches focus on leveraging existing resources, prioritising immediate needs and adopting creative low-cost solutions.
Identify the most critical data silos that impact decision-making and prioritise integrating those first. Use lightweight manual processes or scripting (eg, Python, Google Sheets) to consolidate data where automation tools are unavailable.
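If you go the scripting route, the consolidation pass can be as small as the sketch below. It assumes each silo can produce a CSV export; the file names and the shared “email” key are hypothetical stand-ins for your own data.

```python
# Minimal consolidation sketch: merge CSV exports from two silos on a shared
# customer email column, drop duplicates, and write one combined file.
# File names and columns are hypothetical; adapt them to your own exports.
import pandas as pd

crm = pd.read_csv("crm_export.csv")          # eg, columns: email, name, stage
billing = pd.read_csv("billing_export.csv")  # eg, columns: email, mrr, overdue

# Normalise the join key so trivial mismatches don't split records
for df in (crm, billing):
    df["email"] = df["email"].str.strip().str.lower()

combined = crm.merge(billing, on="email", how="outer", indicator=True)
combined = combined.drop_duplicates(subset="email", keep="first")

# '_merge' shows which silo each record came from: both, left_only, right_only
print(combined["_merge"].value_counts())
combined.to_csv("combined_view.csv", index=False)
```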
From that point onward, do the following:
The outcome of these measures should be partial but impactful data integration for essential use cases without significant resource investments.
Maximise the utility of existing platforms and adopt free or open-source tools for basic data integration. Your sequence of actions should be like this:
This should result in cost-effective integration with tools already in your tech stack.
If you are in a larger organisation, identify key individuals within departments who can take ownership of their team’s data. These people should act as intermediaries to share and consolidate information.
Now, to make this process as smooth as possible, take the following steps:
What you are looking to achieve with this is not only improved communication but also a shared understanding of data across departments, without requiring centralised systems. It is a longer way around, no doubt, but on the bright side, it will eventually create a single point of convergence for data processing.
At first glance, this solution seems like it might lead to a pinball effect, with you bouncing from one office to another in a desperate search for that final document. Be that as it may, if you allow teams to maintain control over their own data while introducing light governance structures, it will a) reduce silos, and b) result in shared standards and definitions. However, it won’t happen on its own, so to achieve those results, follow this strategy:
And there you have it – a fully decentralised yet coordinated approach to data management that minimises silos. Because sometimes even bureaucracy turns out to be efficient.
If — and this is a big if — resources allow for at least minimal investment, pilot a low-cost, pay-as-you-go cloud data lake solution. You want a focused, incremental approach to centralisation without incurring large up-front costs.
This is one of the possible approaches:
Later, during a fast-growth stage, when you get your hands on more resources, this can easily evolve into full-stack cloud data storage and processing.
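As an illustration, a pilot like this can start as a single upload script. The sketch below assumes an AWS account with an existing S3 bucket; the bucket name, file names and prefix layout are hypothetical.

```python
# Minimal data lake pilot sketch: push daily CSV exports into a pay-as-you-go
# S3 bucket, partitioned by source and date, so they can later be queried with
# serverless tools (eg, Athena) without any up-front infrastructure.
# The bucket and file names below are hypothetical.
from datetime import date
import boto3

s3 = boto3.client("s3")  # credentials come from the environment or an IAM role

def upload_export(local_path: str, source: str) -> str:
    """Upload one export under a source/date prefix and return its key."""
    key = f"raw/{source}/dt={date.today():%Y-%m-%d}/{local_path}"
    s3.upload_file(local_path, "acme-pilot-data-lake", key)
    return key

print(upload_export("crm_export.csv", "crm"))
print(upload_export("billing_export.csv", "billing"))
```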
As you can assume, this strategy perhaps better fits the onset of the fast-growth stage, but it could also be just what you need in your start-up. This is how it works:
It is an agile team effort that minimises dependencies on expensive tools or specialists.
The core philosophy here is: start small, build incrementally.
In other words, when constrained by budget or timing, focus on solving the highest-impact problems first. Admit to yourself that perfect integration may not be possible, but incremental improvements can still provide meaningful value. By being a bit creative and by maximising existing resources, technology leaders can mitigate the impact of silos without requiring substantial investments.
Your most immediate challenge is an all too familiar consequence of rapid growth: a lack of consistent data governance. As you know, this inevitably leads to poor data quality (inaccuracies, duplicates or incomplete data).
The impact can be devastating because low-quality data undermines the reliability of insights, leading to poor strategic decisions. Imagine a marketing team missing an entire segment of the target audience or misaligning the core message. Sooner rather than later, all fingers will point at you.
On a normal day, you would mitigate by:
But remember, this is not your normal day. More often than not, technology leaders inherit a chaotic environment with poor processes and must react instead of being proactive.
Here’s what you can do in such a situation:
Your immediate priority is to identify the most critical areas where poor data quality immediately impacts the company’s survival. Take the following steps:
In the end, you will understand where to focus efforts for maximum impact in the shortest time.
In other words, identify and solve one or two highly visible data issues to demonstrate progress and build trust. Simply fix a problem that has frustrated key stakeholders (eg, cleaning up sales pipeline data or resolving overdue billing errors) and then publicise the success with tangible results (eg, “Resolved 300 duplicate records, improving invoice accuracy by 20%”).
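To make the quick win concrete, here is a minimal sketch of a duplicate clean-up that also produces the headline number you can publicise; the file name and the “email” key column are hypothetical.

```python
# Quick-win sketch: count and remove exact duplicate customer records, then
# print a before/after summary you can share with stakeholders.
# The file and the 'email' key column are hypothetical.
import pandas as pd

records = pd.read_csv("customers.csv")
before = len(records)

records["email"] = records["email"].str.strip().str.lower()
cleaned = records.drop_duplicates(subset="email", keep="first")
removed = before - len(cleaned)

cleaned.to_csv("customers_clean.csv", index=False)
print(f"Resolved {removed} duplicate records ({removed / before:.0%} of the file)")
```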
And now you have improved stakeholder confidence and momentum for broader changes.
Quickly enforce lightweight rules to address the most damaging data quality issues without overengineering. This is achieved by:
If you do everything right, you should end up with an immediate reduction in errors, enabling more reliable decision-making.
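As a sketch of what “lightweight rules” can look like in practice, the function below flags missing required fields and malformed emails; the field names are hypothetical.

```python
# Lightweight validation sketch: flag the most damaging record problems
# (missing required fields, malformed emails) without building a full
# governance framework. Field names are hypothetical.
import re

REQUIRED = ("email", "company", "owner")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate(record: dict) -> list[str]:
    """Return a list of human-readable problems; empty means the record passes."""
    problems = [f"missing field: {f}" for f in REQUIRED if not record.get(f)]
    if record.get("email") and not EMAIL_RE.match(record["email"]):
        problems.append(f"malformed email: {record['email']}")
    return problems

print(validate({"email": "ops@acme.example", "company": "Acme"}))
# -> ['missing field: owner']
```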
This strategy is more appropriate for larger organisations, but it can be scaled down to fit the purpose of a start-up.
In essence, you assemble a cross-functional, small team with representatives from critical departments to act as a task force. To succeed, this is what you should do:
The outcome is rapid, team-based problem-solving that restores operational functionality.
In other words, fix the most critical data issues in high-priority areas and immediately lock processes to prevent further degradation.
Start by identifying high-impact errors (eg, duplicates in customer records, incorrect pricing). Once you have identified them, correct these errors manually or via scripts. Finally, implement basic process locks, such as requiring specific fields to be filled before records are saved or restricting edits to validated data.
You end up with stabilised data quality in key areas, reducing downstream chaos.
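A process lock can be as simple as a save gate. The sketch below is a hypothetical illustration: the locked fields and the stand-in storage call should be mapped onto your real persistence layer.

```python
# Basic "process lock" sketch: a save gate that rejects records unless the
# fields you have just cleaned up stay filled in, preventing fresh degradation.
# The locked fields and the list-based store are hypothetical placeholders.
LOCKED_FIELDS = ("customer_id", "price", "currency")

def save_record(record: dict, store: list) -> None:
    missing = [f for f in LOCKED_FIELDS if not record.get(f)]
    if missing:
        raise ValueError(f"save blocked, required fields empty: {missing}")
    store.append(record)  # stand-in for the real persistence layer

db: list = []
save_record({"customer_id": "C-1", "price": 99.0, "currency": "EUR"}, db)
# save_record({"customer_id": "C-2"}, db)  # -> ValueError: save blocked...
```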
Once the immediate chaos is controlled, start laying the groundwork for systematic improvements and building a foundation for sustainable data management. For instance, create a roadmap for addressing root causes (eg, better governance, new necessary tools). But whatever you do, don’t forget to document lessons learned from the crisis to guide future processes.
The key principle here is: stabilisation, not perfection.
Remember, your goal is to bring enough order to stabilise operations and decision-making, even by using imperfect solutions. Once the immediate crisis is averted, you can gradually transition to proactive long-term strategies.
Let’s see what we can do with infrastructure bottlenecks caused by over-relying on basic tools that now can’t handle the exponential growth of data as the organisation scales. Instead of smooth operations, we have slow analytics processes, delayed insights and increased costs because systems struggle to keep up.
Again, on a normal day, you would simply:
But that simply isn’t the case. Your predecessors (if any) didn’t quite do the job right, and now you have a serious problem: an unscalable data infrastructure in a fast-growing company.
When faced with such an infrastructure in a rapidly growing organisation without the resources to invest in modern solutions, you must focus on triage, optimisation and tactical solutions. The goal is to stabilise the infrastructure to support growth in the short term while preparing for future scalability once resources are available.
Your priority is identifying the most critical bottlenecks in the current infrastructure that directly impact operations or decision-making. That is, perform a rapid audit of the existing infrastructure to identify pain points (eg, slow query response times, system outages, capacity issues).
Once identified, prioritise fixing the systems that handle mission-critical data (eg, sales, billing, customer support).
This should give you a clearer understanding of where to focus limited resources for maximum impact.
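If your stack happens to include PostgreSQL, one low-effort audit is to ask the database itself which queries hurt the most. The sketch below assumes the pg_stat_statements extension is enabled and uses PostgreSQL 13+ column names; the connection string is hypothetical.

```python
# Rapid audit sketch: list the queries consuming the most total time.
# Assumes PostgreSQL with pg_stat_statements enabled (13+ column names);
# the DSN below is a hypothetical placeholder.
import psycopg2

conn = psycopg2.connect("dbname=app user=audit host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT left(query, 60) AS query, calls,
               round(total_exec_time::numeric, 1) AS total_ms,
               round(mean_exec_time::numeric, 1) AS mean_ms
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)
conn.close()
```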
While you are already dealing with bottlenecks, activate the afterburner by squeezing the maximum performance out of the existing infrastructure with targeted optimisations.
For example:
If done correctly, you should see immediate performance improvements without requiring new infrastructure.
The play here is to introduce temporary fixes to alleviate pressure while preparing for longer-term improvements.
Here’s what you can do to achieve this:
These solutions may appear trivial but keep in mind what we are trying to achieve here and under which circumstances. We ultimately want stabilised infrastructure to support ongoing growth, even if suboptimal.
Not all data needs to be processed or stored at the same priority level. Therefore, segregate data workloads based on their importance and urgency. For example:
The cumulative effect is reduced strain on the infrastructure without sacrificing business-critical operations.
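In code, segregation can start as nothing more than tagging jobs by tier and deferring everything that is not business-critical; the job names and tiers below are hypothetical.

```python
# Workload segregation sketch: tag each job hot/warm/cold and only run the
# hot tier immediately; everything else is deferred to an off-peak batch.
# Job names and tier assignments are hypothetical.
from collections import defaultdict

JOBS = [
    ("billing_sync", "hot"),        # business-critical: run now
    ("crm_refresh", "warm"),        # useful: run hourly
    ("clickstream_rollup", "cold"), # analytics: run overnight
]

queues = defaultdict(list)
for name, tier in JOBS:
    queues[tier].append(name)

print("run immediately:", queues["hot"])
print("defer to off-peak:", queues["warm"] + queues["cold"])
```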
Sometimes, you don’t have any other choice but to enter the dark alley of open-source tools and use them to address specific pain points in the data infrastructure.
Use open-source tools like MySQL, PostgreSQL or SQLite for additional database capacity and implement lightweight ETL solutions like Apache NiFi or Singer for data integration. Finally, make sure to monitor system health with, for example, Zabbix or Prometheus.
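For instance, a staging layer on SQLite needs only the Python standard library. The sketch below loads a hypothetical CSV export into a local SQLite file to take read load off the primary system.

```python
# Lightweight ETL sketch using only the standard library: stage a CSV export
# in SQLite to take load off the primary database. File and column names are
# hypothetical.
import csv
import sqlite3

conn = sqlite3.connect("staging.db")
conn.execute("""CREATE TABLE IF NOT EXISTS orders
                (order_id TEXT PRIMARY KEY, customer TEXT, amount REAL)""")

with open("orders_export.csv", newline="") as f:
    rows = [(r["order_id"], r["customer"], float(r["amount"]))
            for r in csv.DictReader(f)]

# INSERT OR REPLACE makes re-runs idempotent on the primary key
conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
conn.commit()
print(conn.execute("SELECT count(*) FROM orders").fetchone()[0], "rows staged")
conn.close()
```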
Open-source solutions are rarely anyone’s first choice, but they are cost-effective and scalable enhancements. For instance, we use Mautic as our central nervous system and single source of truth. Our CTO, Jason Noble, spent many sleepless nights bringing that open-source beast to life and keeping it updated. However, it was worth it: we don’t spend thousands on monthly subscriptions and we alone own all our data. Would it be the same had we chosen HubSpot, for example? That’s highly questionable.
When automation or scaling proves impractical for any number of reasons, use manual processes to handle critical data workflows.
You simply assign dedicated teams or individuals to manage data flows that the current infrastructure cannot handle (eg, manually consolidating reports or transferring data between systems). Just remember to use templates or scripts to streamline repetitive tasks.
It’s not exactly practical and can cause delays, but these short-term solutions keep the business running without overwhelming the infrastructure.
The key principle here is: survival first, perfection later.
In this critical phase, focus on stabilising the infrastructure and ensuring business continuity. While the current environment may remain suboptimal, these actions will buy you time to secure the resources and strategic alignment necessary for sustainable, long-term growth.
And remember, no matter the situation, begin laying the groundwork for scalable solutions even if resources are tight. Start consolidating fragmented systems into a single source of truth wherever feasible. Also, document the current infrastructure and create a lightweight plan for migrating to a scalable architecture once resources become available. And in the little spare time you get around lunch, try to identify low-cost, incremental investments that could ease scalability bottlenecks.
Start-ups often struggle to attract and retain skilled data professionals due to competition from larger organisations. That lack of expertise can result in underutilised data assets and suboptimal decision-making.
Commonly, a CTO would deploy these three strategies:
Now imagine a scenario in which none of the proposed mitigation strategies works, at least not in the long run: the small team of software engineers simply can’t find additional time to upskill in data literacy and analytics; partnering with external consultants or extensive outsourcing is out of the question; and the work atmosphere is so grim that it is impossible to create and cultivate an attractive culture that retains data talent. The paycheck, on the other hand, is so big that you don’t want to quit and look for something else. What can you do?
Here is the list of the most realistic strategies:
As you can see, the guiding principle here is: stabilise to survive. In other words, if you are in a highly stressful and negative environment with limited resources and a small overburdened team, just focus on stabilising the situation and delivering “good enough” results.
Therefore, prioritise ruthlessly, automate strategically and leverage creatively to ensure the team survives the current challenges while laying the groundwork for future improvements.
As we said earlier, start-ups and fast-growing organisations are often forced to make quick decisions to maintain their competitive edge, which leads to shortcuts in data analysis or overreliance on intuition.
Normally, a technology leader would implement these three strategies to balance speed with rigour:
But what happens when data silos hinder speed and rigour while pressure for speed amplifies silos?
Let’s use case studies to better understand this causal relationship:
How to break this vicious cycle?
In ideal circumstances, organisations would employ the following strategies:
Only, we are not that lucky. There are no warehouses, teams still work on legacy (read: rigid, fixed-capacity) systems and nobody shares anything. It even seems that teams pursue different strategic goals. That’s the situation we found after accepting the role.
What we need now is a phased, tactical approach that delivers quick wins while laying the groundwork for broader transformation. It is essentially a five-step strategy:
In this step, our priority is to identify critical interdependencies so we can get some clarity on immediate priorities to stabilise the situation.
To find out, we can conduct a rapid assessment of the most critical pain points. For example:
Then, we need to focus on cross-functional bottlenecks where silos directly affect speed and rigour. This requires the creation of a temporary “Data Task Force” or a small agile cross-functional group that will address critical silos by accessing and consolidating data needed for immediate priorities. The good practice here is to assign members from key teams (eg, product, finance, operations) to represent diverse perspectives.
Eventually, all these efforts should create a temporary workaround that will enable collaboration and quick fixes.
Start by creating a “Minimum Viable Integration” to achieve basic data sharing without major resource investments. That is, use lightweight solutions to connect siloed systems, focus on critical data flows and automate repetitive processes.
Next, establish a “Single Source of Truth” for critical metrics to enable shared visibility into business performance, fostering alignment.
Finally, pilot cross-functional decision reviews for high-stakes decisions to create a foundation for a gradual cultural shift toward collaboration and shared accountability.
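To make the “Minimum Viable Integration” and “Single Source of Truth” steps above concrete, here is a minimal sketch that pulls one critical number from each team’s export and publishes them to a single shared file; all file and column names are hypothetical.

```python
# MVI sketch: compute the two numbers everyone argues about from each team's
# own export and publish them to one shared file that acts as the single
# source of truth. Files and columns are hypothetical.
import json
import pandas as pd

sales = pd.read_csv("sales_pipeline.csv")  # eg, columns: deal_id, stage, value
finance = pd.read_csv("invoices.csv")      # eg, columns: invoice_id, amount, paid (boolean)

metrics = {
    "open_pipeline_value": float(sales.loc[sales["stage"] != "closed", "value"].sum()),
    "unpaid_invoice_total": float(finance.loc[~finance["paid"], "amount"].sum()),
}

# Every team reads this file instead of recomputing its own version
with open("metrics_ssot.json", "w") as f:
    json.dump(metrics, f, indent=2)
print(metrics)
```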
To reduce strategic misalignment and increase clarity, teams must unify under the same goal framework. To get there, team leads need to be aligned on well-defined company-wide strategic goals. These goals must then be broken into measurable objectives tied to specific team deliverables.
It’s only now that you can start prioritising tactical investments in scalability by implementing high-impact, low-cost upgrades to legacy systems (eg, replacing outdated software with lightweight cloud-based tools).
You can easily justify these investments by linking them to business outcomes like faster time-to-market or improved customer satisfaction. Just remember to start small to fit within resource constraints.
The outcome is gradual modernisation without overwhelming the organisation.
You want to achieve three goals here:
What to track and measure?
Well, track key indicators such as decision turnaround times, collaboration frequency and strategic goal alignment. Use these metrics to gauge the effectiveness of your interventions. Just remember to regularly share progress updates with leadership and the broader team.
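As a starting point, decision turnaround can be measured from a simple decision log. The sketch below assumes a hypothetical CSV with one row per decision and “raised” and “decided” timestamps.

```python
# Tracking sketch: measure decision turnaround from a simple log in which each
# decision has a 'raised' and a 'decided' timestamp. The log format and the
# 'decision' column are hypothetical.
import pandas as pd

log = pd.read_csv("decision_log.csv", parse_dates=["raised", "decided"])
log["turnaround_days"] = (log["decided"] - log["raised"]).dt.days

print("decisions logged:", len(log))
print("median turnaround (days):", log["turnaround_days"].median())
print("worst five:")
print(log.nlargest(5, "turnaround_days")[["decision", "turnaround_days"]])
```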
How to adapt for scaling?
The result is sustained momentum and long-term scalability.
In challenging environments, maintaining data integrity for strategic planning requires a balance between stabilising immediate risks and building a scalable foundation for the future. Quick wins, collaboration and adaptability are essential to breaking the cycle of dysfunction and driving sustained organisational success.
Through four weeks and sixteen lectures in Module 8 of our Digital MBA for Technology Leaders, a faculty of senior executives responsible for data management in their own organisations teaches this and other subjects in much more detail, drawing on years of experience. You will learn how to adjust to an array of different circumstances and, ultimately, maintain data integrity even in worst-case scenarios.