Modern data strategies for peak value and performance
By Splunk
Splunk’s data management strategies prioritise maximising the value of data, helping organisations streamline operations and optimise costs.

Just like fuel for a car, only the right type of data will help organisations invent better customer experiences, identify malicious threats, and improve processes to strengthen digital resilience. Image: Canva
In an era where data-driven decision-making is paramount, government agencies are increasingly reliant on metrics to gauge performance, identify areas for improvement, and ensure accountability.
However, the sheer volume and complexity of this data can often lead to increased costs and operational inefficiencies.
Findings from a recent Splunk survey show that over 73 per cent of respondents attributed the increased cost of data management to data volume.
Ninety-one per cent of them cited an increase in the overall spend on data management compared to the previous year.
Still, organisations put up with these circumstances not without reason.
Concerns over security and visibility tend to sway thinking towards the conventional notion of centralising data in a single stockpile, to meet organisational requirements such as retrieval obligations under legislation.
These needs form the basis of a data management strategy that helps organisations manage the lifecycle of their data, informing practices from data virtualisation to data documentation.
The sheer scale of data today also points organisations towards value creation: the process of extracting insights and turning data into something useful.
New rules of data management
Establishing robust control over organisational data is a strategic imperative for agencies to transition from reactive data management to proactive data strategies.
Data quality, data reuse, data tiering, and data federation stand out as four accessible domains that organisations can address immediately to understand what data their enterprise is generating, enabling effective cost assessment and operations.
Data quality: Data quality is a measure of how well data serves its intended purpose; elements such as accuracy, completeness, consistency, and relevance enable organisations to make informed decisions.
In the survey, 73 per cent of organisations that made data quality a priority indicated that their mean time to respond (MTTR) had improved.
Data reuse: Factors like poor data accessibility and proprietary formatting often drive data duplication and blind spots, as the same data is collected separately for multiple purposes. Reusing data offers potential cost savings by avoiding redundant collection, enhancing collaboration and data stewardship across teams, and generating new insights by combining datasets from different sources.
Organisations that reused their data reported threat detection performance 15 per cent better than that of other respondents.
Data tiering: Data tiering ranks data based on factors such as access frequency, age of the data, and usage patterns. This practice is aimed at reducing data storage costs and accelerating access times for commonly used data types (see the short sketch after the four domains).
Organisations that employed data tiering were 26 per cent less likely to encounter challenges with cost management.
Data federation: Where traditional data systems collect data into a centralised warehouse for processing and storage, federated data systems leave data distributed across multiple devices and servers, such as Internet of Things (IoT) systems, and query it where it resides.
By adopting data federation, organisations were able to access their data faster and improve data governance and their compliance posture.
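To make data tiering concrete, here is a minimal Python sketch. It is entirely hypothetical, not drawn from the survey or any Splunk product, and assigns a dataset to a hot, warm, or cold tier from two of the factors mentioned above: how recently it was written and how often it is read.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical thresholds; a real tiering policy would be tuned to an
# organisation's own access patterns and storage costs.
HOT_MAX_AGE = timedelta(days=30)
HOT_MIN_WEEKLY_ACCESSES = 10
WARM_MIN_WEEKLY_ACCESSES = 1

def assign_tier(last_written: datetime, weekly_accesses: int) -> str:
    """Pick a storage tier from how old the data is and how often it is read."""
    age = datetime.now(timezone.utc) - last_written
    if age <= HOT_MAX_AGE and weekly_accesses >= HOT_MIN_WEEKLY_ACCESSES:
        return "hot"    # fast, expensive storage for frequently used data
    if weekly_accesses >= WARM_MIN_WEEKLY_ACCESSES:
        return "warm"   # cheaper storage, still reasonably quick to query
    return "cold"       # archive-style storage for rarely touched data

# Example: a dataset written 90 days ago and read once a week lands in "warm".
print(assign_tier(datetime.now(timezone.utc) - timedelta(days=90), 1))
```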

Identifying low-hanging fruit
Splunk’s Global Observability Strategist Koray Harman demonstrated these practices at a recent webinar, where he outlined ways for organisations to streamline their operations and reduce expenditure by simplifying and archiving their metrics data.
The key, Harman stressed, is to align data collection with the agency’s specific needs, striking a balance between the level of detail and the associated costs.
This requires a shift in mindset, moving from a "collect everything" approach to a more strategic and selective one that evaluates which data is essential for decision-making.
Harman showcased tools like Metrics Pipeline Management and Metrics Usage Analytics to help in this decision-making process, offering evidence-based usage analytics and out-of-the-box tools to manage the different tiers of data.
One of the key solutions Harman raised was introducing a Metric Archive Store. This archive store is a cost-optimised storage solution for less frequently accessed or non-time-sensitive data.
The archive store offers several benefits, including cost optimisation, simplified management, and granular restoration. It can also serve as a critical part of an organisational data management workflow, safely phasing out unused data with the flexibility to restore it where needed.
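To sketch how such a phase-out and restore workflow might look in principle, the short Python example below uses plain dictionaries as stand-ins for active and archive storage. It is purely illustrative and does not use Splunk's actual APIs.

```python
# Hypothetical in-memory stand-ins for "active" and "archive" storage;
# a real deployment would call the vendor's archive tooling instead.
active_store: dict[str, list[float]] = {
    "checkout.latency": [120.0, 132.5],
    "legacy.cpu": [0.4, 0.7],
}
archive_store: dict[str, list[float]] = {}

def archive(metric_name: str) -> None:
    """Move a metric out of active (more expensive) storage into the archive."""
    archive_store[metric_name] = active_store.pop(metric_name)

def restore(metric_name: str) -> None:
    """Granular restoration: bring back a single metric when it is needed again."""
    active_store[metric_name] = archive_store.pop(metric_name)

archive("legacy.cpu")        # phase out a metric that is no longer used
restore("legacy.cpu")        # restore it later if a team asks for it
print(sorted(active_store))  # ['checkout.latency', 'legacy.cpu']
```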
Atlassian’s Principal Software Engineer Matt Ponsford, who has been working with Splunk to improve its usage analytics dashboard, joined the webinar as a guest speaker to share the software company’s experience automating the lifecycle of unused metrics.
Ponsford shared how Atlassian was in the process of automating the bulk archival of 10,000 rulesets (collections of rules that define how a system should act based on given conditions), with an estimated cost reduction of between 10 and 20 per cent.
Some criteria for archiving metrics that aligned with Atlassian’s organisational workflow and priorities, Ponsford shared, include datasets that are not utilised by any detectors and were created at least 30 days prior. The aim is to reach approximately zero unused metrics and to automate the archival process completely.
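As a rough illustration of those criteria, the Python sketch below selects metrics that are not referenced by any detector and are at least 30 days old as candidates for bulk archival. The field names are hypothetical and do not reflect Atlassian's or Splunk's actual tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Metric:
    name: str
    created_at: datetime
    detector_count: int  # how many detectors (alert rules) reference this metric

MIN_AGE = timedelta(days=30)

def archival_candidates(metrics: list[Metric]) -> list[Metric]:
    """Return metrics unused by any detector and created at least 30 days ago."""
    cutoff = datetime.now(timezone.utc) - MIN_AGE
    return [m for m in metrics if m.detector_count == 0 and m.created_at <= cutoff]

# Example: only "legacy.cpu" qualifies; "checkout.latency" is still in use.
metrics = [
    Metric("legacy.cpu", datetime.now(timezone.utc) - timedelta(days=120), 0),
    Metric("checkout.latency", datetime.now(timezone.utc) - timedelta(days=200), 3),
]
print([m.name for m in archival_candidates(metrics)])
```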
First steps in modern data management
As the demand for data is only set to increase, so too is the impetus to revise and rethink the data management rulebook.
The breakneck progress of cloud services, IoT systems, and artificial intelligence (AI) will only amplify the volume of data these solutions require.
Splunk’s survey findings posited a symbiotic relationship between data management and AI, where a strong data management strategy will be a force multiplier for AI implementation.
AI depends on quality data, so a strong data management strategy underpins the performance of a model; at the same time, AI helps fill the gaps in organisations’ data management practices by boosting productivity and automation when woven into workflows.
Moreover, modern data management strengthens cybersecurity and boosts observability and ITOps practices.
Harman concluded the webinar by offering some practical and simple steps listeners could take to get started on their journey of metric optimisation.
“The advice here is not going too big, but to use the metric analytics tools and filter on what's unused and have a look for yourself to explore those metrics that aren't being used today,” he said.
Splunk’s redefinition of a modern data strategy is akin to the ubiquitous act of spring cleaning.
To get one’s data house in order, the first steps involve classifying generated data by its quality, keeping data clean and accessible through federation, and adopting a unified platform where reuse prevents not only the duplication of data, but also the unnecessary expenditure of time and resources.