
Managing Data as an Asset


Ecologist Garrett Hardin warned of what happens to assets that aren’t managed

Data professionals often talk about the importance of managing data and information as organizational assets, but what does this mean? What is the actual business value of data and information? How can this value be measured? How do we manage data and information as assets? How exactly should data be managed to maximize its value?

How Do We Manage Data Assets?

Data and information are organizational assets, and as such they need to be managed at an organizational level. Each business unit can’t have its own “truth”; that would be like each state or province in a country having its own currency. But how do we manage them in a way that creates value?

What often happens in organizations is that business people collect and hoard data from wherever they can get it (in Excel spreadsheets and Access databases) like squirrels gathering nuts for the winter. They manipulate and filter the data in various unknown ways to suit their individual purposes, then they often share this data across the organization where it is used in ways that may be inappropriate, or downright dangerous. Over time, this disparate and low-quality data can cripple an organization’s ability to make correct decisions, or to respond effectively to new business challenges. Think of moles, whose activities can cause much destruction to people’s lawns and gardens. The moles don’t do this intentionally; they are only trying to build homes and feed their families. But the ways in which they try to meet their own needs can produce devastating results! 

In the book Growing Business Intelligence,[i] the author applies two fundamental laws of economics to the management of data and information. Gresham’s Law, familiar to most people, states that eventually bad currencies drive good currencies out of circulation. But there’s a corollary to Gresham’s Law, called Thiers’ Law, which says that Gresham’s Law applies only to “fiat currencies”; that is, in cases where the government (or some similar authority) decrees that both currencies have the same value. For example, if the government decrees that a copper-and-nickel based coin with a silver coating has the same value as a solid silver coin, people will hoard the more valuable coin and keep the less valuable coin in circulation. The “bad money” drives the “good money” out of circulation. But if people were allowed to place their own valuation on the coins, they would prefer to trade with the more valuable coin, and thus the “good money” would drive the “bad money” out of circulation.

Why is this important? Because Gresham’s Law and Thiers’ Law apply to data and information as well as to currency! If bad data and bad information are regarded as no better or no worse than good data and good information, then disinformation will eventually win out (if for no other reason than it’s easier, faster, and cheaper to get and use bad data). But if good data and good information are regarded as more valuable, then good information will drive out disinformation.

This means that we need to create data and information assets that are regarded as more valuable and useful than the “bad currency” of locally-controlled Excel and Access data, and make these assets quickly and easily available across the organization.

So the question is: how do we create a “good currency” of high-quality, business-relevant, reusable data that can drive the “bad currency” of Excel and Access data out of circulation (or at least keep it under control)?

Tips for Managing Data as an Asset

First, define (i.e., model) data assets at as high a level in the organization as possible. Identify which data entities and attributes, and which business rules, pertain to the organization as a whole. Identify which data assets are canonical (that is, they span multiple business domains), and which pertain only to certain business domains or subdomains. There is a current approach to business intelligence called Data Mesh, in which all data is defined at the Domain (i.e., business subject area) level, and the results of analytics (called Data Products) are created and published at that level. The problem with this approach is that much of an organization’s data spans multiple business domains and needs to be defined consistently across the organization to be useful. Similarly, it is important to know whether the results of analytics are applicable across the entire organization, or only to a particular division or business unit.
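One lightweight way to make the canonical-versus-domain distinction explicit — a hypothetical sketch, not any particular tool’s API — is to tag each data entity with the business domains it serves and treat anything spanning more than one domain as canonical:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: classify each data entity as canonical
# (spanning multiple business domains) or domain-specific.
@dataclass
class DataEntity:
    name: str
    domains: set = field(default_factory=set)

    @property
    def is_canonical(self) -> bool:
        # Canonical entities span more than one business domain,
        # so they must be defined consistently across the organization.
        return len(self.domains) > 1

customer = DataEntity("Customer", {"Sales", "Finance", "Support"})
invoice_line = DataEntity("InvoiceLine", {"Finance"})

print(customer.is_canonical)      # spans three domains -> canonical
print(invoice_line.is_canonical)  # one domain -> domain-specific
```

An inventory like this makes it easy to see which definitions must be negotiated organization-wide and which can safely be owned by a single domain.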

Second, data needs to be managed for quality, timeliness, consistency, reusability, and business relevance. This may mean, for example, managing enterprise-level data assets in a Master Data Management (MDM) catalog and publishing this data across the organization. It may also involve maintaining a common repository (e.g., an enterprise data warehouse or something similar) where organizational data assets and data products can be managed for consumption and reuse.  
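The MDM idea can be illustrated with a minimal sketch (hypothetical class and method names; a real MDM tool applies per-attribute survivorship rules and far richer matching): one published “golden record” per business key, available for reuse across the organization.

```python
# Hypothetical sketch of a Master Data Management (MDM) catalog:
# one "golden record" per business key, published for consumption and reuse.
class MdmCatalog:
    def __init__(self):
        self._golden = {}

    def publish(self, key: str, record: dict, source: str) -> None:
        # Simplified: the latest trusted source wins outright. Real MDM
        # tools merge attribute-by-attribute using survivorship rules.
        self._golden[key] = {**record, "_source": source}

    def lookup(self, key: str):
        # Consumers read the golden record instead of hoarding local copies.
        return self._golden.get(key)

catalog = MdmCatalog()
catalog.publish("CUST-001", {"name": "Acme Corp", "country": "US"}, source="CRM")
print(catalog.lookup("CUST-001")["name"])  # prints "Acme Corp"
```

The point of the design is the single write path: every consumer resolves a business key against the same published record, rather than against a private spreadsheet extract.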

Decades ago, an ecologist named Garrett Hardin published an essay called “The Tragedy of the Commons”, showing what happens to assets that anybody can use but that nobody manages or maintains. Those assets become corrupted and eventually fall into disrepair and disuse.

Third, make sure there is a formal process for creating, maintaining, using, and publishing data and information assets. This is called Data Governance and is essentially a set of rules established by the business governing how people should behave with respect to data and information (remember what I said earlier about asset management!). Data Governance can be effectively implemented at the business domain level, with guidance and supervision from higher levels of the business. This fits in well with the Data Mesh approach,[ii] and also with Robert Seiner’s “Non-Invasive” approach to Data Governance.[iii]

Fourth, don’t forget about metadata! The purpose of metadata is not simply to describe data and information assets, but rather to proactively answer questions that consumers might have about them. Where did this data come from? How up-to-date is it? How trustworthy is it? What business process(es) created it? What business process(es) use it? What transformations or filtering have been applied to this data, and why? What is the business meaning of this data? What is its value to the business? What business purposes can this data be used for? What can’t this data be used for? Use metadata to maintain the transparency of data and information assets across the organization and ensure that these assets can be easily found, used and trusted.
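As a sketch of this idea (field names are hypothetical, not any standard’s schema), a metadata record can be structured so that each field proactively answers one of the consumer questions above:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: a metadata record whose fields answer the
# questions a data consumer would otherwise have to ask.
@dataclass
class AssetMetadata:
    asset_name: str
    source_system: str          # Where did this data come from?
    last_refreshed: date        # How up-to-date is it?
    producing_process: str      # What business process created it?
    transformations: list       # What transforms/filtering were applied, and why?
    business_meaning: str       # What is the business meaning of this data?
    approved_uses: list         # What business purposes can it be used for?
    prohibited_uses: list = field(default_factory=list)  # What can't it be used for?

meta = AssetMetadata(
    asset_name="Customer",
    source_system="CRM",
    last_refreshed=date(2024, 1, 15),
    producing_process="Customer onboarding",
    transformations=["Deduplicated on tax ID, to yield one record per customer"],
    business_meaning="A party with whom we have an active sales relationship",
    approved_uses=["Sales reporting", "Churn analysis"],
    prohibited_uses=["Marketing email without recorded consent"],
)
print(meta.last_refreshed.isoformat())  # prints "2024-01-15"
```

Published alongside the asset itself, a record like this lets consumers judge trustworthiness and fitness for purpose without tracking down the data’s producers.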

Fifth, make sure that data and information assets are published and accessible across the organization, and make sure that people know where and how to find them. Educate users on where and how to find good data, how to tell good data from bad data, how to avoid common data usage errors, how to determine when the results of analyses may be incomplete or incorrect, and how to report data errors and problems for quick resolution. Also, make sure that less-trustworthy copies of the data are identified and deprecated. 
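Findability and deprecation can work together, as in this hypothetical catalog-search sketch: trusted assets surface first, and deprecated copies are still discoverable but clearly flagged so users aren’t misled.

```python
# Hypothetical sketch: a catalog search that steers users toward
# trusted assets and flags less-trustworthy, deprecated copies.
assets = [
    {"name": "customers_v2.xlsx", "trusted": False, "deprecated": True},
    {"name": "customer_master",   "trusted": True,  "deprecated": False},
]

def find_assets(term: str) -> list:
    # Trusted, non-deprecated assets sort first (False < True).
    hits = [a for a in assets if term.lower() in a["name"].lower()]
    return sorted(hits, key=lambda a: (a["deprecated"], not a["trusted"]))

for a in find_assets("customer"):
    flag = " (DEPRECATED)" if a["deprecated"] else ""
    print(a["name"] + flag)
# prints:
#   customer_master
#   customers_v2.xlsx (DEPRECATED)
```

Keeping deprecated copies visible (rather than silently deleting them) gives users of the old extract a pointer to the trusted replacement.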


Sixth, take an iterative (i.e., Agile) approach to data management and BI. Don’t try to boil the entire ocean at once. Take direction from the business as to which data and information assets are most important to the organization, and create a workable process that can be executed iteratively to improve the data, its metadata, and the data governance process over time.

[i] Burns, Larry. Growing Business Intelligence (Technics Publications, 2016).

[ii] Burns, Larry. “Domain-Driven Development, Part 4: Data Mesh and Data as a Product”, September 21, 2022.

[iii] Seiner, Robert S. Non-Invasive Data Governance (Technics Publications, 2014).


Larry Burns

Larry Burns is a specialist in the field of data and database management, with a career as a data architect, data modeler, database developer, consultant, and teacher. He has developed an extensive repertoire of tools, techniques, and expertise that enables businesses to reuse their data and derive maximum value from their databases more easily. Larry has an extensive background in application development and has contributed significantly to the success of many major projects. He has also been involved in teaching, lecturing, and writing on various topics of database management and application development, particularly Agile development.

Larry is the author of the books "Building The Agile Database" (Technics Publications, 2011), "Growing Business Intelligence" (Technics Publications, 2016) and "Data Model Storytelling" (Technics Publications, 2021) and has been an instructor and advisor in the University of Washington's certificate program in Data Resource Management.

Larry holds degrees from the University of Washington (B.S.), Seattle University (M.S.E), and certification as a data management professional.

© Since 1997 to the present – Enterprise Warehousing Solutions, Inc. (EWSolutions). All Rights Reserved
