ERDDAP: Heavy Loads, Grids, Clusters, Federations, and Cloud Computing

ERDDAP™ is a web application and a web service that aggregates scientific data from diverse local and remote sources and offers a simple, consistent way to download subsets of the data in common file formats and make graphs and maps. This web page discusses issues related to heavy ERDDAP™ usage loads and explores possibilities for dealing with extremely heavy loads via grids, clusters, federations, and cloud computing.

The original version was written in June 2009. There have been no significant changes. This was last updated 2019-04-15.

DISCLAIMER

The contents of this web page are Bob Simons' personal opinions and do not necessarily reflect any position of the Government or the National Oceanic and Atmospheric Administration. The calculations are simplistic, but I think the conclusions are correct. Did I use faulty logic or make a mistake in my calculations? If so, the fault is mine alone. Please send an email with the correction to erd dot data at noaa dot gov.
 

Heavy Loads / Constraints

With heavy use, a standalone ERDDAP™ will be constrained (from most to least likely) by:
  1. A remote data source's bandwidth — Even with an efficient connection (e.g., via OPeNDAP), unless a remote data source has a very high bandwidth Internet connection, ERDDAP's responses will be constrained by how fast ERDDAP™ can get data from the data source. A solution is to copy the dataset onto ERDDAP's hard drive, perhaps with EDDGridCopy or EDDTableCopy.
     
  2. ERDDAP's server's bandwidth — Unless ERDDAP's server has a very high bandwidth Internet connection, ERDDAP's responses will be constrained by how fast ERDDAP™ can get data from the data sources and how fast ERDDAP™ can return data to the clients. The only solution is to get a faster Internet connection.
     
  3. Memory — If there are many simultaneous requests, ERDDAP™ can run out of memory and temporarily refuse new requests. (ERDDAP™ has a couple of mechanisms to avoid this and to minimize the consequences if it does happen.) So the more memory in the server, the better. On a 32-bit server, 4+ GB is really good, 2 GB is okay, and less is not recommended. On a 64-bit server, you can almost entirely avoid the problem by getting lots of memory. See the -Xmx and -Xms settings for ERDDAP/Tomcat (a sketch appears after this list). An ERDDAP™ getting heavy usage on a 64-bit server with 8 GB of memory and -Xmx set to 4000M is rarely, if ever, constrained by memory.
     
  4. Hard drive bandwidth — Accessing data stored on the server's hard drive is vastly faster than accessing remote data. Even so, if the ERDDAP™ server has a very high bandwidth Internet connection, it is possible that accessing data on the hard drive will be a bottleneck. A partial solution is to use faster (e.g., 10,000 RPM) magnetic hard drives or SSD drives (if it makes sense cost-wise). Another solution is to store different datasets on different drives, so that the cumulative hard drive bandwidth is much higher.
     
  5. Too many files in a cache directory — ERDDAP™ caches all images, but only caches the data for certain types of data requests. It is possible for a dataset's cache directory to temporarily hold a large number of files, which slows down checking whether a given file is already in the cache (really!). <cacheMinutes> in setup.xml lets you set how long a file can stay in the cache before it is deleted (see the sketch after this list); setting a smaller number minimizes this problem.
     
  6. CPU — Only two things take a lot of CPU time:
    • NetCDF 4 and HDF 5 now support internal compression of data. Decompressing a large compressed NetCDF 4 / HDF 5 data file can take 10 or more seconds. (That's not an implementation fault. It's the nature of compression.) So, multiple simultaneous requests to datasets with data stored in compressed files can put a severe strain on any server. If this is a problem, the solution is to store popular datasets in uncompressed files, or get a server with a CPU with more cores.
    • Making graphs (including maps): roughly 0.2 - 1 second per graph. So if there were many simultaneous unique requests for graphs (WMS clients often make 6 simultaneous requests!), there could be a CPU limitation. When multiple users are running WMS clients, this becomes a problem.
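A concrete illustration of items 3 and 5 above: the JVM memory limits are normally set where Tomcat is started, and the cache lifetime is set in ERDDAP's setup.xml. The following is only a sketch; the exact file, variable name, and values depend on your installation (4000M simply echoes the example in item 3, and 60 minutes is just an illustrative cache lifetime).

    # In Tomcat's bin/setenv.sh (or wherever your installation sets JVM options):
    # -Xms and -Xmx set the initial and maximum Java heap size available to ERDDAP.
    export JAVA_OPTS="-Xms4000M -Xmx4000M"

    <!-- In ERDDAP's setup.xml: delete cached files after this many minutes.     -->
    <!-- A smaller value keeps a dataset's cache directory from accumulating too -->
    <!-- many files at once.                                                     -->
    <cacheMinutes>60</cacheMinutes>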
       

Multiple Identical ERDDAPs with Load Balancing? No

The question often comes up: "To deal with heavy loads, can I set up multiple identical ERDDAPs with load balancing?" It's an interesting question because it quickly gets to the core of ERDDAP's design. The quick answer is "no". I know that is a disappointing answer, but there are a couple of direct reasons and some larger fundamental reasons why I designed ERDDAP™ to use a different approach (a federation of ERDDAPs, described in the bulk of this document), which I believe is a better solution.

Some direct reasons why you can't/shouldn't set up multiple identical ERDDAPs are:

Yes, for each of those problems, I could (with great effort) engineer a solution (to share the information between ERDDAPs), but I think the federation-of-ERDDAPs approach (described in the bulk of this document) is a much better overall solution, partly because it deals with other problems that the multiple-identical-ERDDAPs-with-a-load-balancer approach does not even start to address, notably the decentralized nature of the data sources in the world.

It's best to accept the simple fact that I didn't design ERDDAP™ to be deployed as multiple identical ERDDAPs with a load balancer. I consciously designed ERDDAP™ to work well within a federation of ERDDAPs, which I believe has many advantages. Notably, a federation of ERDDAPs is perfectly aligned with the decentralized, distributed system of data centers that we have in the real world (think of the different IOOS regions, the different CoastWatch regions, the different parts of NCEI, the 100 other data centers in NOAA, the different NASA DAACs, or the thousands of data centers throughout the world). Instead of telling all the data centers of the world that they need to abandon their efforts and put all their data in a centralized "data lake" (even if that were possible, it is a horrible idea for numerous reasons -- see the various analyses of the advantages of decentralized systems), ERDDAP's design works with the world as it is. Each data center that produces data can continue to maintain, curate, and serve its data (as it should), and yet, with ERDDAP™, the data can also be instantly available from a centralized ERDDAP™, without the need to transmit the data to the centralized ERDDAP™ or store duplicate copies of the data. Indeed, a given dataset can be simultaneously available:
  • from an ERDDAP™ at the organization that produced and actually stores the data (e.g., GoMOOS),
  • from an ERDDAP™ at the parent organization (e.g., IOOS central),
  • from an all-NOAA ERDDAP™,
  • from an all-US-federal-government ERDDAP™,
  • from a global ERDDAP™ (GOOS),
  • and from specialized ERDDAPs (e.g., an ERDDAP™ at an institution devoted to HAB research),
all essentially instantaneously, and efficiently, because only the metadata is transferred between ERDDAPs, not the data. Best of all, after the initial ERDDAP™ at the originating organization, all of the other ERDDAPs can be set up quickly (a few hours of work), with minimal resources (one server that doesn't need any RAIDs for data storage since it stores no data locally), and thus at truly minimal cost. Compare that to the cost of setting up and maintaining a centralized data center with a data lake and the need for a truly massive, truly expensive Internet connection, plus the attendant problem of the centralized data center being a single point of failure. To me, ERDDAP's decentralized, federated approach is far, far superior.

In situations where a given data center needs multiple ERDDAPs to meet high demand, ERDDAP's design is fully capable of matching or exceeding the performance of the multiple-identical-ERDDAPs-with-a-load-balancer approach. You always have the option of setting up multiple composite ERDDAPs (as discussed below), each of which gets all of its data from other ERDDAPs, without load balancing. In this case, I recommend giving each of the composite ERDDAPs a different name/identity and, if possible, setting them up in different parts of the world (e.g., in different AWS regions: ERD_US_East, ERD_US_West, ERD_IE, ERD_FR, ERD_IT), so that users consciously, repeatedly work with a specific ERDDAP™, with the added benefit that you have removed the risk of a single point of failure.
 


Grids, Clusters, and Federations

Under very heavy use, a single standalone ERDDAP™ will run into one or more of the constraints listed above and even the suggested solutions will be insufficient. For such situations, ERDDAP™ has features that make it easy to construct scalable grids (also called clusters or federations) of ERDDAPs which allow the system to handle very heavy use (e.g., for a large data center).

I'm using grid as a general term to indicate a type of computer cluster where all of the parts may or may not be physically located in one facility and may or may not be centrally administered. An advantage of co-located, centrally owned and administered grids (clusters) is that they benefit from economies of scale (especially for the human workload) and make it simpler to get the parts of the system to work well together. An advantage of non-co-located, non-centrally owned and administered grids (federations) is that they distribute the human workload and the cost, and may provide some additional fault tolerance. The solution I propose below works well for all grid, cluster, and federation topologies.

The basic idea of designing a scalable system is to identify the potential bottlenecks and then design the system so that parts of the system can be replicated as needed to alleviate the bottlenecks. Ideally, each replicated part increases the capacity of that part of the system linearly (efficiency of scaling). The system isn't scalable unless there is a scalable solution for every bottleneck. Scalability is different from efficiency (how quickly a task can be done — efficiency of the parts). Scalability allows the system to grow to handle any level of demand. Efficiency (of scaling and of the parts) determines how many servers, etc., will be needed to meet a given level of demand. Efficiency is very important, but always has limits. Scalability is the only practical solution to building a system that can handle very heavy use. Ideally, the system will be scalable and efficient.

The goals of this design are:

Our recommendations are:
[Figure: ERDDAP™ grid/cluster diagram]

The parts of the grid are:

A) For every remote data source that has a high-bandwidth OPeNDAP server, you can connect directly to the remote server. If the remote server is an ERDDAP™, use EDDGridFromErddap or EDDTableFromErddap to serve the data in the composite ERDDAP™. If the remote server is some other type of DAP server, e.g., THREDDS, Hyrax, or GrADS, use EDDGridFromDap.
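For example, a remote ERDDAP's gridded dataset can be added to the composite ERDDAP™ with a small entry in the composite ERDDAP's datasets.xml. This is a minimal sketch; the datasetID and sourceUrl are hypothetical placeholders, and a real dataset may need additional settings.

    <!-- In the composite ERDDAP's datasets.xml. The composite ERDDAP gets the  -->
    <!-- metadata from the remote ERDDAP and redirects each data request to it, -->
    <!-- so no data is stored or transferred locally.                           -->
    <dataset type="EDDGridFromErddap" datasetID="remoteSST" active="true">
        <sourceUrl>https://remote-erddap.example.gov/erddap/griddap/remoteSST</sourceUrl>
    </dataset>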

B) For every ERDDAP-able data source (a data source from which ERDDAP can read data) that has a high-bandwidth server, set up another ERDDAP™ in the grid which is responsible for serving the data from this data source.

C) For every ERDDAP-able data source that has a low-bandwidth server (or is a slow service for other reasons), consider setting up another ERDDAP™ and storing a copy of the dataset on that ERDDAP's hard drives, perhaps with EDDGridCopy and/or EDDTableCopy. If several such ERDDAPs aren't getting many requests for data, you can consolidate them into one ERDDAP™. These C servers must be publicly accessible.
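As a sketch of that copying approach (the datasetIDs and URL are hypothetical, and a real definition may need more settings), an EDDGridCopy dataset in datasets.xml encloses the definition of the source dataset and maintains a local copy of its data:

    <!-- In this ERDDAP's datasets.xml. EDDGridCopy makes and maintains a local -->
    <!-- copy of the enclosed source dataset's data and serves data from the    -->
    <!-- local copy.                                                            -->
    <dataset type="EDDGridCopy" datasetID="localCopySST" active="true">
        <dataset type="EDDGridFromErddap" datasetID="localCopySST_source">
            <sourceUrl>https://slow-source.example.org/erddap/griddap/sourceSST</sourceUrl>
        </dataset>
    </dataset>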

D) The composite ERDDAP™ is a regular ERDDAP™ except that it just serves data from other ERDDAPs.

Datasets In Very High Demand — In the really unusual case that one of the A, B, or C ERDDAPs can't keep up with the requests because of bandwidth or hard drive limitations, it makes sense to copy the data (again) onto another server+hardDrive+ERDDAP™, perhaps with EDDGridCopy and/or EDDTableCopy. While it may seem ideal to have the original dataset and the copied dataset appear seamlessly as one dataset in the composite ERDDAP™, this is difficult because the two datasets will be in slightly different states at different times (notably, after the original gets new data, but before the copied dataset gets its copy). Therefore, I recommend that the datasets be given slightly different titles (e.g., "... (copy #1)" and "... (copy #2)", or perhaps "(mirror #n)" or "(server #n)") and appear as separate datasets in the composite ERDDAP™. Users are used to seeing lists of mirror sites at popular file download sites, so this shouldn't surprise or disappoint them. Because of bandwidth limitations at a given site, it may make sense to have the mirror located at another site. If the mirror copy is at a different data center, accessed just by that data center's composite ERDDAP™, the different titles (e.g., "mirror #1") aren't necessary.
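If you do give the copies distinct titles, the title can be overridden in datasets.xml with an addAttributes entry. A hedged sketch (the title text is illustrative, and exactly where the addAttributes goes depends on the dataset type used for the mirror):

    <!-- Inside the mirror dataset's definition in datasets.xml: -->
    <addAttributes>
        <att name="title">Sea Surface Temperature (mirror #2)</att>
    </addAttributes>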

RAIDs versus Regular Hard Drives — If a large dataset or a group of datasets is not heavily used, it may make sense to store the data on a RAID, since a RAID offers fault tolerance and you don't need the processing power or bandwidth of another server. But if a dataset is heavily used, it may make more sense to copy the data onto another server + ERDDAP™ + hard drive (similar to what Google does) rather than use one server and a RAID to store multiple datasets, since you get to use both server+hardDrive+ERDDAPs in the grid until one of them fails.

Failures — What happens if...

Simple, Scalable — This system is easy to set up and administer, and easily extensible when any part of it becomes over-burdened. The only real limitations for a given data center are the data center's bandwidth and the cost of the system.

Bandwidth — Note the approximate bandwidth of commonly used components of the system:

Component             Approximate Bandwidth (GBytes/s)
DDR memory            2.5
SSD drive             1
SATA hard drive       0.3
Gigabit Ethernet      0.1
OC-12                 0.06
OC-3                  0.015
T1                    0.0002

So, one SATA hard drive (0.3 GB/s) on one server with one ERDDAP™ can probably saturate a Gigabit Ethernet LAN (0.1 GB/s). And one Gigabit Ethernet LAN (0.1 GB/s) can probably saturate an OC-12 Internet connection (0.06 GB/s). And at least one source lists OC-12 lines as costing about $100,000 per month. (Yes, these calculations are based on pushing the system to its limits, which is not good because it leads to very sluggish responses. But these calculations are useful for planning and for balancing the parts of the system.) Clearly, a suitably fast Internet connection for your data center is by far the most expensive part of the system. You can easily and relatively cheaply build a grid with a dozen servers running a dozen ERDDAPs which is capable of pumping out lots of data quickly, but a suitably fast Internet connection will be very, very expensive. The partial solutions are:

Note that Cloud Computing and web hosting services offer all the Internet bandwidth you need, but don't solve the price problem.

For general information on designing scalable, high-capacity, fault-tolerant systems, see Michael T. Nygard's book Release It!

Like Legos — Software designers often try to use good software design patterns to solve problems. Good patterns are good because they encapsulate good, easy-to-create, easy-to-work-with, general-purpose solutions that lead to systems with good properties. Pattern names are not standardized, so I'll call the pattern that ERDDAP™ uses the Lego Pattern. Each Lego (each ERDDAP™) is a simple, small, standard, stand-alone brick (data server) with a defined interface that allows it to be linked to other Legos (ERDDAPs). The parts of ERDDAP™ that make up this system are: the subscription and flagURL systems (which allow for communication between ERDDAPs), the EDD...FromErddap redirect system, and the system of RESTful requests for data, which can be generated by users or by other ERDDAPs. Thus, given two or more Legos (ERDDAPs), you can create a huge number of different shapes (network topologies of ERDDAPs).

Sure, the design and features of ERDDAP™ could have been done differently, not Lego-like, perhaps just to enable and optimize one specific topology. But we feel that ERDDAP's Lego-like design offers a good, general-purpose solution that enables any ERDDAP™ administrator (or group of administrators) to create all kinds of different federation topologies. For example, a single organization could set up three (or more) ERDDAPs as shown in the ERDDAP™ Grid/Cluster Diagram above. Or a distributed group (IOOS? CoastWatch? NCEI? NWS? NOAA? USGS? DataONE? NEON? LTER? OOI? BODC? ONC? JRC? WMO?) could set up one ERDDAP™ in each small outpost (so the data can stay close to the source) and then set up a composite ERDDAP™ in the central office with virtual datasets (which are always perfectly up-to-date) from each of the small outpost ERDDAPs. Indeed, all of the ERDDAPs installed at various institutions around the world which get data from other ERDDAPs and/or provide data to other ERDDAPs form a giant network of ERDDAPs. How cool is that?! So, as with Legos, the possibilities are endless. That's why this is a good pattern. That's why this is a good design for ERDDAP™.
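For reference, the RESTful data requests mentioned above are ordinary URLs, so they look the same whether they come from a user, a script, or another ERDDAP™. A hypothetical griddap example (the server, datasetID, variable, and axis ranges are placeholders for a dataset with time, latitude, and longitude axes):

    https://your-erddap.example.org/erddap/griddap/someDatasetID.nc?someVariable[(2019-04-15T00:00:00Z)][(30):(40)][(-130):(-120)]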

Different Types Of Requests — One of the real-life complications of this discussion of data server topologies is that there are different types of requests and different ways to optimize for the different types of requests. This is mostly a separate issue (How fast can the ERDDAP™ with the data respond to the request for data?) from the topology discussion (which deals with the relationships between data servers and which server has the actual data). ERDDAP™, of course, tries to deal with all types of requests efficiently, but handles some better than others.

These are my opinions.

Yes, the calculations are simplistic (and now slightly dated), but I think the conclusions are correct. Did I use faulty logic or make a mistake in my calculations? If so, the fault is mine alone. Please send an email with the correction to erd dot data at noaa dot gov.
 

Cloud Computing

Several companies offer cloud computing services (e.g., Amazon Web Services and Google Cloud Platform). Web hosting companies have offered simpler services since the mid-1990s, but the "cloud" services have greatly expanded the flexibility of the systems and the range of services offered. Since the ERDDAP™ grid just consists of ERDDAPs, and since ERDDAPs are Java web applications that can run in Tomcat (the most common application server) or other application servers, it should be relatively easy to set up an ERDDAP™ grid on a cloud service or web hosting site.

The advantages of these services are:

The disadvantages of these services are:

Hosted Data — An alternative to the above cost-benefit analysis (which is based on the data owner, e.g., NOAA, paying for their data to be stored in the cloud) arrived around 2012, when Amazon (and, to a lesser extent, some other cloud providers) started hosting some datasets in their cloud (AWS S3) for free, presumably with the hope that they could recover their costs if users rent AWS EC2 compute instances to work with that data. Clearly, this makes cloud computing vastly more cost effective, because the time and cost of uploading and hosting the data are now zero. ERDDAP™ v2.0 added new features to facilitate running ERDDAP™ in a cloud:

These changes solve the problem of AWS S3 not offering local, block-level file storage and the (old) problem of access to S3 data having a significant lag. (Years ago, ~2014, that lag was significant, but it is now much shorter and so not as significant.) All in all, it means that setting up ERDDAP™ in the cloud works much better now.

Thanks — Many thanks to Matthew Arrott and his group in the original OOI effort for their work on putting ERDDAP™ in the cloud and the resulting discussions.
 


Remote Replication of Datasets

There is a common problem related to the above discussion of grids and federations of ERDDAPs: remote replication of datasets. The basic problem is: a data provider maintains a dataset that changes occasionally, and a user wants to maintain an up-to-date local copy of this dataset (for any of a variety of reasons). There are a huge number of variations of this problem, depending on the types of changes the source dataset undergoes and on the user's needs and expectations, and some variations are much harder to deal with than others. The best solution for one situation is often not the best solution for another situation — there isn't yet a universal, great solution.

Relevant ERDDAP™ Tools

ERDDAP™ offers several tools which can be used as part of a system which seeks to maintain a remote copy of a dataset:

Solutions

Although there are a huge number of variations of the problem and an infinite number of possible solutions, there are just a handful of basic approaches:

While there is no single, simple solution that perfectly solves all the problems in all scenarios (though rsync and Distributed Data come close), hopefully there are sufficient tools and options that you can find an acceptable solution for your particular situation.
 

Contact Information


Questions, comments, suggestions? Please send an email to erd dot data at noaa dot gov and include the ERDDAP™ URL directly related to your question or comment.
 

