The path that Earth observation data travel, from the European Space Agency (ESA) Sentinel satellites all the way down to the end user, is a complex one: it starts at the satellite, involves receiving stations as far away as Svalbard, and includes distributed processing centres for data calibration and geocoding, all under the constraint that the data must be delivered to the user communities as fast as possible, so that they can run models and make predictions about environmental effects. GRNET participates in this effort as one of the service/content distribution points for ESA's Earth observation data, contributing to the European Union's flagship Copernicus programme.

Taking part in the network design and operation process, we had to ensure that the data would be transferred fast enough from the main service distribution DC, located in central Europe, to our DC in Greece, which acts as a complementary DC. From a capacity planning perspective, we knew that our WAN and our upstream peerings to the pan-European research and education network (GÉANT) could achieve speeds of many gigabits per second. Yet when the testing period started, the transfer speed was not adequate, and the queue of datasets (files) waiting to be transferred to our DC, many terabytes in total, kept growing day by day.

The problem was easy to see but hard to diagnose. One of our first thoughts, that the geographical distance (propagation delay, in networking terms) was throttling TCP, was indeed the source of the problem, but an invisible hunter, testing our skills, kept drawing a red herring across the path.
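To see why distance alone can cap TCP throughput, consider that a single TCP flow can have at most one window of unacknowledged data in flight per round trip. The sketch below computes that ceiling; the window size and round-trip time are illustrative assumptions, not measurements from the transfer described above.

```python
def tcp_throughput_ceiling(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on single-flow TCP throughput, in bits per second:
    at most one window of data can be in flight per round trip."""
    return window_bytes * 8 / rtt_seconds

# Illustrative numbers: a classic 64 KiB window over an assumed ~40 ms
# RTT between Greece and a central European DC.
window = 64 * 1024   # bytes in flight per round trip
rtt = 0.040          # round-trip time, seconds
ceiling = tcp_throughput_ceiling(window, rtt)
print(f"{ceiling / 1e6:.1f} Mbit/s")  # far below a multi-Gbit/s WAN link
```

With these numbers the ceiling is about 13 Mbit/s regardless of link capacity, which is why long-distance transfers need large TCP windows (window scaling) or parallel streams to fill a fast path.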