Abstraction makes remote sensing images uncertain and useless Max Locke Center 3 (U of Westminster, Sept., “An Introduction to Remote Sensing”, accessed 7-3-11, CH)
The problem with the digital representation of reality in this way is that there will always be some kind of abstraction. Reality is immensely complex and maps are generally very simple models of it. A whole sub-discipline of GIS has arisen to discuss the nature of and potential solutions to the uncertainty that arises from this fact and the uncertainty associated with the data upon which GIS maps are based. The problem is that uncertainty is the only really certain thing in cartography / GIS. And the only true strategy for dealing with it is to acknowledge its existence and make some attempt to quantify the level of error or uncertainty in an image. The best way of doing this is through metadata – roughly speaking, data about data. The idea is that when you create a map or an annotated image, you create a file that goes with it – the metadata file – which explains, for example, where the image / map came from, what was done to it and when. This allows anyone who uses the image after you to quantify the error or uncertainty associated with the image. Figure 3.4 shows a metadata file for a Landsat image from ESDI.
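The metadata idea described above ("data about data" that travels with an image) can be sketched as a small structured file. A minimal, hypothetical example in Python — the field names and values are illustrative, not the actual ESDI or FGDC metadata schema:

```python
import json

# Hypothetical metadata record for a processed satellite image.
# Field names and values are illustrative, not a real metadata standard.
metadata = {
    "source": "Landsat scene (hypothetical ID)",
    "acquired": "2000-07-15",
    "processing": [
        {"step": "orthorectification", "date": "2000-08-01"},
        {"step": "cloud masking", "date": "2000-08-02"},
    ],
    "estimated_positional_error_m": 15.0,  # quantified uncertainty
}

# Write the metadata file that accompanies the image.
with open("image_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)

# A later user reloads it to assess the image's provenance and error.
with open("image_metadata.json") as f:
    record = json.load(f)
print(record["estimated_positional_error_m"])  # 15.0
```

The point of the sketch is the workflow the card describes: every derived image carries a record of where it came from, what was done to it, and a quantified error estimate that downstream users can inspect.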
The only effective remote sensing is too expensive to use anyway Center for Transportation Research and Education 1 (May, http://www.intrans.iastate.edu/reports/RemoteSenseInvent.pdf, accessed 7-3-11, CH)
The main disadvantage of remote sensing is cost. A source at the Iowa Department of Transportation estimates that, with their in-house capability to orthorectify aerial or satellite images, "raw" digital images can practically be obtained from a commercial vendor for approximately $100 per linear mile. Costs for orthorectification were not estimated, since it is done in-house and no numbers were available for comparison. As shown in Table III-9, costs for collection of points using GPS exceed the costs of imagery, while collection of features using a 3-camera panoramic videologging van with GPS is similar per mile to the cost of acquiring imagery. Even so, collection of a significant amount of roadway would quickly become prohibitive for any of the methods shown. Videologging is much cheaper if only minimal information is desired, such as the number of signs per segment rather than their location or condition. A source at the Iowa DOT estimated that this type of videologging costs approximately $11 per mile, not including the initial cost of purchasing the van and equipment. A description of the advantages and disadvantages of other data collection methods is provided in Appendix I.
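The cost argument above is simple arithmetic: per-mile rates that look modest become prohibitive across a large network. A minimal sketch using the figures quoted in the card ($100/mile for imagery, roughly $11/mile for minimal videologging); the 10,000-mile network size is a hypothetical illustration, not a figure from the source:

```python
# Per-mile costs quoted in the text; the network length is a
# hypothetical illustration of how quickly totals scale.
IMAGERY_COST_PER_MILE = 100      # raw digital imagery, $/linear mile
VIDEOLOG_MIN_COST_PER_MILE = 11  # minimal videologging, $/mile (van excluded)

def total_cost(miles, cost_per_mile):
    """Total collection cost for a roadway network of the given length."""
    return miles * cost_per_mile

network_miles = 10_000  # hypothetical statewide roadway inventory
print(total_cost(network_miles, IMAGERY_COST_PER_MILE))       # 1000000
print(total_cost(network_miles, VIDEOLOG_MIN_COST_PER_MILE))  # 110000
```

Even the cheap option runs to six figures at scale, which is the card's point: any of these methods becomes expensive once a significant amount of roadway is covered.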
Coarse satellite resolution undermines mapping efficiency
European Communities 6 (EU commission, 1/8, http://ec.europa.eu/eurostat/ramon/statmanuals/files/KS-34-00-407-__-I-EN.pdf, accessed 7-3-11, CH)
However, the use of satellite data and the ability to detect and identify, e.g., land cover classes depends on the spectral and spatial resolution of satellite sensors. The spatial resolution determines the scale of work. Common satellite imagery enables mapping at a scale of 1:50,000 or 1:100,000. In a highly structured landscape, a spatial resolution of, e.g., 20 m × 20 m does not enable sufficient discrimination of the objects composing such an area. A consequence of the relatively coarse spatial resolution is that maps derived from satellite imagery are at scales that are not always appropriate. With new high-resolution satellite systems, such as IKONOS, this limit can be drastically reduced, enabling map production at scales up to 1:5,000.
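The link between pixel size and usable map scale can be sketched with a common cartographic rule of thumb (map scale denominator ≈ ground sample distance in metres × 2 × 1,000). This rule is an assumption added here, not something stated in the source, and it is rougher than the quoted figures, but it reproduces their order of magnitude:

```python
def scale_denominator(gsd_m, factor=2000):
    """Approximate usable map-scale denominator from ground sample
    distance (m), via the rule of thumb denominator ~ gsd * 2 * 1000.
    The factor is an assumption; agencies use differing conventions."""
    return gsd_m * factor

print(scale_denominator(20))  # 40000: same order as the quoted 1:50,000
print(scale_denominator(1))   # 2000: high-res sensors approach 1:5,000 work
```

A 20 m pixel thus lands in 1:40,000–1:50,000 territory, while metre-class sensors like IKONOS support much larger scales, which is the comparison the card draws.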
* Satellite sensors can only "see" the surface of the water. We can only surmise what is going on below.
* Only a small percentage of the original light remains after the long journey from the sun, through the atmosphere, into the ocean, and back up to the sensor. This means our satellite instruments and the equations we use to understand sensor information must be precise so we don't misinterpret the information.
* To correctly interpret data from satellite sensors, we must compare these data to "ground-truth" data. For instance, before phytoplankton pigment concentration can be derived from SeaWiFS data, we first have to measure how EM radiation changes as it interacts with bodies of water with known quantities of phytoplankton pigment. Then we can develop models that tell us how to interpret the signals we get from satellite sensors.
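The ground-truthing step in the last bullet can be sketched as a tiny calibration exercise: fit an empirical relation between a blue/green reflectance ratio and known pigment concentrations, then invert it for new satellite measurements. The log–log band-ratio form is the general shape of empirical ocean-color algorithms, but the data points, fit, and threshold values below are entirely illustrative, not real SeaWiFS calibration:

```python
import math

# Hypothetical ground-truth pairs: blue/green reflectance ratio measured
# in situ alongside known chlorophyll concentrations (mg/m^3).
ground_truth = [(5.0, 0.05), (3.0, 0.2), (2.0, 0.6), (1.2, 2.0)]

# Fit log10(chl) = a + b * log10(ratio) by ordinary least squares.
xs = [math.log10(r) for r, _ in ground_truth]
ys = [math.log10(c) for _, c in ground_truth]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = my - b * mx

def chlorophyll(ratio):
    """Estimate pigment concentration from a satellite band ratio,
    using the fitted empirical model."""
    return 10 ** (a + b * math.log10(ratio))

# The fitted model then interprets new satellite measurements.
print(round(chlorophyll(2.5), 2))
```

The negative slope reflects the physics the bullet describes: more pigment absorbs more blue light, lowering the blue/green ratio, and the fitted curve is what turns raw sensor signals into a geophysical quantity.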
Satellites can’t solve climate prediction—inaccurate data measurement Frossard et al 8 (A., A. Gomez, J. Dwyer, P. Shaw, R. Schwartz, 12/9, http://aerosols.ucsd.edu/classes/sio217a/GroupAv3paper.pdf, accessed 7-3-11, CH)
The 1960 Television Infrared Observation Satellite (TIROS-1) was the first successful meteorological satellite and led to the long-running Nimbus program. Numerous satellites have made cloud optical and physical property measurements since NASA's 1964 launch of the first Nimbus satellite (Grayzeck 2003). Remote sensing data is not exclusive to satellites and has, to some extent, been around for decades. Despite the relatively new availability of high quality satellite data, large uncertainties in the measurements make validation of global climate models problematic. Parameterization of micron-sized particles and drops onto planetary scales makes clouds and aerosols the source of largest uncertainty in predicted climate change (Solomon et al. 2007). A series of advanced satellites are now measuring properties of these two critical atmospheric constituents to constrain models. These newer satellites employ more sophisticated technologies, including LIDAR, RADAR, and highly sensitive radiometers. Satellite measurements provide global, consistent, and reliable observations that are unparalleled by ground-based instruments. The most recent and important satellite advances in the measurements of cloud albedo are reviewed, covering algorithms, cloud profilers, radiometers, and their limitations.
Satellites can’t monitor—remote sensing can’t overcome cloud coverage or track glaciers Liang & Lv 11 (Lu-Yi & Qin, Profs Comp Science, U of Colorado, 2/15, http://www.cs.colorado.edu/department/publications/reports/docs/CU-CS-1078-11.pdf, accessed 7-3-11, CH)
Supra-glacial lakes (i.e., ponds of melting water on the ice sheet) in Greenland have attracted extensive global attention in recent years. To understand the important role they play in glacier movement, sea level rise, and climate change, scientists need to learn where these lakes are, when they form, and how they change within each melting season and across multiple years. This requires detecting and tracking supra-glacial lakes both spatially and temporally. The problem is challenging due to the varying quality of the massive volume of remote sensing images, frequent cloud coverage, and the diversity and dynamics of the large number of supra-glacial lakes on the Greenland ice sheet. Previous work that uses supervised methods to detect supra-glacial lakes in individual cloud-free satellite images is limited in scale, quality, and functionality.
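The detection-under-clouds problem the card describes can be sketched with a standard water index. A minimal, hypothetical example: flag water pixels where the Normalized Difference Water Index exceeds a cutoff, skipping cloud-masked pixels. The tiny band grids, threshold, and cloud mask are all illustrative; the cited work operates on full satellite scenes with far more sophisticated methods:

```python
# Minimal sketch of water-pixel detection via a spectral index on a
# tiny synthetic band grid. Values and threshold are illustrative.

NDWI_THRESHOLD = 0.3  # assumed cutoff; tuned per sensor in practice

def ndwi(green, nir):
    """Normalized Difference Water Index: high over water, low elsewhere."""
    return (green - nir) / (green + nir)

def detect_lake_pixels(green_band, nir_band, cloud_mask):
    """Flag water pixels; cloud-covered pixels come back as None,
    mirroring the gaps that frequent cloud coverage leaves in a record."""
    flags = []
    for g_row, n_row, c_row in zip(green_band, nir_band, cloud_mask):
        flags.append([
            None if cloudy else ndwi(g, n) > NDWI_THRESHOLD
            for g, n, cloudy in zip(g_row, n_row, c_row)
        ])
    return flags

green = [[0.30, 0.05], [0.28, 0.06]]
nir   = [[0.05, 0.04], [0.20, 0.05]]
cloud = [[False, True], [False, False]]
print(detect_lake_pixels(green, nir, cloud))  # [[True, None], [False, False]]
```

The `None` entries are the crux of the card's argument: whenever clouds obscure a lake, a per-image detector simply loses it, which is why tracking lakes across a melt season from individual cloud-free images is so limited.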