Big spatial data is characterised by three main features: Volume beyond the limits of usual geo-processing, Velocity higher than usual processes can handle, and Variety, combining more diverse geodata sources than usual. The popular term denotes a situation in which one or more of these key properties reaches a state where traditional methods of geodata collection, storage, processing, control, analysis, modelling, validation and visualisation fail to provide effective solutions. Entering the era of big spatial data requires solving all the "small data" issues that will otherwise soon grow into "big data" troubles.

Resilience for big spatial data means addressing:
- heterogeneity of spatial data sources (in topic, purpose, completeness, guarantee, licensing, coverage, etc.),
- large volumes (from gigabytes to terabytes and beyond),
- undue complexity of geo-applications and systems (e.g. combinations of standalone applications with web services, mobile platforms and sensor networks),
- neglected automation of geodata preparation (e.g. harmonisation and fusion),
- insufficient control of geodata collection and distribution processes (e.g. scarcity and poor quality of metadata and metadata systems),
- limited capacity of analytical tools (e.g. the dominance of traditional causality-driven analysis),
- low performance of visualisation systems, and
- inefficient knowledge-discovery techniques for condensing vast amounts of information into concise, essential outputs.

These trends will only accelerate as sensors become ever more ubiquitous in the world.
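The harmonisation and fusion step mentioned above (mapping heterogeneous source schemas onto one common data model) can be sketched minimally in Python. All field names, the two sample records, and the unit conversion below are illustrative assumptions, not a real dataset's schema:

```python
# Hypothetical sketch: two point-observation sources with different
# field names and units are mapped onto one common schema before fusion.

def harmonise(record, mapping, transforms=None):
    """Rename fields via `mapping` and apply optional per-field transforms."""
    transforms = transforms or {}
    out = {}
    for src_key, dst_key in mapping.items():
        value = record[src_key]
        if dst_key in transforms:
            value = transforms[dst_key](value)
        out[dst_key] = value
    return out

# Source A reports coordinates in degrees and temperature in Celsius.
a = {"lat": 50.08, "lon": 14.42, "temp_c": 21.5}
# Source B uses different field names and Fahrenheit.
b = {"y": 48.15, "x": 17.11, "temperature_f": 70.7}

common_a = harmonise(a, {"lat": "lat", "lon": "lon", "temp_c": "temp_c"})
common_b = harmonise(
    b,
    {"y": "lat", "x": "lon", "temperature_f": "temp_c"},
    transforms={"temp_c": lambda f: round((f - 32) * 5 / 9, 1)},
)

fused = [common_a, common_b]  # both records now share one schema
```

In practice this mapping would also cover coordinate reference systems, timestamps and metadata, which is exactly the preparation work the text argues is too often left unautomated.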