The Maximal Covering Location Problem (MCLP) is a classical spatial optimization problem that plays a significant role in urban spatial computing. Because the problem is NP-hard, finding an exact solution is computationally challenging. This study proposes a deep reinforcement learning-based approach, DeepMCLP, to address MCLP. We model MCLP as a Markov Decision Process: an encoder with attention mechanisms learns the interactions between demand points and candidate facility points, the decoder outputs a probability distribution over candidate facility points, and a greedy policy selects facility points to construct a feasible solution. We apply the trained DeepMCLP model to both synthetic instances and real-world scenarios. Experimental results demonstrate that our algorithm solves MCLP effectively, achieving faster solving times than mature solvers and smaller optimality gaps than the genetic algorithm. Our algorithm offers a novel perspective on solving spatial optimization problems, and future research can explore its application to other spatial optimization problems, providing scientific and effective guidance for urban planning and urban spatial analysis.
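As a concrete illustration of the greedy construction step described above, the following minimal sketch selects facilities one at a time by maximal marginal covered demand. It uses a plain coverage-matrix heuristic rather than the trained attention encoder-decoder, and all names and data are hypothetical.

```python
# Illustrative sketch only (not the authors' trained DeepMCLP model):
# greedy selection of p facilities maximizing marginal covered demand.
import numpy as np

def greedy_mclp(cover, demand, p):
    """cover[i, j] = 1 if candidate facility j covers demand point i."""
    covered = np.zeros(cover.shape[0], dtype=bool)
    chosen = []
    for _ in range(p):
        # demand newly covered by each candidate, ignoring already-covered points
        gains = (cover[~covered].T * demand[~covered]).sum(axis=1)
        gains[chosen] = -1.0          # do not reselect an already-chosen facility
        j = int(np.argmax(gains))
        chosen.append(j)
        covered |= cover[:, j].astype(bool)
    return chosen, demand[covered].sum()

rng = np.random.default_rng(0)
cover = (rng.random((100, 20)) < 0.15).astype(float)   # 100 demand points, 20 sites
demand = rng.integers(1, 10, size=100).astype(float)
sites, covered_demand = greedy_mclp(cover, demand, p=5)
print(sites, covered_demand)
```

In the learned approach, the argmax over marginal gains is replaced by the decoder's probability distribution over candidate facility points, decoded greedily.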
Reproducibility is one of the cornerstones of science: when studies cannot be reproduced, it is hard to maintain that they contain new findings of general truth. We constrain ourselves here to the computational aspects of spatial data science, discuss the challenges posed by constantly evolving software, scientific software developer communities, upstream and downstream dependencies, and the publishing industry, report on experiences from developer communities, and look at convergence in the spatial data science software ecosystems.
This paper offers a model of the semantic content of spatial nouns as generic terms in place names (e.g. Square in Trafalgar Square) and as descriptors for places ("place nouns", e.g. street in the second street). The model is based on a variant of Frame Semantics in which different context- and community-based uses (e.g. general, daily uses; specialised uses; legal, normative uses) are modelled as sets/matrices of attribute-value pairs, or frames. The attributes forming these frames are based on data extracted from corpora (general uses), Wikipedia articles (specialised uses), and a professional geographical dictionary (legal uses) as contexts. It is shown that the uses associated with each context define frames that vary considerably in content; however, a semantic overlap relation connects these frames. Consequences for a general theory of the semantics of place and geonames are discussed.
Spatio-temporal data analysis is crucial in many research fields. However, modelling large-scale spatio-temporal data presents challenges such as high computational demands, complex correlation structures, and the separation of mixed sources. To address these issues, we are developing 4DModeller (fdmr), a robust and user-friendly R package designed to model spatio-temporal data within a Bayesian framework. The software package offers a comprehensive solution for visualizing, analyzing and modelling different types of spatio-temporal data in various disciplines. By incorporating Bayesian hierarchical models, "fdmr" allows for the flexible integration of prior knowledge and data uncertainty into the modelling process. By utilizing the Integrated Nested Laplace Approximations (INLA) algorithm and the stochastic partial differential equations (SPDE) method for model inference, "fdmr" significantly reduces the computational complexity of handling high-resolution, high-dimensional spatio-temporal data. Furthermore, "fdmr" provides intuitive and interactive visual analytics tools that facilitate the exploration of data patterns across both space and time. This paper aims to introduce the "fdmr" package and outline its core modelling framework through an example study on the spread of COVID-19 infection rates in England from 19 December 2020 to 20 March 2021.
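A generic sketch of the kind of Bayesian hierarchical spatio-temporal model fitted with INLA/SPDE may help fix ideas; the notation below is illustrative and not fdmr's exact formulation, and the likelihood, covariates, and priors depend on the application.

```latex
% Generic Bayesian hierarchical spatio-temporal model (illustrative notation)
\begin{align}
  y(s_i, t) \mid \eta(s_i, t) &\sim \pi\!\left(y \mid \eta(s_i, t), \theta\right), \\
  \eta(s_i, t) &= \mathbf{x}(s_i, t)^{\top} \boldsymbol{\beta} + u(s_i, t), \\
  u(s_i, t) &= a \, u(s_i, t-1) + \omega(s_i, t),
\end{align}
```

where the observations follow a likelihood with linear predictor eta, and omega(s, t) is a zero-mean Gaussian field with Matern spatial covariance, represented through the SPDE approximation on a triangulated mesh; INLA then provides approximate posterior inference without MCMC.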
The ubiquity of algorithmic tools and services (ATS) in spatial data science has led to increased concerns about the biases they carry. This vision paper explores the biases inherent in ATS, encompassing computational, statistical, human, and systemic biases, and those compounded by multinational corporations. It underscores the imperative to address these biases, advocating a narrative-based approach to counteract them and promote equitable outcomes. This approach not only heightens awareness of embedded biases but also charts a course toward their mitigation.
This paper explores the concept of leveraging generative AI as a mapping assistant for enhancing the efficiency of collaborative mapping. We present the results of an experiment that combines multiple sources of volunteered geographic information (VGI) and large language models (LLMs). Three analysts described the content of crowdsourced Mapillary street-level photographs taken along roads in a small test area in Miami, Florida. GPT-3.5-turbo was instructed to suggest the most appropriate tagging for each road in OpenStreetMap (OSM). The study also explores the utilization of BLIP-2, a state-of-the-art multimodal pre-training method, as an artificial analyst of street-level photographs in addition to human analysts. Results demonstrate two ways to effectively increase the accuracy of mapping suggestions without modifying the underlying AI models: by (1) providing a more detailed description of source photographs, and (2) combining prompt engineering with additional context (e.g. location and objects detected along a road). The first approach increases the accuracy of the suggestions by up to 29%, and the second one by up to 20%.
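The second approach (prompt engineering with additional context) can be sketched as below; the prompt wording, fields, and helper function are hypothetical and not the exact prompts used in the study, and the example assumes the openai Python package with an API key in the OPENAI_API_KEY environment variable.

```python
# Illustrative sketch of combining a photo description with extra context
# (location, detected objects) before asking GPT-3.5-turbo for OSM tag
# suggestions. Prompt text and fields are hypothetical.
from openai import OpenAI

client = OpenAI()

def suggest_osm_tags(description, location, detected_objects):
    prompt = (
        "You are assisting OpenStreetMap contributors.\n"
        f"Street-level photo description: {description}\n"
        f"Location: {location}\n"
        f"Objects detected along the road: {', '.join(detected_objects)}\n"
        "Suggest the most appropriate OSM tags (key=value) for this road."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(suggest_osm_tags(
    "Two-lane residential road with sidewalks and parked cars",
    "Miami, Florida",
    ["stop sign", "crosswalk", "fire hydrant"],
))
```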
Urban greenness is critical in evaluating the urban environment and living conditions, significantly affecting human well-being and house prices. Unfortunately, satellite imagery from a bird's-eye view does not fully capture urban greenness from a human-centered perspective, while human-perceived greenness from street-view images heavily relies on road networks and vehicle accessibility. In recent years, scholars have started to explore greenness measurements from a simulative perspective, among which the simulation of the Viewshed Greenness Visibility Index (VGVI) has received wide attention. However, the simulated VGVI lacks a comprehensive assessment. To fill this gap, we designed a field experiment in Fayetteville, Arkansas, collecting 360-degree panoramas in different local climate zones. Further, we segmented these panoramas via the state-of-the-art DeeplabV2 neural network to obtain the Panoramic Greenness Visibility Index (PGVI), which served as the ground truth for human-perceived greenness. We assessed the performance of VGVI by comparing it with the PGVI calculated from field-collected panoramas. The results showed that, despite the disparity in performance across local climate zones, VGVI correlates significantly with the PGVI, indicating its great potential for various domains that favor urban human-perceived greenness exposure.
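Conceptually, a panoramic greenness visibility index of this kind is the share of panorama pixels assigned to vegetation classes by the segmentation model. The minimal sketch below assumes hypothetical class IDs; the actual values depend on the label set used for segmentation.

```python
# Minimal sketch of a panoramic greenness visibility index: the fraction of
# panorama pixels labelled as vegetation by a semantic segmentation model.
import numpy as np

VEGETATION_CLASSES = {8, 9}  # hypothetical IDs for vegetation-related classes

def pgvi(label_map: np.ndarray) -> float:
    """label_map: 2D array of per-pixel class IDs for a 360-degree panorama."""
    green = np.isin(label_map, list(VEGETATION_CLASSES))
    return float(green.mean())

labels = np.random.default_rng(1).integers(0, 19, size=(512, 1024))
print(f"PGVI = {pgvi(labels):.3f}")
```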
Acceleration of urbanisation is posing great challenges to sustainable development. Growing accessibility to big data and artificial intelligence (AI) technologies has revolutionised many fields and offers great potential for addressing pressing urban problems. However, using these technologies without explicitly considering responsibilities would bring new societal and environmental issues. To maximise the benefits of big data and AI while minimising potential issues, we envisage a conceptual framework of Responsible Urban Intelligence (RUI) and advocate an agenda for action. We first define RUI as consisting of three major components: urban problems, enabling technologies, and responsibilities; then introduce transparency, fairness, and eco-friendliness as the three dimensions of responsibilities, which naturally link with the human, space, and time dimensions of cities; further develop a four-stage implementation framework for responsibilities consisting of solution design, data preparation, model building, and practical application; and finally present a research agenda for RUI addressing challenging issues, including data and model transparency, the tension between performance and fairness, and solving urban problems in an eco-friendly manner.
While a model prediction is a probabilistic claim about a system state that will transpire in the future, a model projection is an if-then statement about the potential future of a system, by definition subject to (changes in) boundary conditions with an unknown likelihood. Despite a robust body of literature on the various potential purposes of models - and to predict is only one of these purposes - some modellers tend to refer to all their model outputs as predictions, while these are more often projections or neither of the two. Both geosimulation and spatial machine learning scholars are careless in how they refer to their model outputs. This is confusing for all involved, especially for the general public, for whom the model output is usually the only model component they get to see. In this paper we provide definitions, justifications, and a decision tree for classifying model outputs. This can help the GIScience community gain clarity about what their model outputs entail.