Category Archives: R-bloggers-eng

[Sextante Plugin] An R script for Neighborhood Detection

Buildings grouped into blocks in Paris

Article in English

[Sextante Plugin] An R script to detect neighborhoods: This article explains how to write a small R script for the Sextante plugin for QGIS in order to establish neighborhood relationships between objects based on the distance separating them. It includes a short introduction to the QGIS ecosystem and to R, and then, of course, details on writing an R script for the Sextante GIS plugin.


Content:

  • An opensource spatial constellation
    • R
    • QGIS as a solar planet
    • Sextante GIS plugin as a glue
    • R and QGIS
  • Performing a neighborhood analysis with Sextante plugin
    • The R example scripts
    • Neighborhood by distance: my R script
    • Writing an R script
    • The complete Sextante R script
  • Usage in QGIS
    • Quiet places to sleep or live in Paris
    • Styling and visualizing
    • Statistics
    • Convex hulls
Download the R script directly here.

An opensource spatial constellation

http://commons.wikimedia.org/wiki/File:Solar_sys.jpg

R

R is a statistical language for which a huge number of libraries have proliferated, covering a multicoloured palette of domains, from finance to ecology, from spatial analysis to morphometry.

Its popularity is increasing very fast. Even the New York Times wrote about it, and another article describes how it is used at Google.

More and more companies are looking for data scientists who master R, alongside SAS, its proprietary equivalent.

In the spatial domain, R offers some functions that you'll never find in any opensource GIS software, be it GRASS or Sextante.

On this spatial R page, you might discover some tasks you've never heard of in the geospatial world, like spatial modeling, point patterns and small area estimation.

QGIS as a solar planet

QGIS increasingly tends to integrate well-known tools. The first one was GRASS, then came R and Sextante GIS, and more recently the Orfeo Toolbox for imagery and remote sensing.

All of this constitutes an opensource spatial galaxy/ecosystem in which every planet (here, a piece of software) develops through contact with other planets, and also with stars (technologies that gain popularity, like Mapnik or Leaflet).

Sextante GIS in QGIS as a glue

Sextante GIS started out as a geospatial library.
It then became a plugin for QGIS, developed by Victor Olaya. It includes not only Sextante functionalities but also functions from the Orfeo Toolbox, R, GDAL, GRASS, and more.

The goal of this plugin is to execute functions from these different libraries indifferently, within a graphical environment, since not everybody is familiar with command lines.

Also, the Sextante QGIS plugin integrates a modeler in which you can chain functions and turn QGIS into a kind of automated factory for spatial tasks. Remember that GRASS also has a modeler.

I think that with the improvement of its Sextante modeler, QGIS could become a very powerful spatial ETL and could compete with Spatial Data Integrator/Talend Open Studio, GeoKettle or FME (when it comes to spatial treatments).

R and QGIS

There are many plugins that use the R library: manageR by Carson Farmer, spqr by Barry Rowlingson, SDA4PP (Spatial Data Analysis for Point Patterns), and Home Range Estimation.

– manageR is an R editor inside QGIS. Knowing R is a prerequisite for using this plugin.

– spqr produces graphics for statistical visualizations.

– SDA4PP is a suite of functions for the analysis of point patterns: are the points clustered, segregated?

I think that the most interesting R plugins for QGIS are those that guide the user through a graphical interface and perform complex R tasks behind the scenes.

R in Sextante QGIS plugin

When you install the Sextante GIS plugin, you must activate the R scripts to make them available in the Sextante dock.

– Also, you have to specify where R is installed.

– I preferred to move all the R scripts that initially were in C:\Documents and Settings\[username]\.qgis\python\plugins\sextante\r\scripts to C:/R/QGIS_SEXTANTE to make them more accessible. I changed the R scripts folder in the configuration dialog, as you can see in the image above.

The Sextante R Scripts

If you don’t know anything about R and if you want to use an R script but you don’t have the approriate package to use it, all you have to know is how to install this package. It is very simple. I recommend you RStudio for this (if a library has not been installed in your standard R, it won’t be available in Sextante QGIS R scripts)

Using an R script doesn't require any knowledge of R programming, but it's preferable to understand how it is made and to see which functions it uses. You can refer to the documentation to get some details, for instance by using RSeek. Be studious: some functions sometimes require theoretical knowledge “that could lead either to years of therapy, or to your Ph.D”.

Performing a neighborhood analysis with Sextante plugin

The R example scripts

There are already 9 R scripts in the package, as you can see in the image above. These scripts are input-output systems where the inputs are always spatial data and where the outputs can be spatial data, plots (images), or console output.

– With the Ripley-Rasson script, you can create envelopes that are a bit like convex hulls but fit the group of elements more closely.

– You can create either regular or random points based on a polygon (generally, a grid is used).

– You can analyze the clustering of points using the F, G and K function scripts.

– You can check whether your values are random (that is, whether they follow a normal distribution) with the Kolmogorov-Smirnov normality test script.

– You can make the same kind of analysis with spatial quadrats: consider a study zone with points. You divide this zone into a certain number of squares (quadrats). You count the points inside each square. Finally, you run a test to see whether your points are randomly located (testing the null hypothesis of Complete Spatial Randomness).
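In plain R, the underlying calls of such a quadrat analysis look roughly like this (a hedged illustration with spatstat and simulated points, not the Sextante script itself):

library(spatstat)

pts <- rpoispp(50)                  # simulated points on the unit square, intensity 50
quadratcount(pts, nx = 5, ny = 5)   # number of points per quadrat on a 5 x 5 grid
quadrat.test(pts, nx = 5, ny = 5)   # chi-squared test of Complete Spatial Randomness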

– There is an R script that makes a raster histogram. It illustrates that you can have rasters as inputs, not only vectors.

Neighborhood by distance: my R script

As you can see, the R scripts mainly focus on point patterns. I wanted to implement, as a Sextante script, an R script that I had already written in a plain R environment, one that focuses on areas.

The goal of this R script is to specify a distance below which polygons (for instance, buildings) are considered neighbors. Once it finds the neighborhood relationships, it counts the neighbors and assigns each building to the group of neighbors it belongs to.

The output of this script is the same layer as the base one but enriched with two more attributes: neighbors count and neighbor group.

This script can be useful to determine the structure of the built-up area and detect the buildings that are isolated or highly grouped, based on a minimum distance threshold.

Here, you’ll find a complete description of the spdep package.

Here is the script, in R language:
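In essence, it boils down to something like the following sketch, built on spdep's dnearneigh, card and n.comp.nb, with neighborhood measured between polygon centroids (the downloadable script may differ in its details):

library(rgdal)   # reading and writing shapefiles
library(spdep)   # dnearneigh, card, n.comp.nb

buildings <- readOGR(".", "buildings")   # hypothetical layer name

# neighborhood relationships: polygon centroids closer than 15 meters
coords <- coordinates(buildings)
nb <- dnearneigh(coords, 0, 15)

# attribute 1: number of neighbors of each building
buildings$NB_COUNT <- card(nb)

# attribute 2: id of the group of connected neighbors it belongs to
buildings$NB_GROUP <- n.comp.nb(nb)$comp.id

writeOGR(buildings, ".", "buildings_neighbors", driver = "ESRI Shapefile")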

Writing an R script

The goal was to transform this R script into a Sextante QGIS R script.

If you click on “create a new R script”, an editing window opens and you can get an assistant through the “edit script help” button. Unfortunately, I didn't understand how this script help worked, so I drew inspiration from the example scripts.

To create a script, create a new file (in my case, neighborhood_by_distance.rsx) and edit it.

An R script is structured in two parts:
– the configuration part, where you set the inputs, outputs and options;
– the code part, where you use the variables defined in the configuration part inside a standard R script.

The configuration part

Group

Each script belongs to a group of scripts, identified by a group key. For instance,

##[datagistips]=group

means that the script belongs to the “datagistips” group. If the group doesn’t exist, it will automatically be created.

Input

The inputs are either vector or raster. I don't think you can specify that you want a certain type of features: points, lines or polygons (maybe an improvement to be made, as some analyses can only be run on a certain type of features?).

For vectors, you write:

##polygons=vector

polygons will be the label in the graphical window

For rasters:

##layer=raster

Options

In options, the user can indicate a numeric value or a string value.

For a numeric value:

##distance=number 100

Here, the label will be distance and the default value is set to 100.
When specifying the value, the user can type it manually or choose it from the various layer statistics.

If the value is a string, the user will have to type it:

##title=string France

Here, the label will be title and the default value is set to France.

Fields

One very interesting thing is that you can use a field in a script. In the case of the Kolmogorov-Smirnov example script, you choose the numeric field on which to test normality.

##field=field layer

Then, in the script,

>lillie.test(layer[[field]])

will use the field variable inside the script.

Outputs

For spatial data outputs, simply write:
##output=output vector

Graphical outputs and console outputs will both be in HTML format.

For graphical plot outputs, you must include the tag

##showplots

(you could imagine producing some lovely plots with ggplot2)
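For instance, a minimal plot script combining the tags above could look like this (a sketch, assuming the chosen field is numeric):

##[datagistips]=group
##layer=vector
##field=field layer
##showplots
hist(layer[[field]], main="Histogram", xlab=field)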

For console output, you must precede the command with the > character:

>lillie.test(layer[[field]])

The complete Sextante R script

Below is the final R script. You can also download it here.
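To give an idea of its structure, here is a minimal sketch of such a .rsx script, combining the tags described above with spdep (the downloadable version may differ in its details):

##[datagistips]=group
##polygons=vector
##distance=number 100
##output=output vector
library(spdep)
coords <- coordinates(polygons)              # polygon centroids
nb <- dnearneigh(coords, 0, distance)        # neighbors within 'distance'
polygons$NB_COUNT <- card(nb)                # number of neighbors
polygons$NB_GROUP <- n.comp.nb(nb)$comp.id   # neighbor group id
output <- polygons                           # enriched layer returned to QGIS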

As you can see, you can almost instantly port your R code to a graphical environment so that every QGIS user, even a beginner, can use your process. It's also easier than developing a Python QGIS plugin.

The script then appears in the Sextante Toolbox.

Usage example in QGIS

http://commons.wikimedia.org/wiki/File:Paris_sunset.JPG

Quiet places to sleep or live in Paris

Here, I executed the script on an extract of buildings in the 1st district of Paris that I downloaded from the Geofabrik OpenStreetMap data repository.

You could use it to determine which buildings are the most isolated, where you could spend a night or settle down, far from the busy, crowded life of the City of Lights.
The distance I chose was 15 meters. In a big city like Paris, that's quite a lot.

The R script in action


Styling and visualizing

Once I had my neighbors layer, I could style it differently depending on the neighbor count or the group each building belonged to.

Clustered and isolated buildings on top of OSM data (with OpenLayers plugin)

Statistics

With the standard statistics tools incorporated in QGIS, you can analyze the spatial structure.

The results show that there are 24 groups of neighbors. The biggest group contains 23 buildings. Half of the buildings have between 0 and 7 neighbors.
You can also produce histograms of these values with the Statist plugin to see the distribution of counts for the neighbor groups.
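If you prefer to stay in R, similar summaries can be computed directly from the enriched layer (a sketch, assuming the hypothetical NB_COUNT and NB_GROUP attributes and the buildings object from the earlier sketch):

table(buildings$NB_GROUP)          # size of each group of neighbors
max(table(buildings$NB_GROUP))     # size of the biggest group
summary(buildings$NB_COUNT)        # distribution of neighbor counts (median, quartiles)
hist(buildings$NB_COUNT)           # histogram, similar to what the Statist plugin produces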

By using the query functionality, I could locate buildings that were relatively isolated.

Buildings with less than 3 neighbors (distance of 15m) on top of Bing Aerial Imagery

Convex hulls

Finally, why not create convex hulls to visualize the neighborhood groups and put some OSM data behind to see how it looks?

The output data on top of OpenStreetMap data (15 meters threshold)

When displayed on top of OSM data, you can easily visualize the building groups. Remember that it all depends on the distance threshold you used. With a 5 m threshold, you get this:

The same but with a 5 meters threshold (nicer than with a 15m threshold, isn’t it?)

These spatial groups could probably reflect the different residences that exist in this place. They could be useful to plan a more detailed OSM investigation and mapping on the ground (to map residences in OpenStreetMap, use landuse=residential, name=[residence name], or a relation of type site).

Also, you could use these blocks as a base layer for some aggregated statistics: number of buildings, area, perimeter, lacunarity, fractal dimension, and so forth.
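As a sketch, such per-group hulls and aggregated statistics could also be derived directly in R with rgeos (again assuming the hypothetical NB_GROUP attribute and the buildings layer from the earlier sketch):

library(rgeos)

# one convex hull per group of neighbors
groups <- unique(buildings$NB_GROUP)
hulls <- lapply(groups, function(g) gConvexHull(buildings[buildings$NB_GROUP == g, ]))

# aggregated statistics per group: building count and total footprint area
buildings$AREA <- gArea(buildings, byid = TRUE)      # requires a projected CRS
tapply(buildings$AREA, buildings$NB_GROUP, length)   # number of buildings per group
tapply(buildings$AREA, buildings$NB_GROUP, sum)      # total area per group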

Isarithmic Map of French Votes: Why and How-to

Article in English

Isarithmic maps: why use them and how to make them? This article explains the value of isarithmic maps and why they succeed where other types of visualization fail. Creating isarithmic maps is actually an original process in the field of geomatics, and of cartography in general, based on continuous rather than binary masks. The article explains the technical principles of such maps and provides the code at the end of the post.


Previously, we learnt how to create a simple choropleth for vote results in France:

But is the percentage alone enough to represent the election results? Here, we don't take the population, in terms of number of inhabitants or density, into account, although it is an important variable. The aim of this article is to study a kind of map called an isarithmic map, which combines the visualization of two variables in a specific and compelling way.

Content:

  • Analysis of different visualization combinations
  • Isarithmic maps
  • The R code: technical explanations
  • Complete code on github

Analysis of different visualization combinations

In our case, we have one polygonal dataset and it contains both pieces of information we're interested in:

  • a column VOTES that we got from integrating a CSV file found on Data Publica,
  • and a column POPULATION that comes from the original GEOFLA source data.

Percentage of votes and population are two continuous numeric variables. If we had categorical data like “left” and “right”, it might have been easier to make a map: population represented as gradually colored polygons, with blue/red dots above representing the dominant party.

In our case, we don’t want to alterate  the data and transform numeric data into categorical data. Also, we want to keep votes data as polygon data. Consequently, for the visualization purpose, we can consider these different options are available (more may be possible):

Polygon + XX visualization types
  • a simple overlay is not relevant. We don’t know if the brightness comes from population or votes values.
  • a combination of dots and polygons is better, but mixing different shapes might be a little complex visually. Furthermore, if the city is small, the dot could cover the polygon. Putting too many kinds of colors side by side can be painful for your brain, and in the end they can be difficult to evaluate.
  • non-contiguous cartograms would be a good idea, as would contiguous cartograms. One thing to remember about the contiguous cartogram is that it preserves neither the position nor the geometry, so use it with caution.
  • almost the same distortion applies to interpolation, except that it happens for mathematical reasons. Interpolation is a prediction method that results in a continuous image, in which no distinction can be made between the real collected values and the estimated ones. Interpolation leads to contour lines and relief (3D, or 2.5D with shaded relief).

Isarithmic maps

Aren’t these maps pretty?

http://gis.stackexchange.com/questions/3083/examples-of-beautiful-maps

http://dsparks.wordpress.com/2011/10/24/isarithmic-maps-of-public-opinion-data/

In both cases, the same principle is used, but the base layers are vectors for the first one and rasters for the second one. In each of these maps, there are two overlays. The first one is the thematic layer. The second one is a mask that gives more or less prominence to the overlaid data depending on the underlying values. You probably know masks: they are usually binary, but here the mask is continuous. The alpha transparency level is used as a visual weighting factor. The cities that are the most populated will be more visible, and so will their votes. That's a pretty logical and intuitive approach.

We could use this principle in many cases: attractiveness of a city, political weight, visibility of a landscape from a route.

In our case, the following schema summarizes the process:

Isarithmic Map: the principle

Every time a “data designer” tries to mimic nature (noise in textures, networks as trees, traffic as blood pressure along blood vessels, the OECD Better Life Index as flowers), the visualization is compelling. Here, I think the attractive aspect comes from a sensation of light.

In my isarithmic map, the continuous mask looks like this:

A mask with continuous transparency/alpha levels

You could compare it to the satellite photos you'd get of France's lights at night. The mask, with its different halos of light, will light up some parts, but not others.

http://www.esa.int/esaCP/SEMVC1EWF0H_France_1.html

The isarithmic map of votes:

R was used by David B. Sparks for his opinion visualization, and I used the same software for my code. Here are the technical explanations; the R code follows.

The R code: technical explanations

Half of the code, the part that consists of integrating the voting results and making a choropleth, has already been covered in the previous post, so I won't detail it here.

The other half consists of creating and overlaying the continuous mask. That's what mainly interests us here.

Logging the population values

Initially, my mask only emphasized the cities with the largest population density; the intermediate cities were almost totally masked. That's why I decided to log my population values to get something more homogeneous. The histogram shows the differences between the non-logged population values and the logged ones.

How to create a continuous black mask?

We’ll consider a base black color value: rgb(0, 0, 0)
We can add a 4th argument for opacity: rgb(0, 0, 0, .5) is half transparent.
Here, the color sequence will create 100 (nCuts) black values with opacity ranging from 0 (fully transparent) to 0.8 (almost opaque).

seqTrsp <- seq(0, .8, length.out=nCuts)
palPop <- rgb(0, 0, 0, seqTrsp)
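Pulling these pieces together, the mask could be built and drawn over the choropleth roughly like this (a sketch; the map object and its POPULATION column are assumptions about the layer used in the complete code):

nCuts <- 100
logPop <- log(map$POPULATION + 1)                     # logged population values
cuts <- cut(logPop, breaks = nCuts, labels = FALSE)   # class 1..nCuts for each commune

seqTrsp <- seq(0, .8, length.out = nCuts)
palPop <- rgb(0, 0, 0, seqTrsp)

# the most populated communes get the most transparent (least black) mask,
# drawn over the already-plotted choropleth
plot(map, col = palPop[nCuts + 1 - cuts], border = NA, add = TRUE)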

Complete code on github

On the road with R & Grass: Intervisibility along Lines

While drinking a glass of French Ricard (a famous drink from Provence) with Bertrand Bouteilles (see his blog), a colleague of mine, he asked me if I knew a method to calculate the line-of-sight from a line. Usually, we calculate LOS from a single XYZ observation point, but I had not found any resource on the web about doing it for a line.

He wanted to determine for how long you would see each part of the landscape if you were on a train, constantly watching through the window. Personally, I would sleep most of the time on long journeys, or I'd probably go at least once to the toilet.

Reciprocally, as the notion of intervisibility implies, such an analysis will also tell you from where the railroad infrastructure is most visible. It gives its impact on landscape perception.

The problem was interesting, so I took up the challenge and gave it a try.

More complex approaches integrating land cover could tell you when to sleep and when not to (when the landscape is rich and when it is not), or whether you should book a seat on the left or the right side of the train. That's what we could analyse in future posts of “On the Road”.

I imagined Jack Kerouac as some kind of geogeek. He'd try to precisely prepare a trip by determining the locations that would give him contemplative restfulness while watching the fleeting, beautiful landscapes. That's why I linked this post to the book “On the Road” by Jack Kerouac, who used to catch trains to get from one place to another across the USA.

So, here is how Jack Kerouac would have prepared his trip from his meditation mountain to a hypothetical geo-R conference held in San Francisco.

This script will be helpful if you'd like to familiarize yourself with R and GRASS. Don't worry, the different steps will be explained afterwards.

library(rgdal)
library(maptools)
library(spatstat)
library(spgrass6)
 
####################
### READING DATA ###
####################
track <- readOGR(".", "railroad")
track <- as(track, "psp")
 
#########################
### GENERATING POINTS ###
#########################
# every 50 meters (dem resolution)
pts <- pointsOnLines(track, eps=50)
 
################################
### GRASS REGION CONFIGURING ###
################################
# EXTENT
xrange <- as.character(c(pts$window$xrange[1]-5000, pts$window$xrange[2]+5000))
yrange <- as.character(c(pts$window$yrange[1]-5000, pts$window$yrange[2]+5000))
 
# CONFIGURING
execGRASS("g.region", flags = "p", parameters = list(rast = "mnt50", w = xrange[1], s = yrange[1], e = xrange[2], n = yrange[2]))
 
# GRID CREATING & GETTING THE NUMBER OF CELLS(for further programming)
grd <- gmeta2grd()
ncells <- grd@cells.dim[1]*grd@cells.dim[2]
 
#######################################
### GRASS LINE-OF-SIGHT CALCULATION ###
#######################################
# POINT XY COORDINATES
coords <- cbind(as.character(pts$x), as.character(pts$y))
 
# GRID VALUES INITIALIZATION before LOOPING
sumV <- rep(0, ncells)
 
# LOOP
for (i in seq(1, pts$n)) {
  # GRASS LOS CALCULATION
  execGRASS("r.los", parameters = list(input = "dem50", output = "los", coordinate = coords[i,], obs_elev = 2, max_dist = 2500), flags = c("overwrite"))
  los <- readRAST6("los")
  values <- ifelse(is.na(los@data[[1]]), 0, 1)
  sumV <- values + sumV
}
 
# 0 VALUES TO NA
sumV[sumV==0]<-NA
save(sumV, file="sumV.RData")
 
#############################
### MAPPING DATA TO GRID ###
#############################
sgdf <- SpatialGridDataFrame(grd, data = data.frame(sum=sumV))
 
# EXPORT DATA FILLED GRID TO TIFF
writeGDAL(sgdf["sum"], "trackLos.tiff", drivername="GTiff", type="Float32")


Here is the result:

This image shows the locations in the landscape from where the infrastructure is most visible and, conversely, the elements of the landscape that are most visible from the railroad.

Here are some short explanations of some parts of the code:

The key command is r.los, which is a line-of-sight raster analysis program.
r.los input=string output=string coordinate=x,y [patt_map=string] [obs_elev=float] [max_dist=float] [--overwrite]
For more details, see the r.los documentation.

####################
### READING DATA ###
####################
track <- readOGR(".", "railroad")
track <- as(track, "psp")
 
#########################
### GENERATING POINTS ###
#########################
# every 50 meters (dem resolution)
pts <- pointsOnLines(track, eps=50)

This part reads railroad.shp and then coerces the SpatialLines object to a psp object so that it can be processed by the pointsOnLines function from spatstat. pointsOnLines creates a point every 50 meters along the line; 50 was chosen because it corresponds to the DEM resolution.


################################
### GRASS REGION CONFIGURING ###
################################
# EXTENT
xrange <- as.character(c(pts$window$xrange[1]-5000, pts$window$xrange[2]+5000))
yrange <- as.character(c(pts$window$yrange[1]-5000, pts$window$yrange[2]+5000))
 
# CONFIGURING
execGRASS("g.region", flags = "p", parameters = list(rast = "mnt50", w = xrange[1], s = yrange[1], e = xrange[2], n = yrange[2])) 


Here we configure a region whose extent is the track extent expanded by 5 kilometers, because we will also have to calculate the LOS at the extremities of the track line. The extent must be passed as strings to g.region.

# GRID CREATING & GETTING THE NUMBER OF CELLS(for further programming)
grd <- gmeta2grd()
ncells <- grd@cells.dim[1]*grd@cells.dim[2]
 
#######################################
### GRASS LINE-OF-SIGHT CALCULATION ###
#######################################
# POINT XY COORDINATES
coords <- cbind(as.character(pts$x), as.character(pts$y))
 
# GRID VALUES INITIALIZATION before LOOPING
sumV <- rep(0, ncells)

grd is a SpatialGrid object created from the GRASS region parameters: extent and resolution. ncells is the number of cells (nrows * ncols). We create a vector of the same length as the number of cells, filled with 0 values.

# LOOP
for (i in seq(1, pts$n)) {
  # GRASS LOS CALCULATION
  execGRASS("r.los", parameters = list(input = "dem50", output = "los", coordinate = coords[i,], obs_elev = 2, max_dist = 2500), flags = c("overwrite"))
  los <- readRAST6("los")
  values <- ifelse(is.na(los@data[[1]]), 0, 1)
  sumV <- values + sumV
}

We launch the line-of-sight calculation for each point; the binary visibility raster derived from each individual XYZ observation point is added to the running sum, iteratively.

# 0 VALUES TO NA
sumV[sumV==0]<-NA

0 values of the raster are replaced by NA to provide transparency.

sgdf <- SpatialGridDataFrame(grd, data = data.frame(sum=sumV))

# EXPORT DATA FILLED GRID TO TIFF
writeGDAL(sgdf["sum"], "trackLos.tiff", drivername="GTiff", type="Float32")

Finally, the summed visibility values are mapped onto the grid as a SpatialGridDataFrame and exported as a GeoTIFF with writeGDAL.