Road-map to classify satellite imagery using Python

Updated: Feb 27, 2020

Agilytics is proud to present a road-map for interested readers to classify satellite imagery into categories such as buildings, vegetation and water. The three categories will be displayed in red, green and blue respectively.

Following is the Python code to read the imagery and define some variables:

import numpy as np

from skimage import io

img = io.imread('D:/Agilytics/RSProject/UrbanImagery.png')

rows, cols, bands = img.shape

classes = {'building': 0, 'vegetation': 1, 'water': 2}

n_classes = len(classes)

palette = np.uint8([[255, 0, 0], [0, 255, 0], [0, 0, 255]])

For unsupervised classification we need to detect the underlying structure of the spatial data. We split the image pixels into n_classes partitions using k-means clustering:

from sklearn.cluster import KMeans

X = img.reshape(rows*cols, bands)

kmeans = KMeans(n_clusters=n_classes, random_state=3).fit(X)

unsupervised = kmeans.labels_.reshape(rows, cols)
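The `palette` array defined earlier can turn the label map into the red/green/blue output image via NumPy integer-array indexing, where each label picks its RGB row. The small `labels` array below is a stand-in for the `unsupervised` result, and the name `rgb` is illustrative, not part of the original post:

```python
import numpy as np

# Stand-ins for the post's `palette` and `unsupervised` label map
palette = np.uint8([[255, 0, 0], [0, 255, 0], [0, 0, 255]])
labels = np.array([[0, 1],
                   [2, 0]])

# Integer-array indexing: each label 0..2 selects its RGB row,
# producing an (rows, cols, 3) uint8 image ready to display or save
rgb = palette[labels]
```

The same one-liner works for the supervised result later in the post.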


Below is the output of the classified imagery:

Unsupervised Classification

For supervised classification we can assign labels to some pixels of known classes using field-survey data. This set of labelled pixels is the ground truth (training set).

supervised = n_classes*np.ones(shape=(rows, cols), dtype=np.int32)  # n_classes marks unlabelled pixels

supervised[200:220, 150:170] = classes['building']

supervised[40:60, 40:60] = classes['vegetation']

supervised[100:120, 200:220] = classes['water']

The pixels of the ground truth (training set) are used to fit a support vector machine (SVM). The classifier then assigns class labels to the remaining pixels (test set). The resulting imagery is shown after the code:

y = supervised.ravel()

train = np.flatnonzero(supervised < n_classes)

test = np.flatnonzero(supervised == n_classes)

from sklearn.svm import SVC

clf = SVC(gamma='auto')

clf.fit(X[train], y[train])

y[test] = clf.predict(X[test])

supervised = y.reshape(rows, cols)


Supervised Classification

The result can be improved by enlarging and refining the ground truth, because the train/test ratio is small and the red and green patches actually contain pixels of several classes.
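Enlarging the ground truth simply means marking more labelled rectangles before fitting the SVM. A minimal sketch follows; the extra patch coordinates are illustrative assumptions, whereas in practice they would come from refined field-survey data:

```python
import numpy as np

rows, cols, n_classes = 256, 256, 3
supervised = n_classes * np.ones((rows, cols), dtype=np.int32)

# The post's original rectangles, plus one extra patch per class
# (extra coordinates are made up for illustration only)
supervised[200:220, 150:170] = 0   # building
supervised[10:30, 230:250] = 0     # extra building patch
supervised[40:60, 40:60] = 1       # vegetation
supervised[150:170, 20:40] = 1     # extra vegetation patch
supervised[100:120, 200:220] = 2   # water
supervised[230:250, 100:120] = 2   # extra water patch

# Twice as many training pixels as before feed into clf.fit
train = np.flatnonzero(supervised < n_classes)
```

The rest of the pipeline (ravel, fit, predict, reshape) is unchanged.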

Should you need more information, feel free to write to us.

