OUR MAGIC
We apply advanced analytics to efficiently generate intelligence from various visual data sources using a unique combination of cutting-edge AI and Machine Learning techniques.
Our algorithms are sensor-agnostic, so we scale up easily using data from a wide range of sensors.
SEMANTIC SEGMENTATION
Semantic segmentation, or image segmentation, is the task of clustering together parts of an image that belong to the same object class. It is a form of pixel-level prediction because each pixel in an image is classified according to a category.
We manually label items of interest over a training area covering less than 1% of the entire site. Semantic segmentation models are then trained on this data and deployed across the wider site.
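As an illustration, a minimal training sketch for this step might look like the following, assuming a PyTorch pipeline; the five-class setup and the tile/mask batches are placeholders, not our production configuration.

```python
# Minimal sketch: train a semantic segmentation model on the small
# labelled subset. NUM_CLASSES and the batches are illustrative.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

NUM_CLASSES = 5  # assumed classes, e.g. track, vegetation, building, road, background
model = fcn_resnet50(num_classes=NUM_CLASSES)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, masks):
    """One update on a batch of tiles (B, 3, H, W) and per-pixel labels (B, H, W)."""
    optimiser.zero_grad()
    logits = model(images)["out"]    # per-pixel class scores (B, NUM_CLASSES, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimiser.step()
    return loss.item()
```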
VISUAL SEARCH & IMAGE RETRIEVAL
In an interactive fashion, the user selects a bounding box around an object of interest (such as tracks, trees, or trains), and the system identifies and returns all other semantically similar objects in real time.
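A common way to implement this kind of search is to embed the selected crop with a pretrained CNN and rank a pre-computed index of candidate patches by cosine similarity. The sketch below assumes exactly that; `patch_embeddings` is a hypothetical (N, 2048) tensor of indexed patch embeddings.

```python
# Minimal sketch: embed a query crop and rank indexed patches by similarity.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep the 2048-d pooled feature as the embedding
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),  # expects a PIL image crop
])

@torch.no_grad()
def search(crop, patch_embeddings, top_k=10):
    """Return indices of the top_k indexed patches most similar to the query crop."""
    query = backbone(preprocess(crop).unsqueeze(0))       # (1, 2048)
    sims = F.cosine_similarity(query, patch_embeddings)   # (N,)
    return sims.topk(top_k).indices
```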
OBJECT DETECTION
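Object detection locates and classifies individual objects in an image, returning a bounding box and class label for each detected instance. As a sketch, inference with an off-the-shelf detector could look like the following; in practice the detection head would be fine-tuned on site-specific classes, and the score threshold is illustrative.

```python
# Minimal sketch: run a pretrained Faster R-CNN detector over an image tensor.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

@torch.no_grad()
def detect(image, score_threshold=0.5):
    """image: float tensor (3, H, W) in [0, 1]; returns boxes, labels, scores."""
    out = model([image])[0]
    keep = out["scores"] > score_threshold
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```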
WHAT MAKES US DIFFERENT?
01 / PRE-PROCESSING
When working with ortho-images, in particular satellite and aerial imagery, one needs to check that all images are in the correct projection, have the same bands in the same order, share the same pixel resolution, and so on.
In the pre-processing stage, we script functions that check the above and make the necessary corrections, for example by re-projecting or rescaling images. GIS tools such as GDAL and computer vision tools such as OpenCV handle many of these image processing tasks efficiently.
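For example, a check-and-correct step with GDAL's Python bindings might look like this sketch; the target CRS and pixel size are placeholder assumptions.

```python
# Minimal sketch: inspect an ortho-image and warp it onto a common grid.
from osgeo import gdal

TARGET_CRS = "EPSG:4326"  # assumed common projection
TARGET_RES = 0.3          # assumed target pixel size

def normalise(src_path, dst_path):
    """Re-project and resample one image so all inputs share a grid."""
    src = gdal.Open(src_path)
    print("bands:", src.RasterCount, "projection:", src.GetProjection())
    gdal.Warp(
        dst_path,
        src,
        dstSRS=TARGET_CRS,
        xRes=TARGET_RES,
        yRes=TARGET_RES,
        resampleAlg="bilinear",
    )
```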
02 / DOMAIN TRANSFER
Although natural images share underlying statistics, different visual domains have their own distinctive characteristics. Taking a deep learning model trained on one domain, e.g. landscape photos, and deploying it in a new domain such as aerial imagery will result in suboptimal performance.
We use ‘self-supervised’ learning techniques (such as “inpainting”) to transfer the pre-trained model from the source domain to the target domain. In this step the data itself guides the training, with no need for human annotation.
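A minimal sketch of an inpainting pretext task is below; it assumes an encoder-decoder `model` that maps an image back to an image, and masks one random square patch per batch to keep the example short.

```python
# Minimal sketch: self-supervised inpainting on unlabelled target-domain tiles.
import torch
import torch.nn.functional as F

def mask_random_patch(images, patch=32):
    """Zero out one random square patch; return the masked copy and the mask."""
    masked, mask = images.clone(), torch.zeros_like(images)
    _, _, h, w = images.shape  # tiles assumed larger than the patch
    y = torch.randint(0, h - patch, (1,)).item()
    x = torch.randint(0, w - patch, (1,)).item()
    masked[:, :, y:y + patch, x:x + patch] = 0.0
    mask[:, :, y:y + patch, x:x + patch] = 1.0
    return masked, mask

def inpainting_loss(model, images):
    """Reconstruction loss on the masked region only; no human labels needed."""
    masked, mask = mask_random_patch(images)
    recon = model(masked)  # assumed encoder-decoder returning an image
    return F.l1_loss(recon * mask, images * mask)
```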
03 / TRANSFER LEARNING
Images are very high-dimensional signals, and natural images occupy only a small subset (a manifold) of this space. Natural images therefore share many statistical characteristics.
We employ a deep neural network trained on publicly available datasets, such as ImageNet, and use it as initialisation for our own model. This enables us to train our models using significantly less training data.
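For instance, ImageNet initialisation with a new task head might look like the sketch below; the five-class head and the two learning rates are illustrative choices.

```python
# Minimal sketch: start from ImageNet weights, swap the head, fine-tune.
import torch
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Linear(model.fc.in_features, 5)  # assumed 5 target classes

# Lower learning rate for the pretrained backbone, higher for the new head.
optimiser = torch.optim.Adam([
    {"params": [p for n, p in model.named_parameters()
                if not n.startswith("fc")], "lr": 1e-5},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
```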
04 / FEW-SHOT LEARNING
Instead of labelling all the data in one go, we label only a small fraction, e.g. 1%, and train our model. The trained model is then run on part or all of the unlabelled data.
This allows us to focus our labelling efforts on the areas the model struggles with most. Consequently, this simple mechanism leads to faster training and requires less human supervision overall.
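One way to decide where that effort goes, sketched below, is to rank unlabelled tiles by the entropy of the model's predictions and send the least certain tiles to annotators first; the sketch assumes a segmentation model returning per-pixel class scores under an "out" key, as in the earlier example.

```python
# Minimal sketch: rank unlabelled tiles by mean prediction entropy.
import torch

@torch.no_grad()
def rank_by_uncertainty(model, tiles, top_k=20):
    """tiles: (B, 3, H, W); returns indices of the top_k most uncertain tiles."""
    probs = torch.softmax(model(tiles)["out"], dim=1)        # (B, C, H, W)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)  # (B, H, W)
    return entropy.mean(dim=(1, 2)).topk(top_k).indices
```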
05 / BILATERAL FILTERING
Some tasks, such as semantic segmentation, require pixel-level labelling. Classic computer vision and signal processing methods, such as guided bilateral filters and conditional random fields, can help improve the output of some deep learning models. Whenever appropriate, we employ these techniques to further improve the output of our models.
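As a sketch, a network's class probability maps can be refined with a guided filter (an edge-preserving relative of the bilateral filter, available in opencv-contrib-python), using the original image as guidance so that predicted boundaries snap to real image edges; the radius and eps values are illustrative.

```python
# Minimal sketch: edge-aware refinement of per-class probability maps.
import cv2
import numpy as np

def refine(image_bgr, prob_maps, radius=8, eps=1e-3):
    """prob_maps: (H, W, C) float32 class probabilities from the network."""
    guide = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    refined = np.stack(
        [cv2.ximgproc.guidedFilter(guide, prob_maps[:, :, c], radius, eps)
         for c in range(prob_maps.shape[2])],
        axis=-1,
    )
    return refined.argmax(axis=-1)  # refined per-pixel class labels
```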
06 / ITERATIVE INTERACTIVE LABELLING
An intuitive interface lets the user visually inspect the outputs and predictions produced by the model. The same interface allows the annotator to annotate the data in areas where the model generates wrong labels, each time providing more supervision to the system.
Our algorithms are sensor-agnostic, so we scale up easily using data from a wide range of sources.
Photogrammetry and laser scanning are used to capture 3D point clouds of assets and buildings.
Both still and video cameras mounted on assets capture imagery that is uploaded to the cloud on a regular basis.
Drones are used by asset management organisations to capture aerial data using RGB, LiDAR and infrared technologies.