Browsing by Author "Nam, David"
Now showing 1 - 3 of 3
Item Open Access
Automatic x-ray image segmentation and clustering for threat detection (SPIE, 2017-10-05) Kechagias-Stamatis, Odysseas; Aouf, Nabil; Nam, David; Belloni, Carole

Firearms currently pose a known risk at the borders. The enormous number of X-ray images from parcels, luggage and freight entering each country via rail, aviation and maritime routes presents a continual challenge to screening officers. To further improve UK capability and aid officers in their search for firearms, we propose an automated object segmentation and clustering architecture that focuses officers' attention on high-risk threat objects. Our proposal utilizes dual-view single/dual-energy 2D X-ray imagery and blends concepts from radiology, image processing and computer vision. It consists of a triple-layered processing scheme that segments the luggage contents based on the effective atomic number of each object, followed by a dual-layered clustering procedure. The latter comprises a mild and a hard clustering phase. The mild phase is based on a number of morphological operations from the image-processing domain and aims to disjoin mildly connected objects and filter noise. The hard clustering phase exploits local feature matching techniques from the computer vision domain, aiming to sub-cluster the clusters obtained from the mild clustering stage. Evaluation on highly challenging single- and dual-energy X-ray imagery reveals the architecture's promising performance.

Item Open Access
Towards scene understanding implementing the stixel world (IEEE, 2019-03-07) Grenier, Amélie; Alzoubi, Alaa; Feetham, Luke; Nam, David

In this paper, we present our work towards scene understanding based on modeling the scene prior to understanding its content. We describe the environment representation model used, the Stixel World, and its benefits for compact scene representation.
We show our preliminary results of its application in a diverse environment and the limitations reached in our experiments using imaging systems. We argue that this method was developed for an ideal scenario and does not generalise well to uncommon changes in the environment. We also found that this method is sensitive to the quality of the stereo rectification and the calibration of the optics, among other parameters, which makes it time-consuming and delicate to prepare for real-time applications. In a theoretical discussion, we argue that pixel-wise semantic segmentation techniques can address some of the shortcomings of the presented concept.

Item Open Access
Vehicle Obstacle Interaction Dataset (VOIDataset) (Cranfield University, 2018-10-11 13:26) Alzoubi, Alaa; Nam, David

The Vehicle-Obstacle Interaction Dataset (VOIDataset) includes 277 trajectories (sequences of x, y positions of the vehicle and the obstacle) covering three scenarios: 67 crash, 106 left-pass and 104 right-pass trajectories. The distance between the vehicle and the obstacle (the length of each trajectory) is 50 metres. The trajectories were manually annotated and used to evaluate our activity recognition method. Data was gathered in a simulation environment developed in Virtual Battlespace 3 (VBS3), using the Logitech G29 Driving Force racing wheel and pedals and a model of a Dubai highway. We consider a six-lane road with an obstacle in the centre lane. The experiment involved 40 participants of varying ages, genders and driving experience. Participants were asked to use their driving experience to avoid the obstacle. A Skoda Octavia was used in all trials, with a maximum speed of 50 km/h. We recorded the obstacle's and ego-vehicle's coordinates (the centre position of the vehicle), velocity, heading angle, and distance from each other. The generated trajectories were recorded at 10 Hz.
Version 2: no change to the dataset, but contact details are appended for more information:
Alaa Alzoubi: alaa.alzoubi@buckingham.ac.uk
David Nam: d.nam@cranfield.ac.uk
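To illustrate the shape of VOIDataset-style data, the sketch below shows one way a trajectory of (x, y) samples might be represented and labelled by which side of the obstacle the vehicle passes. This is a hypothetical toy heuristic for illustration only, not the authors' activity recognition method; the field layout and file format are assumptions, so consult the dataset's own documentation for the actual schema.

```python
# Hypothetical sketch of working with VOIDataset-style trajectories.
# The record layout and the classification rule below are illustrative
# assumptions, not part of the published dataset or method.
from dataclasses import dataclass
from typing import List


@dataclass
class Sample:
    x: float  # lateral position of the vehicle (metres, assumed)
    y: float  # longitudinal position of the vehicle (metres, assumed)


def classify_pass(trajectory: List[Sample], obstacle_x: float) -> str:
    """Toy heuristic: label a trajectory by the vehicle's final lateral
    offset from the obstacle (crash / left-pass / right-pass)."""
    final_offset = trajectory[-1].x - obstacle_x
    if abs(final_offset) < 0.5:  # within ~half a vehicle width: crash
        return "crash"
    return "left-pass" if final_offset < 0 else "right-pass"


# Synthetic trajectory: 100 samples (10 s at 10 Hz), vehicle drifting
# right to avoid an obstacle in the centre lane at x = 0.
traj = [Sample(x=min(2.0, 0.05 * i), y=0.5 * i) for i in range(100)]
print(classify_pass(traj, obstacle_x=0.0))  # right-pass
```

The three labels mirror the dataset's crash / left-pass / right-pass scenarios; a real pipeline would load the recorded trajectories rather than generate synthetic ones.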