Short Description
Sensory substitution rock and roll
Full Description

Idea

The current actively developed program is segmentertester. Objects are segmented, then tracked, then selected, and sounds are generated from the selected objects. See the program segmentertester.cpp.

See data/img/FlowOld.pdf for a slightly old diagram of the data flow.

Quick Start

Install (K/L/X/)ubuntu 16.04, then download and run the following script with an internet connection:

https://bitbucket.org/damienjadeduff/scaper/raw/master/scripts/install_pcl_from_source_ubuntu_1604.sh

This will set up the dependencies, PCL, the DepthSense drivers, and the current repository, all from source, on your computer.
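
For example (a minimal sketch, assuming you have sudo rights and a standard shell):

wget https://bitbucket.org/damienjadeduff/scaper/raw/master/scripts/install_pcl_from_source_ubuntu_1604.sh
bash install_pcl_from_source_ubuntu_1604.sh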

The regular start

1) Install Ubuntu 16.04.

2) Install prerequisites (CMake, a compiler, Boost, the Point Cloud Library dependencies, OpenAL and, for Kinect/Xtion, OpenNI). Tested on Kubuntu 14.04 with the ROS repository enabled (which may not be necessary).

sudo apt-get install build-essential git mercurial libboost-all-dev libspnav-dev libqt4-dev libflann-dev libvtk6-dev libvtk6-qt-dev libusb-1.0-0-dev libeigen3-dev cmake-curses-gui libopenni-dev libopenni-sensor-primesense0 libopenni-sensor-primesense-dev libopenni0 openni-utils libopenni2-dev openni2-utils alsoft-conf libopenal-dev libalut-dev libalut0 libopenal1 libqt4-opengl libqtcore4 libqtgui4

If some of these packages are unavailable you may need to first enable the ROS repositories:

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list' # add the ROS package source (assumed standard location)
wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -O - | sudo apt-key add -
sudo apt-get update

For the STK-based functionality (in development) - 14.04:

sudo apt-get install sox libsox-dev stk libstk0-dev libstk0c2a stk-doc

For the STK-based functionality (in development) - 16.04:

sudo apt-get install sox libsox-dev stk libstk0-dev stk-doc 

Here are some suggested extras:

sudo apt-get install dos2unix kdevelop kate gitk git-cola

3) Install the DepthSense drivers if you plan to use a DepthSense sensor:

mkdir -p ~/tmp
cd ~/tmp
wget http://files.djduff.net/Download/DepthSense/DepthSenseSDK-1.9.0-5-amd64-deb.run
chmod u+x DepthSenseSDK-1.9.0-5-amd64-deb.run
sudo ./DepthSenseSDK-1.9.0-5-amd64-deb.run # accept agreement

To test the drivers are installed, run:

/opt/softkinetic/DepthSenseSDK/bin/DepthSenseViewer -sa

(see 'known issues' below)

4) Install PCL from source. It is recommended that you build PCL twice, once in Debug mode and once in Release mode. The Debug build is necessary for debugging but is not fast enough for this program to run in real time.

mkdir -p ~/software/
cd ~/software
git clone https://github.com/PointCloudLibrary/pcl.git
cd pcl
mkdir build_rel # for release build
mkdir build_deb # for debug build

Note that a fork of PCL is temporarily needed for some functionality (see the SYNTHETIC DATA section below) while we wait for some patches to be accepted by the PCL project.

Now build PCL in release mode:

cd build_rel
cmake -DWITH_DSSDK=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_apps=ON -DBUILD_segmentation=ON -DBUILD_features=on -DBUILD_io=ON -DBUILD_stereo=ON -DBUILD_recognition=ON -DPCL_QT_VERSION=4 -DBUILD_filters=ON -DBUILD_search=ON -DBUILD_kdtree=ON -DWITH_OPENNI=ON -DWITH_OPENNI2=ON -DWITH_QT=ON -DWITH_VTK=ON -DWITH_ENSENSO=OFF -DWITH_DAVIDSDK=OFF ..
make

Now build PCL in debug mode:

cd ../build_deb
cmake -DWITH_DSSDK=ON -DCMAKE_BUILD_TYPE=Debug -DBUILD_apps=ON -DBUILD_segmentation=ON -DBUILD_features=on -DBUILD_io=ON -DBUILD_stereo=ON -DBUILD_recognition=ON -DPCL_QT_VERSION=4 -DBUILD_filters=ON -DBUILD_search=ON -DBUILD_kdtree=ON -DWITH_OPENNI=ON -DWITH_OPENNI2=ON -DWITH_QT=ON -DWITH_VTK=ON -DWITH_ENSENSO=OFF -DWITH_DAVIDSDK=OFF ..
make

If your PCL build directories differ from the above, adjust the PCL_DIR setting in the next step accordingly.

5) Install scaper.

cd ~/software/
git clone https://bitbucket.org/damienjadeduff/scaper.git
cd scaper
mkdir build_rel
mkdir build_deb
cd build_rel
cmake -DPCL_DIR=~/software/pcl/build_rel -DCMAKE_BUILD_TYPE=Release .. # for release build
make
cd ../build_deb
cmake -DPCL_DIR=~/software/pcl/build_deb -DCMAKE_BUILD_TYPE=Debug .. # for debug build
make

6) Run it from the build directory. To see the options, run:

./segmentertester -h

You can choose your data source (OpenNI2, OpenNI, DepthSense, file), the visualiser, parameters for the algorithms, etc.
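
For instance, an invocation might look like the following. The option names here are illustrative placeholders only, not the program's actual options; check ./segmentertester -h for the real names and values:

./segmentertester --source openni2                     # hypothetical: live Xtion/Kinect input
./segmentertester --source file --input recording.pcd  # hypothetical: replay a recorded point cloud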

To create a dataset for the FPFH approach

First, use the following commands to create the required folders:

mkdir ~/software/scaper/data/FPFH-Dataset
mkdir ~/software/scaper/data/FPFH-Dataset/Original_dataset
mkdir ~/software/scaper/data/FPFH-Dataset/Organized_pcd

Second, get the dataset:

1) Download RGB-D data from the Washington RGB-D dataset using the link below and save it in ~/software/scaper/data/FPFH-Dataset/Original_dataset:
http://rgbd-dataset.cs.washington.edu/dataset/rgbd-dataset_pcd_ascii/

First, change into the target directory:

cd ~/software/scaper/data/FPFH-Dataset/Original_dataset

Then download a file from the website, e.g. apple_1.tar:

wget "http://rgbd-dataset.cs.washington.edu/dataset/rgbd-dataset_pcd_ascii/apple_1.tar"

Finally, extract the file that you have downloaded, e.g.:
tar -xvf apple_1.tar

2) Install MATLAB.

3) Add the dataset directory to the MATLAB path by using this command in MATLAB:

addpath(fullfile(getenv('HOME'), 'software/scaper/data/FPFH-Dataset/Original_dataset'))

4) Then run the 'unorg2org.m' script from scaper/src/tools/ in MATLAB using the command below in your terminal (run it from the ~/software directory so that the relative path resolves, or adjust the path):

- This step turns the unorganized point clouds you downloaded into organized point clouds using the unorg2org.m MATLAB code.
- Running MATLAB this way executes the script from the terminal with no need for the MATLAB GUI.

matlab -nojvm -nodesktop -r "run scaper/src/tools/unorg2org.m"

NOTE: check the paths to make sure they match the directories you created.

Third, calculate FPFHs for the dataset using the following commands:

1) Go to the build directory:

  cd ~/software/scaper/build_rel

2) Run the following program in your terminal to calculate the FPFH values:

  ./OrgPCD-to-FPFH    

NOTE: make sure the input path you pass as an argument matches the directory where you saved your organized point clouds.
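
For example, something like the following (a sketch only; the exact argument form is an assumption here, so check the program's usage output if it differs):

  ./OrgPCD-to-FPFH ~/software/scaper/data/FPFH-Dataset/Organized_pcd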

SYNTHETIC DATA

To get synthetic data for testing (so that you can control the objects in the scene, etc.), the two programs synthesise_make_pose_list and synthesise_pose_list_to_pcds have been created. The first creates a text file containing a list of poses - you need to give it an object model and use it interactively (note: the key 'P' saves a pose to the target pose-list file). The second reads this file and also the object model and outputs PCD files.

An example has already been prepared: the model file is tableobjects.ply and a pose-list file was created at data/synthetic/tableobjects_scan1.txt. The second step, conversion of this file to PCD files, has not been done but can be done by calling the script test_synthesis.sh, which also contains the command line used to generate tableobjects_scan1.txt.
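
A minimal sketch of running that example (the location of test_synthesis.sh within the repository is assumed here; adjust the path to wherever the script actually lives):

cd ~/software/scaper
bash scripts/test_synthesis.sh # script path assumed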

All necessary files to generate data are in the repository.

You may need:

sudo apt-get install glew-utils libglew-dev freeglut3-dev
sudo apt-get install libxmu-dev libxi-dev

You need to enable the flag WITH_SYNTHESIS using cmake or ccmake.
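
For example, assuming the flag belongs to the scaper build and using the release build directory from the steps above:

cd ~/software/scaper/build_rel
cmake -DWITH_SYNTHESIS=ON ..
make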

WARNING: As of now (July 14) this only works with a fork of PCL that we made with some patches. If you took PCL from the main pointcloudlibrary on github, our fork can be incorporated like this:

  1. Using the shell, first change into your PCL directory.
  2. Run the following in the shell:

    ``git remote add fork https://github.com/damienjadeduff/pcl.git``
    
    ``git pull fork master``
    
  3. Now recompile PCL (see above for how to do that).

WARNING: If you get an error that looks like "Failed loading vertex shader", this is probably because the shaders are not available in the current path. Try copying them to your current directory:

cp ~/software/pcl/simulation/src/*frag ./
cp ~/software/pcl/simulation/src/*vert ./

KNOWN ISSUES

Issue: The histogram visualiser is buggy when used together with the other visualisers.

Solution:

Don't use it together with the other visualisers. It is probably a problem with the PCL code.

Issue: When I run the standalone DepthSenseViewer program I get a "no enumerator found, some dll are missing" error.

Solution:

This is caused by Ubuntu 14.04 no longer shipping the older libudev version that the SDK expects.

Use the following commands:

sudo apt-get install libudev1:i386
sudo ln -sf /lib/x86_64-linux-gnu/libudev.so.1 /lib/x86_64-linux-gnu/libudev.so.0

Now the viewer should work.

Issue: When building, make gives an error related to the Boost library.

Solution:

The problem is that the DepthSense grabber requires Boost and you need to link against it.

In CMakeLists.txt, add boost_thread to the end of the target_link_libraries list. Because the linker resolves a library's undefined symbols using libraries that appear later on the link line, putting boost_thread last resolves the dependency problem.

TODO (most urgent):

  • More sophisticated sound generation & feature extraction.
  • Make tracking / selection more stable.
  • More stable segmentation.
  • Allow keyboard input at any time as alternative to serial (Arduino) input.
  • Make the physical prototype.
  • Code cleanup:

    • Make boost program options use config files.
    • Make all pointers smart.
    • Do some hiding of visualisation and timing complexities.
    • Naming: segmenter is not a euclidean segmenter.
    • Give visualisers white backgrounds.
    • Remove compiler warnings.

TODO (medium):

  • Create the "scanning" approach.
  • Make sounds move to get a motion localisation effect (move them up the tunnel, for example). (??what does this mean, someone??)

TODO (big):

  • Multiple sensors for wider field of view.
  • Try it with stereo.

Coding Practices

There are enough people working on the project that we need to follow some coding practices. Here are some suggested practices for starters. Please add your suggestions for further practices.

  • The master branch should always be well tested on real data. If you are experimenting and want to share your experimental code, use a different branch, like experimental_XXX (see the example sketch after this list). See this tutorial: https://www.atlassian.com/git/tutorials/using-branches/.

  • However, merge your changes back into master frequently and merge changes from master into your branch frequently (this requires that you test frequently so that the merges are safe). Do not allow your code to diverge.

  • We are currently using the c++0x C++ standard. We can review this if necessary.

  • Use boost::program_options to take command line commands and config files.

  • Use the std::shared_ptr or other std library smart pointers instead of raw pointers except where performance is very important.

  • We try to practice abstraction and information hiding - including PIMPL to help with modularity.
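
For example, an experimental branch following the experimental_XXX convention mentioned above might be created and shared like this (the branch name is just an illustration):

git checkout -b experimental_new_sound_mapping
git push -u origin experimental_new_sound_mapping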

Other programs:

depthsense_pcd_recorder: records from a DepthSense sensor to file.

limittest: checks limits of the OpenAL implementation (the number of possible sources).

vis1: just copied from the PCL organised planar segmentation app; we want to reuse its code for visualisation (TODO).

merge: a very simple initial combination of OpenAL and PCL.

organized_segmentation_demo: the PCL organised segmentation demo adapted for DepthSense.

Donk

A previously tried idea is donk. The metaphor behind donk is that the image is divided into a grid and, for each grid entry, a large projectile is fired out into the world, but only into that part of the scene. When it hits the nearest part of the scene it makes a sound whose frequency is directly related to the curvature of the object local to where it hits (which is, in a sense, a proxy for its size). Big objects should produce low sounds, and so on. Then another projectile is fired for that grid entry, and so on. The default number of projectiles is one.

Tested with the DepthSense 325. Different parameters may be necessary to make it work with other sensors. The DepthSense has the advantage of working well close up. For the Xtion/Kinect I would recommend testing with objects that are further away.

The command line necessary would be:

./donk --depthsense --projectile_velocity 2.0 --DepthDependentSmoothing true