Follow the steps below to set up, run, and analyze your scenario:
- Set up the AVXConnector in CarMaker.
- Set up the Object Sensor in CarMaker.
- Create routes for Traffic Objects via Scenario Editor.
- Create Traffic Objects and name them (e.g., 'CAR1', 'CAR2', 'BIC1', 'PED1').
- Select 'Movie Geometry' for each Traffic Object.
- Add a route and set a start position for each Traffic Object.
- Set a motion model and maneuver for each Traffic Object.
- For optimal performance and to avoid memory errors, limit the number of Traffic Objects to a maximum of 7-8.
- Create a maneuver for the EGO car.
- Select OutputQuantities for each Traffic Object: ds.x, ds.y, ds.z and r_zyx.x, r_zyx.y, r_zyx.z (six quantities per object), plus Time. E.g., if you used 5 CARs, 2 PEDs, and 1 BIC, that is 8 objects × 6 quantities = 48 selections, plus Time, at a frequency of 100 Hz.
- Click the 'Start' button in CarMaker to launch your scenario.
- Go to the output folder where Ansys AVxcelerate Sensors Simulator's outputs are saved.
- Copy the CarMaker .DAT file from its saved location to the location where point clouds and contribution outputs are saved.
- Copy `/Python_scripts` into the same directory where the Ansys AVxcelerate Sensors CarMaker Cosimulation output resides.
- Open `RunSequenceScripts.py` and define the directories for processing in the `directory_paths` variable (see the sketch after this list).
- Run the script with the command: `python RunSequenceScripts.py`.
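The exact contents of `RunSequenceScripts.py` depend on your copy of the scripts, but `directory_paths` is simply a list of the output folders to process. A minimal sketch with placeholder paths (not values from the repository):

```python
# Inside RunSequenceScripts.py -- hypothetical example values.
# Each entry points to a folder containing the AVxcelerate point-cloud and
# contribution outputs plus the copied CarMaker .DAT file.
directory_paths = [
    r"C:/AVX_outputs/scenario_highway_01",
    r"C:/AVX_outputs/scenario_urban_02",
]
```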
After running the script, you will notice several output files:
- `deleted_files.txt`: Point clouds deleted due to duplication. See README.md.
- `Extents.txt`: The dimensions (length, width, and height) of the traffic objects. See README.md.
- `IDs.txt`: The EntityIDs corresponding to each traffic object. See README.md.
- `lidar_xxxxx_label.txt`: Label in KITTI label format. See README.md.
- `lidar_xxxxx_pointcloud.npy`: Point cloud in KITTI coordinates. See README.md.
Check the `Ready` folder for the renamed point clouds and their corresponding labels. Please note that this folder contains only the point clouds and labels that successfully captured traffic objects within the field of view.
To visualize the 3D bounding boxes during the label generation process, uncomment a specific section in the `LidarPointCloudLabelGenerator.py` script:

```python
# Uncomment the following part if you want to visualize the bounding boxes
# filtered_bboxes = [bbox for bbox in bboxes if bbox is not None]
# bbox_array = np.array(filtered_bboxes)
# bbox_array = np.squeeze(bbox_array)
# bbox_array = np.reshape(bbox_array, (bbox_array.shape[0], -1))
# V_mayavi.draw_scenes(points=filt_filt_points, gt_boxes=bbox_array)
# mlab.show(stop=True)
```
To build a larger synthetic point cloud database with KITTI-like labels, you can create additional routes and scenarios. For details, see Scaling_Simulated_Scenarios. In my research, I used the CarMaker Co-simulation Library to simulate driving scenarios in various virtual environments, generating a large number of useful point cloud frames.
You can define multiple directories for processing in the `directory_paths` variable in `RunSequenceScripts.py`.
Please note that only the data inside the `Ready` folder will be utilized in further processing. You may delete the intermediate outputs to conserve storage space.
Use the CollectData.py script to assemble your dataset from your own unique scenarios. Make sure to modify the `dst_labels_dir`, `dst_points_dir`, and `folders` variables to match your requirements; an illustrative sketch follows below.
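For orientation, the variables might be set along these lines; all paths and folder names below are placeholders, not values from the repository:

```python
# Inside CollectData.py -- hypothetical example values.
dst_labels_dir = "dataset/labels_AVX"   # destination for the collected label .txt files
dst_points_dir = "dataset/point_AVX"    # destination for the collected point-cloud .npy files
folders = [                             # 'Ready' folders of the scenarios to merge
    "scenario_highway_01/Ready",
    "scenario_urban_02/Ready",
]
```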
Run the CalibGenerator.py script to duplicate the constant calibration parameters used for every frame during training and evaluation. Modify `calib_dir` and `label_dir` inside `CalibGenerator.py` as needed.
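Because every synthetic frame shares the same sensor setup, the script essentially writes one identical KITTI-style calibration file per label file. A minimal sketch of that idea (not the repository script; the calibration text and directory names are placeholders, so substitute the matrices matching your sensor configuration):

```python
# Sketch of the CalibGenerator idea: duplicate one constant calibration
# file for every frame that has a label file.
import os

calib_dir = "dataset/calib_AVX"   # where the calibration files are written
label_dir = "dataset/labels_AVX"  # one calib file is created per label file

# Placeholder KITTI-style calibration block; fill in your sensor's values.
CALIB_TEXT = """P2: <12 values of the camera projection matrix>
R0_rect: <9 values of the rectification matrix>
Tr_velo_to_cam: <12 values of the LiDAR-to-camera transform>
"""

os.makedirs(calib_dir, exist_ok=True)
for label_file in os.listdir(label_dir):
    if label_file.endswith(".txt"):
        with open(os.path.join(calib_dir, label_file), "w") as f:
            f.write(CALIB_TEXT)
```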
With these steps, your training and evaluation sets are now ready.
The experiments in my study were performed on high-performance computing infrastructure: an Intel(R) Xeon(R) Gold 6238R CPU running at 2.20 GHz with 32 cores, and an RTX8000P-8Q virtual GPU with a total memory of 8192 MB.
The datasets were transferred to a Virtual Machine (VM) using WinSCP and stored in the following folders:

```
Train
├── point_AVX  : synthetic point cloud data files in .npy format
├── labels_AVX : the corresponding labels for the point cloud frames, in KITTI format
├── calib_AVX  : the corresponding calibration parameters for the point cloud frames, also in KITTI format
Test
├── point_AVX  : synthetic point cloud data files in .npy format
├── labels_AVX : the corresponding labels for the point cloud frames, in KITTI format
├── calib_AVX  : the corresponding calibration parameters for the point cloud frames, also in KITTI format
```
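For a quick sanity check, the `.npy` point clouds can be loaded directly with NumPy; the filename below is a placeholder, and each frame is typically an N x 4 array of x, y, z, and intensity:

```python
import numpy as np

# Example path -- use one of your own frame files.
points = np.load("Train/point_AVX/000000.npy")
print(points.shape)  # e.g. (N, 4): x, y, z, intensity
```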
The experiments were conducted using Python 3.10.6, compiled with GCC 11.3.0. Training and testing of the models were carried out using PyTorch version 1.13.1+cu117, with support from NVIDIA's CUDA toolkit (version 11.7) for GPU-accelerated operations.
The OpenPCDet platform was used for training and evaluation. Install OpenPCDet as described in INSTALL.md after fulfilling all of its requirements.
First, download the official KITTI 3D object detection dataset and organize it as follows:
```
OpenPCDet
├── data
│   ├── kitti
│   │   ├── ImageSets
│   │   ├── training
│   │   │   ├── calib & velodyne & label_2 & image_2
├── pcdet
├── tools
```
Transfer the kitti_training_train.7z and kitti_training_val.7z files to `./OpenPCDet/data` on the VM and extract them. These files are identical to the downloaded KITTI data, but are split into training and validation sets for your convenience.
For custom training sets, copy the create_trainset.py script to `./OpenPCDet/`.
Modify parameters like `dataset_name`, `my_data_size`, and `percent_synthtetic` as per your requirements. E.g., if you want a training set with 90% synthetic and 10% KITTI data, set `percent_synthtetic` to 0.9.
The KITTI training data contains 3,712 samples. If you wish to create a training set using only a fraction of the KITTI samples, set `percent_synthtetic` to 0 and adjust `my_data_size`. E.g., for a 20% KITTI training set, set `my_data_size` to (3712 * 0.2), i.e., about 742 samples. A sketch of how these parameters translate into sample counts follows below.
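As a concrete check of the arithmetic, the number of samples drawn from each source can be computed as follows (a sketch, assuming `my_data_size` is the total set size and `percent_synthtetic` is the synthetic fraction):

```python
# Hypothetical example: a 3,712-sample training set, 90% synthetic / 10% KITTI.
my_data_size = 3712
percent_synthtetic = 0.9  # spelling follows the script's variable name

n_synthetic = int(my_data_size * percent_synthtetic)  # 3340 synthetic frames
n_kitti = my_data_size - n_synthetic                  # 372 KITTI frames
print(f"{n_synthetic} synthetic + {n_kitti} KITTI samples")
```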
For custom test sets, copy the create_testset_avx.py and create_testset_kitti.py scripts into `./OpenPCDet/`. Ensure you correctly specify the directory names in the scripts.
Next, move the AVX_testset_KITTI.yaml and KITTI_testset_KITTI.yaml files to `./OpenPCDet/tools/cfgs/dataset_configs/`.
Transfer the kitti_dataset.py file to `./OpenPCDet/pcdet/datasets/kitti/`. Make sure to update the `data_path` and `save_path` parameters to match your specific requirements.
For instance, if you want to create a test set with only synthetic samples, set the following values in the `kitti_dataset.py` script:

```python
data_path=ROOT_DIR / 'data' / 'AVX_testset'
save_path=ROOT_DIR / 'data' / 'AVX_testset'
```
Make sure to comment out the sections of kitti_dataset.py related to training, while leaving the testing-related sections uncommented:

```python
##################### for TEST
# For Test use this part, for Train comment this part
dataset.set_split(val_split)
kitti_infos_val = dataset.get_infos(num_workers=workers, has_label=True, count_inside_pts=True)
with open(val_filename, 'wb') as f:
    pickle.dump(kitti_infos_val, f)
print('Kitti info val file is saved to %s' % val_filename)
##################### END for TEST
```
Now, run the following commands to create the data info:

```
python3 -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/AVX_testset_KITTI.yaml
```

and

```
python3 -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/KITTI_testset_KITTI.yaml
```
These data info files store the parameters needed for training and evaluation, such as bounding box information.
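If you want to verify what was generated, the info files are ordinary pickles; a quick way to peek inside (the path is an example, and the exact key layout may differ between OpenPCDet versions):

```python
import pickle

# Example path -- use the info file your command actually produced.
with open("data/AVX_testset/kitti_infos_val.pkl", "rb") as f:
    infos = pickle.load(f)

print(len(infos), "frames")
print(infos[0].keys())            # typically 'point_cloud', 'calib', 'annos', ...
print(infos[0]["annos"]["name"])  # object classes annotated in the first frame
```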
For additional details, refer to GETTING_STARTED.md.
For detailed information about the experiments, see README.md.
- Transfer kitti.yaml to the VM directory `./OpenPCDet/tools/cfgs/dataset_configs/`.
- Transfer pointpillar_kitti.yaml to the VM directory `./OpenPCDet/tools/cfgs/kitti_models/`.
- Modify the kitti_dataset.py file, which is already transferred to `./OpenPCDet/pcdet/datasets/kitti/`. Uncomment the sections related to training and comment out the sections related to testing. For instance, uncomment the following block:

```python
##################### START for TRAIN
# For Train use this part, for Test comment this part
dataset.set_split(train_split)
kitti_infos_train = dataset.get_infos(num_workers=workers, has_label=True, count_inside_pts=True)
with open(train_filename, 'wb') as f:
    pickle.dump(kitti_infos_train, f)
print('Kitti info train file is saved to %s' % train_filename)

print('---------------Start create groundtruth database for data augmentation---------------')
dataset.set_split(train_split)
dataset.create_groundtruth_database(train_filename, split=train_split)
##################### END for TRAIN
```

- Inside kitti_dataset.py, adjust the `data_path` and `save_path` parameters according to your specific needs. Make sure to change the values to `kitti`.
- Run the following command to generate the data info needed for training:

```
python3 -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti.yaml
```
- Copy train_modified.py to `./OpenPCDet/tools/`.
- Start the training process with this command:

```
python3 train_modified.py --cfg_file cfgs/kitti_models/pointpillar_kitti.yaml --extra_tag [optional_extra_tag]
```

Replace `optional_extra_tag` with any additional tag if necessary.
For KITTI evaluation, if you only want to evaluate the last epoch, copy the script test_single_KITTI.py to `./OpenPCDet/tools/`. Use the following command to run the evaluation:

```
python3 test_single_KITTI.py --ckpt [specify_checkpoint_directory] --extra_tag [optional_extra_tag]
```

Replace `specify_checkpoint_directory` with the directory where your checkpoint is located. Optionally, include the `--extra_tag` parameter if needed.
For AVX evaluation, do the same with test_single_AVX.py.
To evaluate multiple epochs selectively, copy the script select_checkpoints.py to `./OpenPCDet/tools/`. This step allows you to choose specific checkpoints for evaluation. If you wish to evaluate all checkpoints, you can skip this step, but note that it may take longer.
Inside select_checkpoints.py you can also modify the following line:

```python
checkpoints_to_copy = [1, 10, 20, 30, 40, 50, 60, 70, 80]
```

to specify the checkpoints you want to copy and evaluate later. Execute the following command to run the selection process:

```
python3 select_checkpoints.py [checkpoint_directory]
```

Replace `checkpoint_directory` with the actual path to the directory containing the checkpoints.
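If you do not have the repository script at hand, the selection step amounts to copying the chosen epoch files into a separate folder. A minimal sketch (not the repository script), assuming OpenPCDet's default `checkpoint_epoch_N.pth` naming:

```python
# Sketch of the checkpoint-selection idea.
import os
import shutil
import sys

checkpoints_to_copy = [1, 10, 20, 30, 40, 50, 60, 70, 80]

ckpt_dir = sys.argv[1]                        # the checkpoint directory argument
dst_dir = os.path.join(ckpt_dir, "selected")  # hypothetical destination folder
os.makedirs(dst_dir, exist_ok=True)

for epoch in checkpoints_to_copy:
    src = os.path.join(ckpt_dir, f"checkpoint_epoch_{epoch}.pth")
    if os.path.isfile(src):
        shutil.copy2(src, dst_dir)
```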
For KITTI evaluation with multiple epochs, copy the script test_multiple_KITTI.py to `./OpenPCDet/tools/`. Use the following command to perform the evaluation:

```
python3 test_multiple_KITTI.py --ckpt_dir [specify_checkpoints_directory] --extra_tag [optional_extra_tag]
```

Replace `specify_checkpoints_directory` with the actual directory path of the checkpoints. Additionally, include the `--extra_tag` parameter if needed.
Similarly, for AVX evaluation with multiple epochs, copy the script test_multiple_AVX.py to `./OpenPCDet/tools/`. Use the following command:

```
python3 test_multiple_AVX.py --ckpt [specify_checkpoints_directory] --extra_tag [optional_extra_tag]
```

Replace `specify_checkpoints_directory` with the actual directory path of the checkpoints. Add the `--extra_tag` parameter if required.
Once you have the results of your epoch evaluations, you can visualize them with TensorBoard. To do this, follow these steps:
- Copy the script tensorboard_visualize.sh to `./OpenPCDet/`.
- Modify the `log_dir` parameter in the script to specify the directory containing the log files.
- Run the script as an executable: `./tensorboard_visualize.sh`
- Open your preferred web browser and access the generated TensorBoard visualization.

By executing these steps, you will be able to visualize and analyze the epoch evolution graphs using TensorBoard.
To create the AVX trainset, follow these steps:
- Utilize the create_trainset.py script.
- Modify the `dataset_name` parameter in the script to reflect the desired dataset name.
- Adjust the `my_data_size` parameter to match the data size of the AVX trainset (your synthetic train set).
- Set the `percent_synthtetic` parameter to 1 to include only AVX data.
- Copy AVX_trainset_KITTI.yaml to `./OpenPCDet/tools/cfgs/dataset_configs/`.
- Copy AVX_trainset.yaml to `./OpenPCDet/tools/cfgs/kitti_models/`.
- Open the file kitti_dataset.py and modify the `data_path` and `save_path` parameters according to your requirements. Again, use it in training mode! Ensure that you make the necessary adjustments to the script and configuration files to suit your specific needs.

To generate the required data infos for training, execute the following command:

```
python3 -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/AVX_trainset_KITTI.yaml
```

You can use the previously copied training script train_modified.py. Then, run the training using the following command:

```
python3 train_modified.py --cfg_file cfgs/kitti_models/AVX_trainset.yaml --extra_tag [optional_extra_tag]
```

Replace `optional_extra_tag` with any additional tag if necessary. The evaluation steps remain the same as described above [Step 3, Step 4, Step 5].
For AVX trainsets consisting of 80% AVX and 20% KITTI data, or an equal split of 50% AVX and 50% KITTI data, the process is analogous. Now we'll focus on the 90% AVX / 10% KITTI case.
To create the AVX trainset with a composition of 90% AVX and 10% KITTI data, follow these steps:
- Utilize the create_trainset.py script.
- Modify the `dataset_name` parameter in the script to reflect the desired dataset name.
- Set the `my_data_size` parameter to 3712, which represents the total number of samples in the KITTI train dataset.
- Adjust the `percent_synthtetic` parameter to 0.9, indicating the desired proportion of AVX data.
- Copy AVX_90_kitti_10_train_KITTI.yaml to `./OpenPCDet/tools/cfgs/dataset_configs/`.
- Copy AVX_90_kitti_10_train_KITTI.yaml to `./OpenPCDet/tools/cfgs/kitti_models/`.
- Open the file kitti_dataset.py and modify the `data_path` and `save_path` parameters according to your requirements. Remember to use it in training mode! Ensure that you make the necessary adjustments to the script and configuration files to suit your specific needs.

To generate the required data infos for training, execute the following command:

```
python3 -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/AVX_90_kitti_10_train_KITTI.yaml
```

You can use the previously copied training script train_modified.py. Then, run the training using the following command:

```
python3 train_modified.py --cfg_file cfgs/kitti_models/AVX_90_kitti_10_train_KITTI.yaml --extra_tag [optional_extra_tag]
```

Replace `optional_extra_tag` with any additional tag if necessary. The evaluation steps remain the same as described above [Step 3, Step 4, Step 5 of Experiment 1].
The cases for 20% KITTI and 50% KITTI are straightforward. Now we'll focus on the 10% KITTI case.
To create the KITTI trainset with a composition of 10% of the total KITTI data, follow these steps:
- Utilize the create_trainset.py script.
- Modify the `dataset_name` parameter in the script to reflect the desired dataset name.
- Set the `my_data_size` parameter to (3712 * 0.1), i.e., about 371 samples, which represents the size of the 10% KITTI train dataset.
- Adjust the `percent_synthtetic` parameter to 0, indicating that no synthetic data will be included in the trainset.
- Copy kitti_10_train_KITTI.yaml to `./OpenPCDet/tools/cfgs/dataset_configs/`.
- Copy kitti_10_train_KITTI.yaml to `./OpenPCDet/tools/cfgs/kitti_models/`.
- Open the file kitti_dataset.py and modify the `data_path` and `save_path` parameters according to your requirements. Remember to use it in training mode! Ensure that you make the necessary adjustments to the script and configuration files to suit your specific needs.

To generate the required data infos for training, execute the following command:

```
python3 -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_10_train_KITTI.yaml
```

You can use the previously copied training script train_modified.py. Then, run the training with pre-training using the following command:

```
python3 train_modified.py --cfg_file cfgs/kitti_models/kitti_10_train_KITTI.yaml --extra_tag [optional_extra_tag] --pretrained_model [checkpoint_location]
```

Replace `optional_extra_tag` with any additional tag if necessary. Provide the location of the checkpoint from which the pre-trained parameters are loaded via the `checkpoint_location` parameter. The evaluation steps remain the same as described above [Step 3, Step 4, Step 5 of Experiment 1].

To train without pre-training, use the following command:

```
python3 train_modified.py --cfg_file cfgs/kitti_models/kitti_10_train_KITTI.yaml --extra_tag [optional_extra_tag]
```

Replace `optional_extra_tag` with any additional tag if necessary. The evaluation steps remain the same as described above [Step 3, Step 4, Step 5 of Experiment 1].
After the evaluations, navigate to the directories `./OpenPCDet/output/kitti_models/AVX_testset/[extra_tag]` for the evaluation on synthetic data, or `./OpenPCDet/output/kitti_models/KITTI_testset/[extra_tag]` for the evaluation on real data. In these directories, you will find the 'log_eval_XXXXXX' text files, which contain the Average Precision (AP) scores.
To create plots for each epoch and evaluation, load the corresponding `result_dict.pkl` files and use them to generate the plots needed for further analysis and visualization of the results, as sketched below.
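A sketch of such a plot, assuming each `result_dict.pkl` maps metric names (e.g. `'Car_3d/moderate_R40'`) to AP values; the paths and the metric key are assumptions, so adjust them to what your evaluation actually wrote:

```python
import pickle
import matplotlib.pyplot as plt

# Hypothetical layout: one result_dict.pkl per evaluated epoch.
epochs = [10, 20, 30, 40, 50, 60, 70, 80]
metric = "Car_3d/moderate_R40"  # adjust to a key present in your files

ap_values = []
for epoch in epochs:
    with open(f"eval/epoch_{epoch}/result_dict.pkl", "rb") as f:
        ap_values.append(pickle.load(f)[metric])

plt.plot(epochs, ap_values, marker="o")
plt.xlabel("Epoch")
plt.ylabel(f"AP ({metric})")
plt.savefig("ap_over_epochs.png")
```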
To generate screenshots of the frames and create a video, follow these steps:
- Copy the script demo_video.py to the directory `./OpenPCDet/tools/`.
- Modify the parameters in the script as follows:
  - `--cfg_file`: the configuration file path.
  - `--data_path`: the data path.
  - `--ckpt`: the checkpoint path.
  - `--ext`: the extension of the point clouds, `.npy` or `.bin`.
  - `--idx`: the number of frames to capture.
  - `--name`: the name of the output folder where the screenshots will be saved.
- Run the script. It will save screenshots of the first `--idx` frames within the `--data_path`, using the network parameters loaded from `--ckpt`. The outputs will be saved as `.png` files in the specified `--name` folder. The screenshots will display both the detections and the ground truths.
To create a video from the generated screenshots, follow these steps:
- Copy the script create_video.py to the directory `./OpenPCDet/tools/`.
- Modify the parameters in the script as follows:
  - `--image_folder`: the folder path containing the screenshots.
  - `--output_video_path`: the path where the video will be saved.
  - `--fps`: the frames per second (FPS) for the output video.
- Run the script. It will create a video using the screenshots located in the `--image_folder`. The resulting video will be saved at the specified `--output_video_path`, with the desired `--fps` value (a sketch of the core step follows below).
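If you need to adapt create_video.py, the core of stitching PNG screenshots into a video with OpenCV looks roughly like this (a sketch, not the repository script; the paths and FPS are example values):

```python
import glob
import cv2

image_folder = "screenshots_run1"   # example value for --image_folder
output_video_path = "run1.mp4"      # example value for --output_video_path
fps = 10                            # example value for --fps

frames = sorted(glob.glob(f"{image_folder}/*.png"))
height, width = cv2.imread(frames[0]).shape[:2]

writer = cv2.VideoWriter(output_video_path,
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
for frame_path in frames:
    writer.write(cv2.imread(frame_path))
writer.release()
```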
Using these scripts, the videos in the following repository were created.
If you prefer to investigate frames individually by zooming and translating, you can follow these steps:
- Copy the script demo_investigate.py to the directory `./OpenPCDet/tools/`.
- Modify the parameters in the script as follows:
  - `--cfg_file`: the configuration file path.
  - `--data_path`: the data path.
  - `--ckpt`: the checkpoint path.
- Run the script. It will allow you to investigate frames one by one, providing zooming and translation capabilities.
To visualize the ground truth data and object point clouds used for data augmentation in OpenPCDet, follow these steps:
- Copy the script visualize_gtdatabase.py to the `./OpenPCDet/` directory.
- Run the script using the command:

```
python3 visualize_gtdatabase.py --object_class [object_class] --folder_path [folder_path]
```

Replace `object_class` with the desired object class (e.g., Car), and `folder_path` with the path to the folder containing the ground truth database (e.g., data/AVX_testset/gt_database).
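If you only want a quick look without the script, OpenPCDet's ground truth database stores each object's points as a flat float32 `.bin` file; a small sketch for inspecting one (the filename is an example, and the four columns are assumed to be x, y, z, intensity):

```python
import numpy as np

# Example path; gt_database files are named like '000000_Car_0.bin'.
points = np.fromfile("data/AVX_testset/gt_database/000000_Car_0.bin",
                     dtype=np.float32).reshape(-1, 4)
print(points.shape)
print(points[:3])  # first three points: x, y, z, intensity
```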
To visualize ground truth boxes on point clouds, follow these steps:
- Copy the script visualize_gtboxes.py to the `./OpenPCDet/` directory.
- Run the script using the command:

```
python3 visualize_gtboxes.py --pkl_path [pkl_path] --extension [extension]
```

Replace `pkl_path` with the path to the `.pkl` file containing the ground truth box information (e.g., ./data/KITTI_testset/kitti_infos_val.pkl), and `extension` with the file extension of the point clouds (e.g., .npy).