mosure / bevy_light_field

rust bevy light field array tooling

License: GNU Affero General Public License v3.0

Dockerfile 0.45% Rust 99.02% WGSL 0.53%
avatar-generator bevy light-field light-field-cameras light-field-processing light-field-reconstruction onshape person-segmentation rtsp rtsp-stream


bevy_light_field's Issues

TODO: add a recording finished event when all streams are closed, then execute the following command if pipeline auto-processing is enabled (not ideal for fast recording)


        if person_timeout.elapsed_secs() > 3.0 {
            person_timeout.reset();

            info!("no person detected for 3 seconds, stop recording");

            let _session_entity = live_session.0.take().unwrap();
            let _raw_streams = stream_manager.stop_recording();

            // TODO: add a recording finished event when all streams are closed, then execute the following command if pipeline auto-processing is enabled (not ideal for fast recording)

            // commands.entity(session_entity)
            //     .insert(RawStreams {
            //         streams: raw_streams,
            //     })
            //     .insert(PipelineConfig::default());
        }

        person_timeout.tick(time.delta());
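
A minimal sketch of what this TODO could look like, assuming a hypothetical `RecordingFinished` event and reusing the `RawStreams` and `PipelineConfig` components from the commented-out block; the event would be sent once the last stream's file handle closes:

use bevy::prelude::*;

// Hypothetical event; fired once every stream's file handle has closed.
#[derive(Event)]
pub struct RecordingFinished {
    pub session_entity: Entity,
    pub raw_streams: Vec<String>,
}

// Consumer: attach the recorded streams plus a default pipeline config,
// which un-gates the downstream pipeline systems for that entity.
fn on_recording_finished(
    mut commands: Commands,
    mut events: EventReader<RecordingFinished>,
) {
    for event in events.read() {
        commands.entity(event.session_entity)
            .insert(RawStreams { streams: event.raw_streams.clone() })
            .insert(PipelineConfig::default());
    }
}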

parse the json at each frame path to get the bounding boxes



#[derive(Component, Default)]
pub struct YoloFrames {
    pub frames: HashMap<StreamId, Vec<Vec<BoundingBox>>>,
    pub directory: String,
}
impl YoloFrames {
    pub fn load_from_session(
        session: &Session,
    ) -> Self {
        let directory = format!("{}/yolo_frames", session.directory);
        std::fs::create_dir_all(&directory).unwrap();

        let mut yolo_frames = Self {
            frames: HashMap::new(),
            directory,
        };
        yolo_frames.reload();

        yolo_frames
    }

    pub fn reload(&mut self) {
        std::fs::read_dir(&self.directory)
            .unwrap()
            .filter_map(|entry| entry.ok())
            .filter(|entry| entry.path().is_dir())
            .map(|stream_dir| {
                let stream_id = StreamId(stream_dir.path().file_name().unwrap().to_str().unwrap().parse::<usize>().unwrap());

                let frames = std::fs::read_dir(stream_dir.path()).unwrap()
                    .filter_map(|entry| entry.ok())
                    .filter(|entry| entry.path().is_file() && entry.path().extension().and_then(|s| s.to_str()) == Some("json"))
                    .map(|entry| std::fs::File::open(entry.path()).unwrap())
                    .map(|yolo_json_file| {
                        let bounding_boxes: Vec<BoundingBox> = serde_json::from_reader(&yolo_json_file).unwrap();

                        bounding_boxes
                    })
                    .collect::<Vec<_>>();

                // TODO: parse the json at each frame path to get the bounding boxes

                (stream_id, frames)
            })
            .for_each(|(stream_id, frames)| {
                self.frames.insert(stream_id, frames);
            });
    }

    pub fn write(&self) {
        self.frames.iter()
            .for_each(|(stream_id, frames)| {
                let output_directory = format!("{}/{}", self.directory, stream_id.0);
                std::fs::create_dir_all(&output_directory).unwrap();

                frames.iter()
                    .enumerate()
                    .for_each(|(frame_idx, bounding_boxes)| {
                        let path = format!("{}/{}.json", output_directory, frame_idx);
                        let _ = serde_json::to_writer(std::fs::File::create(path).unwrap(), bounding_boxes);
                    });
            });
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/yolo_frames", session.directory);
        std::fs::metadata(output_directory).is_ok()
    }

    pub fn image(&self, _camera: usize, _frame: usize) -> Option<Image> {
        todo!()
    }
}
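
// `BoundingBox` itself is not shown in this excerpt; the serde round-trip
// above only requires it to derive Serialize/Deserialize (plus Clone for
// the copy made when writing). A plausible shape, as a sketch: the field
// names here are assumptions, the real struct lives elsewhere in the repo.
use serde::{Deserialize, Serialize};

#[derive(Clone, Serialize, Deserialize)]
pub struct BoundingBox {
    pub x1: f32,
    pub y1: f32,
    pub x2: f32,
    pub y2: f32,
    pub confidence: f32,
}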



#[derive(Component, Default)]
pub struct RotatedFrames {
    pub frames: HashMap<StreamId, Vec<String>>,
    pub directory: String,
}
impl RotatedFrames {
    pub fn load_from_session(
        session: &Session,
    ) -> Self {
        let directory = format!("{}/rotated_frames", session.directory);
        std::fs::create_dir_all(&directory).unwrap();

        let mut raw_frames = Self {
            frames: HashMap::new(),
            directory,
        };
        raw_frames.reload();

        raw_frames
    }

    pub fn reload(&mut self) {
        std::fs::read_dir(&self.directory)
            .unwrap()
            .filter_map(|entry| entry.ok())
            .filter(|entry| entry.path().is_dir())
            .map(|stream_dir| {
                let stream_id = StreamId(stream_dir.path().file_name().unwrap().to_str().unwrap().parse::<usize>().unwrap());

                let frames = std::fs::read_dir(stream_dir.path()).unwrap()
                    .filter_map(|entry| entry.ok())
                    .filter(|entry| entry.path().is_file() && entry.path().extension().and_then(|s| s.to_str()) == Some("png"))
                    .map(|entry| entry.path().to_str().unwrap().to_string())
                    .collect::<Vec<_>>();

                (stream_id, frames)
            })
            .for_each(|(stream_id, frames)| {
                self.frames.insert(stream_id, frames);
            });
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/rotated_frames", session.directory);
        std::fs::metadata(output_directory).is_ok()
    }

    pub fn image(&self, _camera: usize, _frame: usize) -> Option<Image> {
        todo!()
    }
}



#[derive(Component, Default, Reflect)]
pub struct MaskFrames {
    pub frames: HashMap<StreamId, Vec<String>>,
    pub directory: String
}
impl MaskFrames {
    pub fn load_from_session(
        session: &Session,
    ) -> Self {
        let directory = format!("{}/masks", session.directory);
        std::fs::create_dir_all(&directory).unwrap();

        let mut mask_frames = Self {
            frames: HashMap::new(),
            directory,
        };
        mask_frames.reload();

        mask_frames
    }

    pub fn reload(&mut self) {
        std::fs::read_dir(&self.directory)
            .unwrap()
            .filter_map(|entry| entry.ok())
            .filter(|entry| entry.path().is_dir())
            .map(|stream_dir| {
                let stream_id = StreamId(stream_dir.path().file_name().unwrap().to_str().unwrap().parse::<usize>().unwrap());

                let frames = std::fs::read_dir(stream_dir.path()).unwrap()
                    .filter_map(|entry| entry.ok())
                    .filter(|entry| entry.path().is_file() && entry.path().extension().and_then(|s| s.to_str()) == Some("png"))
                    .map(|entry| entry.path().to_str().unwrap().to_string())
                    .collect::<Vec<_>>();

                (stream_id, frames)
            })
            .for_each(|(stream_id, frames)| {
                self.frames.insert(stream_id, frames);
            });
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/masks", session.directory);
        std::fs::metadata(output_directory).is_ok()
    }

    pub fn image(&self, _camera: usize, _frame: usize) -> Option<Image> {
        todo!()
    }
}

separate image loading and onnx inference (so the image loading result can be viewed in the pipeline grid view)


fn generate_rotated_frames(
    mut commands: Commands,
    descriptors: Res<StreamDescriptors>,
    raw_frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RawFrames,
            &Session,
        ),
        Without<RotatedFrames>,
    >,
) {
    // TODO: create a caching/loading system wrapper over run_node interior
    for (
        entity,
        config,
        raw_frames,
        session,
    ) in raw_frames.iter() {
        // TODO: get stream descriptor rotation

        if config.rotate_raw_frames {
            let run_node = !RotatedFrames::exists(session);
            let mut rotated_frames = RotatedFrames::load_from_session(session);

            if run_node {
                let rotations: HashMap<StreamId, f32> = descriptors.0.iter()
                    .enumerate()
                    .map(|(id, descriptor)| (StreamId(id), descriptor.rotation.unwrap_or_default()))
                    .collect();

                info!("generating rotated frames for session {}", session.id);

                raw_frames.frames.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", rotated_frames.directory, stream_id.0);
                        std::fs::create_dir_all(&output_directory).unwrap();

                        let frames = frames.par_iter()
                            .map(|frame| {
                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();
                                let output_path = format!("{}/{}.png", output_directory, frame_idx);

                                rotate_image(
                                    std::path::Path::new(frame),
                                    std::path::Path::new(&output_path),
                                    rotations[stream_id],
                                ).unwrap();

                                output_path
                            })
                            .collect::<Vec<_>>();

                        rotated_frames.frames.insert(*stream_id, frames);
                    });
            } else {
                info!("rotated frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(rotated_frames);
        }
    }
}


fn generate_mask_frames(
    mut commands: Commands,
    frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RotatedFrames,
            &Session,
        ),
        Without<MaskFrames>,
    >,
    modnet: Res<Modnet>,
    onnx_assets: Res<Assets<Onnx>>,
) {
    for (
        entity,
        config,
        frames,
        session,
    ) in frames.iter() {
        if config.mask_frames {
            if onnx_assets.get(&modnet.onnx).is_none() {
                return;
            }

            let onnx = onnx_assets.get(&modnet.onnx).unwrap();
            let onnx_session_arc = onnx.session.clone();
            let onnx_session_lock = onnx_session_arc.lock().map_err(|e| e.to_string()).unwrap();
            let onnx_session = onnx_session_lock.as_ref().ok_or("failed to get session from ONNX asset").unwrap();

            let run_node = !MaskFrames::exists(session);
            let mut mask_frames = MaskFrames::load_from_session(session);

            if run_node {
                info!("generating mask frames for session {}", session.id);

                frames.frames.keys()
                    .for_each(|stream_id| {
                        let output_directory = format!("{}/{}", mask_frames.directory, stream_id.0);
                        std::fs::create_dir_all(output_directory).unwrap();
                    });

                let mask_images = frames.frames.iter()
                    .map(|(stream_id, frames)| {
                        let frames = frames.iter()
                            .map(|frame| {
                                let mut decoder = png::Decoder::new(std::fs::File::open(frame).unwrap());
                                decoder.set_transformations(Transformations::EXPAND | Transformations::ALPHA);
                                let mut reader = decoder.read_info().unwrap();
                                let mut img_data = vec![0; reader.output_buffer_size()];
                                let _ = reader.next_frame(&mut img_data).unwrap();

                                assert_eq!(reader.info().bytes_per_pixel(), 3);

                                let width = reader.info().width;
                                let height = reader.info().height;

                                // TODO: separate image loading and onnx inference (so the image loading result can be viewed in the pipeline grid view)
                                let image = Image::new(
                                    Extent3d {
                                        width,
                                        height,
                                        depth_or_array_layers: 1,
                                    },
                                    bevy::render::render_resource::TextureDimension::D2,
                                    img_data,
                                    bevy::render::render_resource::TextureFormat::Rgba8UnormSrgb,
                                    RenderAssetUsages::all(),
                                );

                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();

                                (
                                    frame_idx,
                                    modnet_inference(
                                        onnx_session,
                                        &[&image],
                                        Some((512, 512)),
                                    ).pop().unwrap(),
                                )
                            })
                            .collect::<Vec<_>>();

                        (stream_id, frames)
                    })
                    .collect::<Vec<_>>();

                mask_images.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", mask_frames.directory, stream_id.0);
                        let mask_paths = frames.iter()
                            .map(|(frame_idx, frame)| {
                                let path = format!("{}/{}.png", output_directory, frame_idx);

                                let buffer = ImageBuffer::<Luma<u8>, Vec<u8>>::from_raw(
                                    frame.width(),
                                    frame.height(),
                                    frame.data.clone(),
                                ).unwrap();

                                let _ = buffer.save(&path);

                                path
                            })
                            .collect::<Vec<_>>();

                        mask_frames.frames.insert(**stream_id, mask_paths);
                    });
            } else {
                info!("mask frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(mask_frames);
        }
    }
}
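
// TODO context: the sketch below factors the PNG-decoding block above into
// a standalone loader, so the grid view could display the loaded frame
// before inference runs; it reuses only calls already present in this file.
fn load_png_image(path: &str) -> Image {
    let mut decoder = png::Decoder::new(std::fs::File::open(path).unwrap());
    decoder.set_transformations(Transformations::EXPAND | Transformations::ALPHA);
    let mut reader = decoder.read_info().unwrap();
    let mut img_data = vec![0; reader.output_buffer_size()];
    let _ = reader.next_frame(&mut img_data).unwrap();

    Image::new(
        Extent3d {
            width: reader.info().width,
            height: reader.info().height,
            depth_or_array_layers: 1,
        },
        bevy::render::render_resource::TextureDimension::D2,
        img_data,
        bevy::render::render_resource::TextureFormat::Rgba8UnormSrgb,
        RenderAssetUsages::all(),
    )
}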


fn generate_yolo_frames(
    mut commands: Commands,
    raw_frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RawFrames,
            &Session,
        ),
        Without<YoloFrames>,
    >,
    yolo: Res<Yolo>,
    onnx_assets: Res<Assets<Onnx>>,
) {
    for (
        entity,
        config,
        raw_frames,
        session,
    ) in raw_frames.iter() {
        if config.yolo {
            if onnx_assets.get(&yolo.onnx).is_none() {
                return;
            }

            let onnx = onnx_assets.get(&yolo.onnx).unwrap();
            let onnx_session_arc = onnx.session.clone();
            let onnx_session_lock = onnx_session_arc.lock().map_err(|e| e.to_string()).unwrap();
            let onnx_session = onnx_session_lock.as_ref().ok_or("failed to get session from ONNX asset").unwrap();

            let run_node = !YoloFrames::exists(session);
            let mut yolo_frames = YoloFrames::load_from_session(session);

            if run_node {
                info!("generating yolo frames for session {}", session.id);

                raw_frames.frames.keys()
                    .for_each(|stream_id| {
                        let output_directory = format!("{}/{}", yolo_frames.directory, stream_id.0);
                        std::fs::create_dir_all(output_directory).unwrap();
                    });

                // TODO: support async ort inference (re. progress bars)
                let bounding_box_streams = raw_frames.frames.iter()
                    .map(|(stream_id, frames)| {
                        let frames = frames.iter()
                            .map(|frame| {
                                let mut decoder = png::Decoder::new(std::fs::File::open(frame).unwrap());
                                decoder.set_transformations(Transformations::EXPAND | Transformations::ALPHA);
                                let mut reader = decoder.read_info().unwrap();
                                let mut img_data = vec![0; reader.output_buffer_size()];
                                let _ = reader.next_frame(&mut img_data).unwrap();

                                assert_eq!(reader.info().bytes_per_pixel(), 3);

                                let width = reader.info().width;
                                let height = reader.info().height;

                                // TODO: separate image loading and onnx inference (so the image loading result can be viewed in the pipeline grid view)
                                let image = Image::new(
                                    Extent3d {
                                        width,
                                        height,
                                        depth_or_array_layers: 1,
                                    },
                                    bevy::render::render_resource::TextureDimension::D2,
                                    img_data,
                                    bevy::render::render_resource::TextureFormat::Rgba8UnormSrgb,
                                    RenderAssetUsages::all(),
                                );

                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();

                                (
                                    frame_idx,
                                    yolo_inference(
                                        onnx_session,
                                        &image,
                                        0.5,
                                    ),
                                )
                            })
                            .collect::<Vec<_>>();

                        (stream_id, frames)
                    })
                    .collect::<Vec<_>>();

                bounding_box_streams.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", yolo_frames.directory, stream_id.0);
                        let bounding_boxes = frames.iter()
                            .map(|(frame_idx, bounding_boxes)| {
                                let path = format!("{}/{}.json", output_directory, frame_idx);

                                let _ = serde_json::to_writer(std::fs::File::create(path).unwrap(), bounding_boxes);

                                bounding_boxes.clone()
                            })
                            .collect::<Vec<_>>();

                        yolo_frames.frames.insert(**stream_id, bounding_boxes);
                    });
            } else {
                info!("yolo frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(yolo_frames);
        }
    }
}


// TODO: alphablend frames
#[derive(Component, Default)]
pub struct AlphablendFrames {
    pub frames: HashMap<StreamId, Vec<String>>,
    pub directory: String,
}
impl AlphablendFrames {
    pub fn load_from_session(
        session: &Session,
    ) -> Self {
        let directory = format!("{}/alphablend", session.directory);
        std::fs::create_dir_all(&directory).unwrap();

        let mut alphablend_frames = Self {
            frames: HashMap::new(),
            directory,
        };
        alphablend_frames.reload();

        alphablend_frames
    }

    pub fn reload(&mut self) {
        std::fs::read_dir(&self.directory)
            .unwrap()
            .filter_map(|entry| entry.ok())
            .filter(|entry| entry.path().is_dir())
            .map(|stream_dir| {
                let stream_id = StreamId(stream_dir.path().file_name().unwrap().to_str().unwrap().parse::<usize>().unwrap());

                let frames = std::fs::read_dir(stream_dir.path()).unwrap()
                    .filter_map(|entry| entry.ok())
                    .filter(|entry| entry.path().is_file() && entry.path().extension().and_then(|s| s.to_str()) == Some("png"))
                    .map(|entry| entry.path().to_str().unwrap().to_string())
                    .collect::<Vec<_>>();

                (stream_id, frames)
            })
            .for_each(|(stream_id, frames)| {
                self.frames.insert(stream_id, frames);
            });
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/alphablend", session.directory);
        std::fs::metadata(output_directory).is_ok()
    }

    pub fn image(&self, _camera: usize, _frame: usize) -> Option<Image> {
        todo!()
    }
}



// TODO: support loading maskframes -> images into a pipeline mask viewer
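
// Sketch for the viewer TODO above: the first piece such a viewer needs is
// a (camera, frame) -> path lookup over `MaskFrames`; the method name is an
// assumption, and decoding the PNG into an `Image` would follow separately.
impl MaskFrames {
    pub fn path(&self, camera: usize, frame: usize) -> Option<&String> {
        self.frames.get(&StreamId(camera))?.get(frame)
    }
}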


#[derive(Component, Default)]
pub struct RawFrames {
    pub frames: HashMap<StreamId, Vec<String>>,
    pub directory: String,
}
impl RawFrames {
    pub fn load_from_session(
        session: &Session,
    ) -> Self {
        let directory = format!("{}/frames", session.directory);
        std::fs::create_dir_all(&directory).unwrap();

        let mut raw_frames = Self {
            frames: HashMap::new(),
            directory,
        };
        raw_frames.reload();

        raw_frames
    }

    pub fn reload(&mut self) {
        std::fs::read_dir(&self.directory)
            .unwrap()
            .filter_map(|entry| entry.ok())
            .filter(|entry| entry.path().is_dir())
            .map(|stream_dir| {
                let stream_id = StreamId(stream_dir.path().file_name().unwrap().to_str().unwrap().parse::<usize>().unwrap());

                let frames = std::fs::read_dir(stream_dir.path()).unwrap()
                    .filter_map(|entry| entry.ok())
                    .filter(|entry| entry.path().is_file() && entry.path().extension().and_then(|s| s.to_str()) == Some("png"))
                    .map(|entry| entry.path().to_str().unwrap().to_string())
                    .collect::<Vec<_>>();

                (stream_id, frames)
            })
            .for_each(|(stream_id, frames)| {
                self.frames.insert(stream_id, frames);
            });
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::metadata(output_directory).is_ok()
    }
}

add pipeline viewer /w left/right arrow keys and UI controls to switch between frames
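
A minimal sketch of the keyboard half of this TODO, using the same `ButtonInput<KeyCode>` pattern as the recording hotkeys elsewhere in the repo; the `FrameCursor` resource is an assumption:

use bevy::prelude::*;

// Hypothetical resource tracking which frame index the viewer shows.
#[derive(Resource, Default)]
struct FrameCursor(usize);

fn frame_navigation(
    keys: Res<ButtonInput<KeyCode>>,
    mut cursor: ResMut<FrameCursor>,
) {
    if keys.just_pressed(KeyCode::ArrowRight) {
        cursor.0 += 1;
    }
    if keys.just_pressed(KeyCode::ArrowLeft) {
        cursor.0 = cursor.0.saturating_sub(1);
    }
}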



fn calculate_grid_dimensions(window_width: f32, window_height: f32, num_streams: usize) -> (usize, usize, f32, f32) {
    let window_aspect_ratio = window_width / window_height;
    let stream_aspect_ratio: f32 = 16.0 / 9.0;

    // ...
}
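
The excerpt cuts the function off after the aspect-ratio setup; a plausible completion, under the assumption that it returns (columns, rows, cell width, cell height) maximizing the per-stream cell area:

fn calculate_grid_dimensions_sketch(window_width: f32, window_height: f32, num_streams: usize) -> (usize, usize, f32, f32) {
    let stream_aspect_ratio: f32 = 16.0 / 9.0;

    // Try every column count and keep the layout with the largest cells.
    (1..=num_streams)
        .map(|cols| {
            let rows = (num_streams + cols - 1) / cols; // ceiling division
            let cell_width = (window_width / cols as f32)
                .min(window_height / rows as f32 * stream_aspect_ratio);
            let cell_height = cell_width / stream_aspect_ratio;
            (cols, rows, cell_width, cell_height)
        })
        .max_by(|a, b| (a.2 * a.3).partial_cmp(&(b.2 * b.3)).unwrap())
        .unwrap_or((1, 1, window_width, window_height))
}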


alphablend frames

support enabling/disabling decoding/matting per stream (e.g. during 'record mode')

        ..default()
    };

    // TODO: support enabling/disabling decoding/matting per stream (e.g. during 'record mode')

    let input_images: Vec<Handle<Image>> = stream_uris.0.iter()
        .enumerate()
        .map(|(index, descriptor)| {
            let entity = commands.spawn_empty().id();

            let mut image = Image {
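
The excerpt ends mid-setup; one way to express the per-stream enable/disable this TODO describes, with hypothetical component and field names:

use bevy::prelude::*;

// Hypothetical per-stream switches; a 'record mode' system could flip
// these off to skip decode/matting work while capturing to disk.
#[derive(Component, Clone, Copy)]
pub struct StreamProcessing {
    pub decode: bool,
    pub matting: bool,
}

fn enter_record_mode(mut streams: Query<&mut StreamProcessing>) {
    for mut processing in &mut streams {
        processing.decode = false;
        processing.matting = false;
    }
}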

move new and exists to a trait


use bevy::prelude::*;
use image::DynamicImage;
use rayon::prelude::*;


pub struct PipelinePlugin;
impl Plugin for PipelinePlugin {
    fn build(&self, app: &mut App) {
        app.add_systems(Update, generate_annotations);
    }
}


#[derive(Component, Reflect)]
pub struct PipelineConfig {
    pub raw_frames: bool,
    pub subject_refinement: bool,           // https://github.com/onnx/models/tree/main/validated/vision/body_analysis/ultraface
    pub repair_frames: bool,                // https://huggingface.co/docs/diffusers/en/optimization/onnx & https://github.com/bnm6900030/swintormer
    pub upsample_frames: bool,              // https://huggingface.co/ssube/stable-diffusion-x4-upscaler-onnx
    pub mask_frames: bool,                  // https://github.com/ZHKKKe/MODNet
    pub light_field_cameras: bool,          // https://github.com/jasonyzhang/RayDiffusion
    pub depth_maps: bool,                   // https://github.com/fabio-sim/Depth-Anything-ONNX
    pub gaussian_cloud: bool,
}

impl Default for PipelineConfig {
    fn default() -> Self {
        Self {
            raw_frames: true,
            subject_refinement: false,
            repair_frames: false,
            upsample_frames: false,
            mask_frames: false,
            light_field_cameras: false,
            depth_maps: false,
            gaussian_cloud: false,
        }
    }
}


#[derive(Bundle, Default, Reflect)]
pub struct StreamSessionBundle {
    pub config: PipelineConfig,
    pub raw_streams: RawStreams,
    pub session: Session,
}

// TODO: use an entity saver to write Session and its components (e.g. `0/session.ron`)


#[derive(Component, Default, Reflect)]
pub struct Session {
    pub id: usize,
    pub directory: String,
}

impl Session {
    pub fn new(directory: String) -> Self {
        let id = get_next_session_id(&directory);
        let directory = format!("{}/{}", directory, id);
        std::fs::create_dir_all(&directory).unwrap();

        Self { id, directory }
    }
}


pub trait PipelineNode {
    fn new(session: &Session) -> Self;
    fn exists(session: &Session) -> bool;
}


#[derive(Component, Default, Reflect)]
pub struct RawStreams {
    pub streams: Vec<String>,
}
impl RawStreams {
    pub fn decoders(&self) -> () {
        // TODO: create decoders for each h264 file stream
        todo!()
    }
}


// TODO: add a pipeline config for the session describing which components to run


fn generate_annotations(
    mut commands: Commands,
    raw_streams: Query<
        (
            Entity,
            &PipelineConfig,
            &RawStreams,
            &Session,
        ),
        Without<RawFrames>,
    >,
) {
    for (
        entity,
        config,
        raw_streams,
        session,
    ) in raw_streams.iter() {
        let raw_frames = RawFrames::new(session);

        if config.raw_frames {
            let run_node = !RawFrames::exists(session);
            let mut raw_frames = RawFrames::new(session);

            if run_node {
                // TODO: add (identifier == "{}_{}", stream, frame) output to this first stage (forward to mask stage /w image in RAM)
                let frames = raw_streams.streams.par_iter()
                    .map(|stream| {
                        let decoder = todo!();

                        (stream, decoder)
                    })
                    .map(|(stream, decoder)| {
                        let frames: Vec<DynamicImage> = vec![];

                        // TODO: read all frames

                        (stream, frames)
                    })
                    .collect::<Vec<_>>();

                frames.par_iter()
                    .for_each(|(stream, frames)| {
                        frames.iter().enumerate()
                            .for_each(|(i, frame)| {
                                let path = format!(
                                    "{}/frames/{}_{}.png",
                                    session.directory,
                                    stream,
                                    i,
                                );
                                frame.save(path).unwrap();
                            });
                    });
            }
        }

        commands.entity(entity).insert(raw_frames);
    }
}


#[derive(Component, Default, Reflect)]
pub struct RawFrames {
    pub frames: Vec<String>,
}
impl RawFrames {
    // TODO: move new and exists to a trait

    pub fn new(
        session: &Session,
    ) -> Self {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::create_dir_all(&output_directory).unwrap();

        // TODO: load all files that are already in the directory

        Self {
            frames: vec![],
        }
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::metadata(&output_directory).is_ok()
    }

    pub fn image(&self, camera: usize, frame: usize) -> Option<Image> {
        todo!()
    }
}


#[derive(Component, Default, Reflect)]
pub struct MaskFrames {
    pub frames: Vec<String>,
}
impl MaskFrames {
    pub fn new(
        session: &Session,
    ) -> Self {
        let output_directory = format!("{}/masks", session.directory);
        std::fs::create_dir_all(&output_directory).unwrap();

        Self {
            frames: vec![],
        }
    }

    pub fn image(&self, camera: usize, frame: usize) -> Option<Image> {
        todo!()
    }
}


#[derive(Default, Clone, Reflect)]
pub struct LightFieldCamera {
    // TODO: intrinsics/extrinsics
}

#[derive(Component, Default, Reflect)]
pub struct LightFieldCameras {
    pub cameras: Vec<LightFieldCamera>,
}



fn get_next_session_id(output_directory: &str) -> usize {
    match std::fs::read_dir(output_directory) {
        Ok(entries) => entries
            .filter_map(|entry| {
                let entry = entry.ok()?;
                if entry.path().is_dir() {
                    entry.file_name().to_string_lossy().parse::<usize>().ok()
                } else {
                    None
                }
            })
            .max()
            .map_or(0, |max_id| max_id + 1),
        Err(_) => 0,
    }
}
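
The `PipelineNode` trait declared above already has the right shape for this TODO; a sketch of `RawFrames` implementing it instead of carrying inherent `new`/`exists` methods, reusing the method bodies shown above:

impl PipelineNode for RawFrames {
    fn new(session: &Session) -> Self {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::create_dir_all(&output_directory).unwrap();

        Self { frames: vec![] }
    }

    fn exists(session: &Session) -> bool {
        std::fs::metadata(format!("{}/frames", session.directory)).is_ok()
    }
}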

intrinsics/extrinsics

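`LightFieldCamera` is still an empty placeholder; one common pinhole-style parameterization for the missing intrinsics/extrinsics, purely illustrative (the struct name and fields here are assumptions, not the repo's design):

use bevy::prelude::*;

// Assumed calibration layout; the real fields may differ.
#[derive(Default, Clone, Reflect)]
pub struct CalibratedCamera {
    pub focal: Vec2,        // fx, fy in pixels (intrinsics)
    pub principal: Vec2,    // cx, cy in pixels (intrinsics)
    pub rotation: Quat,     // world-to-camera rotation (extrinsics)
    pub translation: Vec3,  // world-to-camera translation (extrinsics)
}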

clean this mess


    },
    codec::VideoFrame,
};
use url::Url;


const RTSP_URIS: [&str; 2] = [
    // "rtsp://localhost:554/lizard",
    // "rtsp://rtspstream:[email protected]/movie",
    // "rtsp://rtspstream:[email protected]/pattern",
    "rtsp://192.168.1.23/user=admin&password=admin123&channel=1&stream=0.sdp?",
    "rtsp://192.168.1.24/user=admin&password=admin123&channel=1&stream=0.sdp?",
];


fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Update, press_esc_close)
        .add_systems(Startup, spawn_tasks)
        .add_systems(Update, task_loop)
        .add_systems(Update, handle_tasks)
        .run();
}


#[derive(Component)]
struct ReadRtspFrame(Task<CommandQueue>);

#[derive(Component, Clone)]
struct RtspStream {
    uri: String,
    index: usize,
    demuxed: Arc<Mutex<Option<Demuxed>>>,
    decoder: Arc<Mutex<Option<Decoder>>>,
}


// TODO: decouple bevy async tasks and the multi-stream RTSP handling
//       a bevy system should query the readiness of a frame for each RTSP stream and update texture as needed
//       the RTSP handling should be a separate async tokio pool that updates the readiness of each RTSP stream /w buffer for transfer to texture
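
// A sketch of the decoupling described above, with illustrative names: a
// tokio task would own the RTSP session and overwrite a shared slot with
// the newest decoded frame; a Bevy system drains the slot once per tick.
#[derive(Clone, Default)]
struct LatestFrame(std::sync::Arc<std::sync::Mutex<Option<Vec<u8>>>>);

impl LatestFrame {
    // tokio side: publish, dropping any stale frame still in the slot.
    fn publish(&self, rgba: Vec<u8>) {
        *self.0.lock().unwrap() = Some(rgba);
    }

    // bevy side: take the frame if one is ready, then upload to a texture.
    fn poll(&self) -> Option<Vec<u8>> {
        self.0.lock().unwrap().take()
    }
}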


fn spawn_tasks(
    mut commands: Commands,
) {
    RTSP_URIS.iter()
        .enumerate()
        .for_each(|(index, &url)| {
            let entity = commands.spawn_empty().id();

            let api = openh264::OpenH264API::from_source();
            let decoder = Decoder::new(api).unwrap();

            let rtsp_stream = RtspStream {
                uri: url.to_string(),
                index,
                demuxed: Arc::new(Mutex::new(None)),
                decoder: Arc::new(Mutex::new(decoder.into())),
            };
            let demuxed_arc = rtsp_stream.demuxed.clone();

            let session_result = futures::executor::block_on(Compat::new(create_session(&url)));
            match session_result {
                Ok(playing) => {
                    let mut demuxed = demuxed_arc.lock().unwrap();
                    *demuxed = Some(playing.demuxed().unwrap());

                    println!("created demuxer for {}", url);
                },
                Err(e) => panic!("Failed to create session: {}", e),
            }

            queue_rtsp_frame(
                &rtsp_stream,
                &mut commands,
                entity,
            );

            commands.entity(entity).insert(rtsp_stream);
        });
}


fn task_loop(
    mut commands: Commands,
    streams: Query<
        (
            Entity,
            &RtspStream,
        ),
        Without<ReadRtspFrame>,
    >,
) {
    for (entity, rtsp_stream) in &mut streams.iter() {
        queue_rtsp_frame(
            rtsp_stream,
            &mut commands,
            entity,
        );
    }
}


fn handle_tasks(
    mut commands: Commands,
    mut tasks: Query<&mut ReadRtspFrame>,
) {
    for mut task in &mut tasks {
        if let Some(mut commands_queue) = block_on(future::poll_once(&mut task.0)) {
            commands.append(&mut commands_queue);
        }
    }
}


fn convert_h264(data: &mut [u8]) -> Result<(), Error> {
    let mut i = 0;
    while i < data.len() - 3 {
        // Replace each NAL's length with the Annex B start code b"\x00\x00\x00\x01".
        let len = u32::from_be_bytes([data[i], data[i + 1], data[i + 2], data[i + 3]]) as usize;
        data[i] = 0;
        data[i + 1] = 0;
        data[i + 2] = 0;
        data[i + 3] = 1;
        i += 4 + len;
        if i > data.len() {
            bail!("partial NAL body");
        }
    }
    if i < data.len() {
        bail!("partial NAL length");
    }
    Ok(())
}

fn queue_rtsp_frame(
    rtsp_stream: &RtspStream,
    commands: &mut Commands,
    entity: Entity,
) {
    let idx = rtsp_stream.index;
    let demuxed_arc = rtsp_stream.demuxed.clone();
    let decoder_arc = rtsp_stream.decoder.clone();

    let task = AsyncComputeTaskPool::get().spawn(async move {
        let result = futures::executor::block_on(Compat::new(capture_frame(demuxed_arc.clone())));

        // TODO: clean this mess
        match result {
            Ok(frame) => {
                let mut decoder_lock = decoder_arc.lock().unwrap();
                let decoder = decoder_lock.as_mut().unwrap();

                let mut data = frame.into_data();
                let annex_b_convert = convert_h264(&mut data);

                match annex_b_convert {
                    Ok(_) => for packet in nal_units(&data) {
                        let result = decoder.decode(packet);

                        match result {
                            Ok(decoded_frame) => {
                                println!("decoded frame from stream {}", idx);
                                // TODO: populate 'latest frame' with new data
                            },
                            Err(e) => println!("failed to decode frame for stream {}: {}", idx, e),
                        }
                    },
                    Err(e) => println!("failed to convert NAL unit to Annex B format: {}", e)
                }
            },
            Err(e) => println!("failed to capture frame for stream {}: {}", idx, e),
        }

        let mut command_queue = CommandQueue::default();
        command_queue.push(move |world: &mut World| {
            world.entity_mut(entity).remove::<ReadRtspFrame>();
        });

        command_queue
    });

    commands.entity(entity).insert(ReadRtspFrame(task));
}

async fn create_session(url: &str) -> Result<Session<Playing>, Box<dyn std::error::Error + Send + Sync>> {
    let parsed_url = Url::parse(url)?;

    let username = parsed_url.username();

add system to detect person mask in camera 0 for automatic recording

// TODO: add system to detect person mask in camera 0 for automatic recording

    }
}


// TODO: add system to detect person mask in camera 0 for automatic recording
fn automatic_recording(
    mut commands: Commands,
    stream_manager: Res<RtspStreamManager>,
    mut live_session: ResMut<LiveSession>,
) {
    if live_session.0.is_some() {
        return;
    }

    // TODO: check the segmentation mask labeled for automatic recording detection
}
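
a minimal sketch of the mask check this TODO describes, treating camera 0's segmentation mask as 8-bit luma; the raw-buffer shape and the 2% threshold are assumptions:

// Hypothetical helper: report a person as present when enough mask pixels
// exceed a luma threshold.
fn person_detected(mask_data: &[u8], width: u32, height: u32) -> bool {
    let foreground = mask_data.iter().filter(|&&luma| luma > 128).count();
    let total = (width * height) as usize;

    // assume "person present" when at least 2% of pixels are foreground
    total > 0 && foreground * 50 > total
}

With a helper like this, automatic_recording could mirror press_r_start_recording below: call stream_manager.start_recording, spawn a StreamSessionBundle, and store the entity in LiveSession.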


#[derive(Resource, Default)]
pub struct LiveSession(Option<Entity>);

fn press_r_start_recording(
    mut commands: Commands,
    keys: Res<ButtonInput<KeyCode>>,
    stream_manager: Res<RtspStreamManager>,
    mut live_session: ResMut<LiveSession>,
) {
    if keys.just_pressed(KeyCode::KeyR) {
        if live_session.0.is_some() {
            return;
        }

        let session = Session {
            directory: "capture".to_string(),
            ..default()
        };

        stream_manager.start_recording(
            &session.directory,
        );

        let entity = commands.spawn(
            StreamSessionBundle {
                session,
                raw_streams: RawStreams {
                    streams: vec![],
                },
                config: PipelineConfig::default(),
            },
        ).id();
        live_session.0 = Some(entity);
    }
}

fn press_s_stop_recording(
    mut commands: Commands,
    keys: Res<ButtonInput<KeyCode>>,
    stream_manager: Res<RtspStreamManager>,
    mut live_session: ResMut<LiveSession>,
) {
    if keys.just_pressed(KeyCode::KeyS) && live_session.0.is_some() {
        let session_entity = live_session.0.take().unwrap();

        let raw_streams = stream_manager.stop_recording();

        commands.entity(session_entity)
            .insert(RawStreams {
                streams: raw_streams,
            });
    }
}

move to config file

"rtsp://192.168.1.24/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.24/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.23/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.24/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.23/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.24/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.23/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.24/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.25/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.26/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.27/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.28/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.29/user=admin&password=admin123&channel=1&stream=0.sdp?",

"rtsp://192.168.1.30/user=admin&password=admin123&channel=1&stream=0.sdp?",

// TODO: move to config file

};


// TODO: move to config file
const RTSP_URIS: [&str; 8] = [
    "rtsp://192.168.1.23/user=admin&password=admin123&channel=1&stream=0.sdp?",
    "rtsp://192.168.1.24/user=admin&password=admin123&channel=1&stream=0.sdp?",

    "rtsp://192.168.1.25/user=admin&password=admin123&channel=1&stream=0.sdp?",
    "rtsp://192.168.1.26/user=admin&password=admin123&channel=1&stream=0.sdp?",
    "rtsp://192.168.1.27/user=admin&password=admin123&channel=1&stream=0.sdp?",
    "rtsp://192.168.1.28/user=admin&password=admin123&channel=1&stream=0.sdp?",
    "rtsp://192.168.1.29/user=admin&password=admin123&channel=1&stream=0.sdp?",
    "rtsp://192.168.1.30/user=admin&password=admin123&channel=1&stream=0.sdp?",
];
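
Until the move-to-config-file TODO lands, a minimal sketch of loading these URIs from a RON file with serde; the StreamConfig shape and the assets/streams.ron path are assumptions, not the repo's current layout:

use serde::Deserialize;

// Hypothetical config shape replacing the hard-coded RTSP_URIS const.
#[derive(Deserialize)]
struct StreamConfig {
    rtsp_uris: Vec<String>,
}

// e.g. let config = load_stream_config("assets/streams.ron")?;
fn load_stream_config(path: &str) -> Result<StreamConfig, Box<dyn std::error::Error>> {
    let contents = std::fs::read_to_string(path)?;

    Ok(ron::from_str(&contents)?)
}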

add (identifier == "{}_{}", stream, frame) output to this first stage (forward t...

// TODO: add (identifier == "{}_{}", stream, frame) output to this first stage (forward to mask stage /w image in RAM)

use bevy::prelude::*;
use image::DynamicImage;
use rayon::prelude::*;


pub struct PipelinePlugin;
impl Plugin for PipelinePlugin {
    fn build(&self, app: &mut App) {
        app.add_systems(Update, generate_annotations);
    }
}


#[derive(Component, Reflect)]
pub struct PipelineConfig {
    pub raw_frames: bool,
    pub subject_refinement: bool,           // https://github.com/onnx/models/tree/main/validated/vision/body_analysis/ultraface
    pub repair_frames: bool,                // https://huggingface.co/docs/diffusers/en/optimization/onnx & https://github.com/bnm6900030/swintormer
    pub upsample_frames: bool,              // https://huggingface.co/ssube/stable-diffusion-x4-upscaler-onnx
    pub mask_frames: bool,                  // https://github.com/ZHKKKe/MODNet
    pub light_field_cameras: bool,          // https://github.com/jasonyzhang/RayDiffusion
    pub depth_maps: bool,                   // https://github.com/fabio-sim/Depth-Anything-ONNX
    pub gaussian_cloud: bool,
}

impl Default for PipelineConfig {
    fn default() -> Self {
        Self {
            raw_frames: true,
            subject_refinement: false,
            repair_frames: false,
            upsample_frames: false,
            mask_frames: false,
            light_field_cameras: false,
            depth_maps: false,
            gaussian_cloud: false,
        }
    }
}


#[derive(Bundle, Default, Reflect)]
pub struct StreamSessionBundle {
    pub config: PipelineConfig,
    pub raw_streams: RawStreams,
    pub session: Session,
}

// TODO: use an entity saver to write Session and it's components (e.g. `0/session.ron`)


#[derive(Component, Default, Reflect)]
pub struct Session {
    pub id: usize,
    pub directory: String,
}

impl Session {
    pub fn new(directory: String) -> Self {
        let id = get_next_session_id(&directory);
        let directory = format!("{}/{}", directory, id);
        std::fs::create_dir_all(&directory).unwrap();

        Self { id, directory }
    }
}
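
For the entity-saver TODO above, a minimal sketch that writes the component to {directory}/session.ron by hand; deriving serde's Serialize on Session would be the cleaner route, but is not in place yet:

// Hypothetical saver: write session metadata next to its captured data.
fn save_session(session: &Session) -> std::io::Result<()> {
    let path = format!("{}/session.ron", session.directory);
    let ron = format!("(id: {}, directory: {:?})", session.id, session.directory);

    std::fs::write(path, ron)
}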


pub trait PipelineNode {
    fn new(session: &Session) -> Self;
    fn exists(session: &Session) -> bool;
}


#[derive(Component, Default, Reflect)]
pub struct RawStreams {
    pub streams: Vec<String>,
}
impl RawStreams {
    pub fn decoders(&self) {
        // TODO: create decoders for each h264 file stream
        todo!()
    }
}
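
a minimal sketch of what decoders could produce, mirroring the openh264 construction used in the RTSP capture code; returning (path, Decoder) pairs is an assumption about the eventual API:

use openh264::decoder::Decoder;

// Hypothetical free-function version: one decoder per recorded .h264 file.
fn decoders_for(streams: &[String]) -> Vec<(String, Decoder)> {
    streams.iter()
        .map(|path| {
            let api = openh264::OpenH264API::from_source();
            let decoder = Decoder::new(api).expect("failed to construct H.264 decoder");

            (path.clone(), decoder)
        })
        .collect()
}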


// TODO: add a pipeline config for the session describing which components to run


fn generate_annotations(
    mut commands: Commands,
    raw_streams: Query<
        (
            Entity,
            &PipelineConfig,
            &RawStreams,
            &Session,
        ),
        Without<RawFrames>,
    >,
) {
    for (
        entity,
        config,
        raw_streams,
        session,
    ) in raw_streams.iter() {
        // check for existing output before RawFrames::new creates the
        // directory, otherwise the exists() check would always pass
        let run_node = !RawFrames::exists(session);
        let raw_frames = RawFrames::new(session);

        if config.raw_frames {

            if run_node {
                // TODO: add (identifier == "{}_{}", stream, frame) output to this first stage (forward to mask stage /w image in RAM)
                let frames = raw_streams.streams.par_iter()
                    .map(|stream| {
                        let decoder = todo!();

                        (stream, decoder)
                    })
                    .map(|(stream, decoder)| {
                        let frames: Vec<DynamicImage> = vec![];

                        // TODO: read all frames

                        (stream, frames)
                    })
                    .collect::<Vec<_>>();

                frames.par_iter()
                    .for_each(|(stream, frames)| {
                        frames.iter().enumerate()
                            .for_each(|(i, frame)| {
                                let path = format!(
                                    "{}/frames/{}_{}.png",
                                    session.directory,
                                    stream,
                                    i,
                                );
                                frame.save(path).unwrap();
                            });
                    });
            }
        }

        commands.entity(entity).insert(raw_frames);
    }
}
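
Tying the two TODOs inside generate_annotations together, a minimal sketch of the per-stream decode loop, assuming the recorded data is already Annex B (the RTSP capture path converts it on the way in) and reusing this repo's nal_units/Decoder usage; YUV-to-DynamicImage conversion is left as a stub:

use openh264::{decoder::Decoder, nal_units};

// Hypothetical decode loop for one raw stream: feed each NAL unit to the
// decoder and count the frames that come out.
fn decode_stream(decoder: &mut Decoder, data: &[u8]) -> usize {
    let mut frames = 0;

    for packet in nal_units(data) {
        match decoder.decode(packet) {
            Ok(_yuv) => {
                frames += 1;
                // TODO: convert the decoded YUV planes to a DynamicImage and collect it
            },
            Err(e) => println!("decode error: {}", e),
        }
    }

    frames
}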


#[derive(Component, Default, Reflect)]
pub struct RawFrames {
    pub frames: Vec<String>,
}
impl RawFrames {
    // TODO: move new and exists to a trait

    pub fn new(
        session: &Session,
    ) -> Self {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::create_dir_all(&output_directory).unwrap();

        // TODO: load all files that are already in the directory

        Self {
            frames: vec![],
        }
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::metadata(&output_directory).is_ok()
    }

    pub fn image(&self, _camera: usize, _frame: usize) -> Option<Image> {
        todo!()
    }
}
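
the move-new-and-exists-to-a-trait TODO maps directly onto the PipelineNode trait declared above; a sketch of that impl (the calls resolve to the inherent functions, which take precedence over the trait ones):

impl PipelineNode for RawFrames {
    fn new(session: &Session) -> Self {
        RawFrames::new(session)
    }

    fn exists(session: &Session) -> bool {
        RawFrames::exists(session)
    }
}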


#[derive(Component, Default, Reflect)]
pub struct MaskFrames {
    pub frames: Vec<String>,
}
impl MaskFrames {
    pub fn new(
        session: &Session,
    ) -> Self {
        let output_directory = format!("{}/masks", session.directory);
        std::fs::create_dir_all(&output_directory).unwrap();

        Self {
            frames: vec![],
        }
    }

    pub fn image(&self, _camera: usize, _frame: usize) -> Option<Image> {
        todo!()
    }
}


#[derive(Default, Clone, Reflect)]
pub struct LightFieldCamera {
    // TODO: intrinsics/extrinsics
}

#[derive(Component, Default, Reflect)]
pub struct LightFieldCameras {
    pub cameras: Vec<LightFieldCamera>,
}
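
a hypothetical shape for the intrinsics/extrinsics TODO, using the bevy math types already in scope; the exact fields are an assumption:

#[derive(Default, Clone, Reflect)]
pub struct LightFieldCameraSketch {
    pub focal_length: Vec2,      // fx, fy in pixels
    pub principal_point: Vec2,   // cx, cy in pixels
    pub translation: Vec3,       // camera position in the rig frame
    pub rotation: Quat,          // camera orientation in the rig frame
}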



fn get_next_session_id(output_directory: &str) -> usize {
    match std::fs::read_dir(output_directory) {
        Ok(entries) => entries
            .filter_map(|entry| {
                let entry = entry.ok()?;
                if entry.path().is_dir() {
                    entry.file_name().to_string_lossy().parse::<usize>().ok()
                } else {
                    None
                }
            })
            .max()
            .map_or(0, |max_id| max_id + 1),
        Err(_) => 0,
    }
}

support async ort inference (re. progress bars)

// TODO: support async ort inference (re. progress bars)

}
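
Before real async ort support, a minimal sketch of pushing inference onto Bevy's AsyncComputeTaskPool with a shared counter a progress bar can poll each frame; run_inference and the surrounding types are illustrative, not ort's API:

use std::sync::{Arc, atomic::{AtomicUsize, Ordering}};
use bevy::tasks::{AsyncComputeTaskPool, Task};

// Hypothetical progress handle a UI system could read as done/total.
#[derive(Clone)]
pub struct InferenceProgress {
    pub done: Arc<AtomicUsize>,
    pub total: usize,
}

// Spawn inference over `frames` on the async compute pool, bumping the shared
// counter after each frame; keep the Task alive (e.g. in a component) or detach it.
pub fn spawn_inference<F>(frames: Vec<String>, run_inference: F) -> (Task<()>, InferenceProgress)
where
    F: Fn(&str) + Send + Sync + 'static,
{
    let progress = InferenceProgress {
        done: Arc::new(AtomicUsize::new(0)),
        total: frames.len(),
    };
    let done = progress.done.clone();

    let task = AsyncComputeTaskPool::get().spawn(async move {
        for frame in &frames {
            run_inference(frame);
            done.fetch_add(1, Ordering::Relaxed);
        }
    });

    (task, progress)
}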


fn generate_rotated_frames(
    mut commands: Commands,
    descriptors: Res<StreamDescriptors>,
    raw_frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RawFrames,
            &Session,
        ),
        Without<RotatedFrames>,
    >,
) {
    // TODO: create a caching/loading system wrapper over run_node interior
    for (
        entity,
        config,
        raw_frames,
        session,
    ) in raw_frames.iter() {
        // TODO: get stream descriptor rotation

        if config.rotate_raw_frames {
            let run_node = !RotatedFrames::exists(session);
            let mut rotated_frames = RotatedFrames::load_from_session(session);

            if run_node {
                let rotations: HashMap<StreamId, f32> = descriptors.0.iter()
                    .enumerate()
                    .map(|(id, descriptor)| (StreamId(id), descriptor.rotation.unwrap_or_default()))
                    .collect();

                info!("generating rotated frames for session {}", session.id);

                raw_frames.frames.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", rotated_frames.directory, stream_id.0);
                        std::fs::create_dir_all(&output_directory).unwrap();

                        let frames = frames.par_iter()
                            .map(|frame| {
                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();
                                let output_path = format!("{}/{}.png", output_directory, frame_idx);

                                rotate_image(
                                    std::path::Path::new(frame),
                                    std::path::Path::new(&output_path),
                                    rotations[stream_id],
                                ).unwrap();

                                output_path
                            })
                            .collect::<Vec<_>>();

                        rotated_frames.frames.insert(*stream_id, frames);
                    });
            } else {
                info!("rotated frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(rotated_frames);
        }
    }
}
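
the caching TODO at the top of this system repeats in every node below (exists check, load_from_session, conditional run); a minimal sketch of the wrapper it asks for, with illustrative parameter names:

// Hypothetical wrapper over the exists/load/run pattern shared by the
// pipeline nodes: run the generator only when no cached output exists.
fn run_node_cached<T>(
    session: &Session,
    exists: fn(&Session) -> bool,
    load: fn(&Session) -> T,
    run: impl FnOnce(&mut T),
) -> T {
    let fresh = !exists(session);
    let mut output = load(session);

    if fresh {
        run(&mut output);
    }

    output
}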


fn generate_mask_frames(
    mut commands: Commands,
    frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RotatedFrames,
            &Session,
        ),
        Without<MaskFrames>,
    >,
    modnet: Res<Modnet>,
    onnx_assets: Res<Assets<Onnx>>,
) {
    for (
        entity,
        config,
        frames,
        session,
    ) in frames.iter() {
        if config.mask_frames {
            if onnx_assets.get(&modnet.onnx).is_none() {
                // the asset may still be loading; skip this entity instead of
                // returning out of the whole system
                continue;
            }

            let onnx = onnx_assets.get(&modnet.onnx).unwrap();
            let onnx_session_arc = onnx.session.clone();
            let onnx_session_lock = onnx_session_arc.lock().map_err(|e| e.to_string()).unwrap();
            let onnx_session = onnx_session_lock.as_ref().ok_or("failed to get session from ONNX asset").unwrap();

            let run_node = !MaskFrames::exists(session);
            let mut mask_frames = MaskFrames::load_from_session(session);

            if run_node {
                info!("generating mask frames for session {}", session.id);

                frames.frames.keys()
                    .for_each(|stream_id| {
                        let output_directory = format!("{}/{}", mask_frames.directory, stream_id.0);
                        std::fs::create_dir_all(output_directory).unwrap();
                    });

                let mask_images = frames.frames.iter()
                    .map(|(stream_id, frames)| {
                        let frames = frames.iter()
                            .map(|frame| {
                                let mut decoder = png::Decoder::new(std::fs::File::open(frame).unwrap());
                                decoder.set_transformations(Transformations::EXPAND | Transformations::ALPHA);
                                let mut reader = decoder.read_info().unwrap();
                                let mut img_data = vec![0; reader.output_buffer_size()];
                                let _ = reader.next_frame(&mut img_data).unwrap();

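                                // the source PNG is expected to be RGB8 here; the
                                // EXPAND | ALPHA transforms above expand the decoded
                                // buffer to RGBA for the texture created below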
                                assert_eq!(reader.info().bytes_per_pixel(), 3);

                                let width = reader.info().width;
                                let height = reader.info().height;

                                // TODO: separate image loading and onnx inference (so the image loading result can be viewed in the pipeline grid view)
                                let image = Image::new(
                                    Extent3d {
                                        width,
                                        height,
                                        depth_or_array_layers: 1,
                                    },
                                    bevy::render::render_resource::TextureDimension::D2,
                                    img_data,
                                    bevy::render::render_resource::TextureFormat::Rgba8UnormSrgb,
                                    RenderAssetUsages::all(),
                                );

                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();

                                (
                                    frame_idx,
                                    modnet_inference(
                                        onnx_session,
                                        &[&image],
                                        Some((512, 512)),
                                    ).pop().unwrap(),
                                )
                            })
                            .collect::<Vec<_>>();

                        (stream_id, frames)
                    })
                    .collect::<Vec<_>>();

                mask_images.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", mask_frames.directory, stream_id.0);
                        let mask_paths = frames.iter()
                            .map(|(frame_idx, frame)| {
                                let path = format!("{}/{}.png", output_directory, frame_idx);

                                let buffer = ImageBuffer::<Luma<u8>, Vec<u8>>::from_raw(
                                    frame.width(),
                                    frame.height(),
                                    frame.data.clone(),
                                ).unwrap();

                                let _ = buffer.save(&path);

                                path
                            })
                            .collect::<Vec<_>>();

                        mask_frames.frames.insert(**stream_id, mask_paths);
                    });
            } else {
                info!("mask frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(mask_frames);
        }
    }
}


fn generate_yolo_frames(
    mut commands: Commands,
    raw_frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RawFrames,
            &Session,
        ),
        Without<YoloFrames>,
    >,
    yolo: Res<Yolo>,
    onnx_assets: Res<Assets<Onnx>>,
) {
    for (
        entity,
        config,
        raw_frames,
        session,
    ) in raw_frames.iter() {
        if config.yolo {
            if onnx_assets.get(&yolo.onnx).is_none() {
                // the asset may still be loading; skip this entity instead of
                // returning out of the whole system
                continue;
            }

            let onnx = onnx_assets.get(&yolo.onnx).unwrap();
            let onnx_session_arc = onnx.session.clone();
            let onnx_session_lock = onnx_session_arc.lock().map_err(|e| e.to_string()).unwrap();
            let onnx_session = onnx_session_lock.as_ref().ok_or("failed to get session from ONNX asset").unwrap();

            let run_node = !YoloFrames::exists(session);
            let mut yolo_frames = YoloFrames::load_from_session(session);

            if run_node {
                info!("generating yolo frames for session {}", session.id);

                raw_frames.frames.keys()
                    .for_each(|stream_id| {
                        let output_directory = format!("{}/{}", yolo_frames.directory, stream_id.0);
                        std::fs::create_dir_all(output_directory).unwrap();
                    });

                // TODO: support async ort inference (re. progress bars)
                let bounding_box_streams = raw_frames.frames.iter()
                    .map(|(stream_id, frames)| {
                        let frames = frames.iter()
                            .map(|frame| {
                                let mut decoder = png::Decoder::new(std::fs::File::open(frame).unwrap());
                                decoder.set_transformations(Transformations::EXPAND | Transformations::ALPHA);
                                let mut reader = decoder.read_info().unwrap();
                                let mut img_data = vec![0; reader.output_buffer_size()];
                                let _ = reader.next_frame(&mut img_data).unwrap();

                                assert_eq!(reader.info().bytes_per_pixel(), 3);

                                let width = reader.info().width;
                                let height = reader.info().height;

                                // TODO: separate image loading and onnx inference (so the image loading result can be viewed in the pipeline grid view)
                                let image = Image::new(
                                    Extent3d {
                                        width,
                                        height,
                                        depth_or_array_layers: 1,
                                    },
                                    bevy::render::render_resource::TextureDimension::D2,
                                    img_data,
                                    bevy::render::render_resource::TextureFormat::Rgba8UnormSrgb,
                                    RenderAssetUsages::all(),
                                );

                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();

                                (
                                    frame_idx,
                                    yolo_inference(
                                        onnx_session,
                                        &image,
                                        0.5,
                                    ),
                                )
                            })
                            .collect::<Vec<_>>();

                        (stream_id, frames)
                    })
                    .collect::<Vec<_>>();

                bounding_box_streams.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", yolo_frames.directory, stream_id.0);
                        let bounding_boxes = frames.iter()
                            .map(|(frame_idx, bounding_boxes)| {
                                let path = format!("{}/{}.json", output_directory, frame_idx);

                                let _ = serde_json::to_writer(std::fs::File::create(path).unwrap(), bounding_boxes);

                                bounding_boxes.clone()
                            })
                            .collect::<Vec<_>>();

                        yolo_frames.frames.insert(**stream_id, bounding_boxes);
                    });
            } else {
                info!("yolo frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(yolo_frames);
        }
    }
}


// TODO: alphablend frames
#[derive(Component, Default)]
pub struct AlphablendFrames {
    pub frames: HashMap<StreamId, Vec<String>>,
    pub directory: String,
}
impl AlphablendFrames {
    pub fn load_from_session(
        session: &Session,
    ) -> Self {
        let directory = format!("{}/alphablend", session.directory);
        std::fs::create_dir_all(&directory).unwrap();

        let mut alphablend_frames = Self {
            frames: HashMap::new(),
            directory,
        };
        alphablend_frames.reload();

        alphablend_frames
    }

    pub fn reload(&mut self) {
        std::fs::read_dir(&self.directory)
            .unwrap()
            .filter_map(|entry| entry.ok())
            .filter(|entry| entry.path().is_dir())
            .map(|stream_dir| {
                let stream_id = StreamId(stream_dir.path().file_name().unwrap().to_str().unwrap().parse::<usize>().unwrap());

                let frames = std::fs::read_dir(stream_dir.path()).unwrap()
                    .filter_map(|entry| entry.ok())
                    .filter(|entry| entry.path().is_file() && entry.path().extension().and_then(|s| s.to_str()) == Some("png"))
                    .map(|entry| entry.path().to_str().unwrap().to_string())
                    .collect::<Vec<_>>();

                (stream_id, frames)
            })
            .for_each(|(stream_id, frames)| {
                self.frames.insert(stream_id, frames);
            });
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/alphablend", session.directory);
        std::fs::metadata(output_directory).is_ok()
    }

    pub fn image(&self, _camera: usize, _frame: usize) -> Option<Image> {
        todo!()
    }
}
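
a minimal sketch of the alphablend step this component is reserved for, assuming a raw RGB frame and a same-sized luma mask from the mask stage; the function name is illustrative:

use image::{GrayImage, Luma, Rgb, Rgba, RgbaImage, RgbImage};

// Hypothetical compositor: use the segmentation mask's luma as the alpha
// channel of the raw frame.
fn alphablend(frame: &RgbImage, mask: &GrayImage) -> RgbaImage {
    debug_assert_eq!(frame.dimensions(), mask.dimensions());

    RgbaImage::from_fn(frame.width(), frame.height(), |x, y| {
        let Rgb([r, g, b]) = *frame.get_pixel(x, y);
        let Luma([alpha]) = *mask.get_pixel(x, y);

        Rgba([r, g, b, alpha])
    })
}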



// TODO: support loading maskframes -> images into a pipeline mask viewer


#[derive(Component, Default)]
pub struct RawFrames {
    pub frames: HashMap<StreamId, Vec<String>>,
    pub directory: String,
}
impl RawFrames {
    pub fn load_from_session(
        session: &Session,
    ) -> Self {
        let directory = format!("{}/frames", session.directory);
        std::fs::create_dir_all(&directory).unwrap();

        let mut raw_frames = Self {
            frames: HashMap::new(),
            directory,
        };
        raw_frames.reload();

        raw_frames
    }

    pub fn reload(&mut self) {
        std::fs::read_dir(&self.directory)
            .unwrap()
            .filter_map(|entry| entry.ok())
            .filter(|entry| entry.path().is_dir())
            .map(|stream_dir| {
                let stream_id = StreamId(stream_dir.path().file_name().unwrap().to_str().unwrap().parse::<usize>().unwrap());

                let frames = std::fs::read_dir(stream_dir.path()).unwrap()
                    .filter_map(|entry| entry.ok())
                    .filter(|entry| entry.path().is_file() && entry.path().extension().and_then(|s| s.to_str()) == Some("png"))
                    .map(|entry| entry.path().to_str().unwrap().to_string())
                    .collect::<Vec<_>>();

                (stream_id, frames)
            })
            .for_each(|(stream_id, frames)| {
                self.frames.insert(stream_id, frames);
            });
    }

    pub fn exists(

separate image loading and onnx inference (so the image loading result can be vi...

// TODO: separate image loading and onnx inference (so the image loading result can be viewed in the pipeline grid view)


read all frames

// TODO: read all frames


populate 'latest frame' with new data

// TODO: populate 'latest frame' with new data

    },
    codec::VideoFrame,
};
use url::Url;


const RTSP_URIS: [&str; 2] = [
    // "rtsp://localhost:554/lizard",
    // "rtsp://rtspstream:[email protected]/movie",
    // "rtsp://rtspstream:[email protected]/pattern",
    "rtsp://192.168.1.23/user=admin&password=admin123&channel=1&stream=0.sdp?",
    "rtsp://192.168.1.24/user=admin&password=admin123&channel=1&stream=0.sdp?",
];


fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Update, press_esc_close)
        .add_systems(Startup, spawn_tasks)
        .add_systems(Update, task_loop)
        .add_systems(Update, handle_tasks)
        .run();
}


#[derive(Component)]
struct ReadRtspFrame(Task<CommandQueue>);

#[derive(Component, Clone)]
struct RtspStream {
    uri: String,
    index: usize,
    demuxed: Arc<Mutex<Option<Demuxed>>>,
    decoder: Arc<Mutex<Option<Decoder>>>,
}


// TODO: decouple bevy async tasks and the multi-stream RTSP handling
//       a bevy system should query the readiness of a frame for each RTSP stream and update texture as needed
//       the RTSP handling should be a separate async tokio pool that updates the readiness of each RTSP stream /w buffer for transfer to texture
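
a minimal sketch of the decoupling this TODO describes, and of the 'latest frame' slot the decode side would populate; all names here are illustrative:

use std::sync::{Arc, Mutex};

// Hypothetical shared slot: the tokio pool writes a decoded frame in, a bevy
// system drains it once per render frame. `Some` means a fresh frame is ready.
#[derive(Clone, Default)]
struct LatestFrame(Arc<Mutex<Option<Vec<u8>>>>);

impl LatestFrame {
    // called from the tokio side after a frame is decoded
    fn publish(&self, rgba: Vec<u8>) {
        *self.0.lock().unwrap() = Some(rgba);
    }

    // called from a bevy system; returns the frame at most once
    fn take(&self) -> Option<Vec<u8>> {
        self.0.lock().unwrap().take()
    }
}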



load all files that are already in the directory

// TODO: load all files that are already in the directory

use bevy::prelude::*;
use image::DynamicImage;
use rayon::prelude::*;


pub struct PipelinePlugin;
impl Plugin for PipelinePlugin {
    fn build(&self, app: &mut App) {
        app.add_systems(Update, generate_annotations);
    }
}


#[derive(Component, Reflect)]
pub struct PipelineConfig {
    pub raw_frames: bool,
    pub subject_refinement: bool,           // https://github.com/onnx/models/tree/main/validated/vision/body_analysis/ultraface
    pub repair_frames: bool,                // https://huggingface.co/docs/diffusers/en/optimization/onnx & https://github.com/bnm6900030/swintormer
    pub upsample_frames: bool,              // https://huggingface.co/ssube/stable-diffusion-x4-upscaler-onnx
    pub mask_frames: bool,                  // https://github.com/ZHKKKe/MODNet
    pub light_field_cameras: bool,          // https://github.com/jasonyzhang/RayDiffusion
    pub depth_maps: bool,                   // https://github.com/fabio-sim/Depth-Anything-ONNX
    pub gaussian_cloud: bool,
}

impl Default for PipelineConfig {
    fn default() -> Self {
        Self {
            raw_frames: true,
            subject_refinement: false,
            repair_frames: false,
            upsample_frames: false,
            mask_frames: false,
            light_field_cameras: false,
            depth_maps: false,
            gaussian_cloud: false,
        }
    }
}


#[derive(Bundle, Default, Reflect)]
pub struct StreamSessionBundle {
    pub config: PipelineConfig,
    pub raw_streams: RawStreams,
    pub session: Session,
}

// TODO: use an entity saver to write Session and it's components (e.g. `0/session.ron`)


#[derive(Component, Default, Reflect)]
pub struct Session {
    pub id: usize,
    pub directory: String,
}

impl Session {
    pub fn new(directory: String) -> Self {
        let id = get_next_session_id(&directory);
        let directory = format!("{}/{}", directory, id);
        std::fs::create_dir_all(&directory).unwrap();

        Self { id, directory }
    }
}


pub trait PipelineNode {
    fn new(session: &Session) -> Self;
    fn exists(session: &Session) -> bool;
}


#[derive(Component, Default, Reflect)]
pub struct RawStreams {
    pub streams: Vec<String>,
}
impl RawStreams {
    pub fn decoders(&self) -> () {
        // TODO: create decoders for each h264 file stream
        todo!()
    }
}


// TODO: add a pipeline config for the session describing which components to run


fn generate_annotations(
    mut commands: Commands,
    raw_streams: Query<
        (
            Entity,
            &PipelineConfig,
            &RawStreams,
            &Session,
        ),
        Without<RawFrames>,
    >,
) {
    for (
        entity,
        config,
        raw_streams,
        session,
    ) in raw_streams.iter() {
        // check `exists` before `new`: `new` creates the output directory,
        // which would otherwise make `exists` always true and skip the node
        let run_node = !RawFrames::exists(session);
        let raw_frames = RawFrames::new(session);

        if config.raw_frames {
            if run_node {
                // TODO: add (identifier == "{}_{}", stream, frame) output to this first stage (forward to mask stage /w image in RAM)
                let frames = raw_streams.streams.par_iter()
                    .map(|stream| {
                        let decoder = todo!();

                        (stream, decoder)
                    })
                    .map(|(stream, _decoder)| {
                        let frames: Vec<DynamicImage> = vec![];

                        // TODO: read all frames

                        (stream, frames)
                    })
                    .collect::<Vec<_>>();

                frames.par_iter()
                    .for_each(|(stream, frames)| {
                        frames.iter().enumerate()
                            .for_each(|(i, frame)| {
                                let path = format!(
                                    "{}/frames/{}_{}.png",
                                    session.directory,
                                    stream,
                                    i,
                                );
                                frame.save(path).unwrap();
                            });
                    });
            }
        }

        commands.entity(entity).insert(raw_frames);
    }
}


#[derive(Component, Default, Reflect)]
pub struct RawFrames {
    pub frames: Vec<String>,
}
impl RawFrames {
    // TODO: move new and exists to a trait

    pub fn new(
        session: &Session,
    ) -> Self {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::create_dir_all(&output_directory).unwrap();

        // TODO: load all files that are already in the directory

        Self {
            frames: vec![],
        }
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::metadata(&output_directory).is_ok()
    }

    pub fn image(&self, _camera: usize, _frame: usize) -> Option<Image> {
        todo!()
    }
}
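
// a minimal sketch of the trait move flagged in the TODO above: the same
// `new`/`exists` pair expressed through the `PipelineNode` trait defined
// earlier in this file.
impl PipelineNode for RawFrames {
    fn new(session: &Session) -> Self {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::create_dir_all(&output_directory).unwrap();

        Self { frames: vec![] }
    }

    fn exists(session: &Session) -> bool {
        std::fs::metadata(format!("{}/frames", session.directory)).is_ok()
    }
}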


#[derive(Component, Default, Reflect)]
pub struct MaskFrames {
    pub frames: Vec<String>,
}
impl MaskFrames {
    pub fn new(
        session: &Session,
    ) -> Self {
        let output_directory = format!("{}/masks", session.directory);
        std::fs::create_dir_all(&output_directory).unwrap();

        Self {
            frames: vec![],
        }
    }

    pub fn image(&self, _camera: usize, _frame: usize) -> Option<Image> {
        todo!()
    }
}


#[derive(Default, Clone, Reflect)]
pub struct LightFieldCamera {
    // TODO: intrinsics/extrinsics
}

#[derive(Component, Default, Reflect)]
pub struct LightFieldCameras {
    pub cameras: Vec<LightFieldCamera>,
}



fn get_next_session_id(output_directory: &str) -> usize {
    match std::fs::read_dir(output_directory) {
        Ok(entries) => entries
            .filter_map(|entry| {
                let entry = entry.ok()?;
                if entry.path().is_dir() {
                    entry.file_name().to_string_lossy().parse::<usize>().ok()
                } else {
                    None
                }
            })
            .max()
            .map_or(0, |max_id| max_id + 1),
        Err(_) => 0,
    }
}

upgrade to [bevy-tokio-tasks](https://github.com/EkardNT/bevy-tokio-tasks) to share tokio runtime between rtsp and inference

// TODO: upgrade to [bevy-tokio-tasks](https://github.com/EkardNT/bevy-tokio-tasks) to share tokio runtime between rtsp and inference - waiting on: https://github.com/pykeio/ort/pull/174

impl FromWorld for RtspStreamManager {
    fn from_world(_world: &mut World) -> Self {

        // TODO: upgrade to [bevy-tokio-tasks](https://github.com/EkardNT/bevy-tokio-tasks) to share tokio runtime between rtsp and inference - waiting on: https://github.com/pykeio/ort/pull/174
        let mut runtime = tokio::runtime::Builder::new_multi_thread();
        runtime.enable_all();
        let runtime = runtime.build().unwrap();
        let handle = runtime.handle().clone();

        std::thread::spawn(move || {
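
// a minimal sketch of the upgrade this TODO describes, assuming
// bevy-tokio-tasks' `TokioTasksRuntime` resource and its
// `spawn_background_task` API; rtsp and ort inference would then share the
// plugin-owned runtime instead of the hand-rolled one above:
fn spawn_rtsp_on_shared_runtime(runtime: ResMut<bevy_tokio_tasks::TokioTasksRuntime>) {
    runtime.spawn_background_task(|_ctx| async move {
        // demux/decode RTSP streams here on the shared tokio runtime
    });
}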

create a caching/loading system wrapper over run_node interior

// TODO: create a caching/loading system wrapper over run_node interior

}


fn generate_rotated_frames(
    mut commands: Commands,
    descriptors: Res<StreamDescriptors>,
    raw_frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RawFrames,
            &Session,
        ),
        Without<RotatedFrames>,
    >,
) {
    // TODO: create a caching/loading system wrapper over run_node interior
    for (
        entity,
        config,
        raw_frames,
        session,
    ) in raw_frames.iter() {
        // TODO: get stream descriptor rotation

        if config.rotate_raw_frames {
            let run_node = !RotatedFrames::exists(session);
            let mut rotated_frames = RotatedFrames::load_from_session(session);

            if run_node {
                let rotations: HashMap<StreamId, f32> = descriptors.0.iter()
                    .enumerate()
                    .map(|(id, descriptor)| (StreamId(id), descriptor.rotation.unwrap_or_default()))
                    .collect();

                info!("generating rotated frames for session {}", session.id);

                raw_frames.frames.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", rotated_frames.directory, stream_id.0);
                        std::fs::create_dir_all(&output_directory).unwrap();

                        let frames = frames.par_iter()
                            .map(|frame| {
                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();
                                let output_path = format!("{}/{}.png", output_directory, frame_idx);

                                rotate_image(
                                    std::path::Path::new(frame),
                                    std::path::Path::new(&output_path),
                                    rotations[stream_id],
                                ).unwrap();

                                output_path
                            })
                            .collect::<Vec<_>>();

                        rotated_frames.frames.insert(*stream_id, frames);
                    });
            } else {
                info!("rotated frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(rotated_frames);
        }
    }
}


fn generate_mask_frames(
    mut commands: Commands,
    frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RotatedFrames,
            &Session,
        ),
        Without<MaskFrames>,
    >,
    modnet: Res<Modnet>,
    onnx_assets: Res<Assets<Onnx>>,
) {
    for (
        entity,
        config,
        frames,
        session,
    ) in frames.iter() {
        if config.mask_frames {
            if onnx_assets.get(&modnet.onnx).is_none() {
                return;
            }

            let onnx = onnx_assets.get(&modnet.onnx).unwrap();
            let onnx_session_arc = onnx.session.clone();
            let onnx_session_lock = onnx_session_arc.lock().map_err(|e| e.to_string()).unwrap();
            let onnx_session = onnx_session_lock.as_ref().ok_or("failed to get session from ONNX asset").unwrap();

            let run_node = !MaskFrames::exists(session);
            let mut mask_frames = MaskFrames::load_from_session(session);

            if run_node {
                info!("generating mask frames for session {}", session.id);

                frames.frames.keys()
                    .for_each(|stream_id| {
                        let output_directory = format!("{}/{}", mask_frames.directory, stream_id.0);
                        std::fs::create_dir_all(output_directory).unwrap();
                    });

                let mask_images = frames.frames.iter()
                    .map(|(stream_id, frames)| {
                        let frames = frames.iter()
                            .map(|frame| {
                                let mut decoder = png::Decoder::new(std::fs::File::open(frame).unwrap());
                                decoder.set_transformations(Transformations::EXPAND | Transformations::ALPHA);
                                let mut reader = decoder.read_info().unwrap();
                                let mut img_data = vec![0; reader.output_buffer_size()];
                                let _ = reader.next_frame(&mut img_data).unwrap();

                                assert_eq!(reader.info().bytes_per_pixel(), 3);

                                let width = reader.info().width;
                                let height = reader.info().height;

                                // TODO: separate image loading and onnx inference (so the image loading result can be viewed in the pipeline grid view)
                                let image = Image::new(
                                    Extent3d {
                                        width,
                                        height,
                                        depth_or_array_layers: 1,
                                    },
                                    bevy::render::render_resource::TextureDimension::D2,
                                    img_data,
                                    bevy::render::render_resource::TextureFormat::Rgba8UnormSrgb,
                                    RenderAssetUsages::all(),
                                );

                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();

                                (
                                    frame_idx,
                                    modnet_inference(
                                        onnx_session,
                                        &[&image],
                                        Some((512, 512)),
                                    ).pop().unwrap(),
                                )
                            })
                            .collect::<Vec<_>>();

                        (stream_id, frames)
                    })
                    .collect::<Vec<_>>();

                mask_images.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", mask_frames.directory, stream_id.0);
                        let mask_paths = frames.iter()
                            .map(|(frame_idx, frame)| {
                                let path = format!("{}/{}.png", output_directory, frame_idx);

                                let buffer = ImageBuffer::<Luma<u8>, Vec<u8>>::from_raw(
                                    frame.width(),
                                    frame.height(),
                                    frame.data.clone(),
                                ).unwrap();

                                let _ = buffer.save(&path);

                                path
                            })
                            .collect::<Vec<_>>();

                        mask_frames.frames.insert(**stream_id, mask_paths);
                    });
            } else {
                info!("mask frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(mask_frames);
        }
    }
}


fn generate_yolo_frames(
    mut commands: Commands,
    raw_frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RawFrames,
            &Session,
        ),
        Without<YoloFrames>,
    >,
    yolo: Res<Yolo>,
    onnx_assets: Res<Assets<Onnx>>,
) {
    for (
        entity,
        config,
        raw_frames,
        session,
    ) in raw_frames.iter() {
        if config.yolo {
            if onnx_assets.get(&yolo.onnx).is_none() {
                return;
            }

            let onnx = onnx_assets.get(&yolo.onnx).unwrap();
            let onnx_session_arc = onnx.session.clone();
            let onnx_session_lock = onnx_session_arc.lock().map_err(|e| e.to_string()).unwrap();
            let onnx_session = onnx_session_lock.as_ref().ok_or("failed to get session from ONNX asset").unwrap();

            let run_node = !YoloFrames::exists(session);
            let mut yolo_frames = YoloFrames::load_from_session(session);

            if run_node {
                info!("generating yolo frames for session {}", session.id);

                raw_frames.frames.keys()
                    .for_each(|stream_id| {
                        let output_directory = format!("{}/{}", yolo_frames.directory, stream_id.0);
                        std::fs::create_dir_all(output_directory).unwrap();
                    });

                // TODO: support async ort inference (re. progress bars)
                let bounding_box_streams = raw_frames.frames.iter()
                    .map(|(stream_id, frames)| {
                        let frames = frames.iter()
                            .map(|frame| {
                                let mut decoder = png::Decoder::new(std::fs::File::open(frame).unwrap());
                                decoder.set_transformations(Transformations::EXPAND | Transformations::ALPHA);
                                let mut reader = decoder.read_info().unwrap();
                                let mut img_data = vec![0; reader.output_buffer_size()];
                                let _ = reader.next_frame(&mut img_data).unwrap();

                                assert_eq!(reader.info().bytes_per_pixel(), 3);

                                let width = reader.info().width;
                                let height = reader.info().height;

                                // TODO: separate image loading and onnx inference (so the image loading result can be viewed in the pipeline grid view)
                                let image = Image::new(
                                    Extent3d {
                                        width,
                                        height,
                                        depth_or_array_layers: 1,
                                    },
                                    bevy::render::render_resource::TextureDimension::D2,
                                    img_data,
                                    bevy::render::render_resource::TextureFormat::Rgba8UnormSrgb,
                                    RenderAssetUsages::all(),
                                );

                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();

                                (
                                    frame_idx,
                                    yolo_inference(
                                        onnx_session,
                                        &image,
                                        0.5,
                                    ),
                                )
                            })
                            .collect::<Vec<_>>();

                        (stream_id, frames)
                    })
                    .collect::<Vec<_>>();

                bounding_box_streams.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", yolo_frames.directory, stream_id.0);
                        let bounding_boxes = frames.iter()
                            .map(|(frame_idx, bounding_boxes)| {
                                let path = format!("{}/{}.json", output_directory, frame_idx);

                                let _ = serde_json::to_writer(std::fs::File::create(path).unwrap(), bounding_boxes);

                                bounding_boxes.clone()
                            })
                            .collect::<Vec<_>>();

                        yolo_frames.frames.insert(**stream_id, bounding_boxes);
                    });
            } else {
                info!("yolo frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(yolo_frames);
        }
    }
}
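
// a minimal sketch of the image-loading split flagged in the TODOs above, so
// decoded frames can be previewed in a pipeline grid view independently of
// inference; `load_png_as_image` is a hypothetical helper that would also
// deduplicate the identical PNG-decoding blocks in the mask and yolo nodes:
fn load_png_as_image(path: &str) -> Image {
    let mut decoder = png::Decoder::new(std::fs::File::open(path).unwrap());
    decoder.set_transformations(Transformations::EXPAND | Transformations::ALPHA);
    let mut reader = decoder.read_info().unwrap();
    let mut img_data = vec![0; reader.output_buffer_size()];
    let _ = reader.next_frame(&mut img_data).unwrap();

    Image::new(
        Extent3d {
            width: reader.info().width,
            height: reader.info().height,
            depth_or_array_layers: 1,
        },
        bevy::render::render_resource::TextureDimension::D2,
        img_data,
        bevy::render::render_resource::TextureFormat::Rgba8UnormSrgb,
        RenderAssetUsages::all(),
    )
}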


// TODO: alphablend frames
#[derive(Component, Default)]
pub struct AlphablendFrames {
    pub frames: HashMap<StreamId, Vec<String>>,
    pub directory: String,
}
impl AlphablendFrames {
    pub fn load_from_session(
        session: &Session,
    ) -> Self {
        let directory = format!("{}/alphablend", session.directory);
        std::fs::create_dir_all(&directory).unwrap();

        let mut alphablend_frames = Self {
            frames: HashMap::new(),
            directory,
        };
        alphablend_frames.reload();

        alphablend_frames
    }

    pub fn reload(&mut self) {
        std::fs::read_dir(&self.directory)
            .unwrap()
            .filter_map(|entry| entry.ok())
            .filter(|entry| entry.path().is_dir())
            .map(|stream_dir| {
                let stream_id = StreamId(stream_dir.path().file_name().unwrap().to_str().unwrap().parse::<usize>().unwrap());

                let frames = std::fs::read_dir(stream_dir.path()).unwrap()
                    .filter_map(|entry| entry.ok())
                    .filter(|entry| entry.path().is_file() && entry.path().extension().and_then(|s| s.to_str()) == Some("png"))
                    .map(|entry| entry.path().to_str().unwrap().to_string())
                    .collect::<Vec<_>>();

                (stream_id, frames)
            })
            .for_each(|(stream_id, frames)| {
                self.frames.insert(stream_id, frames);
            });
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/alphablend", session.directory);
        std::fs::metadata(output_directory).is_ok()
    }

    pub fn image(&self, _camera: usize, _frame: usize) -> Option<Image> {
        todo!()
    }
}



// TODO: support loading maskframes -> images into a pipeline mask viewer


#[derive(Component, Default)]
pub struct RawFrames {
    pub frames: HashMap<StreamId, Vec<String>>,
    pub directory: String,
}
impl RawFrames {
    pub fn load_from_session(
        session: &Session,
    ) -> Self {
        let directory = format!("{}/frames", session.directory);
        std::fs::create_dir_all(&directory).unwrap();

        let mut raw_frames = Self {
            frames: HashMap::new(),
            directory,
        };
        raw_frames.reload();

        raw_frames
    }

    pub fn reload(&mut self) {
        std::fs::read_dir(&self.directory)
            .unwrap()
            .filter_map(|entry| entry.ok())
            .filter(|entry| entry.path().is_dir())
            .map(|stream_dir| {
                let stream_id = StreamId(stream_dir.path().file_name().unwrap().to_str().unwrap().parse::<usize>().unwrap());

                let frames = std::fs::read_dir(stream_dir.path()).unwrap()
                    .filter_map(|entry| entry.ok())
                    .filter(|entry| entry.path().is_file() && entry.path().extension().and_then(|s| s.to_str()) == Some("png"))
                    .map(|entry| entry.path().to_str().unwrap().to_string())
                    .collect::<Vec<_>>();

                (stream_id, frames)
            })
            .for_each(|(stream_id, frames)| {
                self.frames.insert(stream_id, frames);
            });
    }

    pub fn exists(

decouple bevy async tasks and the multi-stream RTSP handling

a bevy system should query the readiness of a frame for each RTSP stream and update texture as needed

the RTSP handling should be a separate async tokio pool that updates the readiness of each RTSP stream /w buffer for transfer to texture

// TODO: decouple bevy async tasks and the multi-stream RTSP handling

    },
    codec::VideoFrame,
};
use url::Url;


const RTSP_URIS: [&str; 2] = [
    // "rtsp://localhost:554/lizard",
    // "rtsp://rtspstream:[email protected]/movie",
    // "rtsp://rtspstream:[email protected]/pattern",
    "rtsp://192.168.1.23/user=admin&password=admin123&channel=1&stream=0.sdp?",
    "rtsp://192.168.1.24/user=admin&password=admin123&channel=1&stream=0.sdp?",
];


fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Update, press_esc_close)
        .add_systems(Startup, spawn_tasks)
        .add_systems(Update, task_loop)
        .add_systems(Update, handle_tasks)
        .run();
}


#[derive(Component)]
struct ReadRtspFrame(Task<CommandQueue>);

#[derive(Component, Clone)]
struct RtspStream {
    uri: String,
    index: usize,
    demuxed: Arc<Mutex<Option<Demuxed>>>,
    decoder: Arc<Mutex<Option<Decoder>>>,
}


// TODO: decouple bevy async tasks and the multi-stream RTSP handling
//       a bevy system should query the readiness of a frame for each RTSP stream and update texture as needed
//       the RTSP handling should be a separate async tokio pool that updates the readiness of each RTSP stream /w buffer for transfer to texture
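
// a minimal sketch of the decoupling described in the TODO above (all names
// hypothetical): the tokio pool owns the RTSP session and publishes the most
// recent decoded frame into a shared slot; a bevy system polls the slot each
// frame and uploads new data to the stream's texture without blocking.
#[derive(Component, Clone)]
struct SharedFrame(Arc<Mutex<Option<Vec<u8>>>>);

fn upload_ready_frames(
    mut images: ResMut<Assets<Image>>,
    streams: Query<(&SharedFrame, &Handle<Image>)>,
) {
    for (shared, image_handle) in &streams {
        // `take` empties the slot so each decoded frame is uploaded only once
        if let Some(data) = shared.0.lock().unwrap().take() {
            if let Some(image) = images.get_mut(image_handle) {
                image.data = data;
            }
        }
    }
}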


fn spawn_tasks(
    mut commands: Commands,
) {
    RTSP_URIS.iter()
        .enumerate()
        .for_each(|(index, &url)| {
            let entity = commands.spawn_empty().id();

            let api = openh264::OpenH264API::from_source();
            let decoder = Decoder::new(api).unwrap();

            let rtsp_stream = RtspStream {
                uri: url.to_string(),
                index,
                demuxed: Arc::new(Mutex::new(None)),
                decoder: Arc::new(Mutex::new(Some(decoder))),
            };
            let demuxed_arc = rtsp_stream.demuxed.clone();

            let session_result = futures::executor::block_on(Compat::new(create_session(&url)));
            match session_result {
                Ok(playing) => {
                    let mut demuxed = demuxed_arc.lock().unwrap();
                    *demuxed = Some(playing.demuxed().unwrap());

                    println!("created demuxer for {}", url);
                },
                Err(e) => panic!("Failed to create session: {}", e),
            }

            queue_rtsp_frame(
                &rtsp_stream,
                &mut commands,
                entity,
            );

            commands.entity(entity).insert(rtsp_stream);
        });
}


fn task_loop(
    mut commands: Commands,
    streams: Query<
        (
            Entity,
            &RtspStream,
        ),
        Without<ReadRtspFrame>,
    >,
) {
    for (entity, rtsp_stream) in streams.iter() {
        queue_rtsp_frame(
            rtsp_stream,
            &mut commands,
            entity,
        );
    }
}


fn handle_tasks(
    mut commands: Commands,
    mut tasks: Query<&mut ReadRtspFrame>,
) {
    for mut task in &mut tasks {
        if let Some(mut commands_queue) = block_on(future::poll_once(&mut task.0)) {
            commands.append(&mut commands_queue);
        }
    }
}


fn convert_h264(data: &mut [u8]) -> Result<(), Error> {
    let mut i = 0;
    // `i + 4 <= data.len()` (rather than `i < data.len() - 3`) avoids an
    // underflow panic on packets shorter than four bytes
    while i + 4 <= data.len() {
        // Replace each NAL's length with the Annex B start code b"\x00\x00\x00\x01".
        let len = u32::from_be_bytes([data[i], data[i + 1], data[i + 2], data[i + 3]]) as usize;
        data[i] = 0;
        data[i + 1] = 0;
        data[i + 2] = 0;
        data[i + 3] = 1;
        i += 4 + len;
        if i > data.len() {
            bail!("partial NAL body");
        }
    }
    if i < data.len() {
        bail!("partial NAL length");
    }
    Ok(())
}
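
// a small sanity check of the conversion above: a single two-byte NAL whose
// four-byte length prefix (0x00000002) is rewritten to the Annex B start code.
#[test]
fn convert_h264_replaces_length_with_start_code() {
    let mut data = vec![0, 0, 0, 2, 0x09, 0xf0];
    convert_h264(&mut data).unwrap();
    assert_eq!(data, vec![0, 0, 0, 1, 0x09, 0xf0]);
}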

fn queue_rtsp_frame(
    rtsp_stream: &RtspStream,
    commands: &mut Commands,
    entity: Entity,
) {
    let idx = rtsp_stream.index;
    let demuxed_arc = rtsp_stream.demuxed.clone();
    let decoder_arc = rtsp_stream.decoder.clone();

    let task = AsyncComputeTaskPool::get().spawn(async move {
        let result = futures::executor::block_on(Compat::new(capture_frame(demuxed_arc.clone())));

        // TODO: clean this mess
        match result {
            Ok(frame) => {
                let mut decoder_lock = decoder_arc.lock().unwrap();
                let decoder = decoder_lock.as_mut().unwrap();

                let mut data = frame.into_data();
                let annex_b_convert = convert_h264(&mut data);

                match annex_b_convert {
                    Ok(_) => for packet in nal_units(&data) {
                        let result = decoder.decode(packet);

                        match result {
                            Ok(_decoded_frame) => {
                                println!("decoded frame from stream {}", idx);
                                // TODO: populate 'latest frame' with new data
                            },
                            Err(e) => println!("failed to decode frame for stream {}: {}", idx, e),
                        }
                    },
                    Err(e) => println!("failed to convert NAL unit to Annex B format: {}", e)
                }
            },
            Err(e) => println!("failed to capture frame for stream {}: {}", idx, e),
        }

        let mut command_queue = CommandQueue::default();
        command_queue.push(move |world: &mut World| {
            world.entity_mut(entity).remove::<ReadRtspFrame>();
        });

        command_queue
    });

    commands.entity(entity).insert(ReadRtspFrame(task));
}

async fn create_session(url: &str) -> Result<Session<Playing>, Box<dyn std::error::Error + Send + Sync>> {
    let parsed_url = Url::parse(url)?;

    let username = parsed_url.username();

build pipeline config from args

// TODO: build pipeline config from args

}


fn automatic_recording(
    mut commands: Commands,
    time: Res<Time>,
    mut ev_person: EventReader<PersonDetectedEvent>,
    stream_manager: Res<RtspStreamManager>,
    mut live_session: ResMut<LiveSession>,
    mut person_timeout: Local<Stopwatch>,
) {
    if live_session.0.is_some() {
        if person_timeout.elapsed_secs() > 3.0 {
            person_timeout.reset();

            println!("no person detected for 3 seconds, stopping recording");

            let session_entity = live_session.0.take().unwrap();
            let raw_streams = stream_manager.stop_recording();

            commands.entity(session_entity)
                .insert(RawStreams {
                    streams: raw_streams,
                });
        }

        person_timeout.tick(time.delta());

        for _ev in ev_person.read() {
            person_timeout.reset();
        }

        return;
    }

    for ev in ev_person.read() {
        println!("person detected: {:?}", ev);

        // TODO: deduplicate start recording logic
        let session = Session::new("capture".to_string());

        stream_manager.start_recording(
            &session,
        );

        // TODO: build pipeline config from args
        let entity = commands.spawn(
            StreamSessionBundle {
                session,
                raw_streams: RawStreams::default(),
                config: PipelineConfig::default(),
            },
        ).id();
        live_session.0 = Some(entity);

        break;
    }
}
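
// a minimal sketch of the "pipeline config from args" TODO above, assuming the
// clap crate; only a couple of `PipelineConfig` fields are mapped here, the
// rest keep their defaults:
#[derive(clap::Parser)]
struct PipelineArgs {
    /// disable raw frame extraction (enabled by default)
    #[arg(long)]
    no_raw_frames: bool,

    /// enable MODNet mask generation
    #[arg(long)]
    mask_frames: bool,
}

impl From<PipelineArgs> for PipelineConfig {
    fn from(args: PipelineArgs) -> Self {
        Self {
            raw_frames: !args.no_raw_frames,
            mask_frames: args.mask_frames,
            ..PipelineConfig::default()
        }
    }
}

// usage, with `use clap::Parser;` in scope:
//     let config: PipelineConfig = PipelineArgs::parse().into();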

get stream descriptor rotation

// TODO: get stream descriptor rotation

// (context snippet identical to the `generate_rotated_frames` / `generate_mask_frames` /
// `generate_yolo_frames` pipeline code quoted in the caching/loading issue above)

this should move to client::VideoParameters::sample_entry() or some such.

0x00, 0x48, 0x00, 0x00, // vertresolution

0x00, 0x00, 0x00, 0x00, // reserved

0x00, 0x01, // frame count

0x00, 0x00, 0x00, 0x00, // compressorname

0x00, 0x00, 0x00, 0x00, //

0x00, 0x00, 0x00, 0x00, //

0x00, 0x00, 0x00, 0x00, //

0x00, 0x00, 0x00, 0x00, //

0x00, 0x00, 0x00, 0x00, //

0x00, 0x00, 0x00, 0x00, //

0x00, 0x00, 0x00, 0x00, //

0x00, 0x18, 0xff, 0xff, // depth + pre_defined

scan through self.video_sample_index.

// TODO: this should move to client::VideoParameters::sample_entry() or some such.

// https://github.com/scottlamb/retina/blob/main/examples/client/src/mp4.rs

use anyhow::{anyhow, bail, Error};
use bytes::{Buf, BufMut, BytesMut};
use retina::codec::{AudioParameters, ParametersRef, VideoParameters};

use std::convert::TryFrom;
use std::io::SeekFrom;
use tokio::io::{AsyncSeek, AsyncSeekExt, AsyncWrite, AsyncWriteExt};


/// Writes a box length for everything appended in the supplied scope.
macro_rules! write_box {
    ($buf:expr, $fourcc:expr, $b:block) => {{
        let _: &mut BytesMut = $buf; // type-check.
        let pos_start = ($buf as &BytesMut).len();
        let fourcc: &[u8; 4] = $fourcc;
        $buf.extend_from_slice(&[0, 0, 0, 0, fourcc[0], fourcc[1], fourcc[2], fourcc[3]]);
        let r = {
            $b;
        };
        let pos_end = ($buf as &BytesMut).len();
        let len = pos_end.checked_sub(pos_start).unwrap();
        $buf[pos_start..pos_start + 4].copy_from_slice(&u32::try_from(len)?.to_be_bytes()[..]);
        r
    }};
}
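
// usage sketch of the macro above: emit a minimal `free` box; the 32-bit
// length prefix is back-patched once the scope has run (8-byte header plus
// the 4-byte payload gives a total box length of 12):
fn write_free_box(buf: &mut BytesMut) -> Result<(), Error> {
    write_box!(buf, b"free", {
        buf.put_u32(0); // payload
    });
    Ok(())
}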

/// Writes `.mp4` data to a sink.
/// See module-level documentation for details.
pub struct Mp4Writer<W: AsyncWrite + AsyncSeek + Send + Unpin> {
    mdat_start: u32,
    mdat_pos: u32,
    video_params: Vec<VideoParameters>,

    /// The most recently used 1-based index within `video_params`.
    cur_video_params_sample_description_index: Option<u32>,
    audio_params: Option<Box<AudioParameters>>,
    allow_loss: bool,

    /// The (1-indexed) video sample (frame) number of each sync sample (random access point).
    video_sync_sample_nums: Vec<u32>,

    video_trak: TrakTracker,
    audio_trak: TrakTracker,
    inner: W,
}

/// A chunk: a group of samples that have consecutive byte positions and same sample description.
struct Chunk {
    first_sample_number: u32, // 1-based index
    byte_pos: u32,            // starting byte of first sample
    sample_description_index: u32,
}

/// Tracks the parts of a `trak` atom which are common between video and audio samples.
#[derive(Default)]
struct TrakTracker {
    samples: u32,
    next_pos: Option<u32>,
    chunks: Vec<Chunk>,
    sizes: Vec<u32>,

    /// The durations of samples in a run-length encoding form: (number of samples, duration).
    /// This lags one sample behind calls to `add_sample` because each sample's duration
    /// is calculated using the PTS of the following sample.
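    /// e.g. four consecutive samples of 3,000 ticks followed by one of 3,003
    /// ticks are stored as `[(4, 3000), (1, 3003)]`.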
    durations: Vec<(u32, u32)>,
    last_pts: Option<i64>,
    tot_duration: u64,
}

impl TrakTracker {
    fn add_sample(
        &mut self,
        sample_description_index: u32,
        byte_pos: u32,
        size: u32,
        timestamp: retina::Timestamp,
        loss: u16,
        allow_loss: bool,
    ) -> Result<(), Error> {
        if self.samples > 0 && loss > 0 && !allow_loss {
            bail!("Lost {} RTP packets mid-stream", loss);
        }
        self.samples += 1;
        if self.next_pos != Some(byte_pos)
            || self.chunks.last().map(|c| c.sample_description_index)
                != Some(sample_description_index)
        {
            self.chunks.push(Chunk {
                first_sample_number: self.samples,
                byte_pos,
                sample_description_index,
            });
        }
        self.sizes.push(size);
        self.next_pos = Some(byte_pos + size);
        if let Some(last_pts) = self.last_pts.replace(timestamp.timestamp()) {
            let duration = timestamp.timestamp().checked_sub(last_pts).unwrap();
            self.tot_duration += u64::try_from(duration).unwrap();
            let duration = u32::try_from(duration)?;
            match self.durations.last_mut() {
                Some((s, d)) if *d == duration => *s += 1,
                _ => self.durations.push((1, duration)),
            }
        }
        Ok(())
    }

    fn finish(&mut self) {
        if self.last_pts.is_some() {
            self.durations.push((1, 0));
        }
    }

    /// Estimates the sum of the variable-sized portions of the data.
    fn size_estimate(&self) -> usize {
        (self.durations.len() * 8) + // stts
        (self.chunks.len() * 12) +   // stsc
        (self.sizes.len() * 4) +     // stsz
        (self.chunks.len() * 4) // stco
    }

    fn write_common_stbl_parts(&self, buf: &mut BytesMut) -> Result<(), Error> {
        // TODO: add an edit list so the video and audio tracks are in sync.
        write_box!(buf, b"stts", {
            buf.put_u32(0);
            buf.put_u32(u32::try_from(self.durations.len())?);
            for (samples, duration) in &self.durations {
                buf.put_u32(*samples);
                buf.put_u32(*duration);
            }
        });
        write_box!(buf, b"stsc", {
            buf.put_u32(0); // version
            buf.put_u32(u32::try_from(self.chunks.len())?);
            let mut prev_sample_number = 1;
            let mut chunk_number = 1;
            if !self.chunks.is_empty() {
                for c in &self.chunks[1..] {
                    buf.put_u32(chunk_number);
                    buf.put_u32(c.first_sample_number - prev_sample_number);
                    buf.put_u32(c.sample_description_index);
                    prev_sample_number = c.first_sample_number;
                    chunk_number += 1;
                }
                buf.put_u32(chunk_number);
                buf.put_u32(self.samples + 1 - prev_sample_number);
                buf.put_u32(1); // sample_description_index
            }
        });
        write_box!(buf, b"stsz", {
            buf.put_u32(0); // version
            buf.put_u32(0); // sample_size
            buf.put_u32(u32::try_from(self.sizes.len())?);
            for s in &self.sizes {
                buf.put_u32(*s);
            }
        });
        write_box!(buf, b"stco", {
            buf.put_u32(0); // version
            buf.put_u32(u32::try_from(self.chunks.len())?); // entry_count
            for c in &self.chunks {
                buf.put_u32(c.byte_pos);
            }
        });
        Ok(())
    }
}

impl<W: AsyncWrite + AsyncSeek + Send + Unpin> Mp4Writer<W> {
    pub async fn new(
        audio_params: Option<Box<AudioParameters>>,
        allow_loss: bool,
        mut inner: W,
    ) -> Result<Self, Error> {
        let mut buf = BytesMut::new();
        write_box!(&mut buf, b"ftyp", {
            buf.extend_from_slice(&[
                b'i', b's', b'o', b'm', // major_brand
                0, 0, 0, 0, // minor_version
                b'i', b's', b'o', b'm', // compatible_brands[0]
            ]);
        });
        buf.extend_from_slice(&b"\0\0\0\0mdat"[..]);
        let mdat_start = u32::try_from(buf.len())?;
        inner.write_all(&buf).await?;
        Ok(Mp4Writer {
            inner,
            video_params: Vec::new(),
            cur_video_params_sample_description_index: None,
            audio_params,
            allow_loss,
            video_trak: TrakTracker::default(),
            audio_trak: TrakTracker::default(),
            video_sync_sample_nums: Vec::new(),
            mdat_start,
            mdat_pos: mdat_start,
        })
    }
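
    // usage sketch (hypothetical path): open a writer over a tokio file;
    // `allow_loss = true` tolerates RTP packet loss mid-stream.
    //
    //     let file = tokio::fs::File::create("capture/0/stream.mp4").await?;
    //     let mut writer = Mp4Writer::new(None, true, file).await?;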

    pub async fn finish(mut self) -> Result<(), Error> {
        self.video_trak.finish();
        self.audio_trak.finish();
        let mut buf = BytesMut::with_capacity(
            1024 + self.video_trak.size_estimate()
                + self.audio_trak.size_estimate()
                + 4 * self.video_sync_sample_nums.len(),
        );
        write_box!(&mut buf, b"moov", {
            write_box!(&mut buf, b"mvhd", {
                buf.put_u32(1 << 24); // version
                buf.put_u64(0); // creation_time
                buf.put_u64(0); // modification_time
                buf.put_u32(90000); // timescale
                buf.put_u64(self.video_trak.tot_duration);
                buf.put_u32(0x00010000); // rate
                buf.put_u16(0x0100); // volume
                buf.put_u16(0); // reserved
                buf.put_u64(0); // reserved
                for v in &[0x00010000, 0, 0, 0, 0x00010000, 0, 0, 0, 0x40000000] {
                    buf.put_u32(*v); // matrix
                }
                for _ in 0..6 {
                    buf.put_u32(0); // pre_defined
                }
                buf.put_u32(2); // next_track_id
            });
            if self.video_trak.samples > 0 {
                self.write_video_trak(&mut buf)?;
            }
            if self.audio_trak.samples > 0 {
                self.write_audio_trak(&mut buf, self.audio_params.as_ref().unwrap())?;
            }
        });
        self.inner.write_all(&buf).await?;
        self.inner
            .seek(SeekFrom::Start(u64::from(self.mdat_start - 8)))
            .await?;
        self.inner
            .write_all(&(self.mdat_pos + 8 - self.mdat_start).to_be_bytes()[..])
            .await?;
        Ok(())
    }

    fn write_video_trak(&self, buf: &mut BytesMut) -> Result<(), Error> {
        write_box!(buf, b"trak", {
            write_box!(buf, b"tkhd", {
                buf.put_u32((1 << 24) | 7); // version, flags
                buf.put_u64(0); // creation_time
                buf.put_u64(0); // modification_time
                buf.put_u32(1); // track_id
                buf.put_u32(0); // reserved
                buf.put_u64(self.video_trak.tot_duration);
                buf.put_u64(0); // reserved
                buf.put_u16(0); // layer
                buf.put_u16(0); // alternate_group
                buf.put_u16(0); // volume
                buf.put_u16(0); // reserved
                for v in &[0x00010000, 0, 0, 0, 0x00010000, 0, 0, 0, 0x40000000] {
                    buf.put_u32(*v); // matrix
                }
                let dims = self.video_params.iter().fold((0, 0), |prev_dims, p| {
                    let dims = p.pixel_dimensions();
                    (
                        std::cmp::max(prev_dims.0, dims.0),
                        std::cmp::max(prev_dims.1, dims.1),
                    )
                });
                let width = u32::from(u16::try_from(dims.0)?) << 16;
                let height = u32::from(u16::try_from(dims.1)?) << 16;
                buf.put_u32(width);
                buf.put_u32(height);
            });
            write_box!(buf, b"mdia", {
                write_box!(buf, b"mdhd", {
                    buf.put_u32(1 << 24); // version
                    buf.put_u64(0); // creation_time
                    buf.put_u64(0); // modification_time
                    buf.put_u32(90000); // timebase
                    buf.put_u64(self.video_trak.tot_duration);
                    buf.put_u32(0x55c40000); // language=und + pre-defined
                });
                write_box!(buf, b"hdlr", {
                    buf.extend_from_slice(&[
                        0x00, 0x00, 0x00, 0x00, // version + flags
                        0x00, 0x00, 0x00, 0x00, // pre_defined
                        b'v', b'i', b'd', b'e', // handler = vide
                        0x00, 0x00, 0x00, 0x00, // reserved[0]
                        0x00, 0x00, 0x00, 0x00, // reserved[1]
                        0x00, 0x00, 0x00, 0x00, // reserved[2]
                        0x00, // name, zero-terminated (empty)
                    ]);
                });
                write_box!(buf, b"minf", {
                    write_box!(buf, b"vmhd", {
                        buf.put_u32(1);
                        buf.put_u64(0);
                    });
                    write_box!(buf, b"dinf", {
                        write_box!(buf, b"dref", {
                            buf.put_u32(0);
                            buf.put_u32(1); // entry_count
                            write_box!(buf, b"url ", {
                                buf.put_u32(1); // version, flags=self-contained
                            });
                        });
                    });
                    write_box!(buf, b"stbl", {
                        write_box!(buf, b"stsd", {
                            buf.put_u32(0); // version
                            buf.put_u32(u32::try_from(self.video_params.len())?); // entry_count
                            for p in &self.video_params {
                                self.write_video_sample_entry(buf, p)?;
                            }
                        });
                        self.video_trak.write_common_stbl_parts(buf)?;
                        write_box!(buf, b"stss", {
                            buf.put_u32(0); // version
                            buf.put_u32(u32::try_from(self.video_sync_sample_nums.len())?);
                            for n in &self.video_sync_sample_nums {
                                buf.put_u32(*n);
                            }
                        });
                    });
                });
            });
        });
        Ok(())
    }

    fn write_audio_trak(
        &self,
        buf: &mut BytesMut,
        parameters: &AudioParameters,
    ) -> Result<(), Error> {
        write_box!(buf, b"trak", {
            write_box!(buf, b"tkhd", {
                buf.put_u32((1 << 24) | 7); // version, flags
                buf.put_u64(0); // creation_time
                buf.put_u64(0); // modification_time
                buf.put_u32(2); // track_id
                buf.put_u32(0); // reserved
                buf.put_u64(self.audio_trak.tot_duration);
                buf.put_u64(0); // reserved
                buf.put_u16(0); // layer
                buf.put_u16(0); // alternate_group
                buf.put_u16(0); // volume
                buf.put_u16(0); // reserved
                for v in &[0x00010000, 0, 0, 0, 0x00010000, 0, 0, 0, 0x40000000] {
                    buf.put_u32(*v); // matrix
                }
                buf.put_u32(0); // width
                buf.put_u32(0); // height
            });
            write_box!(buf, b"mdia", {
                write_box!(buf, b"mdhd", {
                    buf.put_u32(1 << 24); // version
                    buf.put_u64(0); // creation_time
                    buf.put_u64(0); // modification_time
                    buf.put_u32(parameters.clock_rate());
                    buf.put_u64(self.audio_trak.tot_duration);
                    buf.put_u32(0x55c40000); // language=und + pre-defined
                });
                write_box!(buf, b"hdlr", {
                    buf.extend_from_slice(&[
                        0x00, 0x00, 0x00, 0x00, // version + flags
                        0x00, 0x00, 0x00, 0x00, // pre_defined
                        b's', b'o', b'u', b'n', // handler = soun
                        0x00, 0x00, 0x00, 0x00, // reserved[0]
                        0x00, 0x00, 0x00, 0x00, // reserved[1]
                        0x00, 0x00, 0x00, 0x00, // reserved[2]
                        0x00, // name, zero-terminated (empty)
                    ]);
                });
                write_box!(buf, b"minf", {
                    write_box!(buf, b"smhd", {
                        buf.extend_from_slice(&[
                            0x00, 0x00, 0x00, 0x00, // version + flags
                            0x00, 0x00, // balance
                            0x00, 0x00, // reserved
                        ]);
                    });
                    write_box!(buf, b"dinf", {
                        write_box!(buf, b"dref", {
                            buf.put_u32(0);
                            buf.put_u32(1); // entry_count
                            write_box!(buf, b"url ", {
                                buf.put_u32(1); // version, flags=self-contained
                            });
                        });
                    });
                    write_box!(buf, b"stbl", {
                        write_box!(buf, b"stsd", {
                            buf.put_u32(0); // version
                            buf.put_u32(1); // entry_count
                            buf.extend_from_slice(
                                parameters
                                    .sample_entry()
                                    .expect("all added streams have sample entries"),
                            );
                        });
                        self.audio_trak.write_common_stbl_parts(buf)?;

                        // AAC requires two samples (really, each is a set of 960 or 1024 samples)
                        // to decode accurately. See
                        // https://developer.apple.com/library/archive/documentation/QuickTime/QTFF/QTFFAppenG/QTFFAppenG.html .
                        write_box!(buf, b"sgpd", {
                            // BMFF section 8.9.3: SampleGroupDescriptionBox
                            buf.put_u32(0); // version
                            buf.extend_from_slice(b"roll"); // grouping type
                            buf.put_u32(1); // entry_count
                                            // BMFF section 10.1: AudioRollRecoveryEntry
                            buf.put_i16(-1); // roll_distance
                        });
                        write_box!(buf, b"sbgp", {
                            // BMFF section 8.9.2: SampleToGroupBox
                            buf.put_u32(0); // version
                            buf.extend_from_slice(b"roll"); // grouping type
                            buf.put_u32(1); // entry_count
                            buf.put_u32(self.audio_trak.samples);
                            buf.put_u32(1); // group_description_index
                        });
                    });
                });
            });
        });
        Ok(())
    }

    fn write_video_sample_entry(
        &self,
        buf: &mut BytesMut,
        parameters: &VideoParameters,
    ) -> Result<(), Error> {
        // TODO: this should move to client::VideoParameters::sample_entry() or some such.
        write_box!(buf, b"avc1", {
            buf.put_u32(0);
            buf.put_u32(1); // data_reference_index = 1
            buf.extend_from_slice(&[0; 16]);
            buf.put_u16(u16::try_from(parameters.pixel_dimensions().0)?);
            buf.put_u16(u16::try_from(parameters.pixel_dimensions().1)?);
            buf.extend_from_slice(&[
                0x00, 0x48, 0x00, 0x00, // horizresolution
                0x00, 0x48, 0x00, 0x00, // vertresolution
                0x00, 0x00, 0x00, 0x00, // reserved
                0x00, 0x01, // frame count
                0x00, 0x00, 0x00, 0x00, // compressorname
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x18, 0xff, 0xff, // depth + pre_defined
            ]);
            write_box!(buf, b"avcC", {
                buf.extend_from_slice(parameters.extra_data());
            });
        });
        Ok(())
    }

    pub async fn video(
        &mut self,
        stream: &retina::client::Stream,
        frame: &retina::codec::VideoFrame,
    ) -> Result<(), Error> {
        let sample_description_index = if let (Some(i), false) = (
            self.cur_video_params_sample_description_index,
            frame.has_new_parameters(),
        ) {
            // Use the most recent sample description index for most frames, without having to
            // scan through self.video_sample_index.
            i
        } else {
            match stream.parameters() {
                Some(ParametersRef::Video(params)) => {
                    let pos = self.video_params.iter().position(|p| p == params);
                    if let Some(pos) = pos {
                        u32::try_from(pos + 1)?
                    } else {
                        self.video_params.push(params.clone());
                        u32::try_from(self.video_params.len())?
                    }
                }
                None => {
                    return Ok(());
                }
                _ => unreachable!(),
            }
        };
        self.cur_video_params_sample_description_index = Some(sample_description_index);
        let size = u32::try_from(frame.data().remaining())?;
        self.video_trak.add_sample(
            sample_description_index,
            self.mdat_pos,
            size,
            frame.timestamp(),
            frame.loss(),
            self.allow_loss,
        )?;
        self.mdat_pos = self
            .mdat_pos
            .checked_add(size)
            .ok_or_else(|| anyhow!("mdat_pos overflow"))?;
        if frame.is_random_access_point() {
            self.video_sync_sample_nums.push(self.video_trak.samples);
        }
        self.inner.write_all(frame.data()).await?;
        Ok(())
    }

    pub async fn audio(&mut self, frame: retina::codec::AudioFrame) -> Result<(), Error> {
        println!(
            "{}: {}-byte audio frame",
            frame.timestamp(),
            frame.data().remaining()
        );
        let size = u32::try_from(frame.data().remaining())?;
        self.audio_trak.add_sample(
            /* sample_description_index */ 1,
            self.mdat_pos,
            size,
            frame.timestamp(),
            frame.loss(),
            self.allow_loss,
        )?;
        self.mdat_pos = self
            .mdat_pos
            .checked_add(size)
            .ok_or_else(|| anyhow!("mdat_pos overflow"))?;
        self.inner.write_all(frame.data()).await?;
        Ok(())
    }
}

use R8 format

format: TextureFormat::Rgba8UnormSrgb, // TODO: use R8 format

        })
        .collect();

    let mask_images = input_images.iter()
        .enumerate()
        .map(|(index, image)| {
            let mut mask_images = Image {
                asset_usage: RenderAssetUsages::all(),
                texture_descriptor: TextureDescriptor {
                    label: None,
                    size,
                    dimension: TextureDimension::D2,
                    format: TextureFormat::Rgba8UnormSrgb,  // TODO: use R8 format
                    mip_level_count: 1,
                    sample_count: 1,
                    usage: TextureUsages::COPY_DST
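
A minimal sketch of the single-channel switch this TODO describes; TextureFormat::R8Unorm is the natural candidate, and the size and usage flags below simply mirror the descriptor above (any shader that sampled alpha would need to read the red channel instead):

let mask_image = Image {
    asset_usage: RenderAssetUsages::all(),
    texture_descriptor: TextureDescriptor {
        label: None,
        size,
        dimension: TextureDimension::D2,
        format: TextureFormat::R8Unorm, // one byte per pixel instead of four
        mip_level_count: 1,
        sample_count: 1,
        usage: TextureUsages::COPY_DST
            | TextureUsages::TEXTURE_BINDING,
        view_formats: &[TextureFormat::R8Unorm],
    },
    ..default()
};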

create decoders for each h264 file stream

// TODO: create decoders for each h264 file stream

use bevy::prelude::*;
use image::DynamicImage;
use rayon::prelude::*;


pub struct PipelinePlugin;
impl Plugin for PipelinePlugin {
    fn build(&self, app: &mut App) {
        app.add_systems(Update, generate_annotations);
    }
}


#[derive(Component, Reflect)]
pub struct PipelineConfig {
    pub raw_frames: bool,
    pub subject_refinement: bool,           // https://github.com/onnx/models/tree/main/validated/vision/body_analysis/ultraface
    pub repair_frames: bool,                // https://huggingface.co/docs/diffusers/en/optimization/onnx & https://github.com/bnm6900030/swintormer
    pub upsample_frames: bool,              // https://huggingface.co/ssube/stable-diffusion-x4-upscaler-onnx
    pub mask_frames: bool,                  // https://github.com/ZHKKKe/MODNet
    pub light_field_cameras: bool,          // https://github.com/jasonyzhang/RayDiffusion
    pub depth_maps: bool,                   // https://github.com/fabio-sim/Depth-Anything-ONNX
    pub gaussian_cloud: bool,
}

impl Default for PipelineConfig {
    fn default() -> Self {
        Self {
            raw_frames: true,
            subject_refinement: false,
            repair_frames: false,
            upsample_frames: false,
            mask_frames: false,
            light_field_cameras: false,
            depth_maps: false,
            gaussian_cloud: false,
        }
    }
}


#[derive(Bundle, Default, Reflect)]
pub struct StreamSessionBundle {
    pub config: PipelineConfig,
    pub raw_streams: RawStreams,
    pub session: Session,
}

// TODO: use an entity saver to write Session and its components (e.g. `0/session.ron`)


#[derive(Component, Default, Reflect)]
pub struct Session {
    pub id: usize,
    pub directory: String,
}

impl Session {
    pub fn new(directory: String) -> Self {
        let id = get_next_session_id(&directory);
        let directory = format!("{}/{}", directory, id);
        std::fs::create_dir_all(&directory).unwrap();

        Self { id, directory }
    }
}


pub trait PipelineNode {
    fn new(session: &Session) -> Self;
    fn exists(session: &Session) -> bool;
}


#[derive(Component, Default, Reflect)]
pub struct RawStreams {
    pub streams: Vec<String>,
}
impl RawStreams {
    pub fn decoders(&self) {
        // TODO: create decoders for each h264 file stream
        todo!()
    }
}
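
// A hedged sketch of what `decoders` could do: shell out to the ffmpeg CLI to
// expand each raw .h264 stream into numbered PNG frames. The ffmpeg dependency
// and the output naming scheme are assumptions, not the repo's committed design.
fn decode_streams_with_ffmpeg(streams: &[String], output_directory: &str) -> std::io::Result<()> {
    for (stream_index, stream_path) in streams.iter().enumerate() {
        // e.g. frames/0_1.png, frames/0_2.png, ... for stream 0
        let pattern = format!("{}/{}_%d.png", output_directory, stream_index);
        let status = std::process::Command::new("ffmpeg")
            .args(["-i", stream_path.as_str(), pattern.as_str()])
            .status()?;
        if !status.success() {
            return Err(std::io::Error::new(
                std::io::ErrorKind::Other,
                format!("ffmpeg failed for {}", stream_path),
            ));
        }
    }
    Ok(())
}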


// TODO: add a pipeline config for the session describing which components to run


fn generate_annotations(
    mut commands: Commands,
    raw_streams: Query<
        (
            Entity,
            &PipelineConfig,
            &RawStreams,
            &Session,
        ),
        Without<RawFrames>,
    >,
) {
    for (
        entity,
        config,
        raw_streams,
        session,
    ) in raw_streams.iter() {
        let raw_frames = RawFrames::new(session);

        if config.raw_frames {
            let run_node = !RawFrames::exists(session);
            let mut raw_frames = RawFrames::new(session);

            if run_node {
                // TODO: add (identifier == "{}_{}", stream, frame) output to this first stage (forward to mask stage w/ image in RAM)
                let frames = raw_streams.streams.par_iter()
                    .map(|stream| {
                        let decoder = todo!();

                        (stream, decoder)
                    })
                    .map(|(stream, decoder)| {
                        let frames: Vec<DynamicImage> = vec![];

                        // TODO: read all frames

                        (stream, frames)
                    })
                    .collect::<Vec<_>>();

                frames.par_iter()
                    .for_each(|(stream, frames)| {
                        frames.iter().enumerate()
                            .for_each(|(i, frame)| {
                                let path = format!(
                                    "{}/frames/{}_{}.png",
                                    session.directory,
                                    stream,
                                    i,
                                );
                                frame.save(path).unwrap();
                            });
                    });
            }
        }

        commands.entity(entity).insert(raw_frames);
    }
}


#[derive(Component, Default, Reflect)]
pub struct RawFrames {
    pub frames: Vec<String>,
}
impl RawFrames {
    // TODO: move new and exists to a trait

    pub fn new(
        session: &Session,
    ) -> Self {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::create_dir_all(&output_directory).unwrap();

        // TODO: load all files that are already in the directory

        Self {
            frames: vec![],
        }
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::metadata(&output_directory).is_ok()
    }

    pub fn image(&self, camera: usize, frame: usize) -> Option<Image> {
        todo!()
    }
}


#[derive(Component, Default, Reflect)]
pub struct MaskFrames {
    pub frames: Vec<String>,
}
impl MaskFrames {
    pub fn new(
        session: &Session,
    ) -> Self {
        let output_directory = format!("{}/masks", session.directory);
        std::fs::create_dir_all(&output_directory).unwrap();

        Self {
            frames: vec![],
        }
    }

    pub fn image(&self, camera: usize, frame: usize) -> Option<Image> {
        todo!()
    }
}


#[derive(Default, Clone, Reflect)]
pub struct LightFieldCamera {
    // TODO: intrinsics/extrinsics
}

#[derive(Component, Default, Reflect)]
pub struct LightFieldCameras {
    pub cameras: Vec<LightFieldCamera>,
}



fn get_next_session_id(output_directory: &str) -> usize {
    match std::fs::read_dir(output_directory) {
        Ok(entries) => entries
            .filter_map(|entry| {
                let entry = entry.ok()?;
                if entry.path().is_dir() {
                    entry.file_name().to_string_lossy().parse::<usize>().ok()
                } else {
                    None
                }
            })
            .max()
            .map_or(0, |max_id| max_id + 1),
        Err(_) => 0,
    }
}
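
For illustration, a hypothetical usage of the session numbering above (the `capture` root is just an example):

let session = Session::new("capture".to_string());
// The first session becomes `capture/0`, the next `capture/1`, and so on;
// get_next_session_id skips non-numeric entries and resumes after the max id.
assert!(std::path::Path::new(&session.directory).is_dir());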

add a pipeline config for the session describing which components to run

// TODO: add a pipeline config for the session describing which components to run

#[derive(Component, Reflect)]
pub struct PipelineConfig {
    pub raw_frames: bool,
    pub subject_refinement: bool,
    pub repair_frames: bool,
    pub upsample_frames: bool,
    pub mask_frames: bool,
    pub light_field_cameras: bool,
    pub depth_maps: bool,
    pub gaussian_cloud: bool,
}

deduplicate start recording logic

// TODO: deduplicate start recording logic

}


fn automatic_recording(
    mut commands: Commands,
    time: Res<Time>,
    mut ev_person: EventReader<PersonDetectedEvent>,
    stream_manager: Res<RtspStreamManager>,
    mut live_session: ResMut<LiveSession>,
    mut person_timeout: Local<Stopwatch>,
) {
    if live_session.0.is_some() {
        if person_timeout.elapsed_secs() > 3.0 {
            person_timeout.reset();

            println!("no person detected for 3 seconds, stopping recording");

            let session_entity = live_session.0.take().unwrap();
            let raw_streams = stream_manager.stop_recording();

            commands.entity(session_entity)
                .insert(RawStreams {
                    streams: raw_streams,
                });
        }

        person_timeout.tick(time.delta());

        for _ev in ev_person.read() {
            person_timeout.reset();
        }

        return;
    }

    for ev in ev_person.read() {
        println!("person detected: {:?}", ev);

        // TODO: deduplicate start recording logic
        let session = Session::new("capture".to_string());

        stream_manager.start_recording(
            &session,
        );

        // TODO: build pipeline config from args
        let entity = commands.spawn(
            StreamSessionBundle {
                session,
                raw_streams: RawStreams::default(),
                config: PipelineConfig::default(),
            },
        ).id();
        live_session.0 = Some(entity);

        break;
    }
}
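
One way the duplicated start-recording block could be factored out, per the TODO; the helper name and parameter types are assumptions, not the repo's design:

fn start_session(
    commands: &mut Commands,
    stream_manager: &RtspStreamManager,
    live_session: &mut LiveSession,
) -> Entity {
    let session = Session::new("capture".to_string());
    stream_manager.start_recording(&session);

    // TODO: build pipeline config from args
    let entity = commands.spawn(
        StreamSessionBundle {
            session,
            raw_streams: RawStreams::default(),
            config: PipelineConfig::default(),
        },
    ).id();

    live_session.0 = Some(entity);
    entity
}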

add thresholds for detection

// TODO: add thresholds for detection

use std::cmp::{max, min};

use bevy::prelude::*;
use image::DynamicImage;
use rayon::prelude::*;

use crate::{
    matting::MattedStream,
    stream::StreamId,
};


pub struct PersonDetectPlugin;

impl Plugin for PersonDetectPlugin {
    fn build(&self, app: &mut App) {
        app.add_systems(Update, detect_person);
    }

}


#[derive(Component)]
pub struct DetectPersons;


#[derive(Debug, Clone, Reflect, PartialEq)]
pub struct BoundingBox {
    pub x: i32,
    pub y: i32,
    pub width: i32,
    pub height: i32,
}

#[derive(Event, Debug, Reflect, Clone)]
pub struct PersonDetectedEvent {
    pub stream_id: StreamId,
    pub bounding_box: BoundingBox,
    pub mask_sum: f32,
}


fn detect_person(
    mut ev_asset: EventReader<AssetEvent<Image>>,
    mut ev_person_detected: EventWriter<PersonDetectedEvent>,
    person_detect_streams: Query<(
        &MattedStream,
        &DetectPersons,
    )>,
    images: Res<Assets<Image>>,
) {
    for ev in ev_asset.read() {
        match ev {
            AssetEvent::Modified { id } => {
                for (matted_stream, _) in person_detect_streams.iter() {
                    if &matted_stream.output.id() == id {
                        let image = images.get(&matted_stream.output).unwrap().clone().try_into_dynamic().unwrap();

                        let bounding_box = masked_bounding_box(&image);
                        let sum = sum_masked_pixels(&image);

                        println!("bounding box: {:?}, sum: {}", bounding_box, sum);

                        // TODO: add thresholds for detection
                        let person_detected = false;
                        if person_detected {
                            ev_person_detected.send(PersonDetectedEvent {
                                stream_id: matted_stream.stream_id,
                                bounding_box: bounding_box.unwrap(),
                                mask_sum: sum,
                            });
                        }
                    }
                }
            }
            _ => {}
        }
    }
}
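
// A hedged sketch of the detection thresholds the TODO above asks for; both
// cutoffs are made-up tuning constants, not measured values from the repo.
const MIN_MASK_SUM: f32 = 50.0;    // assumed: ~50 fully-on mask pixels
const MIN_BOX_AREA: i32 = 16 * 16; // assumed: ignore boxes smaller than 16x16

fn passes_detection_thresholds(bounding_box: &Option<BoundingBox>, mask_sum: f32) -> bool {
    match bounding_box {
        Some(bb) => mask_sum > MIN_MASK_SUM && bb.width * bb.height > MIN_BOX_AREA,
        None => false,
    }
}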



pub fn masked_bounding_box(image: &DynamicImage) -> Option<BoundingBox> {
    let img = image.as_luma8().unwrap();

    let bounding_boxes = img.enumerate_pixels()
        .par_bridge()
        .filter_map(|(x, y, pixel)| {
            if pixel[0] > 128 {
                Some((x as i32, y as i32, x as i32, y as i32))
            } else {
                None
            }
        })
        .reduce_with(|(
            min_x1,
            min_y1,
            max_x1,
            max_y1,
        ), (
            min_x2,
            min_y2,
            max_x2,
            max_y2,
        )| {
            (
                min(min_x1, min_x2),
                min(min_y1, min_y2),
                max(max_x1, max_x2),
                max(max_y1, max_y2),
            )
        });

    bounding_boxes.map(|(
        min_x,
        min_y,
        max_x,
        max_y
    )| {
        BoundingBox {
            x: min_x,
            y: min_y,
            width: max_x - min_x + 1,
            height: max_y - min_y + 1,
        }
    })
}


pub fn sum_masked_pixels(image: &DynamicImage) -> f32 {
    let img = image.as_luma8().unwrap();
    let pixels = img.pixels();

    let count = pixels.par_bridge()
        .map(|pixel| {
            pixel.0[0] as f32 / 255.0
        })
        .sum();

    count
}



#[cfg(test)]
mod tests {
    use super::*;
    use image::{ImageBuffer, Luma};
    use approx::assert_relative_eq;


    #[test]
    fn test_masked_bounding_box() {
        let width = 10;
        let height = 10;
        let mut img: ImageBuffer<Luma<u8>, Vec<u8>> = ImageBuffer::new(width, height);

        for x in 2..=5 {
            for y in 2..=5 {
                img.put_pixel(x, y, Luma([200]));
            }
        }

        let dynamic_img = DynamicImage::ImageLuma8(img);
        let result = masked_bounding_box(&dynamic_img).expect("expected a bounding box");

        let expected = BoundingBox {
            x: 2,
            y: 2,
            width: 4,
            height: 4,
        };
        assert_eq!(result, expected, "the computed bounding box did not match the expected values.");
    }


    #[test]
    fn test_sum_masked_pixels() {
        let width = 4;
        let height = 4;
        let mut img: ImageBuffer<Luma<u8>, Vec<u8>> = ImageBuffer::new(width, height);

        img.put_pixel(0, 0, Luma([255]));
        img.put_pixel(1, 0, Luma([127]));
        img.put_pixel(2, 0, Luma([63]));

        let dynamic_img = DynamicImage::ImageLuma8(img);
        let result = sum_masked_pixels(&dynamic_img);

        let expected = (255.0 + 127.0 + 63.0) / 255.0;
        assert_relative_eq!(result, expected);
    }
}

use an entity saver to write Session and its components (e.g. `0/session.ron`)

// TODO: use an entity saver to write Session and its components (e.g. `0/session.ron`)

#[derive(Bundle, Default, Reflect)]
pub struct StreamSessionBundle {
    pub config: PipelineConfig,
    pub raw_streams: RawStreams,
    pub session: Session,
}

add an edit list so the video and audio tracks are in sync.


// TODO: add an edit list so the video and audio tracks are in sync.

// https://github.com/scottlamb/retina/blob/main/examples/client/src/mp4.rs

use anyhow::{anyhow, bail, Error};
use bytes::{Buf, BufMut, BytesMut};
use retina::codec::{AudioParameters, ParametersRef, VideoParameters};

use std::convert::TryFrom;
use std::io::SeekFrom;
use tokio::io::{AsyncSeek, AsyncSeekExt, AsyncWrite, AsyncWriteExt};


/// Writes a box length for everything appended in the supplied scope.
macro_rules! write_box {
    ($buf:expr, $fourcc:expr, $b:block) => {{
        let _: &mut BytesMut = $buf; // type-check.
        let pos_start = ($buf as &BytesMut).len();
        let fourcc: &[u8; 4] = $fourcc;
        $buf.extend_from_slice(&[0, 0, 0, 0, fourcc[0], fourcc[1], fourcc[2], fourcc[3]]);
        let r = {
            $b;
        };
        let pos_end = ($buf as &BytesMut).len();
        let len = pos_end.checked_sub(pos_start).unwrap();
        $buf[pos_start..pos_start + 4].copy_from_slice(&u32::try_from(len)?.to_be_bytes()[..]);
        r
    }};
}
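
// Illustrative use of write_box! (not from the repo): box lengths are
// back-patched after the block runs, so nested boxes can be emitted without
// knowing their sizes up front. The `?` inside the macro means it must be
// called from a function returning Result.
fn write_free_box(buf: &mut BytesMut) -> Result<(), Error> {
    write_box!(buf, b"free", {
        buf.put_u32(0); // arbitrary payload
    });
    Ok(())
}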

/// Writes `.mp4` data to a sink.
/// See module-level documentation for details.
pub struct Mp4Writer<W: AsyncWrite + AsyncSeek + Send + Unpin> {
    mdat_start: u32,
    mdat_pos: u32,
    video_params: Vec<VideoParameters>,

    /// The most recently used 1-based index within `video_params`.
    cur_video_params_sample_description_index: Option<u32>,
    audio_params: Option<Box<AudioParameters>>,
    allow_loss: bool,

    /// The (1-indexed) video sample (frame) number of each sync sample (random access point).
    video_sync_sample_nums: Vec<u32>,

    video_trak: TrakTracker,
    audio_trak: TrakTracker,
    inner: W,
}

/// A chunk: a group of samples that have consecutive byte positions and same sample description.
struct Chunk {
    first_sample_number: u32, // 1-based index
    byte_pos: u32,            // starting byte of first sample
    sample_description_index: u32,
}

/// Tracks the parts of a `trak` atom which are common between video and audio samples.
#[derive(Default)]
struct TrakTracker {
    samples: u32,
    next_pos: Option<u32>,
    chunks: Vec<Chunk>,
    sizes: Vec<u32>,

    /// The durations of samples in a run-length encoding form: (number of samples, duration).
    /// This lags one sample behind calls to `add_sample` because each sample's duration
    /// is calculated using the PTS of the following sample.
    durations: Vec<(u32, u32)>,
    last_pts: Option<i64>,
    tot_duration: u64,
}

impl TrakTracker {
    fn add_sample(
        &mut self,
        sample_description_index: u32,
        byte_pos: u32,
        size: u32,
        timestamp: retina::Timestamp,
        loss: u16,
        allow_loss: bool,
    ) -> Result<(), Error> {
        if self.samples > 0 && loss > 0 && !allow_loss {
            bail!("Lost {} RTP packets mid-stream", loss);
        }
        self.samples += 1;
        if self.next_pos != Some(byte_pos)
            || self.chunks.last().map(|c| c.sample_description_index)
                != Some(sample_description_index)
        {
            self.chunks.push(Chunk {
                first_sample_number: self.samples,
                byte_pos,
                sample_description_index,
            });
        }
        self.sizes.push(size);
        self.next_pos = Some(byte_pos + size);
        if let Some(last_pts) = self.last_pts.replace(timestamp.timestamp()) {
            let duration = timestamp.timestamp().checked_sub(last_pts).unwrap();
            self.tot_duration += u64::try_from(duration).unwrap();
            let duration = u32::try_from(duration)?;
            match self.durations.last_mut() {
                Some((s, d)) if *d == duration => *s += 1,
                _ => self.durations.push((1, duration)),
            }
        }
        Ok(())
    }

    fn finish(&mut self) {
        if self.last_pts.is_some() {
            self.durations.push((1, 0));
        }
    }

    /// Estimates the sum of the variable-sized portions of the data.
    fn size_estimate(&self) -> usize {
        (self.durations.len() * 8) + // stts
        (self.chunks.len() * 12) +   // stsc
        (self.sizes.len() * 4) +     // stsz
        (self.chunks.len() * 4) // stco
    }

    fn write_common_stbl_parts(&self, buf: &mut BytesMut) -> Result<(), Error> {
        // TODO: add an edit list so the video and audio tracks are in sync.
        write_box!(buf, b"stts", {
            buf.put_u32(0); // version
            buf.put_u32(u32::try_from(self.durations.len())?); // entry_count
            for (samples, duration) in &self.durations {
                buf.put_u32(*samples);
                buf.put_u32(*duration);
            }
        });
        write_box!(buf, b"stsc", {
            buf.put_u32(0); // version
            buf.put_u32(u32::try_from(self.chunks.len())?);
            let mut prev_sample_number = 1;
            let mut chunk_number = 1;
            if !self.chunks.is_empty() {
                for c in &self.chunks[1..] {
                    buf.put_u32(chunk_number);
                    buf.put_u32(c.first_sample_number - prev_sample_number);
                    buf.put_u32(c.sample_description_index);
                    prev_sample_number = c.first_sample_number;
                    chunk_number += 1;
                }
                buf.put_u32(chunk_number);
                buf.put_u32(self.samples + 1 - prev_sample_number);
                buf.put_u32(1); // sample_description_index
            }
        });
        write_box!(buf, b"stsz", {
            buf.put_u32(0); // version
            buf.put_u32(0); // sample_size
            buf.put_u32(u32::try_from(self.sizes.len())?);
            for s in &self.sizes {
                buf.put_u32(*s);
            }
        });
        write_box!(buf, b"stco", {
            buf.put_u32(0); // version
            buf.put_u32(u32::try_from(self.chunks.len())?); // entry_count
            for c in &self.chunks {
                buf.put_u32(c.byte_pos);
            }
        });
        Ok(())
    }
}
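
// A hedged sketch of the edit list the TODO in write_common_stbl_parts asks
// for: an `edts`/`elst` box that would be emitted under each `trak` (not in
// the stbl). media_time = 0 is an assumption; syncing the tracks would derive
// it from the first PTS of each track.
fn write_edit_list(buf: &mut BytesMut, tot_duration: u64) -> Result<(), Error> {
    write_box!(buf, b"edts", {
        write_box!(buf, b"elst", {
            buf.put_u32(1 << 24); // version 1, flags 0
            buf.put_u32(1); // entry_count
            buf.put_u64(tot_duration); // segment_duration
            buf.put_i64(0); // media_time (assumed: no initial offset)
            buf.put_u16(1); // media_rate_integer
            buf.put_u16(0); // media_rate_fraction
        });
    });
    Ok(())
}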

impl<W: AsyncWrite + AsyncSeek + Send + Unpin> Mp4Writer<W> {
    pub async fn new(
        audio_params: Option<Box<AudioParameters>>,
        allow_loss: bool,
        mut inner: W,
    ) -> Result<Self, Error> {
        let mut buf = BytesMut::new();
        write_box!(&mut buf, b"ftyp", {
            buf.extend_from_slice(&[
                b'i', b's', b'o', b'm', // major_brand
                0, 0, 0, 0, // minor_version
                b'i', b's', b'o', b'm', // compatible_brands[0]
            ]);
        });
        buf.extend_from_slice(&b"\0\0\0\0mdat"[..]);
        let mdat_start = u32::try_from(buf.len())?;
        inner.write_all(&buf).await?;
        Ok(Mp4Writer {
            inner,
            video_params: Vec::new(),
            cur_video_params_sample_description_index: None,
            audio_params,
            allow_loss,
            video_trak: TrakTracker::default(),
            audio_trak: TrakTracker::default(),
            video_sync_sample_nums: Vec::new(),
            mdat_start,
            mdat_pos: mdat_start,
        })
    }

    pub async fn finish(mut self) -> Result<(), Error> {
        self.video_trak.finish();
        self.audio_trak.finish();
        let mut buf = BytesMut::with_capacity(
            1024 + self.video_trak.size_estimate()
                + self.audio_trak.size_estimate()
                + 4 * self.video_sync_sample_nums.len(),
        );
        write_box!(&mut buf, b"moov", {
            write_box!(&mut buf, b"mvhd", {
                buf.put_u32(1 << 24); // version
                buf.put_u64(0); // creation_time
                buf.put_u64(0); // modification_time
                buf.put_u32(90000); // timescale
                buf.put_u64(self.video_trak.tot_duration);
                buf.put_u32(0x00010000); // rate
                buf.put_u16(0x0100); // volume
                buf.put_u16(0); // reserved
                buf.put_u64(0); // reserved
                for v in &[0x00010000, 0, 0, 0, 0x00010000, 0, 0, 0, 0x40000000] {
                    buf.put_u32(*v); // matrix
                }
                for _ in 0..6 {
                    buf.put_u32(0); // pre_defined
                }
                buf.put_u32(2); // next_track_id
            });
            if self.video_trak.samples > 0 {
                self.write_video_trak(&mut buf)?;
            }
            if self.audio_trak.samples > 0 {
                self.write_audio_trak(&mut buf, self.audio_params.as_ref().unwrap())?;
            }
        });
        self.inner.write_all(&buf).await?;
        self.inner
            .seek(SeekFrom::Start(u64::from(self.mdat_start - 8)))
            .await?;
        self.inner
            .write_all(&(self.mdat_pos + 8 - self.mdat_start).to_be_bytes()[..])
            .await?;
        Ok(())
    }

    fn write_video_trak(&self, buf: &mut BytesMut) -> Result<(), Error> {
        write_box!(buf, b"trak", {
            write_box!(buf, b"tkhd", {
                buf.put_u32((1 << 24) | 7); // version, flags
                buf.put_u64(0); // creation_time
                buf.put_u64(0); // modification_time
                buf.put_u32(1); // track_id
                buf.put_u32(0); // reserved
                buf.put_u64(self.video_trak.tot_duration);
                buf.put_u64(0); // reserved
                buf.put_u16(0); // layer
                buf.put_u16(0); // alternate_group
                buf.put_u16(0); // volume
                buf.put_u16(0); // reserved
                for v in &[0x00010000, 0, 0, 0, 0x00010000, 0, 0, 0, 0x40000000] {
                    buf.put_u32(*v); // matrix
                }
                let dims = self.video_params.iter().fold((0, 0), |prev_dims, p| {
                    let dims = p.pixel_dimensions();
                    (
                        std::cmp::max(prev_dims.0, dims.0),
                        std::cmp::max(prev_dims.1, dims.1),
                    )
                });
                let width = u32::from(u16::try_from(dims.0)?) << 16;
                let height = u32::from(u16::try_from(dims.1)?) << 16;
                buf.put_u32(width);
                buf.put_u32(height);
            });
            write_box!(buf, b"mdia", {
                write_box!(buf, b"mdhd", {
                    buf.put_u32(1 << 24); // version
                    buf.put_u64(0); // creation_time
                    buf.put_u64(0); // modification_time
                    buf.put_u32(90000); // timebase
                    buf.put_u64(self.video_trak.tot_duration);
                    buf.put_u32(0x55c40000); // language=und + pre-defined
                });
                write_box!(buf, b"hdlr", {
                    buf.extend_from_slice(&[
                        0x00, 0x00, 0x00, 0x00, // version + flags
                        0x00, 0x00, 0x00, 0x00, // pre_defined
                        b'v', b'i', b'd', b'e', // handler = vide
                        0x00, 0x00, 0x00, 0x00, // reserved[0]
                        0x00, 0x00, 0x00, 0x00, // reserved[1]
                        0x00, 0x00, 0x00, 0x00, // reserved[2]
                        0x00, // name, zero-terminated (empty)
                    ]);
                });
                write_box!(buf, b"minf", {
                    write_box!(buf, b"vmhd", {
                        buf.put_u32(1); // version + flags (flag 1 is required)
                        buf.put_u64(0); // graphicsmode + opcolor
                    });
                    write_box!(buf, b"dinf", {
                        write_box!(buf, b"dref", {
                            buf.put_u32(0);
                            buf.put_u32(1); // entry_count
                            write_box!(buf, b"url ", {
                                buf.put_u32(1); // version, flags=self-contained
                            });
                        });
                    });
                    write_box!(buf, b"stbl", {
                        write_box!(buf, b"stsd", {
                            buf.put_u32(0); // version
                            buf.put_u32(u32::try_from(self.video_params.len())?); // entry_count
                            for p in &self.video_params {
                                self.write_video_sample_entry(buf, p)?;
                            }
                        });
                        self.video_trak.write_common_stbl_parts(buf)?;
                        write_box!(buf, b"stss", {
                            buf.put_u32(0); // version
                            buf.put_u32(u32::try_from(self.video_sync_sample_nums.len())?);
                            for n in &self.video_sync_sample_nums {
                                buf.put_u32(*n);
                            }
                        });
                    });
                });
            });
        });
        Ok(())
    }

    fn write_audio_trak(
        &self,
        buf: &mut BytesMut,
        parameters: &AudioParameters,
    ) -> Result<(), Error> {
        write_box!(buf, b"trak", {
            write_box!(buf, b"tkhd", {
                buf.put_u32((1 << 24) | 7); // version, flags
                buf.put_u64(0); // creation_time
                buf.put_u64(0); // modification_time
                buf.put_u32(2); // track_id
                buf.put_u32(0); // reserved
                buf.put_u64(self.audio_trak.tot_duration);
                buf.put_u64(0); // reserved
                buf.put_u16(0); // layer
                buf.put_u16(0); // alternate_group
                buf.put_u16(0); // volume
                buf.put_u16(0); // reserved
                for v in &[0x00010000, 0, 0, 0, 0x00010000, 0, 0, 0, 0x40000000] {
                    buf.put_u32(*v); // matrix
                }
                buf.put_u32(0); // width
                buf.put_u32(0); // height
            });
            write_box!(buf, b"mdia", {
                write_box!(buf, b"mdhd", {
                    buf.put_u32(1 << 24); // version
                    buf.put_u64(0); // creation_time
                    buf.put_u64(0); // modification_time
                    buf.put_u32(parameters.clock_rate());
                    buf.put_u64(self.audio_trak.tot_duration);
                    buf.put_u32(0x55c40000); // language=und + pre-defined
                });
                write_box!(buf, b"hdlr", {
                    buf.extend_from_slice(&[
                        0x00, 0x00, 0x00, 0x00, // version + flags
                        0x00, 0x00, 0x00, 0x00, // pre_defined
                        b's', b'o', b'u', b'n', // handler = soun
                        0x00, 0x00, 0x00, 0x00, // reserved[0]
                        0x00, 0x00, 0x00, 0x00, // reserved[1]
                        0x00, 0x00, 0x00, 0x00, // reserved[2]
                        0x00, // name, zero-terminated (empty)
                    ]);
                });
                write_box!(buf, b"minf", {
                    write_box!(buf, b"smhd", {
                        buf.extend_from_slice(&[
                            0x00, 0x00, 0x00, 0x00, // version + flags
                            0x00, 0x00, // balance
                            0x00, 0x00, // reserved
                        ]);
                    });
                    write_box!(buf, b"dinf", {
                        write_box!(buf, b"dref", {
                            buf.put_u32(0);
                            buf.put_u32(1); // entry_count
                            write_box!(buf, b"url ", {
                                buf.put_u32(1); // version, flags=self-contained
                            });
                        });
                    });
                    write_box!(buf, b"stbl", {
                        write_box!(buf, b"stsd", {
                            buf.put_u32(0); // version
                            buf.put_u32(1); // entry_count
                            buf.extend_from_slice(
                                parameters
                                    .sample_entry()
                                    .expect("all added streams have sample entries"),
                            );
                        });
                        self.audio_trak.write_common_stbl_parts(buf)?;

                        // AAC requires two samples (really, each is a set of 960 or 1024 samples)
                        // to decode accurately. See
                        // https://developer.apple.com/library/archive/documentation/QuickTime/QTFF/QTFFAppenG/QTFFAppenG.html .
                        write_box!(buf, b"sgpd", {
                            // BMFF section 8.9.3: SampleGroupDescriptionBox
                            buf.put_u32(0); // version
                            buf.extend_from_slice(b"roll"); // grouping type
                            buf.put_u32(1); // entry_count
                                            // BMFF section 10.1: AudioRollRecoveryEntry
                            buf.put_i16(-1); // roll_distance
                        });
                        write_box!(buf, b"sbgp", {
                            // BMFF section 8.9.2: SampleToGroupBox
                            buf.put_u32(0); // version
                            buf.extend_from_slice(b"roll"); // grouping type
                            buf.put_u32(1); // entry_count
                            buf.put_u32(self.audio_trak.samples);
                            buf.put_u32(1); // group_description_index
                        });
                    });
                });
            });
        });
        Ok(())
    }

    fn write_video_sample_entry(
        &self,
        buf: &mut BytesMut,
        parameters: &VideoParameters,
    ) -> Result<(), Error> {
        // TODO: this should move to client::VideoParameters::sample_entry() or some such.
        write_box!(buf, b"avc1", {
            buf.put_u32(0);
            buf.put_u32(1); // data_reference_index = 1
            buf.extend_from_slice(&[0; 16]);
            buf.put_u16(u16::try_from(parameters.pixel_dimensions().0)?);
            buf.put_u16(u16::try_from(parameters.pixel_dimensions().1)?);
            buf.extend_from_slice(&[
                0x00, 0x48, 0x00, 0x00, // horizresolution
                0x00, 0x48, 0x00, 0x00, // vertresolution
                0x00, 0x00, 0x00, 0x00, // reserved
                0x00, 0x01, // frame count
                0x00, 0x00, 0x00, 0x00, // compressorname
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x00, 0x00, 0x00, //
                0x00, 0x18, 0xff, 0xff, // depth + pre_defined
            ]);
            write_box!(buf, b"avcC", {
                buf.extend_from_slice(parameters.extra_data());
            });
        });
        Ok(())
    }

    pub async fn video(
        &mut self,
        stream: &retina::client::Stream,
        frame: &retina::codec::VideoFrame,
    ) -> Result<(), Error> {
        let sample_description_index = if let (Some(i), false) = (
            self.cur_video_params_sample_description_index,
            frame.has_new_parameters(),
        ) {
            // Use the most recent sample description index for most frames, without having to
            // scan through self.video_sample_index.
            i
        } else {
            match stream.parameters() {
                Some(ParametersRef::Video(params)) => {
                    let pos = self.video_params.iter().position(|p| p == params);
                    if let Some(pos) = pos {
                        u32::try_from(pos + 1)?
                    } else {
                        self.video_params.push(params.clone());
                        u32::try_from(self.video_params.len())?
                    }
                }
                None => {
                    return Ok(());
                }
                _ => unreachable!(),
            }
        };
        self.cur_video_params_sample_description_index = Some(sample_description_index);
        let size = u32::try_from(frame.data().remaining())?;
        self.video_trak.add_sample(
            sample_description_index,
            self.mdat_pos,
            size,
            frame.timestamp(),
            frame.loss(),
            self.allow_loss,
        )?;
        self.mdat_pos = self
            .mdat_pos
            .checked_add(size)
            .ok_or_else(|| anyhow!("mdat_pos overflow"))?;
        if frame.is_random_access_point() {
            self.video_sync_sample_nums.push(self.video_trak.samples);
        }
        self.inner.write_all(frame.data()).await?;
        Ok(())
    }

    pub async fn audio(&mut self, frame: retina::codec::AudioFrame) -> Result<(), Error> {
        println!(
            "{}: {}-byte audio frame",
            frame.timestamp(),
            frame.data().remaining()
        );
        let size = u32::try_from(frame.data().remaining())?;
        self.audio_trak.add_sample(
            /* sample_description_index */ 1,
            self.mdat_pos,
            size,
            frame.timestamp(),
            frame.loss(),
            self.allow_loss,
        )?;
        self.mdat_pos = self
            .mdat_pos
            .checked_add(size)
            .ok_or_else(|| anyhow!("mdat_pos overflow"))?;
        self.inner.write_all(frame.data()).await?;
        Ok(())
    }
}

use a default 'stream loading' texture

// TODO: use a default 'stream loading' texture

    pub fn new(
        descriptor: StreamDescriptor,
        id: StreamId,
        images: &mut Assets<Image>,
    ) -> Self {
        let size = Extent3d {
            width: 32,
            height: 32,
            ..default()
        };

        // TODO: use a default 'stream loading' texture

        let mut image = Image {
            asset_usage: RenderAssetUsages::all(),
            texture_descriptor: TextureDescriptor {
                label: None,
                size,
                dimension: TextureDimension::D2,
                format: TextureFormat::Rgba8UnormSrgb,
                mip_level_count: 1,
                sample_count: 1,
                usage: TextureUsages::COPY_DST
                    | TextureUsages::TEXTURE_BINDING
                    | TextureUsages::RENDER_ATTACHMENT,
                view_formats: &[TextureFormat::Rgba8UnormSrgb],
            },
            ..default()
        };
        image.resize(size);

        let image = images.add(image);

        Self {
            descriptor,
            id,

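For the 'stream loading' TODO above, a minimal sketch that decodes a bundled placeholder instead of allocating a blank 32x32 texture; the asset path, the helper name, and the use of the image crate here are assumptions, not part of the current implementation:

fn stream_loading_texture(images: &mut Assets<Image>) -> Handle<Image> {
    // hypothetical placeholder asset, embedded at compile time
    let bytes = include_bytes!("../assets/textures/stream_loading.png");

    let placeholder = image::load_from_memory(bytes)
        .expect("placeholder texture should decode")
        .to_rgba8();
    let (width, height) = placeholder.dimensions();

    let mut image = Image::new(
        Extent3d { width, height, depth_or_array_layers: 1 },
        TextureDimension::D2,
        placeholder.into_raw(),
        TextureFormat::Rgba8UnormSrgb,
        RenderAssetUsages::all(),
    );

    // the stream decoder still copies into this texture, so it keeps the same
    // usage flags as the blank texture above
    image.texture_descriptor.usage = TextureUsages::COPY_DST
        | TextureUsages::TEXTURE_BINDING
        | TextureUsages::RENDER_ATTACHMENT;

    images.add(image)
}
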
support loading maskframes -> images into a pipeline mask viewer

// TODO: support loading maskframes -> images into a pipeline mask viewer

}


fn generate_rotated_frames(
    mut commands: Commands,
    descriptors: Res<StreamDescriptors>,
    raw_frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RawFrames,
            &Session,
        ),
        Without<RotatedFrames>,
    >,
) {
    // TODO: create a caching/loading system wrapper over run_node interior
    for (
        entity,
        config,
        raw_frames,
        session,
    ) in raw_frames.iter() {
        // TODO: get stream descriptor rotation

        if config.rotate_raw_frames {
            let run_node = !RotatedFrames::exists(session);
            let mut rotated_frames = RotatedFrames::load_from_session(session);

            if run_node {
                let rotations: HashMap<StreamId, f32> = descriptors.0.iter()
                    .enumerate()
                    .map(|(id, descriptor)| (StreamId(id), descriptor.rotation.unwrap_or_default()))
                    .collect();

                info!("generating rotated frames for session {}", session.id);

                raw_frames.frames.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", rotated_frames.directory, stream_id.0);
                        std::fs::create_dir_all(&output_directory).unwrap();

                        let frames = frames.par_iter()
                            .map(|frame| {
                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();
                                let output_path = format!("{}/{}.png", output_directory, frame_idx);

                                rotate_image(
                                    std::path::Path::new(frame),
                                    std::path::Path::new(&output_path),
                                    rotations[stream_id],
                                ).unwrap();

                                output_path
                            })
                            .collect::<Vec<_>>();

                        rotated_frames.frames.insert(*stream_id, frames);
                    });
            } else {
                info!("rotated frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(rotated_frames);
        }
    }
}
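
The caching TODO at the top of generate_rotated_frames repeats in every pipeline node below; one hedged shape for that wrapper, where PipelineNode and run_or_load are hypothetical names built on the exists/load_from_session pair each node already exposes:

trait PipelineNode: Sized {
    fn exists(session: &Session) -> bool;
    fn load_from_session(session: &Session) -> Self;
}

fn run_or_load<T: PipelineNode>(
    session: &Session,
    generate: impl FnOnce(&mut T),
) -> T {
    let cached = T::exists(session);
    let mut node = T::load_from_session(session);

    if !cached {
        // only run the expensive interior when nothing is cached on disk
        generate(&mut node);
    }

    node
}

generate_rotated_frames would then shrink to a run_or_load::<RotatedFrames> call wrapping the rotation loop, and the same wrapper would serve the mask and yolo nodes.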


fn generate_mask_frames(
    mut commands: Commands,
    frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RotatedFrames,
            &Session,
        ),
        Without<MaskFrames>,
    >,
    modnet: Res<Modnet>,
    onnx_assets: Res<Assets<Onnx>>,
) {
    for (
        entity,
        config,
        frames,
        session,
    ) in frames.iter() {
        if config.mask_frames {
            if onnx_assets.get(&modnet.onnx).is_none() {
                return;
            }

            let onnx = onnx_assets.get(&modnet.onnx).unwrap();
            let onnx_session_arc = onnx.session.clone();
            let onnx_session_lock = onnx_session_arc.lock().map_err(|e| e.to_string()).unwrap();
            let onnx_session = onnx_session_lock.as_ref().ok_or("failed to get session from ONNX asset").unwrap();

            let run_node = !MaskFrames::exists(session);
            let mut mask_frames = MaskFrames::load_from_session(session);

            if run_node {
                info!("generating mask frames for session {}", session.id);

                frames.frames.keys()
                    .for_each(|stream_id| {
                        let output_directory = format!("{}/{}", mask_frames.directory, stream_id.0);
                        std::fs::create_dir_all(output_directory).unwrap();
                    });

                let mask_images = frames.frames.iter()
                    .map(|(stream_id, frames)| {
                        let frames = frames.iter()
                            .map(|frame| {
                                let mut decoder = png::Decoder::new(std::fs::File::open(frame).unwrap());
                                decoder.set_transformations(Transformations::EXPAND | Transformations::ALPHA);
                                let mut reader = decoder.read_info().unwrap();
                                let mut img_data = vec![0; reader.output_buffer_size()];
                                let _ = reader.next_frame(&mut img_data).unwrap();

                                // the source PNG is RGB8; the ALPHA transformation expands the
                                // output buffer to RGBA for the Rgba8UnormSrgb image below
                                assert_eq!(reader.info().bytes_per_pixel(), 3);

                                let width = reader.info().width;
                                let height = reader.info().height;

                                // TODO: separate image loading and onnx inference (so the image loading result can be viewed in the pipeline grid view)
                                let image = Image::new(
                                    Extent3d {
                                        width,
                                        height,
                                        depth_or_array_layers: 1,
                                    },
                                    bevy::render::render_resource::TextureDimension::D2,
                                    img_data,
                                    bevy::render::render_resource::TextureFormat::Rgba8UnormSrgb,
                                    RenderAssetUsages::all(),
                                );

                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();

                                (
                                    frame_idx,
                                    modnet_inference(
                                        onnx_session,
                                        &[&image],
                                        Some((512, 512)),
                                    ).pop().unwrap(),
                                )
                            })
                            .collect::<Vec<_>>();

                        (stream_id, frames)
                    })
                    .collect::<Vec<_>>();

                mask_images.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", mask_frames.directory, stream_id.0);
                        let mask_paths = frames.iter()
                            .map(|(frame_idx, frame)| {
                                let path = format!("{}/{}.png", output_directory, frame_idx);

                                let buffer = ImageBuffer::<Luma<u8>, Vec<u8>>::from_raw(
                                    frame.width(),
                                    frame.height(),
                                    frame.data.clone(),
                                ).unwrap();

                                let _ = buffer.save(&path);

                                path
                            })
                            .collect::<Vec<_>>();

                        mask_frames.frames.insert(**stream_id, mask_paths);
                    });
            } else {
                info!("mask frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(mask_frames);
        }
    }
}
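
The PNG decode block above is repeated verbatim in generate_yolo_frames below; the TODO about separating image loading from inference could start by factoring it into a helper like this sketch (load_frame_image is a hypothetical name):

fn load_frame_image(frame: &str) -> Image {
    let mut decoder = png::Decoder::new(std::fs::File::open(frame).unwrap());
    decoder.set_transformations(Transformations::EXPAND | Transformations::ALPHA);
    let mut reader = decoder.read_info().unwrap();
    let mut img_data = vec![0; reader.output_buffer_size()];
    let _ = reader.next_frame(&mut img_data).unwrap();

    // RGB8 source, expanded to RGBA by the ALPHA transformation
    assert_eq!(reader.info().bytes_per_pixel(), 3);

    Image::new(
        Extent3d {
            width: reader.info().width,
            height: reader.info().height,
            depth_or_array_layers: 1,
        },
        bevy::render::render_resource::TextureDimension::D2,
        img_data,
        bevy::render::render_resource::TextureFormat::Rgba8UnormSrgb,
        RenderAssetUsages::all(),
    )
}

A system could then insert the loaded images as a component for the pipeline grid view before inference runs.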


fn generate_yolo_frames(
    mut commands: Commands,
    raw_frames: Query<
        (
            Entity,
            &PipelineConfig,
            &RawFrames,
            &Session,
        ),
        Without<YoloFrames>,
    >,
    yolo: Res<Yolo>,
    onnx_assets: Res<Assets<Onnx>>,
) {
    for (
        entity,
        config,
        raw_frames,
        session,
    ) in raw_frames.iter() {
        if config.yolo {
            if onnx_assets.get(&yolo.onnx).is_none() {
                return;
            }

            let onnx = onnx_assets.get(&yolo.onnx).unwrap();
            let onnx_session_arc = onnx.session.clone();
            let onnx_session_lock = onnx_session_arc.lock().map_err(|e| e.to_string()).unwrap();
            let onnx_session = onnx_session_lock.as_ref().ok_or("failed to get session from ONNX asset").unwrap();

            let run_node = !YoloFrames::exists(session);
            let mut yolo_frames = YoloFrames::load_from_session(session);

            if run_node {
                info!("generating yolo frames for session {}", session.id);

                raw_frames.frames.keys()
                    .for_each(|stream_id| {
                        let output_directory = format!("{}/{}", yolo_frames.directory, stream_id.0);
                        std::fs::create_dir_all(output_directory).unwrap();
                    });

                // TODO: support async ort inference (re. progress bars)
                let bounding_box_streams = raw_frames.frames.iter()
                    .map(|(stream_id, frames)| {
                        let frames = frames.iter()
                            .map(|frame| {
                                let mut decoder = png::Decoder::new(std::fs::File::open(frame).unwrap());
                                decoder.set_transformations(Transformations::EXPAND | Transformations::ALPHA);
                                let mut reader = decoder.read_info().unwrap();
                                let mut img_data = vec![0; reader.output_buffer_size()];
                                let _ = reader.next_frame(&mut img_data).unwrap();

                                // the source PNG is RGB8; the ALPHA transformation expands the
                                // output buffer to RGBA for the Rgba8UnormSrgb image below
                                assert_eq!(reader.info().bytes_per_pixel(), 3);

                                let width = reader.info().width;
                                let height = reader.info().height;

                                // TODO: separate image loading and onnx inference (so the image loading result can be viewed in the pipeline grid view)
                                let image = Image::new(
                                    Extent3d {
                                        width,
                                        height,
                                        depth_or_array_layers: 1,
                                    },
                                    bevy::render::render_resource::TextureDimension::D2,
                                    img_data,
                                    bevy::render::render_resource::TextureFormat::Rgba8UnormSrgb,
                                    RenderAssetUsages::all(),
                                );

                                let frame_idx = std::path::Path::new(frame).file_stem().unwrap().to_str().unwrap();

                                (
                                    frame_idx,
                                    yolo_inference(
                                        onnx_session,
                                        &image,
                                        0.5,
                                    ),
                                )
                            })
                            .collect::<Vec<_>>();

                        (stream_id, frames)
                    })
                    .collect::<Vec<_>>();

                bounding_box_streams.iter()
                    .for_each(|(stream_id, frames)| {
                        let output_directory = format!("{}/{}", yolo_frames.directory, stream_id.0);
                        let bounding_boxes = frames.iter()
                            .map(|(frame_idx, bounding_boxes)| {
                                let path = format!("{}/{}.json", output_directory, frame_idx);

                                let _ = serde_json::to_writer(std::fs::File::create(path).unwrap(), bounding_boxes);

                                bounding_boxes.clone()
                            })
                            .collect::<Vec<_>>();

                        yolo_frames.frames.insert(**stream_id, bounding_boxes);
                    });
            } else {
                info!("yolo frames already exist for session {}", session.id);
            }

            commands.entity(entity).insert(yolo_frames);
        }
    }
}
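
For the async-inference TODO above, Bevy's AsyncComputeTaskPool would keep the app (and a progress bar) responsive while ort runs; a sketch where YoloInferenceTask and both systems are hypothetical names, and moving the ort session into the task assumes it is Send:

use bevy::tasks::{block_on, futures_lite::future, AsyncComputeTaskPool, Task};

#[derive(Component)]
struct YoloInferenceTask(Task<HashMap<StreamId, Vec<Vec<BoundingBox>>>>);

fn spawn_yolo_inference(mut commands: Commands /* plus frame/session params */) {
    let task = AsyncComputeTaskPool::get().spawn(async move {
        // run the per-frame yolo_inference loop here, reporting progress
        HashMap::new()
    });

    commands.spawn(YoloInferenceTask(task));
}

fn poll_yolo_inference(
    mut commands: Commands,
    mut tasks: Query<(Entity, &mut YoloInferenceTask)>,
) {
    for (entity, mut task) in tasks.iter_mut() {
        if let Some(bounding_boxes) = block_on(future::poll_once(&mut task.0)) {
            // inference finished: hand results to the pipeline, then clean up
            let _ = bounding_boxes;
            commands.entity(entity).remove::<YoloInferenceTask>();
        }
    }
}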


// TODO: alphablend frames
#[derive(Component, Default)]
pub struct AlphablendFrames {
    pub frames: HashMap<StreamId, Vec<String>>,
    pub directory: String,
}
impl AlphablendFrames {
    pub fn load_from_session(
        session: &Session,
    ) -> Self {
        let directory = format!("{}/alphablend", session.directory);
        std::fs::create_dir_all(&directory).unwrap();

        let mut alphablend_frames = Self {
            frames: HashMap::new(),
            directory,
        };
        alphablend_frames.reload();

        alphablend_frames
    }

    pub fn reload(&mut self) {
        std::fs::read_dir(&self.directory)
            .unwrap()
            .filter_map(|entry| entry.ok())
            .filter(|entry| entry.path().is_dir())
            .map(|stream_dir| {
                let stream_id = StreamId(stream_dir.path().file_name().unwrap().to_str().unwrap().parse::<usize>().unwrap());

                let frames = std::fs::read_dir(stream_dir.path()).unwrap()
                    .filter_map(|entry| entry.ok())
                    .filter(|entry| entry.path().is_file() && entry.path().extension().and_then(|s| s.to_str()) == Some("png"))
                    .map(|entry| entry.path().to_str().unwrap().to_string())
                    .collect::<Vec<_>>();

                (stream_id, frames)
            })
            .for_each(|(stream_id, frames)| {
                self.frames.insert(stream_id, frames);
            });
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/alphablend", session.directory);
        std::fs::metadata(output_directory).is_ok()
    }

    pub fn image(&self, _camera: usize, _frame: usize) -> Option<Image> {
        todo!()
    }
}
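
The blend itself reduces to writing each MODNet mask value into the alpha channel of the matching color frame; a sketch using the image crate, where alphablend_frame is a hypothetical helper:

fn alphablend_frame(
    color_path: &std::path::Path,
    mask_path: &std::path::Path,
    output_path: &std::path::Path,
) -> anyhow::Result<()> {
    let mut color = image::open(color_path)?.to_rgba8();
    let mask = image::open(mask_path)?.to_luma8();
    anyhow::ensure!(
        color.dimensions() == mask.dimensions(),
        "frame/mask dimension mismatch",
    );

    for (x, y, pixel) in color.enumerate_pixels_mut() {
        // the mask is single-channel; its value becomes the output alpha
        pixel[3] = mask.get_pixel(x, y)[0];
    }

    color.save(output_path)?;
    Ok(())
}

The surrounding node would mirror generate_mask_frames: walk RotatedFrames and MaskFrames in lockstep and fill AlphablendFrames::frames with the output paths.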



// TODO: support loading maskframes -> images into a pipeline mask viewer


#[derive(Component, Default)]
pub struct RawFrames {
    pub frames: HashMap<StreamId, Vec<String>>,
    pub directory: String,
}
impl RawFrames {
    pub fn load_from_session(
        session: &Session,
    ) -> Self {
        let directory = format!("{}/frames", session.directory);
        std::fs::create_dir_all(&directory).unwrap();

        let mut raw_frames = Self {
            frames: HashMap::new(),
            directory,
        };
        raw_frames.reload();

        raw_frames
    }

    pub fn reload(&mut self) {
        std::fs::read_dir(&self.directory)
            .unwrap()
            .filter_map(|entry| entry.ok())
            .filter(|entry| entry.path().is_dir())
            .map(|stream_dir| {
                let stream_id = StreamId(stream_dir.path().file_name().unwrap().to_str().unwrap().parse::<usize>().unwrap());

                let frames = std::fs::read_dir(stream_dir.path()).unwrap()
                    .filter_map(|entry| entry.ok())
                    .filter(|entry| entry.path().is_file() && entry.path().extension().and_then(|s| s.to_str()) == Some("png"))
                    .map(|entry| entry.path().to_str().unwrap().to_string())
                    .collect::<Vec<_>>();

                (stream_id, frames)
            })
            .for_each(|(stream_id, frames)| {
                self.frames.insert(stream_id, frames);
            });
    }

    pub fn exists(
        session: &Session,
    ) -> bool {
        let output_directory = format!("{}/frames", session.directory);
        std::fs::metadata(output_directory).is_ok()
    }
}

check the segmentation mask labeled for automatic recording detection

// TODO: check the segmentation mask labeled for automatic recording detection

    }
}


// TODO: add system to detect person mask in camera 0 for automatic recording
fn automatic_recording(
    mut commands: Commands,
    stream_manager: Res<RtspStreamManager>,
    mut live_session: ResMut<LiveSession>,
) {
    if live_session.0.is_some() {
        return;
    }

    // TODO: check the segmentation mask labeled for automatic recording detection
}
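
A hedged sketch of that check: threshold the foreground coverage of the latest MODNet mask for camera 0, where mask_has_person and both thresholds are assumptions rather than existing API:

fn mask_has_person(mask: &Image) -> bool {
    // the mask is a single-channel u8 image; count pixels above mid-gray
    let foreground = mask.data.iter().filter(|&&value| value > 128).count();

    // call it a person when at least 1% of the frame is foreground
    foreground * 100 > mask.data.len()
}

When this returns true, automatic_recording would start a session through stream_manager.start_recording, mirroring press_r_start_recording below.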


#[derive(Resource, Default)]
pub struct LiveSession(Option<Entity>);

fn press_r_start_recording(
    mut commands: Commands,
    keys: Res<ButtonInput<KeyCode>>,
    stream_manager: Res<RtspStreamManager>,
    mut live_session: ResMut<LiveSession>,
) {
    if keys.just_pressed(KeyCode::KeyR) {
        if live_session.0.is_some() {
            return;
        }

        let session = Session {
            directory: "capture".to_string(),
            ..default()
        };

        stream_manager.start_recording(
            &session.directory,
        );

        let entity = commands.spawn(
            StreamSessionBundle {
                session,
                raw_streams: RawStreams {
                    streams: vec![],
                },
                config: PipelineConfig::default(),
            },
        ).id();
        live_session.0 = Some(entity);
    }
}

fn press_s_stop_recording(
    mut commands: Commands,
    keys: Res<ButtonInput<KeyCode>>,
    stream_manager: Res<RtspStreamManager>,
    mut live_session: ResMut<LiveSession>,
) {
    if keys.just_pressed(KeyCode::KeyS) && live_session.0.is_some() {
        let session_entity = live_session.0.take().unwrap();

        let raw_streams = stream_manager.stop_recording();

        commands.entity(session_entity)
            .insert(RawStreams {
                streams: raw_streams,
            });
    }
}
