Deep Learning-Based Saliency Assessment Model for Product Placement in Video Advertisements
DOI: https://doi.org/10.69987/JACS.2024.40503

Keywords: Deep Learning, Product Placement Assessment, Visual Saliency Detection, Video Advertisement Analysis

Abstract
This paper proposes a novel deep learning-based saliency assessment model for evaluating product placement in video advertisements. The model incorporates a multi-scale feature extraction mechanism and temporal integration capabilities to analyze placement effectiveness across diverse advertising contexts. The architecture uses attention mechanisms to capture complex spatial-temporal relationships while maintaining computational efficiency. The system processes input video streams through parallel analysis paths, integrating information across multiple scales to generate accurate saliency predictions. The research introduces specialized evaluation metrics that combine spatial accuracy with temporal consistency measurements. Experimental results demonstrate superior performance compared to existing methods, achieving 94.8% accuracy on saliency prediction tasks while processing 42.3 frames per second. The model was evaluated on a comprehensive dataset of 10,000 video sequences spanning multiple product categories and placement strategies. Ablation studies validate the contribution of individual architectural components: the multi-scale feature extraction module provides a 15.2% improvement in accuracy, and temporal integration enhances performance by 12.8%. The proposed system establishes new benchmarks in automated product placement assessment, offering a practical solution for large-scale advertising analysis that maintains both high accuracy and computational efficiency.
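To make the abstract's architectural description concrete, the following is a minimal PyTorch sketch of a multi-scale feature extractor combined with temporal self-attention for per-frame saliency maps. It is illustrative only: the class names (MultiScaleExtractor, TemporalSaliencyModel), channel counts, number of scales, and attention configuration are assumptions for demonstration and are not taken from the paper's actual model.

```python
# Illustrative sketch only; names and hyperparameters are assumptions, not the paper's design.
import torch
import torch.nn as nn


class MultiScaleExtractor(nn.Module):
    """Extracts per-frame features at several spatial scales and fuses them."""

    def __init__(self, in_channels: int = 3, base_channels: int = 32, num_scales: int = 3):
        super().__init__()
        # One parallel branch per scale: downsample, convolve, restore resolution.
        # Frame sides are assumed divisible by 2 ** (num_scales - 1).
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.AvgPool2d(kernel_size=2 ** s) if s > 0 else nn.Identity(),
                nn.Conv2d(in_channels, base_channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Upsample(scale_factor=2 ** s, mode="bilinear", align_corners=False) if s > 0 else nn.Identity(),
            )
            for s in range(num_scales)
        )
        self.fuse = nn.Conv2d(base_channels * num_scales, base_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


class TemporalSaliencyModel(nn.Module):
    """Per-frame multi-scale features + self-attention over time -> saliency maps."""

    def __init__(self, base_channels: int = 32, num_heads: int = 4):
        super().__init__()
        self.extractor = MultiScaleExtractor(base_channels=base_channels)
        self.temporal_attn = nn.MultiheadAttention(
            embed_dim=base_channels, num_heads=num_heads, batch_first=True
        )
        self.head = nn.Conv2d(base_channels, 1, kernel_size=1)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.extractor(clip.reshape(b * t, c, h, w))          # (b*t, C, h, w)
        _, C, fh, fw = feats.shape
        # Treat each spatial location as a token sequence over time for attention.
        tokens = feats.reshape(b, t, C, fh * fw).permute(0, 3, 1, 2).reshape(b * fh * fw, t, C)
        attended, _ = self.temporal_attn(tokens, tokens, tokens)
        attended = attended.reshape(b, fh * fw, t, C).permute(0, 2, 3, 1).reshape(b * t, C, fh, fw)
        saliency = torch.sigmoid(self.head(attended))                 # (b*t, 1, fh, fw)
        return saliency.reshape(b, t, 1, fh, fw)


if __name__ == "__main__":
    model = TemporalSaliencyModel()
    clip = torch.randn(2, 8, 3, 64, 64)   # two clips of 8 RGB frames each
    maps = model(clip)
    print(maps.shape)                     # torch.Size([2, 8, 1, 64, 64])
```

The sketch mirrors the two ideas named in the abstract, parallel multi-scale analysis of each frame and attention-based temporal integration across frames, but the paper's actual feature backbones, fusion strategy, and evaluation metrics may differ substantially.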
License
Copyright © 2024 Journal of Advanced Computing Systems (JACS). All rights reserved. Authors retain copyright and grant JACS right of first publication under a Creative Commons Attribution License.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.