Review Summarization and Rating with BART

This project fine-tunes a BART model for multitask learning on Amazon reviews: it generates a concise summary of each review and predicts a rating from the review text. To improve training efficiency, Hugging Face’s Accelerate is used for distributed training, enabling multi-GPU support and mixed-precision training.
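A multitask setup like this is typically trained on a weighted combination of the two task losses. The sketch below is a minimal illustration of that idea; the actual weighting used in this project is an assumption, not taken from the training code.

```python
# Minimal sketch of a multitask training objective.
# The weight on the rating loss is a hypothetical default;
# the project's train.py may combine the losses differently.
def multitask_loss(summary_loss: float, rating_loss: float,
                   rating_weight: float = 0.5) -> float:
    """Combine the summarization loss and the rating-prediction
    loss into a single scalar to backpropagate through."""
    return summary_loss + rating_weight * rating_loss
```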

Project Components

Features

Gradio Interface Demo

Here’s a demonstration of the Gradio app in action:

Getting Started

Training the Model

  1. Get the Dataset: Obtain the Amazon review dataset used for training, then update your data configuration file to point to it.

  2. Configuration Files: Create and update your configuration files for data and training settings. Example files can be found in the config/ directory.
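For illustration, a data/training configuration might look like the fragment below. All keys here are hypothetical; check the example files in config/ for the actual schema.

```yaml
# Hypothetical example -- see config/ for the real keys.
data:
  dataset_path: data/amazon_reviews.csv
  max_input_length: 512
  max_summary_length: 64
training:
  model_name: facebook/bart-base
  batch_size: 16
  learning_rate: 3e-5
  num_epochs: 3
  mixed_precision: fp16
```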

  3. Run Training Script:

    python train.py
    

    This script will use the configurations specified in your YAML files to train the BART model and save the best-performing model.
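Because the project uses Hugging Face Accelerate, training can also be started through Accelerate's launcher to enable multi-GPU and mixed-precision runs. The commands below use Accelerate's standard CLI; whether train.py expects to be launched this way is an assumption.

```shell
# One-time interactive setup of the distributed / mixed-precision config:
accelerate config

# Launch training across the configured devices:
accelerate launch train.py
```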

Running the Application

  1. Start the Flask Server:
    python app.py
    

    This command will start the Flask server, which includes the Gradio interface for interacting with your model.

  2. Access the Web Interface: Open your web browser and navigate to http://localhost:5000 to view the Gradio interface embedded within the Flask application.
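One common pattern for serving a Gradio interface from a Flask app is to launch the Gradio app separately and embed it in a Flask page via an iframe. The sketch below assumes that pattern and Gradio's default port 7860; the actual app.py may wire the two together differently.

```python
from flask import Flask

app = Flask(__name__)

# Assumption: the Gradio app runs separately on its default port.
GRADIO_URL = "http://localhost:7860"

@app.route("/")
def index():
    # Embed the separately launched Gradio interface in the Flask page.
    return f'<iframe src="{GRADIO_URL}" width="100%" height="800"></iframe>'

if __name__ == "__main__":
    app.run(port=5000)
```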

API Usage

You can also interact with the model via the Flask API. Here is an example of how to use cURL to get a summary and rating:

curl -X POST http://localhost:5000/predict -H "Content-Type: application/json" -d '{"review_text": "Your review text here"}'

Response Example:

{
  "summary": "Generated summary of the review text.",
  "rating": 4
}
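The same call can be made from Python using only the standard library. The helper below mirrors the cURL example; build_predict_request is an illustrative name, not part of the project.

```python
import json
from urllib import request

def build_predict_request(review_text: str,
                          host: str = "http://localhost:5000") -> request.Request:
    """Build the same POST request that the cURL example sends."""
    payload = json.dumps({"review_text": review_text}).encode("utf-8")
    return request.Request(
        f"{host}/predict",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request (requires the Flask server to be running):
# with request.urlopen(build_predict_request("Great product!")) as resp:
#     result = json.load(resp)
#     print(result["summary"], result["rating"])
```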

Files

Dependencies

License

This project is licensed under the MIT License. See the LICENSE file for details.