Building a Sentiment Analysis API with Hugging Face and FastAPI

ai, python, fastapi, hugging-face, transformers, nlp

With just a few lines of Python, Hugging Face and FastAPI let you build a fully functional sentiment analysis API that could, for example, power a customer service bot that scores the sentiment of incoming feedback.

Setting Up the Project

Start by creating your project structure:

  1. Create a folder named api in the root of your project
  2. Inside the api folder, create index.py which will house the bulk of our API code
  3. Create requirements.txt in the root folder for our dependencies
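On macOS/Linux, the steps above can be sketched as shell commands (adjust paths for your OS):

```shell
# Create the api folder and the two files described above
mkdir -p api
touch api/index.py requirements.txt
```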

Populate requirements.txt with the following:

fastapi>=0.115.0
transformers>=4.44.0
torch>=2.4.0
uvicorn>=0.30.6

What These Libraries Do

  • fastapi - Web framework for building APIs
  • transformers - The Hugging Face library to work with pre-trained models
  • torch - PyTorch deep learning framework
  • uvicorn - A lightweight ASGI server to run and test the API

Building the FastAPI Application

Here's the full contents of index.py:

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="Sentiment Analysis API")

# Load model at module level so it persists between requests
classifier = pipeline("sentiment-analysis")

class TextRequest(BaseModel):
    text: str

class SentimentResponse(BaseModel):
    text: str
    label: str
    score: float

@app.get("/")
def root():
    return {"message": "Sentiment Analysis API", "docs": "/docs"}

@app.get("/health")
def health():
    return {"status": "healthy"}

@app.post("/analyze", response_model=SentimentResponse)
def analyze(request: TextRequest):
    if not request.text.strip():
        raise HTTPException(status_code=400, detail="Text cannot be empty")

    result = classifier(request.text)[0]

    return SentimentResponse(
        text=request.text,
        label=result["label"],
        score=round(result["score"], 4)
    )

Breaking It Down

The most important piece of code in the whole file:

classifier = pipeline("sentiment-analysis")

This loads and initializes a Hugging Face Transformers pipeline, specifying "sentiment-analysis" as the task.

The default model is distilbert-base-uncased-finetuned-sst-2-english. To use a different model, pass its name via the model argument of the pipeline function.
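As a sketch, pinning the default model explicitly looks like this (the first run downloads the model weights, so it needs a network connection):

```python
from transformers import pipeline

# Pin the model explicitly instead of relying on the pipeline default.
# This avoids surprises if the library's default ever changes.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The service was fantastic"))
```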

To analyze text, pass it to the classifier:

result = classifier(request.text)[0]

The pipeline returns a list with one result per input. Each result is a dict containing a label ("POSITIVE" or "NEGATIVE") and a confidence score.
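To illustrate the shape of that result without loading the model, here is a mocked output (the score value is a placeholder, not real model output):

```python
# The pipeline returns a list: one dict per input text.
# Sample values below are illustrative only.
mock_output = [{"label": "POSITIVE", "score": 0.9998}]

result = mock_output[0]
label = result["label"]            # "POSITIVE" or "NEGATIVE"
score = round(result["score"], 4)  # confidence between 0 and 1

print(label, score)
```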

Local Testing

To test the API locally, we'll use uvicorn. First, create a Python virtual environment:

python -m venv venv

Activate it:

# Windows
.\venv\Scripts\Activate

# macOS/Linux
source venv/bin/activate

Install the required packages:

pip install -r requirements.txt

Start uvicorn to serve the API:

uvicorn api.index:app --reload

Once the server starts, uvicorn prints the URL where the API is hosted (typically http://127.0.0.1:8000).

Visit the root URL and you should see:

{"message": "Sentiment Analysis API", "docs": "/docs"}

If you see this, the API has started successfully.

Testing with Swagger

Visit the /docs endpoint to bring up the Swagger interface. Expand the /analyze section and click "Try it out".

In the request body, replace "string" with something like "I love pizza" and click "Execute". You should get a response like:

{
  "text": "I love pizza",
  "label": "POSITIVE",
  "score": 0.9998
}

The response shows that our text was classified as "POSITIVE" with a confidence score of 0.9998.

You can also use curl or Postman to interface with the API instead of the built-in Swagger interface.
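For example, a curl request against the /analyze endpoint might look like this (the host and port assume the default uvicorn setup above, and the fallback message covers the case where the server isn't running):

```shell
# Assumes the uvicorn server from the previous section is running
curl -X POST http://127.0.0.1:8000/analyze \
  -H "Content-Type: application/json" \
  -d '{"text": "I love pizza"}' || echo "server not running"
```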

Conclusion

With just a few lines of code, we created a fully functioning sentiment analysis API. You can use this foundation to analyze user comments, customer reviews, or any text-based feedback in your applications.

The beauty of Hugging Face's pipeline abstraction is that you can swap out models easily—try different sentiment models or even switch to other NLP tasks like text classification, named entity recognition, or question answering with minimal code changes.