20.68.131.221

As of: Apr 27, 2025 7:47pm UTC | Latest

Basic Information

Forward DNS
services.mavistech.cloud, mavistech.cloud, ocr.mavistech.cloud, assistant.mavistech.cloud, gps.mavistech.cloud, ...
Routing
20.64.0.0/10  via MICROSOFT-CORP-MSN-AS-BLOCK, US (AS8075)
OS
Ubuntu Linux
Services (8)
22/SSH, 80/HTTP, 443/HTTP, 5000/HTTP, 5001/HTTP, 5002/HTTP, 5003/HTTP, 5004/HTTP
Labels
Bootstrap, Default Landing Page, Remote Access

SSH 22/TCP
04/27/2025 19:47 UTC

Remote Access

Software

Ubuntu Linux
OpenBSD OpenSSH 9.6p1

Details

Host Key
Algorithm
ecdsa-sha2-nistp256
Fingerprint
bdbd4015cfb8d48b42180bcbac017ca3c0958b14837b78943f23dd331428dbc9
Negotiated
Key Exchange
[email protected]
Symmetric Cipher
aes128-ctr (client and server)
MAC
hmac-sha2-256 (client and server)

HTTP 80/TCP
04/27/2025 14:08 UTC

Default Landing Page

Software

linux
nginx 1.24.0

Details

http://20.68.131.221/
Status
200  OK
Body Hash
sha1:c51a3f0e6de4eb802d5630941c3fd9e1d0efae4b
HTML Title
Welcome to nginx!
Response Body
      # Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to
[nginx.org](http://nginx.org/).  
Commercial support is available at [nginx.com](http://nginx.com/).

_Thank you for using nginx._
    

HTTP 443/TCP
04/27/2025 18:25 UTC


Software

nginx 1.24.0

Details

https://20.68.131.221/
Status
200  OK
Body Hash
sha1:476bf213e0150bf3cc16e497a4884bdeb79bf883
HTML Title
Voice Assistant API Server (Hybrid)
Response Body
      # Voice Assistant API Server (Hybrid Command Handling)

This server acts as the backend for a voice-enabled smart assistant, designed
to interact with client devices (like ESP32-based smart glasses). It processes
user input (voice or text), identifies specific device commands, or provides
general AI-powered assistance using Azure services.

**Authentication:** All API endpoints (except `/health`) require an
`Authentication` HTTP header containing the valid device **UID** (Unique
Identifier). Requests lacking this header or providing an invalid/unverifiable
UID will receive a `401 Unauthorized` or `403 Forbidden` error response.

**Hybrid Command Handling:** The server employs a three-stage process to
interpret user requests:

  1. **Hardcoded Command Detection:** Simple, critical, or time-sensitive commands (like emergency calls, scene descriptions, OCR triggers) are detected first using direct keyword matching within the server. This ensures maximum speed and reliability for essential functions.
  2. **AI-Powered Command Detection:** If no hardcoded command matches, the user's request is sent to Azure OpenAI (specifically prompted for this task) to detect more complex commands requiring better natural language understanding (like navigation, calls with names, text messages with content). The AI attempts to extract necessary parameters (e.g., destination, contact name, message body).
  3. **General AI Fallback:** If neither detection method identifies a specific command, the user's input is treated as a general question or statement for the conversational AI assistant.
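The three stages above can be sketched as a simple dispatch chain. This is an illustrative reconstruction, not the server's actual code: the keyword table, `detect_hardcoded`, `handle_request`, and the injected `ai_detect`/`ai_chat` callables are all assumed names.

```python
# Illustrative sketch of the hybrid dispatch order described above.
# The keyword table and function names are assumptions, not the server's code.
HARDCODED_COMMANDS = {
    13: ("emergency", "sos"),                       # Emergency Help
    5:  ("read this", "what does this text say"),   # OCR (Read Text)
    1:  ("describe scene",),                        # Short Scene Description
}

def detect_hardcoded(text):
    """Stage 1: direct keyword matching for critical commands."""
    lowered = text.lower()
    for feature, keywords in HARDCODED_COMMANDS.items():
        if any(k in lowered for k in keywords):
            return {"feature": feature, "parameters": {}}
    return None

def handle_request(text, ai_detect, ai_chat):
    """Stages 1-3: hardcoded match, then AI command detection, then AI fallback."""
    command = detect_hardcoded(text)        # stage 1
    if command is None:
        command = ai_detect(text)           # stage 2 (Azure OpenAI in production)
    if command is not None:
        return command                      # structured command JSON
    return {"response": ai_chat(text), "is_command": False}  # stage 3
```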

**Important:** When a specific command (either hardcoded or AI-detected) is
identified, the server's response is **always** a JSON object detailing the
feature to activate, sent with a text-based content type. This provides a
consistent, structured instruction for the client device (ESP32), regardless
of the user's general preference for text or audio responses.

## API Endpoints

### POST /assistant

Accepts an audio file containing the user's voice query. It transcribes the
audio, performs hybrid command detection, and responds appropriately.

#### Request

  * **Headers:**
    * `Authentication: ` (Required)
    * `Content-Type: multipart/form-data` (Required)
  * **Body (multipart/form-data):**
    * `audio`: The audio file (WAV, MP3, OGG, FLAC) containing the query (Required).

#### Response (200 OK)

The structure depends on whether a command was detected.

##### Case 1: Command Detected

If the input matches a known command (either hardcoded or AI-detected).

  * **Headers:**
    * `Content-Type: application/json`
    * `Result: 0` (Indicates the body is text-based JSON)
    * `X-Response-Type: command`
  * **Body (JSON):** A JSON object specifying the feature and parameters.

    
    
    {
      "feature": 13,
      "parameters": {}
    }
    _(Example: Emergency Call triggered by "Help me now")_
    
    
    {
      "feature": 7,
      "parameters": {
        "who": "Mom"
      }
    }
    _(Example: Audio Call triggered by "Call Mom")_
    
    
    {
      "feature": 9,
      "parameters": {
        "who": "Dr Smith",
        "message": "I will be 10 minutes late for my appointment"
      }
    }
    _(Example: Text Message triggered by "Text Dr Smith saying I will be 10 minutes late for my appointment")_
    
    
    {
      "feature": 6,
      "parameters": {
        "where": "nearest coffee shop"
      }
    }
    _(Example: Navigation triggered by "Take me to the nearest coffee shop")_

##### Case 2: No Command Detected (General AI Response)

If the input is treated as a general query. The response format depends on the
user's preference setting (fetched via the Preference API).

  * **If User Preference is Text (0):**
    * Headers: `Content-Type: text/plain`, `Result: 0`, `X-Response-Type: ai_text`
    * Body: Plain text string containing the AI's conversational response.
  * **If User Preference is Audio (1):**
    * Headers: `Content-Type: audio/x-raw`, `Result: 1`, `X-Response-Type: ai_audio`, `Transfer-Encoding: chunked`, `X-Response-Text: ` (Optional reference)
    * Body: Streamed raw PCM audio data (16kHz, 16-bit Mono) synthesized from the AI's text response using Azure TTS.
  * **If User Preference is Audio (1) but TTS fails:**
    * Headers: `Content-Type: text/plain`, `Result: 0`, `X-Response-Type: ai_text_fallback`
    * Body: Plain text string containing the AI's response, possibly prefixed like "(Audio unavailable) ...".

#### Error Responses

  * `400 Bad Request`: Missing/empty audio file, invalid file format, request too large.
  * `401 Unauthorized`: Missing 'Authentication' header.
  * `403 Forbidden`: Invalid or unverifiable UID in 'Authentication' header.
  * `500 Internal Server Error`: Server-side error during transcription, AI call, TTS, command processing, or unexpected issues. Check server logs.
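Tying the response cases together, a client can branch on the documented `X-Response-Type` header. A minimal client-side sketch; the function name is assumed:

```python
import json

def interpret_assistant_response(headers, body):
    """Branch on the X-Response-Type values documented above (client-side sketch)."""
    rtype = headers.get("X-Response-Type")
    if rtype == "command":
        return ("command", json.loads(body))      # Case 1: structured command JSON
    if rtype in ("ai_text", "ai_text_fallback"):
        return ("text", body.decode("utf-8"))     # Case 2: text preference or TTS fallback
    if rtype == "ai_audio":
        return ("audio", body)                    # Case 2: raw 16 kHz 16-bit mono PCM
    raise ValueError("unexpected X-Response-Type: %r" % rtype)
```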

### POST /api/text

Accepts a text query directly. Performs hybrid command detection and responds
with JSON.

#### Request

  * **Headers:**
    * `Authentication: ` (Required)
    * `Content-Type: application/json` (Required)
  * **Body (JSON):**
    
        {
          "text": "What's the weather like today?"
        }
    
        {
          "text": "Send a message to Sarah saying 'On my way!'"
        }

#### Response (200 OK)

Content-Type is always `application/json`.

##### Case 1: Command Detected

If the input text matches a known command.

  * **Body (JSON):** The command object with feature and parameters.

    
    
    {
      "feature": 9,
      "parameters": {
        "who": "Sarah",
        "message": "On my way!"
      }
    }

##### Case 2: No Command Detected (General AI Response)

If the input is treated as a general query.

  * **Body (JSON):** A JSON object containing the AI's text response.

    
    
    {
      "response": "Currently, it's partly cloudy with a temperature of 22 degrees Celsius.",
      "is_command": false,
      "timestamp": "2024-08-16T10:30:00.123Z"
    }

#### Error Responses

  * `400 Bad Request`: Invalid JSON format, missing or empty 'text' field.
  * `401 Unauthorized`: Missing 'Authentication' header.
  * `403 Forbidden`: Invalid or unverifiable UID in 'Authentication' header.
  * `500 Internal Server Error`: Server-side error during AI call, command processing, or unexpected issues. Check server logs.
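A request to this endpoint needs nothing beyond the standard library. A sketch that constructs (but does not send) the call; the port 5001 instance and the placeholder UID are assumptions:

```python
import json
import urllib.request

BASE_URL = "http://20.68.131.221:5001"  # assumed instance; 443 serves the same API

def build_text_query(uid, text):
    """Construct (but do not send) the POST /api/text request described above."""
    return urllib.request.Request(
        BASE_URL + "/api/text",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Authentication": uid, "Content-Type": "application/json"},
        method="POST",
    )
```

Sending it is then a matter of passing the result to `urllib.request.urlopen`.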

### GET /health

Provides a health check of the API service and its dependencies (No
Authentication Required).

#### Response (200 OK)

Content-Type: `application/json`

    
    
    {
      "status": "healthy",
      "service": "Voice Assistant Server (Hybrid)",
      "timestamp": "2024-08-16T10:35:00.456Z",
      "dependencies": {
        "azure_speech_sdk": "Configured",
        "azure_openai_client": "Initialized"
        /* Add other dependency checks here if implemented */
      },
      "command_mode": "hybrid"
    }

The `dependencies` section indicates if core components like Azure SDKs are
initialized correctly.
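A monitoring client might reduce this payload to a single readiness flag. A sketch; the accepted dependency states are assumed from the example payload above:

```python
def service_ready(health):
    """True when /health reports healthy and every listed dependency is set up."""
    ok_states = {"Configured", "Initialized"}   # assumed from the example payload
    return (health.get("status") == "healthy"
            and all(v in ok_states for v in health.get("dependencies", {}).values()))
```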

## Command Feature Summary

The server attempts to map user requests to the following features using the
described hybrid approach:

| Feature Name | Number | Handling Type | Requires Parameters? | Example Trigger Phrases |
|---|---|---|---|---|
| Short Scene Description | 1 | Hardcoded | No | "describe scene", "what's around me briefly" |
| Detailed Scene Description | 2 | Hardcoded | No | "detailed description", "tell me more about the scene" |
| Continuous Scene Description | 3 | Hardcoded | No | "keep describing", "continuous mode" |
| Voice Assistance (AI) | 4 | Fallback | N/A | (Default for general queries like "what's the time?") |
| OCR (Read Text) | 5 | Hardcoded | No | "read this sign", "what does this text say" |
| GPS Navigation | 6 | AI Powered | Yes (where) | "navigate to...", "directions to...", "find the nearest..." |
| Audio Call | 7 | AI Powered | Yes (who) | "call mom", "phone Dr. Anya Sharma" |
| Video Call | 8 | AI Powered | Yes (who) | "video call the office", "make a video chat with Ben" |
| Text Message | 9 | AI Powered | Yes (who, message) | "text...", "message... saying...", "send SMS to..." |
| Audio Feature | 10 | N/A | N/A | (Currently unused) |
| Face Recognition | 11 | Hardcoded | No | "who is this?", "recognize faces", "do I know her?" |
| Obstacle Detection | 12 | Hardcoded | No | "obstacle detection on", "clear path mode" |
| Emergency Help | 13 | Hardcoded | No | "emergency", "call center help", "SOS" |
  
_Note: While the AI can understand varied phrasing for its commands, the
hardcoded commands rely on specific keywords being present. Parameter
extraction by the AI aims for accuracy but might occasionally misinterpret
complex names or destinations._
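The feature table can double as a client-side lookup for validating command objects before acting on them. An illustrative sketch; the names are assumed, not part of the API:

```python
# Feature table above mirrored as a lookup: number -> (name, required parameters).
FEATURES = {
    1: ("Short Scene Description", ()),
    2: ("Detailed Scene Description", ()),
    3: ("Continuous Scene Description", ()),
    4: ("Voice Assistance (AI)", ()),
    5: ("OCR (Read Text)", ()),
    6: ("GPS Navigation", ("where",)),
    7: ("Audio Call", ("who",)),
    8: ("Video Call", ("who",)),
    9: ("Text Message", ("who", "message")),
    10: ("Audio Feature", ()),
    11: ("Face Recognition", ()),
    12: ("Obstacle Detection", ()),
    13: ("Emergency Help", ()),
}

def validate_command(cmd):
    """Check a command JSON from the server against the table above."""
    name, required = FEATURES[cmd["feature"]]
    missing = [p for p in required if p not in cmd.get("parameters", {})]
    if missing:
        raise ValueError("%s: missing parameters %s" % (name, missing))
    return name
```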
    

TLS

Handshake
Version Selected
TLSv1_3
Cipher Selected
TLS_CHACHA20_POLY1305_SHA256
Certificate
Fingerprint
cb53cc4146943fdfd9d03dc400748783165c63960178bb3b8c6804c03db587b8
Subject
CN=assistant.mavistech.cloud
Issuer
C=US, O=Let's Encrypt, CN=E5
Names
assistant.mavistech.cloud
Fingerprint
JARM
27d40d40d00040d00042d43d000000d2e61cae37a985f75ecafb81b33ca523
JA3S
475c9302dc42b2751db9edcac3b74891
JA4S
t130200_1303_a56c5b993250

HTTP 5000/TCP
04/27/2025 08:46 UTC


Software

PalletsProjects Werkzeug 3.1.3

Details

http://20.68.131.221:5000/
Status
200  OK
Body Hash
sha1:a8b380da283c9caf9e499a8fc49a9db4e7761df7
HTML Title
Stateful Audio Navigation API
Response Body
      # Stateful Audio Navigation API

API for audio-driven, turn-by-turn navigation suitable for smart glasses. Uses
a stateful model for navigation progress.

## Core Details

  * **Base URL:** `http://20.68.131.221:5000`
  * **Authentication:** All endpoints require an `Authentication` header. 
    
        Authentication: YOUR_USER_ID

  * **Audio Format:** Input audio should be WAV, MP3, etc. Output audio is Base64-encoded WAV (16 kHz, 16-bit mono PCM).
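Since the audio fields arrive Base64-encoded, decoding them is a single standard-library call. A sketch; the function name is assumed:

```python
import base64

def decode_audio(b64_wav):
    """Decode a Base64 WAV payload (audio_response_b64, audio_steps_b64, ...) to bytes."""
    return base64.b64decode(b64_wav)
```

The resulting bytes can be written to a `.wav` file or fed straight to a playback buffer.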

## Audio Navigation Flow

The primary flow uses 3 endpoints:

  1. `/process_audio_query`: Start (handles query, intent detection).
  2. `/process_audio_selection`: (Optional) Select a POI if presented.
  3. `/get_next_steps`: Repeatedly call to get subsequent turn instructions.

### 1. Start Navigation / POI Search

`POST /process_audio_query`

Sends initial voice command, location, and heading.

#### Input: `multipart/form-data`

  * `audio`: Audio file (e.g., 'query.wav'). (Required)
  * `latitude`: Current Latitude (float). (Required)
  * `longitude`: Current Longitude (float). (Required)
  * `heading`: Current Heading (float, 0-360, North=0). (Optional)

#### Key Responses (200 OK):

  * Status: "AwaitingSelection"
    * POI options found. Listen to `audio_response_b64` (prompt).
    * Use `state_id` in the next call to `/process_audio_selection`.
    * `poi_options` contains details for display.
  * Status: "NavigationStarted"
    * Direct navigation identified or POI selected implicitly.
    * Listen to `initial_audio_steps_b64` (list, audio for first few steps).
    * Store `navigation_session_id`. Use this in calls to `/get_next_steps`.
    * `initial_steps_text` contains text for first few steps.
  * Status: "Error" (4xx/5xx HTTP Status) 
    * Listen to `audio_response_b64` (error message) or read `error_message`.
    * `error_code` may provide details (e.g., STT failure).
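The status branches above reduce to a small dispatcher on the client. A sketch, assuming the status arrives in a `status` field and the other keys are named as listed:

```python
def handle_query_response(resp):
    """Dispatch on the documented status values; returns (next_action, data)."""
    status = resp.get("status")
    if status == "AwaitingSelection":
        # Play resp["audio_response_b64"], then call /process_audio_selection
        # with this state_id and the user's spoken choice.
        return ("select", resp["state_id"])
    if status == "NavigationStarted":
        # Play resp["initial_audio_steps_b64"], then poll /get_next_steps
        # with this navigation_session_id.
        return ("navigate", resp["navigation_session_id"])
    return ("error", resp.get("error_message", "unknown error"))
```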

### 2. (Optional) Select POI

`POST /process_audio_selection`

Sends voice selection after POI options were presented.

#### Input: `multipart/form-data`

  * `audio`: Audio file containing spoken option number (e.g., 'selection_3.wav'). (Required)
  * `state_id`: The `state_id` received from the "AwaitingSelection" response. (Required)

#### Key Responses (200 OK):

  * Status: "NavigationStarted"
    * POI successfully selected, navigation session created.
    * Listen to `initial_audio_steps_b64` (list, audio for first few steps).
    * Store `navigation_session_id`. Use this in calls to `/get_next_steps`.
    * `initial_steps_text` contains text for first few steps.
  * Status: "Error" (4xx/5xx HTTP Status) 
    * Listen to `audio_response_b64` (error message) or read `error_message`.
    * If selection failed (e.g., couldn't understand number), server returns `state_id` again to allow retry.

### 3. Get Next Steps

`POST /get_next_steps`

Call repeatedly during active navigation with updated location.

#### Input: `application/json`

    
    
    {
        "navigation_session_id": "nav_xxxxxxxx-xxxx-...",
        "latitude": 51.5074,
        "longitude": -0.1278,
        "heading": 180.0  // Optional, improves relative turns
    }

#### Key Responses (200 OK):

  * Status: "NextSteps"
    * Listen to `audio_steps_b64` (list, audio for next few steps).
    * `steps_text` contains text for next few steps.
    * Continue sending updates to this endpoint.
  * Status: "ApproachingDestination"
    * Listen to `audio_steps_b64` / read `steps_text` (final steps including arrival).
  * Status: "Arrived"
    * Navigation complete. Session terminated on server.
    * Listen to `audio_message_b64` or read `message` (arrival confirmation).
  * Status: "Error" (4xx/5xx HTTP Status) 
    * Read `error_message` / `error`.
    * Examples: 404 if session ID invalid/expired, 500 internal error.
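The polling loop implied by these statuses might look like the sketch below; `call` and `get_position` are injected so the loop stays transport-agnostic (all names are assumptions):

```python
def follow_route(call, session_id, get_position):
    """Poll /get_next_steps with fresh coordinates until arrival or error."""
    while True:
        lat, lon, heading = get_position()
        resp = call("/get_next_steps", {
            "navigation_session_id": session_id,
            "latitude": lat,
            "longitude": lon,
            "heading": heading,
        })
        status = resp.get("status")
        if status in ("NextSteps", "ApproachingDestination"):
            for step in resp.get("steps_text", []):
                print(step)                 # or play the matching audio_steps_b64 clip
        if status == "Arrived":
            return resp.get("message")      # session is now terminated server-side
        if status == "Error":
            raise RuntimeError(resp.get("error_message", "navigation error"))
```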

## Interactive Demo: Text POI Search

### Test `/search_poi` (Text-Based)

Enter details and click Search to test the text POI endpoint.

_(Interactive form: Authentication UID, Latitude, Longitude, and Search Query fields with a "Search POIs" button; the response JSON is rendered in the page.)_
    

HTTP 5001/TCP
04/27/2025 06:18 UTC


Software

PalletsProjects Werkzeug 3.1.3

Details

http://20.68.131.221:5001/
Status
200  OK
Body Hash
sha1:476bf213e0150bf3cc16e497a4884bdeb79bf883
HTML Title
Voice Assistant API Server (Hybrid)
Response Body
      _(Response body is byte-identical to the Voice Assistant API Server documentation served on 443/TCP above; the body hashes match.)_

HTTP 5002/TCP
04/27/2025 06:34 UTC

Bootstrap

Software

PalletsProjects Werkzeug 3.1.3

Details

http://20.68.131.221:5002/
Status
200  OK
Body Hash
sha1:21adad2c12dc0226a7bc5f73629c8ac6e0c23bc8
HTML Title
Mavis Tech Service Status
Response Body
      ![Mavis Tech Logo](https://mavistech.uk/wp-content/uploads/2024/02/logo.png)

Last checked: 2025-04-27 06:34:08

## APIs

GPS API

<https://gps.mavistech.cloud>

Online (200)

OCR API

<https://ocr.mavistech.cloud>

Online (404)

Scene API

<https://scene.mavistech.cloud>

Online (404)

Voice Assistant API

<https://assistant.mavistech.cloud>

Online (200)

## Device Management

ESP32 Management

<http://20.117.120.208:8001>

Online (200)

## Websites & Portals

Admin Panel

<http://20.117.120.208:8000/admin/>

Online (200)

Customer Portal

<http://20.117.120.208:80>

Offline (404)

OTA Update Portal

<https://ota.mavistech.uk>

Online (200)
    

HTTP 5003/TCP
04/27/2025 06:40 UTC


Software

PalletsProjects Werkzeug 3.1.3

Details

http://20.68.131.221:5003/
Status
404  NOT FOUND
Body Hash
sha1:d767b3cb0ad66544c649e4165fc4b37e3c17e370
HTML Title
404 Not Found
Response Body
      404 Not Found

# Not Found

The requested URL was not found on the server. If you entered the URL manually
please check your spelling and try again.
    

HTTP 5004/TCP
04/27/2025 06:43 UTC


Software

PalletsProjects Werkzeug 3.1.3

Details

http://20.68.131.221:5004/
Status
404  NOT FOUND
Body Hash
sha1:3f8231d4133116a067fe2f3f485c0e05b0f6093c
Response Body
      {
        "available_endpoints": [
          "/analyze (POST) - Analyze image and get audio description",
          "/images/list (GET) - List all saved images",
          "/images/view/ (GET) - View specific image",
          "/images/download/ (GET) - Download specific image",
          "/gallery (GET) - View image gallery",
          "/health (GET) - Service health check"
        ],
        "documentation": "See API documentation for more details",
        "error": "Endpoint not found: /"
      }
    

Geographic Location

City
London
Province
England
Country
United Kingdom (GB)
Coordinates
51.50853, -0.12574
Timezone
Europe/London