# Bulk sending

Send many SMS in a single request via `POST /sms/bulk` — the Transactional API's throughput endpoint for campaigns, notification fan-outs and wholesale dispatch.

The maximum throughput you can get out of the Transactional API over HTTP is through bulk. One `POST /sms/bulk` request carries up to 100 messages; Instasent validates each item, queues the accepted ones on the platform and dispatches them to the carrier network as fast as the routes allow. From the caller's point of view the HTTP request returns as soon as the batch is staged — the delivery loop is ours to run. That is why an API-driven campaign sent through bulk will outperform the same campaign fired as thousands of individual `POST /sms` calls, regardless of how much concurrency the caller puts on the client side.

This page covers the request and response shape, the per-request limits, how the queue behaves under load, DLRs, partial-success handling, and when to reach for SMPP instead.

## How a bulk request flows

```
 Your backend  ──POST /sms/bulk──▶  Instasent API  ──validate──▶  queue  ──▶  carrier  ──▶  handset
                      ◀─── 201 {entities, errors} ───                                        │
                                                                                             ▼
 Your webhook ◀────────────────────── DLR per message ──────────────────────────────── carrier
```

- **Submit** a single `POST` with an array of up to 100 SMS items.
- **Validate**: the server checks each item independently. Valid ones are accepted; invalid ones are rejected with field-level errors, and the rest of the batch still goes through.
- **Queue**: accepted messages are staged on the platform's dispatch queue. The HTTP request returns immediately with the list of `entities` (accepted, each with an `id`) and the list of `errors` (rejected, with per-field messages). It does **not** block on carrier delivery.
- **Dispatch**: the platform drains the queue concurrently against the carrier routes. Throughput is bounded by route capacity, not by your HTTP client.
- **DLR**: each accepted message produces its own DLR webhook events, exactly like a single send. Match them by `id`.

The practical consequence for campaigns: submit as many full batches of 100 as you can afford on your rate-limit window; the queue absorbs the burst and drains at carrier speed. Trying to match that throughput through `POST /sms` one at a time will hit the rate-limit wall long before the queue does.

## Request

Send a `POST` to `/sms/bulk` with an `api_sms` bearer token and a JSON **array** of items as the body — there is no wrapper object.

#### curl

```bash
curl -X POST https://api.instasent.com/transactional/v1/sms/bulk \
  -H "Authorization: Bearer $INSTASENT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[
    { "from": "Instasent", "to": "+34600000001", "text": "Hello one" },
    { "from": "Instasent", "to": "+34600000002", "text": "Hello two" },
    { "from": "Instasent", "to": "+34600000003", "text": "Hello three" }
  ]'
```

#### node

```js
const messages = [
  { from: "Instasent", to: "+34600000001", text: "Hello one" },
  { from: "Instasent", to: "+34600000002", text: "Hello two" },
  { from: "Instasent", to: "+34600000003", text: "Hello three" },
];

const res = await fetch("https://api.instasent.com/transactional/v1/sms/bulk", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.INSTASENT_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(messages),
});
const { entities, errors } = await res.json();
```

#### python

```python
import os, requests

messages = [
    {"from": "Instasent", "to": "+34600000001", "text": "Hello one"},
    {"from": "Instasent", "to": "+34600000002", "text": "Hello two"},
    {"from": "Instasent", "to": "+34600000003", "text": "Hello three"},
]

res = requests.post(
    "https://api.instasent.com/transactional/v1/sms/bulk",
    headers={"Authorization": f"Bearer {os.environ['INSTASENT_TOKEN']}"},
    json=messages,
)
data = res.json()
```

Each item accepts the same fields as `POST /sms`:

| Field          | Required | Notes                                                                                                                                                   |
| -------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `from`         | yes      | Sender ID. Up to 11 characters if alphanumeric, up to 14 if all digits; 3 characters minimum.                                                           |
| `to`           | yes      | Destination MSISDN in E.164 format (e.g. `+34600000000`).                                                                                               |
| `text`         | yes      | Message body.                                                                                                                                           |
| `allowUnicode` | no       | Defaults to `false`. Set to `true` if the text may contain non-GSM-7 characters.                                                                        |
| `clientId`     | no       | Your own reference (≤40 chars). Must be unique per SMS. Echoed back on the response and on DLRs — use it to correlate without storing Instasent's `id`. |
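Catching obvious problems before submission keeps items out of `errors` and saves a round trip. The sketch below is a hypothetical client-side pre-check mirroring the constraints in the table — `validate_item` and the E.164 pattern are our own helpers, and the server remains the source of truth:

```python
import re

# E.164: a '+' followed by a non-zero digit and up to 14 more digits
E164 = re.compile(r"^\+[1-9]\d{6,14}$")

def validate_item(item: dict) -> list[str]:
    """Return a list of problems; an empty list means the item looks sendable."""
    problems = []
    sender = item.get("from", "")
    if sender.isdigit():
        if not 3 <= len(sender) <= 14:
            problems.append("from: numeric sender must be 3-14 digits")
    elif not (3 <= len(sender) <= 11 and sender.isalnum()):
        problems.append("from: alphanumeric sender must be 3-11 chars")
    if not E164.match(item.get("to", "")):
        problems.append("to: must be E.164, e.g. +34600000000")
    if not item.get("text"):
        problems.append("text: required")
    client_id = item.get("clientId")
    if client_id is not None and len(client_id) > 40:
        problems.append("clientId: max 40 chars")
    return problems
```

Run it over the batch before the `POST` and drop (or fix) any item that returns problems.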

## Response

`201 Created` with a JSON object containing two arrays:

```json
{
  "entities": [
    {
      "id": "...",
      "clientId": null,
      "from": "Instasent",
      "to": "+34600000001",
      "text": "Hello one",
      "status": "...",
      "...": "..."
    }
  ],
  "errors": [
    { "fields": { "to": ["This value is not valid"] } }
  ]
}
```

- `entities` — one object per **accepted** message, with its own Instasent `id`. Track that `id` (or the `clientId` you sent) against the DLRs you will receive.
- `errors` — one object per **rejected** item, with per-field error messages under `fields`. Rejected items appear here in the same relative order as they did in your input array.

A 201 means the batch was processed — not that every item was accepted. Always inspect `errors` before assuming the whole batch went through.
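In practice that means splitting the payload into the two outcomes right after the call. A minimal helper (hypothetical name, assuming the response shape shown above):

```python
def split_bulk_response(data: dict) -> tuple[list[str], list[dict]]:
    """Separate accepted message ids from per-item rejections."""
    accepted_ids = [e["id"] for e in data.get("entities", [])]
    rejections = data.get("errors", [])
    return accepted_ids, rejections

# Usage against the example response above:
accepted, rejected = split_bulk_response({
    "entities": [{"id": "abc123", "to": "+34600000001"}],
    "errors": [{"fields": {"to": ["This value is not valid"]}}],
})
```

Persist the accepted ids (or your `clientId`s) before moving on — they are what the DLRs will be keyed on.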

## Limits per request

| Limit                  | Value                                                  | What happens when you exceed it                                                                                                                                                |
| ---------------------- | ------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Items per request      | **1 – 100**                                            | `413 Request Entity Too Large` or `422` — split the batch and retry.                                                                                                           |
| Per-item validation    | See field table above                                  | The invalid item lands in `errors`; the rest of the batch still goes through.                                                                                                  |
| Per-account rate limit | See [Rate limits](/transactional-api/http/rate-limits) | `429 Too Many Requests`. One bulk call counts as **one** request against the window — that is what makes bulk so much more rate-limit-efficient than looping over `POST /sms`. |

If you need to dispatch more than 100 messages, split the payload across multiple bulk requests and fire them concurrently. Throughput scales linearly with the number of parallel batches until you reach your account's rate limit.
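A sketch of that splitting, using `requests` as in the python example above and a thread pool for the parallel batches. The helper names are our own, and back-off on `429` is left out for brevity:

```python
import os
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://api.instasent.com/transactional/v1/sms/bulk"
HEADERS = {"Authorization": f"Bearer {os.environ.get('INSTASENT_TOKEN', '')}"}

def chunk(items, size=100):
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def send_batch(batch):
    res = requests.post(URL, headers=HEADERS, json=batch)
    res.raise_for_status()  # a 429 here means: slow down, see rate limits
    return res.json()

def send_all(messages, parallelism=4):
    """Split `messages` into batches of 100 and fire them concurrently."""
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(send_batch, chunk(messages)))
```

Tune `parallelism` against your account's rate-limit window; past that point extra workers only earn you `429`s.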

> **Tip**: Batching only makes the HTTP side efficient. The dispatch queue is shared across all your traffic — so once your batches are in the queue, there is no ordering guarantee between them. If two messages in the same batch must go out in a fixed order, send them as separate calls.

## Throughput, queueing and back-pressure

The dispatch queue is sized to absorb large campaigns. In practice this means:

- **Bursts are absorbed**. A sudden wave of bulk submissions does not translate into rejections downstream — messages wait in the queue and drain as the carrier routes accept them.
- **Throughput is carrier-bound**, not HTTP-bound. The limiting factor is the route capacity and the destination country, not your HTTP client's concurrency.
- **Higher ceilings are a conversation**. If the default rate limit gets in the way of a campaign's peak, open a ticket with the expected volume and we will raise the window on your account.

> **Tip**: For sustained carrier-grade throughput (aggregators, messaging platforms, resellers), consider [SMPP](/transactional-api/smpp/integration) instead. Same underlying pipeline, persistent TCP sessions, binary protocol — no HTTP overhead per message.

## Delivery reports

DLRs work the same as for a single send: one webhook POST per status transition, keyed by the message `id`. See [Receiving DLRs](/transactional-api/http/dlrs) for the payload shape, the status list and how the same message can produce multiple DLRs during its lifecycle.

If you sent a `clientId`, it is echoed on every DLR for that message — useful when you would rather not store Instasent's `id` on your side.
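Inside your webhook handler, the correlation step then reduces to a dictionary lookup. The exact DLR payload lives on the DLRs page; assuming it carries at least `id`, `clientId` and a status field, the update might look like:

```python
def apply_dlr(payload: dict, outbox: dict) -> None:
    """Update a local outbox record from one DLR webhook payload.

    `outbox` maps your clientId (or, as a fallback, Instasent's id) to a
    mutable record. Field names here are an assumption -- see the DLRs
    page for the real payload shape.
    """
    key = payload.get("clientId") or payload.get("id")
    record = outbox.get(key)
    if record is None:
        return  # unknown message: log and ignore
    record["status"] = payload.get("status")
```

The same message can fire this handler several times as it moves through its lifecycle, so the update must be idempotent — overwriting `status` with the latest value, as above, is usually enough.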

## Handling partial failures

A pragmatic pattern in your caller:

#### 1. Build the batch with your own correlation id

Include `clientId` on every item so you can match responses and DLRs back to records on your side without an extra lookup.

#### 2. Fire the bulk request

One HTTP call per batch of up to 100 messages. Check the HTTP status — `201` is the happy path.

#### 3. Reconcile entities and errors

Iterate both arrays. For each item in `entities`, record the returned `id` (or your `clientId`) and mark the message as accepted. For each item in `errors`, log the per-field message and retry only if it is a transient issue.

#### 4. Process DLRs asynchronously

Delivery status arrives on the webhook, not on the HTTP response. Update the final state as DLRs come in.
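The reconciliation in steps 1–3 can be sketched as one function. The helper name is hypothetical; it assumes the response shape documented under Response, with every item carrying a `clientId` and rejections appearing in the same relative order as the rejected inputs:

```python
def reconcile(batch: list[dict], response: dict) -> dict:
    """Map each clientId in `batch` to its outcome after one bulk call."""
    outcome = {}
    accepted = {e.get("clientId"): e["id"] for e in response.get("entities", [])}
    rejected_errors = iter(response.get("errors", []))
    for item in batch:
        cid = item["clientId"]
        if cid in accepted:
            outcome[cid] = {"state": "accepted", "id": accepted[cid]}
        else:
            # errors mirror the input order of the rejected items
            err = next(rejected_errors, {})
            outcome[cid] = {"state": "rejected", "fields": err.get("fields", {})}
    return outcome
```

Accepted entries then wait for their DLRs (step 4); rejected ones are yours to fix and resubmit.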

## HTTP bulk vs SMPP

|                      | HTTP `/sms/bulk`                                                                   | SMPP                                                                 |
| -------------------- | ---------------------------------------------------------------------------------- | -------------------------------------------------------------------- |
| Setup cost           | `api_sms` token + one HTTP call                                                    | Persistent TCP session, binary protocol, more ops overhead           |
| Best for             | API-driven campaigns, notification fan-outs, periodic dispatch, embedded use cases | Aggregators, messaging platforms, carrier-grade sustained throughput |
| Per-message overhead | HTTP request / JSON                                                                | Binary PDU on an open session                                        |
| Throughput ceiling   | Bounded by rate limit on your account; excellent for bursts                        | Highest — designed for continuous streaming                          |
| Engineering lift     | Low                                                                                | Higher (session management, retries, pluggable TLVs)                 |

Start on HTTP bulk. Move to SMPP only when the sustained traffic profile — not a one-off burst — justifies the extra engineering lift.

## Pitfalls

- **Wrapping the body in a `messages` object.** The body is a JSON array at the top level. Wrapping it breaks validation.
- **Treating 201 as "all accepted".** Always iterate `errors` — per-item rejections live there, not in the HTTP status.
- **Parallel `POST /sms` instead of bulk.** You will burn the rate-limit window without raising the ceiling. The queue is what gives bulk its throughput, not HTTP concurrency.
- **Forgetting `clientId`.** Without it you have to store Instasent's `id` to correlate DLRs — an avoidable dependency.
- **Batching items that need strict ordering.** The queue does not preserve submission order across the platform. If ordering matters between two messages, send them separately.

## What's next

- **[Receiving DLRs](/transactional-api/http/dlrs)** — webhook payload, status list, per-message matching.
- **[Rate limits](/transactional-api/http/rate-limits)** — how the window is counted and how to ask for more.
- **[Errors](/transactional-api/http/errors)** — status codes and retry guidance, including `413` and `422`.
- **[SMPP integration](/transactional-api/smpp/integration)** — the next step up when HTTP is not enough.
- **[API Reference](/transactional-api/http/reference)** — full shape of `POST /sms/bulk` and every other endpoint.
