Implementing AI Image Generation (Midjourney API) in a Mobile App
The first thing to understand: Midjourney has no official public API. Everything sold as a "Midjourney API" is either an unofficial wrapper around the Discord bot (a proxy) or an alternative provider with comparable quality (Ideogram, Leonardo.ai). The right choice depends on how much stability and legal clarity the project requires.
Integration options
Unofficial proxies
Services like useapi.net, midjourney-api.thenextleg.io, and imaginesoftware.io expose a REST API on top of their own Discord accounts connected to Midjourney. This works technically, but:
- Violates Midjourney ToS (account ban risk)
- Instability with Discord/Midjourney updates
- No SLA
- Results depend on provider's bot version
Fine for prototypes and internal tools. Risky for production.
Alternatives with comparable quality
Ideogram v2 — quality comparable to Midjourney for artistic styles, plus excellent text rendering inside images (a weak spot for MJ). Official API.
Leonardo.ai — rich style library, ControlNet, motion (video). Official API.
Flux (FAL.ai) — Flux Pro/Ultra from Black Forest Labs, quality on par with MJ v6, official API via FAL.
For most product tasks, Flux Pro on FAL is the best choice: stable API, high quality, reasonable pricing.
Integration via proxy API (useapi.net)
If the client insists on Midjourney specifically:
import java.util.concurrent.TimeUnit
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject

// Result of a finished job; Success carries the URL of the generated image grid.
sealed class MidjourneyResult {
    data class Success(val imageUrl: String) : MidjourneyResult()
    data class Failed(val error: String) : MidjourneyResult()
}

class MidjourneyProxyService(private val apiKey: String) {

    private val client = OkHttpClient.Builder()
        .readTimeout(300, TimeUnit.SECONDS) // MJ can take 3–4 minutes to generate
        .build()

    // Submits a generation job and returns the jobid used for polling.
    suspend fun imagine(prompt: String): String = withContext(Dispatchers.IO) {
        val body = JSONObject().apply {
            put("prompt", prompt)
        }.toString().toRequestBody("application/json".toMediaType())

        val request = Request.Builder()
            .url("https://api.useapi.net/v2/jobs/imagine")
            .header("Authorization", "Bearer $apiKey")
            .post(body)
            .build()

        client.newCall(request).execute().use { response ->
            JSONObject(response.body!!.string()).getString("jobid")
        }
    }

    // Returns null while the job is still processing.
    suspend fun getResult(jobId: String): MidjourneyResult? = withContext(Dispatchers.IO) {
        val request = Request.Builder()
            .url("https://api.useapi.net/v2/jobs/?jobid=$jobId")
            .header("Authorization", "Bearer $apiKey")
            .get()
            .build()

        val json = client.newCall(request).execute().use { response ->
            JSONObject(response.body!!.string())
        }
        when (json.optString("status")) {
            "completed" -> {
                val attachments = json.getJSONArray("attachments")
                MidjourneyResult.Success(attachments.getJSONObject(0).getString("url"))
            }
            "failed" -> MidjourneyResult.Failed(json.optString("error"))
            else -> null // still processing
        }
    }
}
Midjourney generates a 2x2 grid of images. After receiving the grid, the user can upscale one of the four (U1–U4) or request variations of it (V1–V4); both are additional API calls to the /v2/jobs/button endpoint.
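A sketch of that follow-up call on top of the same OkHttp client. The endpoint path comes from useapi.net; the exact request field names ("jobid", "button") are assumptions to verify against the provider's documentation:

```kotlin
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject

// Requests an upscale (U1–U4) or variation (V1–V4) for a completed grid.
// Field names are assumptions; check useapi.net's docs before shipping.
fun requestButton(client: OkHttpClient, apiKey: String, jobId: String, button: String): String {
    val body = JSONObject().apply {
        put("jobid", jobId)
        put("button", button) // "U1".."U4" or "V1".."V4"
    }.toString().toRequestBody("application/json".toMediaType())

    val request = Request.Builder()
        .url("https://api.useapi.net/v2/jobs/button")
        .header("Authorization", "Bearer $apiKey")
        .post(body)
        .build()

    // The response carries a new jobid; poll it the same way as the original job.
    return client.newCall(request).execute().use { response ->
        JSONObject(response.body!!.string()).getString("jobid")
    }
}
```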
Flux integration via FAL.ai (recommended approach)
import Foundation

// Minimal response model for the flux-pro endpoint.
struct FalResponse: Decodable {
    struct Image: Decodable { let url: String }
    let images: [Image]
}

struct FalFluxService {
    let apiKey: String
    private let baseURL = "https://fal.run/fal-ai/flux-pro"

    func generate(prompt: String) async throws -> URL {
        var request = URLRequest(url: URL(string: baseURL)!)
        request.httpMethod = "POST"
        request.setValue("Key \(apiKey)", forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")

        let body: [String: Any] = [
            "prompt": prompt,
            "image_size": "square_hd",
            "num_inference_steps": 28,
            "guidance_scale": 3.5,
            "num_images": 1,
            "enable_safety_checker": true
        ]
        request.httpBody = try JSONSerialization.data(withJSONObject: body)

        let (data, _) = try await URLSession.shared.data(for: request)
        let response = try JSONDecoder().decode(FalResponse.self, from: data)
        return URL(string: response.images[0].url)!
    }
}
FAL returns results synchronously (or via a queue under load). Flux Pro latency is 5–10 seconds for square_hd.
Prompt parameters for Midjourney
When working with MJ through a proxy, prompts use Midjourney's own flag syntax:
portrait photo of astronaut in forest --ar 3:4 --v 6 --style raw --stylize 100
- --ar — aspect ratio (16:9, 3:4, 1:1)
- --v 6 — model version
- --style raw — less stylized, more literal result
- --stylize 0–1000 — degree of stylization (default 100)
- --no text, watermark — negative prompt
- --seed 12345 — reproducibility
With standard APIs (Ideogram, FAL), these parameters pass as native request fields.
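A small helper keeps this flag assembly out of UI code. A minimal sketch: the parameter names and defaults here simply mirror the flags listed above and are not tied to any particular provider:

```kotlin
// Builds a Midjourney-style prompt string from structured parameters.
// Pass null to omit a flag entirely.
data class MjPromptParams(
    val aspectRatio: String? = null,   // e.g. "3:4"
    val version: Int? = 6,
    val styleRaw: Boolean = false,
    val stylize: Int? = null,          // 0–1000
    val negative: String? = null,      // rendered as --no
    val seed: Long? = null
)

fun buildMjPrompt(text: String, p: MjPromptParams = MjPromptParams()): String =
    buildString {
        append(text.trim())
        p.aspectRatio?.let { append(" --ar $it") }
        p.version?.let { append(" --v $it") }
        if (p.styleRaw) append(" --style raw")
        p.stylize?.let { append(" --stylize $it") }
        p.negative?.let { append(" --no $it") }
        p.seed?.let { append(" --seed $it") }
    }
```

For example, `buildMjPrompt("portrait photo of astronaut in forest", MjPromptParams(aspectRatio = "3:4", styleRaw = true, stylize = 100))` reproduces the prompt shown earlier.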
UX for long generation
Midjourney via proxy: 60–180 seconds. Flux: 5–15 seconds.
For the longer wait, poll every 5 seconds with an animated indicator. Never poll more frequently: providers may throttle. Showing elapsed time ("Generating... 45 sec") reduces user anxiety.
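That polling cadence can be sketched on top of the proxy service's getResult() from the example above. The overall timeout and the onTick callback shape are assumptions, not part of any provider's API:

```kotlin
import kotlinx.coroutines.delay

// Polls a job every 5 seconds until it completes, fails, or times out,
// reporting elapsed seconds to the UI via onTick.
suspend fun awaitResult(
    service: MidjourneyProxyService,
    jobId: String,
    timeoutSec: Int = 300,
    onTick: (elapsedSec: Int) -> Unit
): MidjourneyResult {
    var elapsed = 0
    while (elapsed < timeoutSec) {
        service.getResult(jobId)?.let { return it } // completed or failed
        delay(5_000) // no faster: providers may throttle
        elapsed += 5
        onTick(elapsed) // e.g. render "Generating... 45 sec"
    }
    return MidjourneyResult.Failed("Timed out after $timeoutSec seconds")
}
```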
Timeline
Proxy API integration (useapi.net or similar) with polling and basic UI — 4–6 days. Flux/Ideogram with native API, upscale/variations, gallery — 10–14 days.