Domain 10: Integration Architecture
Domain Owner: Backend (ASP.NET Core) / Mobile (Flutter + Dio) / DevOps (Docker + Coolify)
Last Updated: 2026-03-10
Workflows: WF-INTEG-01 through WF-INTEG-08
Domain Introduction
The Integration Architecture domain documents every external service that the Almafrica platform connects to, how those connections are configured, and how data flows between the system and third-party providers. This domain serves as the single reference for understanding the platform's external dependencies and their failure modes.
Key architectural principles:
- Graceful degradation: Every external service integration is designed to fail safely. Missing configuration logs a warning and disables the feature rather than crashing the application.
- S3-compatible abstraction: File storage uses the AWS SDK's S3 interface against DigitalOcean Spaces, allowing provider portability.
- Development/production parity: OTP and push notification services support a development mode that logs actions instead of sending real messages, ensuring safe local testing.
- Offline-resilient mobile: The mobile app detects connectivity with DNS probes and captive portal checks, queues operations when offline, and resumes uploads with chunked/resumable transfers.
- Container-first deployment: All services are containerized with Docker and orchestrated through Coolify, using environment variable injection for secrets.
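The graceful-degradation principle can be sketched as a configuration check at service startup (a Python sketch with illustrative names; the actual services are C#, and the environment variable names follow the configuration tables below):

```python
import logging

logger = logging.getLogger("integration")

def make_storage_client(env: dict):
    """Return a storage client if configured, or None to disable the feature.

    Missing keys log a warning instead of raising, mirroring the
    graceful-degradation principle above.
    """
    access_key = env.get("DO_SPACES_ACCESS_KEY")
    secret_key = env.get("DO_SPACES_SECRET_KEY")
    if not access_key or not secret_key:
        logger.warning("DigitalOcean Spaces not configured; file uploads disabled")
        return None  # callers treat None as "feature off", not as an error
    return {"access_key": access_key, "secret_key": secret_key}
```

Callers check for None and skip the upload path rather than failing the request.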
External Services Overview
graph TB
subgraph "Mobile App (Flutter)"
MOBILE[Mon Jardin App]
CONN[ConnectivityService]
CHUNK_UP[ChunkedUploadService]
SIG_CLIENT[SignalR Client]
DIO[Dio HTTP Client]
end
subgraph "Backend API (.NET)"
API[ASP.NET Core API]
DO_SVC[DigitalOceanSpacesService]
MULTI_SVC[MultipartUploadService]
OTP_SVC[OtpService]
PUSH_SVC[PushNotificationService]
PERM_BCAST[PermissionBroadcastService]
SIG_HUB[PermissionHub - SignalR]
MEDIA_SVC[MediaUploadService]
end
subgraph "Data Stores"
PG[(PostgreSQL 16)]
REDIS[(Redis 7 - disabled)]
SQLITE[(SQLite - Mobile)]
end
subgraph "External Services"
DO_SPACES[DigitalOcean Spaces\nS3-Compatible Storage]
AT_SMS[Africa's Talking\nSMS/OTP Provider]
FCM[Firebase Cloud Messaging\nPush Notifications]
CDN[DigitalOcean CDN\nAsset Delivery]
GOOGLE_204[Google 204 Probe\nCaptive Portal Check]
end
subgraph "Deployment"
COOLIFY[Coolify PaaS]
GHCR[GitHub Container Registry]
DOCKER[Docker Containers]
end
MOBILE --> DIO -->|REST API| API
MOBILE --> SIG_CLIENT -->|WebSocket| SIG_HUB
MOBILE --> CONN -->|DNS + HTTP probe| GOOGLE_204
MOBILE --> CHUNK_UP -->|Presigned PUT| DO_SPACES
API --> DO_SVC -->|AWS S3 SDK| DO_SPACES
API --> MULTI_SVC -->|AWS S3 Multipart| DO_SPACES
API --> OTP_SVC -->|SMS API| AT_SMS
API --> PUSH_SVC -->|FCM HTTP v1| FCM
API --> PERM_BCAST --> SIG_HUB
DO_SVC -->|Public URL via| CDN
MEDIA_SVC --> DO_SVC
API -->|EF Core / Npgsql| PG
MOBILE --> SQLITE
DOCKER --> COOLIFY
GHCR -->|Pull images| COOLIFY
Workflows
WF-INTEG-01: DigitalOcean Spaces — Regular Image Upload
Trigger: Backend receives an image file via API (farmer photo, quality assessment image, client document)
Frequency: Multiple times daily per active agent
Offline Support: No (requires network; see WF-INTEG-02 for chunked/resumable alternative)
Cross-references: WF-FARM-01 (farmer registration photo), WF-QA-01 (assessment photos), WF-CAMP-03 (survey attachments)
Workflow Diagram
graph TD
A((API receives file\nvia IFormFile)) --> B{DigitalOcean Spaces\nconfigured?}
B -->|No - missing keys| C[Log warning\nReturn null]
B -->|Yes| D{File stream\nempty?}
D -->|Yes| E[Log warning\nReturn null]
D -->|No| F[TryProcessImageAsync]
F --> G{Image format\nrecognized?}
G -->|Yes| H{Width or Height\n> 500px?}
H -->|Yes| I[Resize to max 500x500\nResizeMode.Max]
H -->|No| J[Keep original dimensions]
I --> K{File extension?}
J --> K
K -->|.png| L[SaveAsPng]
K -->|.gif| M[SaveAsPng - convert]
K -->|.webp/.heic/.avif| N[SaveAsJpeg - convert]
K -->|.jpg/.jpeg/other| O[SaveAsJpeg Quality=85]
G -->|No - UnknownImageFormat| P[Fall back to raw upload]
L & M & N & O --> Q[UploadToSpacesAsync]
P --> Q
Q --> R[Generate unique key:\nfolder/GUID.ext]
R --> S[PutObjectRequest\nBucket + Key + Stream\nCannedACL = PublicRead]
S --> T[S3Client.PutObjectAsync]
T --> U{HTTP 200?}
U -->|Yes| V[BuildPublicUrl]
V --> W{CDN endpoint\nconfigured?}
W -->|Yes| X>Return CDN URL:\ncdn.example.com/folder/GUID.ext]
W -->|No| Y>Return direct URL:\nbucket.region.digitaloceanspaces.com/folder/GUID.ext]
U -->|No| Z[Log error\nReturn null]
T -.->|Exception| AA[Log error\nReturn null]
Q -.->|Upload failed| AB[Retry with raw stream\nSkip image processing]
AB --> T
Configuration
| Setting | Env Variable | Default | Description |
|---|---|---|---|
| DigitalOceanSpaces:AccessKey | DO_SPACES_ACCESS_KEY | (required) | S3 access key |
| DigitalOceanSpaces:SecretKey | DO_SPACES_SECRET_KEY | (required) | S3 secret key |
| DigitalOceanSpaces:SpaceName | DO_SPACES_NAME | (required) | Bucket name |
| DigitalOceanSpaces:Region | DO_SPACES_REGION | nyc3 | S3 region |
| DigitalOceanSpaces:Endpoint | DO_SPACES_ENDPOINT | https://nyc3.digitaloceanspaces.com | S3 endpoint URL |
| DigitalOceanSpaces:CdnEndpoint | DO_SPACES_CDN_ENDPOINT | (empty) | Optional CDN URL prefix |
Key Implementation Details
- S3 Client: Uses AmazonS3Client from the AWS SDK with ForcePathStyle = true (required by DigitalOcean Spaces).
- Image Processing: Uses SixLabors.ImageSharp for resize and format conversion. Max dimensions: 500x500. JPEG quality: 85.
- ACL: All uploaded objects are set to S3CannedACL.PublicRead for direct URL access.
- Unique Naming: Every file gets a GUID-based name to prevent collisions: {folder}/{Guid.NewGuid()}{extension}.
- Fallback Strategy: If image processing fails (unsupported format, corrupted stream), the service retries with the original raw stream.
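The naming and URL-building rules above can be sketched as follows (a Python sketch; function names are illustrative, but the key format and the CDN-vs-direct URL choice follow this workflow):

```python
import uuid

def build_object_key(folder: str, extension: str) -> str:
    # GUID-based name prevents collisions: folder/GUID.ext
    return f"{folder}/{uuid.uuid4()}{extension}"

def build_public_url(key: str, bucket: str, region: str, cdn_endpoint: str = "") -> str:
    # Prefer the CDN prefix when configured; otherwise the direct Spaces URL.
    if cdn_endpoint:
        return f"{cdn_endpoint.rstrip('/')}/{key}"
    return f"https://{bucket}.{region}.digitaloceanspaces.com/{key}"
```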
Source Files
| File | Purpose |
|---|---|
| backend/Almafrica.Infrastructure/Services/DigitalOceanSpacesService.cs | S3 upload/delete with image processing |
| backend/Almafrica.Infrastructure/Services/MediaUploadService.cs | Validation layer (5MB limit, extension whitelist) |
| backend/Almafrica.Application/Interfaces/IObjectStorageService.cs | Abstraction interface |
WF-INTEG-02: DigitalOcean Spaces — Chunked/Resumable Upload
Trigger: Mobile app needs to upload a file larger than a single HTTP request can reliably handle, or network conditions are unstable
Frequency: On-demand (large documents, batch photo uploads)
Offline Support: Partial (upload state persisted to SQLite; resumes when connectivity returns)
Cross-references: WF-SYNC-03 (offline queue processing), WF-CLI-02 (client document uploads)
Workflow Diagram
graph TD
A((Mobile: Upload file)) --> B[ChunkedUploadService.initiateUpload]
B --> C{File exists?}
C -->|No| D[Throw: File not found]
C -->|Yes| E[Calculate file size\nDetermine content type]
E --> F[POST /api/multipartupload/initiate]
F --> G[Backend: MultipartUploadService.InitiateAsync]
G --> H{Folder in\nallowlist?}
H -->|No| I[Throw: Invalid folder]
H -->|Yes| J{File size\n<= 50MB?}
J -->|No| K[Throw: File too large]
J -->|Yes| L[Clamp chunk size\n100KB - 5MB]
L --> M[S3: InitiateMultipartUploadAsync]
M --> N[Generate presigned PUT URLs\nfor each part - 60 min expiry]
N --> O>Return uploadId + objectKey\n+ partUrls + totalParts]
O --> P[Mobile: Save state to SQLite\nStatus = pending]
P --> Q[uploadParts loop]
Q --> R{Has connectivity?}
R -->|No| S[Status = paused\nSave to SQLite]
R -->|Yes| T{URLs expired?}
T -->|Yes| U[Throw: URLs expired\nRe-initiate required]
T -->|No| V[Read chunk bytes\nfrom file at offset]
V --> W[PUT chunk to presigned S3 URL\nDirect to DigitalOcean Spaces]
W --> X{Upload success?}
X -->|Yes| Y[Extract ETag from response\nMark part uploaded]
Y --> Z[Save part progress to SQLite]
Z --> AA{More parts?}
AA -->|Yes| Q
AA -->|No| AB[All parts uploaded]
X -->|No - retry| AC{Retries < 3?}
AC -->|Yes| AD[Exponential backoff\n2^attempt seconds]
AD --> V
AC -->|No| AE{Consecutive\nfailures >= 10?}
AE -->|Yes| AF[Status = failed\nSave to SQLite]
AE -->|No| AG[Check connectivity\nMaybe pause]
AB --> AH[POST /api/multipartupload/complete]
AH --> AI[Backend: CompleteMultipartUploadAsync]
AI --> AJ[Sort parts by partNumber\nFormat ETags with quotes]
AJ --> AK[S3: CompleteMultipartUploadAsync]
AK --> AL[BuildPublicUrl]
AL --> AM>Return final file URL]
AM --> AN[Mobile: Status = completed\nStore finalUrl in SQLite]
Chunk Configuration
| Parameter | Value | Description |
|---|---|---|
| Default chunk size | 512 KB | Mobile-side default per part |
| Min chunk size | 100 KB | Backend floor clamp |
| Max chunk size | 5 MB | Backend ceiling clamp |
| Max file size | 50 MB | Total file size limit |
| Part timeout | 60 seconds | Per-chunk upload timeout |
| Max retries per part | 3 | With exponential backoff |
| Max consecutive failures | 10 | Before marking upload as failed |
| Presigned URL expiry | 60 minutes | Configurable via PresignedUrlExpirationMinutes |
| Allowed folders | clients, client-documents, crop-demands, farmers, uploads | Security whitelist |
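The size limits and clamping in the table above determine the part plan for an upload. A minimal sketch (Python; the real logic lives in MultipartUploadService and ChunkedUploadService):

```python
import math

MIN_CHUNK = 100 * 1024        # 100 KB backend floor
MAX_CHUNK = 5 * 1024 * 1024   # 5 MB backend ceiling
MAX_FILE = 50 * 1024 * 1024   # 50 MB total file size limit

def plan_parts(file_size: int, requested_chunk: int = 512 * 1024):
    """Clamp the requested chunk size and compute the number of parts."""
    if file_size > MAX_FILE:
        raise ValueError("File too large")
    chunk = max(MIN_CHUNK, min(requested_chunk, MAX_CHUNK))
    return chunk, math.ceil(file_size / chunk)
```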
Resume Flow
graph TD
A((Resume upload)) --> B[Load state from SQLite by ID]
B --> C{State found?}
C -->|No| D>Return null]
C -->|Yes| E{Status = completed?}
E -->|Yes| F>Return existing state]
E -->|No| G{URLs expired?}
G -->|Yes| H[Status = failed\nError: URLs expired]
G -->|No| I[Continue uploadParts\nSkip already-uploaded parts]
I --> J{All parts done?}
J -->|Yes| K[completeUpload]
J -->|No| L[Wait for retry/connectivity]
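The per-part retry policy (up to 3 attempts with exponential backoff of 2^attempt seconds) can be sketched like this; the callable and injectable sleep are illustrative, not the actual Dart implementation:

```python
import time

def upload_part_with_retry(put_chunk, max_retries: int = 3, sleep=time.sleep):
    """Retry a single part upload with exponential backoff.

    `put_chunk` is any callable that raises on failure and returns an ETag
    on success; `sleep` is injectable for testing.
    """
    for attempt in range(max_retries):
        try:
            return put_chunk()
        except Exception:
            if attempt == max_retries - 1:
                raise  # caller counts this toward consecutive failures
            sleep(2 ** (attempt + 1))  # 2s, then 4s between attempts
```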
Source Files
| File | Purpose |
|---|---|
| mobile/mon_jardin/lib/data/services/chunked_upload_service.dart | Mobile chunked upload orchestrator |
| backend/Almafrica.Infrastructure/Services/MultipartUploadService.cs | Backend S3 multipart management |
| backend/Almafrica.Application/Interfaces/IMultipartUploadService.cs | Abstraction interface |
WF-INTEG-03: Africa's Talking SMS/OTP Integration
Trigger: User requests OTP for phone verification (login 2FA, phone number change)
Frequency: On-demand (each OTP request)
Offline Support: No (requires network)
Cross-references: WF-AUTH-01 (login flow), WF-AUTH-04 (two-factor authentication)
Workflow Diagram
graph TD
A((OTP request)) --> B[OtpService.SendOtpAsync]
B --> C{Phone number\nstarts with +?}
C -->|No| D>Return false\nInvalid format]
C -->|Yes| E[Invalidate existing OTPs\nfor same phone + purpose]
E --> F[[UPDATE otp_codes\nSET is_used = true\nWHERE phone = X AND !used AND !expired]]
F --> G{Development\nmode?}
G -->|Yes| H[Code = 000000\nFixed bypass code]
G -->|No| I[Code = Random 6-digit\nRandom.Next 100000-999999]
H & I --> J[Create OtpCode entity]
J --> K[[INSERT otp_codes\nphone, code, purpose\nexpires_at = now + 10 min\nattempt_count = 0]]
K --> L{Development\nmode?}
L -->|Yes| M[Log OTP to console\nNo SMS sent]
L -->|No| N[TODO: Send via Africa's Talking API\nCurrently logged only]
M & N --> O>Return true]
B -.->|Exception| P>Return false\nLog error]
OTP Verification Flow
graph TD
A((Verify OTP)) --> B[OtpService.VerifyOtpAsync]
B --> C[[SELECT most recent valid OTP\nWHERE phone = X AND purpose = Y\nAND !used AND expires_at > now\nORDER BY created_at DESC]]
C --> D{OTP record\nfound?}
D -->|No| E>Return false\nNo valid OTP]
D -->|Yes| F{Attempt count\n>= 5?}
F -->|Yes| G[Mark as used\n- max attempts exceeded]
G --> H>Return false]
F -->|No| I[Increment attempt_count]
I --> J{Code matches?}
J -->|Yes| K[Mark as used\nLog success]
K --> L>Return true]
J -->|No| M[Log failed attempt\nAttempt X/5]
M --> N>Return false]
Configuration
| Setting | Env Variable | Default | Description |
|---|---|---|---|
| Otp:DevelopmentMode | OTP_DEV_MODE | true (dev) / false (prod) | When true, OTP is logged, not sent |
| Otp:SmsProvider | SMS_PROVIDER | AfricasTalking | SMS provider identifier |
| Otp:ExpiryMinutes | — | 10 | OTP validity window |
| Otp:MaxAttempts | — | 5 | Max verification attempts per OTP |
Current Implementation Status
The OTP service is fully implemented for generation, storage, and verification. The SMS sending integration with Africa's Talking is stubbed with a TODO comment. In development mode, OTPs are logged to the console with the fixed bypass code 000000. In production mode, OTPs are generated but the actual SMS send call is commented out pending provider integration.
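The generation rules above (fixed 000000 bypass code in development, random six-digit code otherwise, 10-minute expiry) can be sketched as follows; field names mirror the otp_codes columns in the diagrams but are illustrative:

```python
import random
from datetime import datetime, timedelta, timezone

def generate_otp(development_mode: bool, expiry_minutes: int = 10) -> dict:
    """Build an OTP record per the rules above (a sketch, not the C# service)."""
    if development_mode:
        code = "000000"  # fixed bypass code, logged instead of sent
    else:
        code = f"{random.randint(100000, 999999)}"  # random 6-digit code
    return {
        "code": code,
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=expiry_minutes),
        "attempt_count": 0,
        "is_used": False,
    }
```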
Cleanup
The CleanupExpiredOtpsAsync method removes OTP records older than 7 days (used or expired). This can be invoked periodically via a background service or scheduled task.
Source Files
| File | Purpose |
|---|---|
| backend/Almafrica.Infrastructure/Services/OtpService.cs | OTP generation, verification, and cleanup |
| backend/Almafrica.Application/Interfaces/IOtpService.cs | Abstraction interface |
WF-INTEG-04: SignalR Real-Time Communication
Trigger: Backend needs to push data to connected mobile/web clients (permission changes, entity updates)
Frequency: On-demand (whenever permissions or data change)
Offline Support: N/A (WebSocket requires active connection; mobile reconnects automatically)
Cross-references: WF-AUTH-05 (permission broadcast), WF-AUTH-06 (permission revocation handling), WF-SYNC-01 (sync triggers)
Hub Architecture
graph TD
A((Backend event:\npermission or data change)) --> B{Event type?}
B -->|Permission change| C[PermissionBroadcastService\n.BroadcastPermissionChangeAsync]
B -->|Data change| D[PermissionBroadcastService\n.BroadcastDataChangedAsync]
C --> E[IHubContext of PermissionHub]
E --> F[Clients.User - userId\n.SendAsync - PermissionsUpdated]
F --> G[PermissionUpdateMessage:\nUserId + Permissions list + Timestamp]
D --> H[Clients.All\n.SendAsync - DataChanged]
H --> I[DataChangedMessage:\nEntityType + EntityId + Action + Timestamp]
G --> J((Mobile: PermissionSyncService\nreceives PermissionsUpdated))
I --> K((All clients:\nreceive DataChanged))
J --> L[Compare with cached permissions]
L --> M{Permissions\nrevoked?}
M -->|Yes| N[Emit PermissionRevocationEvent\nNotify UI - respect unsaved forms]
M -->|No| O[Update local permission cache]
K --> P[Trigger relevant data refresh]
Connection Lifecycle (Mobile)
graph TD
A((App starts or\nuser logs in)) --> B[PermissionSyncService.connect]
B --> C{Already connected\nor connecting?}
C -->|Yes| D[Skip - return]
C -->|No| E[State = connecting]
E --> F[Build HubConnection\nURL: baseUrl/hubs/permissions\naccessTokenFactory: JWT from SecureStorage]
F --> G[Register handler:\non PermissionsUpdated]
G --> H[hubConnection.start]
H --> I{Connected?}
I -->|Yes| J[State = connected\nReset reconnect attempts]
I -->|No| K[State = disconnected\nSchedule reconnect]
J --> L((Listening for\nserver messages))
L --> M{Connection\nlost?}
M -->|Yes| N[State = reconnecting\nExponential backoff:\n1s to 30s max]
N --> O[Reconnect attempt]
O --> I
P((User logs out)) --> Q[Stop hub connection\nState = disconnected\nCancel reconnect timer]
R((Connectivity change\nvia ConnectivityService)) --> S{Now online?}
S -->|Yes| T[Attempt reconnect\nif was disconnected]
S -->|No| U[Let connection timeout\nnaturally]
SignalR Server Configuration
| Setting | Value | Description |
|---|---|---|
| EnableDetailedErrors | true | Detailed error messages in development |
| KeepAliveInterval | 15 seconds | Server ping interval |
| ClientTimeoutInterval | 30 seconds | Time before considering client disconnected |
| Authentication | [Authorize(AuthenticationSchemes = "Bearer")] | JWT required for hub connection |
| Hub path | /hubs/permissions | WebSocket endpoint |
| Health check | GET /health/signalr | Returns { status: "Healthy", service: "SignalR" } |
Message Types
| Method | Direction | Payload | Purpose |
|---|---|---|---|
| PermissionsUpdated | Server to User | { userId, permissions[], timestamp } | Targeted permission update |
| DataChanged | Server to All | { entityType, entityId, action, timestamp } | Broadcast data change notification |
| GetConnectionId | Client invoke | Returns string | Client can query its own connection ID |
Mobile Reconnect Strategy
| Attempt | Delay | Max |
|---|---|---|
| 1 | 1 second | — |
| 2 | 2 seconds | — |
| 3 | 4 seconds | — |
| 4 | 8 seconds | — |
| 5 | 16 seconds | — |
| 6+ | 30 seconds | Capped |
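The schedule above is plain exponential backoff with a 30-second cap, which can be expressed as:

```python
def reconnect_delay(attempt: int, cap: int = 30) -> int:
    """Delay in seconds before reconnect attempt `attempt` (1-based).

    Starts at 1s, doubles each attempt, capped at 30s, matching the table above.
    """
    return min(2 ** (attempt - 1), cap)
```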
Source Files
| File | Purpose |
|---|---|
| backend/Almafrica.API/Hubs/PermissionHub.cs | SignalR hub with JWT auth, connection logging |
| backend/Almafrica.API/Services/PermissionBroadcastService.cs | Server-side broadcast logic |
| backend/Almafrica.Application/Interfaces/IPermissionBroadcastService.cs | Abstraction interface |
| mobile/mon_jardin/lib/data/services/permission_sync_service.dart | Mobile SignalR client |
| backend/Almafrica.API/Extensions/ServiceCollectionExtensions.cs | SignalR DI registration |
| backend/Almafrica.API/Extensions/WebApplicationExtensions.cs | Hub endpoint mapping |
WF-INTEG-05: PostgreSQL + EF Core Data Persistence
Trigger: Any API request that reads or writes data
Frequency: Every API call
Offline Support: N/A (backend requires database connectivity)
Cross-references: All backend workflows depend on this integration
Connection Architecture
graph TD
A((API request)) --> B[EF Core DbContext\nAlmafricaDbContext]
B --> C[NpgsqlDataSourceBuilder]
C --> D{Connection string\nformat?}
D -->|postgres:// URI| E[Convert URI to\nNpgsql format:\nHost;Port;Database;Username;Password]
D -->|Standard Npgsql| F[Use as-is]
E & F --> G[EnableDynamicJson\nfor List to jsonb]
G --> H[Build NpgsqlDataSource]
H --> I[UseNpgsql with options:\nSnakeCase history table]
I --> J[Add AuditSaveChangesInterceptor]
J --> K[[PostgreSQL 16\nAlmafrica Database]]
K --> L[Startup Pipeline]
L --> M{RunMigrations\n= true?}
M -->|Yes| N[MigrateAsync\nwith retry: 10 attempts\n3s delay between]
M -->|No| O[Skip migrations]
N & O --> P[Seed pipeline:\nLanguages > MasterData\n> Crops > QA Questions\n> Checklists > Config]
Connection String Resolution
The backend resolves connection strings in the following priority order:
1. ConnectionStrings:almafricadb (standard .NET config)
2. ConnectionStrings__almafricadb (Coolify double-underscore format)

In either case, postgres:// URIs are converted automatically to Npgsql key/value format (for Coolify-managed PostgreSQL).
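The URI-to-Npgsql conversion can be sketched as below (Python; the real conversion is C#, and this sketch omits query-string options such as sslmode):

```python
from urllib.parse import urlparse, unquote

def to_npgsql(uri: str) -> str:
    """Convert a postgres:// URI into Npgsql key/value form."""
    u = urlparse(uri)
    return (
        f"Host={u.hostname};Port={u.port or 5432};"
        f"Database={u.path.lstrip('/')};"
        f"Username={unquote(u.username or '')};"
        f"Password={unquote(u.password or '')}"
    )
```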
EF Core Configuration
| Setting | Value | Description |
|---|---|---|
| Provider | Npgsql (PostgreSQL) | Primary database |
| Dynamic JSON | Enabled | List<T> serialized as jsonb columns |
| History table | __efmigrations_history | Snake_case naming |
| Interceptor | AuditSaveChangesInterceptor | Immutable audit trail on every save |
| Naming | Snake_case | Custom SnakeCaseHistoryRepository |
Migration Strategy
| Setting | Value | Description |
|---|---|---|
| Auto-migrate on startup | Configurable via RunMigrations | Default: true |
| Retry attempts | 10 | For container startup race conditions |
| Retry delay | 3 seconds | Between attempts |
| Production override | RunMigrations: false | Disabled in production docker-compose |
WF-INTEG-06: Redis Cache (Currently Disabled)
Trigger: N/A (disabled in current deployment)
Frequency: N/A
Cross-references: WF-AUTH-01 (potential session caching), WF-SYNC-01 (potential sync state caching)
Current Status
graph TD
A((Application Startup)) --> B{Redis configured?}
B -->|Disabled| C[Log: Redis disabled\nUsing in-memory caching]
C --> D[AddMemoryCache\nIMemoryCache registered]
D --> E[AddResponseCaching\nHTTP response cache]
B -.->|Future: enabled| F[AddRedisClient\nfrom Aspire config]
F -.-> G[[Redis 7 Alpine\nAppend-only mode]]
Redis is provisioned in Docker Compose (local profile) but the client registration in the application is commented out with the note: "Redis disabled temporarily - will add back once connection is fixed." The application currently uses IMemoryCache (in-process) and HTTP response caching as alternatives.
Docker Compose Provisioning
| Environment | Image | Port | Volume | Profile |
|---|---|---|---|---|
| Local | redis:7-alpine | 6379 | redis_dev_data | local-cache (opt-in) |
| Dev/Staging | Coolify-managed | Injected via REDIS_URL | Managed | Always available |
| Production | Coolify-managed | Injected via REDIS_URL | Managed | Always available |
Intended Architecture (When Re-enabled)
- Session token blacklist (for logout invalidation)
- API response caching for master data endpoints
- Rate limiting counters
- SignalR backplane for multi-instance scaling
WF-INTEG-07: Firebase Cloud Messaging (Push Notifications)
Trigger: Backend event requiring user notification (stock loss approval, order status change, expiry alert)
Frequency: On-demand + scheduled (ExpiryAlertBackgroundService)
Offline Support: N/A (server-side; FCM handles device delivery when device comes online)
Cross-references: WF-NOTIF-01 (push notification workflow), WF-STOCK-05 (stock loss approval notification)
Workflow Diagram
graph TD
A((Notification event:\nstock loss, expiry alert,\norder update)) --> B[PushNotificationService]
B --> C{Firebase\nenabled?}
C -->|No| D[Log: Would send notification\nTitle + Body\nReturn Success - no-op]
C -->|Yes| E{Send to user\nor users?}
E -->|Single user| F[SendToUserAsync]
E -->|Multiple users| G[SendToUsersAsync]
F --> H[[SELECT user\nWHERE id = userId]]
H --> I{User found?}
I -->|No| J>NotFound: User not found]
I -->|Yes| K{FCM device\ntoken set?}
K -->|No| L>Failure: No device token]
K -->|Yes| M[SendToDeviceAsync]
G --> N[[SELECT users\nWHERE id IN list\nAND FcmDeviceToken != null]]
N --> O[Loop: SendToDeviceAsync\nfor each user]
O --> P>Success count / total count]
M --> Q[Build FCM v1 message:\ntoken + notification + data\n+ android channel + apns sound]
Q --> R[POST https://fcm.googleapis.com/v1/\nprojects/PROJECT_ID/messages:send]
R --> S{HTTP success?}
S -->|Yes| T>Success]
S -->|No| U>Failure: FCM status + error body]
R -.->|Exception| V>Failure: exception message]
Device Token Management
graph TD
A((Mobile app registers\nor refreshes FCM token)) --> B[POST /api/notifications/device-token]
B --> C[UpdateDeviceTokenAsync]
C --> D[[UPDATE users\nSET fcm_device_token = token\nfcm_token_updated_at = now\ndevice_platform = platform]]
D --> E>Success]
Configuration
| Setting | Env Variable | Default | Description |
|---|---|---|---|
| Firebase:ProjectId | — | (empty) | GCP project ID |
| Firebase:CredentialsPath | — | (empty) | Path to service account JSON |
| Firebase:Enabled | — | false | Master switch for FCM |
Current Status
Firebase is fully coded but disabled (Enabled: false). When disabled, the service logs what it would send and returns Success (no-op). The FCM message payload includes:
- Android: High priority, custom channel stock_loss_approval, default sound
- iOS (APNs): Default sound, badge count = 1
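The resulting FCM HTTP v1 request body has roughly this shape (a Python sketch assembling the fields named above; the channel id stock_loss_approval comes from this section, and FCM requires data values to be strings):

```python
def build_fcm_message(token: str, title: str, body: str, data: dict) -> dict:
    """Assemble an FCM v1 message body per the options described above."""
    return {
        "message": {
            "token": token,
            "notification": {"title": title, "body": body},
            "data": {k: str(v) for k, v in data.items()},  # FCM data values must be strings
            "android": {
                "priority": "high",
                "notification": {"channel_id": "stock_loss_approval", "sound": "default"},
            },
            "apns": {"payload": {"aps": {"sound": "default", "badge": 1}}},
        }
    }
```

This body is POSTed to https://fcm.googleapis.com/v1/projects/PROJECT_ID/messages:send as shown in the workflow diagram.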
Source Files
| File | Purpose |
|---|---|
| backend/Almafrica.Infrastructure/Services/PushNotificationService.cs | FCM HTTP v1 integration |
| backend/Almafrica.Application/Interfaces/IPushNotificationService.cs | Abstraction interface |
WF-INTEG-08: Mobile Connectivity Detection
Trigger: Device network state change or periodic recheck
Frequency: Continuous monitoring + 60-second offline recheck interval
Offline Support: This IS the offline detection system
Cross-references: WF-SYNC-01 (sync trigger on reconnect), WF-INTEG-02 (chunked upload pause/resume)
Workflow Diagram
graph TD
A((App startup)) --> B[ConnectivityService.initialize]
B --> C[Initial checkConnectivity]
C --> D[Listen to connectivity_plus\nonConnectivityChanged stream]
D --> E((Network state\nchange event))
E --> F{Has wifi/mobile/\nethernet?}
F -->|No| G[isConnected = false\nnetworkType = none]
F -->|Yes| H[Verify actual internet access]
H --> I[Cancel any pending\nDNS operation]
I --> J[DNS lookup race:\n1. API host\n2. google.com\n3. cloudflare.com]
J --> K{Any DNS\nsucceeded?}
K -->|No - timeout 8s| L[isConnected = false]
K -->|Yes| M[Captive portal check]
M --> N[HEAD http://connectivitycheck\n.gstatic.com/generate_204]
N --> O{HTTP 204?}
O -->|Yes| P[isConnected = true]
O -->|No - 405/403| Q[GET fallback\nsame URL]
Q --> R{HTTP 204?}
R -->|Yes| P
R -->|No| S[Try second probe URL:\nclients3.google.com/generate_204]
S --> T{Either probe\nreturns 204?}
T -->|Yes| P
T -->|No| U[isConnected = false\nCaptive portal detected]
O -->|Timeout/Error| S
P --> V[Notify connectionStream\nNotify networkTypeStream]
G & L & U --> W[Start offline recheck timer\n60-second interval]
W --> X((60s timer fires))
X --> C
Network Type Detection
| Priority | ConnectivityResult | NetworkType | Suitable for large uploads |
|---|---|---|---|
| 1 (highest) | wifi | NetworkType.wifi | Yes |
| 2 | ethernet | NetworkType.wifi (treated as wifi) | Yes |
| 3 | mobile | NetworkType.cellular | Caution |
| — | none | NetworkType.none | No |
DNS Probe Hosts
| Priority | Host | Source |
|---|---|---|
| 1 | API server host (parsed from AppConstants.API_BASE_URL) | Dynamic |
| 2 | google.com | Static fallback |
| 3 | cloudflare.com | Static fallback |
Captive Portal Probe URLs
| URL | Method | Expected Response |
|---|---|---|
| http://connectivitycheck.gstatic.com/generate_204 | HEAD (fallback: GET) | HTTP 204 No Content |
| http://clients3.google.com/generate_204 | HEAD (fallback: GET) | HTTP 204 No Content |
Key Implementation Details
- CancelableOperation: DNS lookups use async/CancelableOperation to prevent orphaned futures when a new connectivity check starts before the previous one completes.
- Future.any race: DNS lookups are raced in parallel; the first successful lookup short-circuits the rest.
- Offline recheck timer: When offline, a 60-second periodic timer rechecks connectivity. The timer is cancelled when connectivity is restored.
- Guard against reentrancy: A _isPeriodicCheckRunning flag prevents overlapping periodic checks.
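The probe-response handling in the diagram reduces to a small classification rule (a Python sketch of the decision logic only; the actual service is Dart, and the return labels here are illustrative):

```python
def interpret_probe(status):
    """Classify a generate_204 probe response per the workflow above.

    status is the HTTP status code, or None for a timeout/error.
    """
    if status == 204:
        return "online"          # genuine internet access confirmed
    if status in (405, 403):
        return "retry_get"       # some servers reject HEAD; retry with GET
    if status is None:
        return "try_next_probe"  # timeout/error: fall through to the second probe URL
    return "captive"             # a captive portal rewrote the response
```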
Source Files
| File | Purpose |
|---|---|
| mobile/mon_jardin/lib/data/services/connectivity_service.dart | Full connectivity detection service |
| mobile/mon_jardin/lib/core/enums/network_type.dart | Network type enum |
Service Dependency Matrix
| Service | PostgreSQL | Redis | DO Spaces | Africa's Talking | FCM | SignalR | Internet |
|---|---|---|---|---|---|---|---|
| Auth (login/register) | Required | — | — | OTP only | — | — | Required |
| Farmer Management | Required | — | Photo upload | — | — | Data broadcast | Required (sync) |
| Client Management | Required | — | Photo + docs | — | — | Data broadcast | Required (sync) |
| Quality Assessment | Required | — | Assessment photos | — | — | — | Required (sync) |
| Warehouse/Stock | Required | — | — | — | Loss alerts | Data broadcast | Required (sync) |
| Campaigns/Surveys | Required | — | Survey attachments | — | — | — | Required (sync) |
| Production Cycles | Required | — | — | — | — | — | Required (sync) |
| Permission System | Required | — | — | — | — | Permission push | Required |
| Mobile Offline | — | — | — | — | — | Reconnect trigger | Optional |
| Push Notifications | Required (tokens) | — | — | — | Required | — | Required |
| Chunked Upload | — | — | Required | — | — | — | Required |
Docker/Deployment Architecture
Container Topology
graph TB
subgraph "Coolify PaaS (DigitalOcean Droplet)"
subgraph "almafrica-network (bridge)"
API_C[almafrica-api\n.NET 10 / Port 5010\nNon-root user]
WEB_C[almafrica-web\nNext.js / Port 3000\nNode.js]
end
subgraph "local-db profile (optional)"
PG_C[(postgres:16-alpine\nPort 5432\nVolume: postgres_dev_data)]
REDIS_C[(redis:7-alpine\nPort 6379\nAppend-only mode\nVolume: redis_dev_data)]
end
subgraph "coolify network (external)"
PROXY[Coolify Reverse Proxy\nHTTPS termination]
end
end
subgraph "GitHub Container Registry"
API_IMG[ghcr.io/almafrica/almafrica-api:prod-latest]
WEB_IMG[ghcr.io/almafrica/almafrica-web:prod-latest]
end
API_IMG -->|Pull| API_C
WEB_IMG -->|Pull| WEB_C
PROXY -->|api.almafrica.com:443 -> :5010| API_C
PROXY -->|app.almafrica.com:443 -> :3000| WEB_C
API_C -->|ConnectionStrings__almafricadb| PG_C
API_C -->|ConnectionStrings__Redis| REDIS_C
Environment Matrix
| Environment | API Domain | Web Domain | Database | Compose File | Build |
|---|---|---|---|---|---|
| Local | localhost:8080 | localhost:3000 | Local or Coolify | docker-compose.local.yml | Source build |
| Dev | api-dev.almafrica.com | dev.almafrica.com | Coolify-managed | docker-compose.dev.yml | Source build |
| Staging | api-staging.almafrica.com | staging.almafrica.com | Coolify-managed | docker-compose.staging.yml | Source build |
| Production | api.almafrica.com | app.almafrica.com | Coolify-managed | docker-compose.production.yml | Pre-built GHCR images |
API Container Details
| Property | Value |
|---|---|
| Base image (build) | mcr.microsoft.com/dotnet/sdk:10.0 |
| Base image (runtime) | mcr.microsoft.com/dotnet/aspnet:10.0 |
| Exposed port | 5010 |
| Health check | curl -f http://localhost:5010/health every 30s |
| Health check start period | 120s (allows for migration + seeding) |
| Run as | Non-root ($APP_UID) |
| Restart policy | unless-stopped |
Secret Injection
All secrets are injected as environment variables by Coolify. The docker-compose files reference them via ${VARIABLE} syntax:
| Secret | Variable | Injected By |
|---|---|---|
| Database URL | DATABASE_URL | Coolify managed PostgreSQL |
| Redis URL | REDIS_URL | Coolify managed Redis |
| JWT signing key | JWT_SECRET_KEY | Coolify environment |
| DO Spaces access key | DO_SPACES_ACCESS_KEY | Coolify environment |
| DO Spaces secret key | DO_SPACES_SECRET_KEY | Coolify environment |
| DO Spaces bucket name | DO_SPACES_NAME | Coolify environment |
| SMS provider | SMS_PROVIDER | Coolify environment |
Integration Health Monitoring
The backend exposes three health endpoints:
| Endpoint | Checks | Response |
|---|---|---|
| GET /health | PostgreSQL connectivity via EF Core DbContextCheck | Standard ASP.NET health response |
| GET /health/version | None (informational) | { status, version, environment, buildDate, commitSha } |
| GET /health/signalr | None (informational) | { status: "Healthy", service: "SignalR" } |
Workflow Cross-Reference Index
This table maps each integration workflow to the domain workflows that depend on it:
| Integration Workflow | Dependent Workflows |
|---|---|
| WF-INTEG-01 (DO Spaces regular upload) | WF-FARM-01, WF-FARM-02, WF-CLI-01, WF-CLI-02, WF-QA-01, WF-QA-02, WF-CAMP-03 |
| WF-INTEG-02 (DO Spaces chunked upload) | WF-CLI-02, WF-SYNC-03 |
| WF-INTEG-03 (Africa's Talking OTP) | WF-AUTH-01, WF-AUTH-04 |
| WF-INTEG-04 (SignalR real-time) | WF-AUTH-05, WF-AUTH-06, WF-SYNC-01, WF-STOCK-01, WF-FARM-01 |
| WF-INTEG-05 (PostgreSQL) | All WF-AUTH-*, WF-FARM-*, WF-CLI-*, WF-STOCK-*, WF-PROD-*, WF-CAMP-*, WF-QA-* |
| WF-INTEG-06 (Redis cache) | Currently none (disabled); intended for WF-AUTH-01, WF-SYNC-01 |
| WF-INTEG-07 (FCM push) | WF-NOTIF-01, WF-STOCK-05 |
| WF-INTEG-08 (Connectivity detection) | WF-SYNC-01, WF-INTEG-02, WF-INTEG-04 |