02 - Data Sync Engine
Domain: Offline-First Bidirectional Synchronization
Workflows: WF-SYNC-01 through WF-SYNC-17
Primary Source: mobile/mon_jardin/lib/data/services/sync_manager.dart
Pattern: Kobo-style event-driven sync with exponential backoff
Domain Introduction
The Data Sync Engine is the backbone of the Almafrica mobile application's offline-first architecture. It ensures that field agents operating in areas with intermittent connectivity can capture farmer registrations, client data, quality assessments, production cycles, survey responses, and stock operations without data loss. The engine synchronizes all data bidirectionally with the backend when connectivity is restored.
The sync engine is built around three core principles:
- Never lose data -- All local writes are queued and retried with exponential backoff (unlimited retries, capped at 5-hour intervals)
- Event-driven, not polling -- Sync triggers on connectivity restore, form submission, and foreground heartbeat rather than continuous polling (Kobo pattern)
- Strategy-based orchestration -- Three distinct sync strategies (FullSyncStrategy, InitialSyncStrategy, SubmitPrioritySyncStrategy) control which phases execute, allowing the engine to optimize for different scenarios
Architecture Summary
graph TD
subgraph Triggers
T1((App Startup))
T2((Connectivity Restore))
T3((Form Submit))
T4((Manual Sync))
T5((Foreground Heartbeat))
T6((SignalR DataChanged))
T7((First Login))
end
subgraph Strategy Selection
SS{Select Strategy}
FS[FullSyncStrategy]
IS[InitialSyncStrategy]
SP[SubmitPrioritySyncStrategy]
end
subgraph Sync Manager
SM[SyncManager._performSync]
AP[requestAutoPull]
SQ[requestSubmitSync]
end
subgraph Sync Phases
P0[Step 0: Agent Status Refresh]
P1[Step 1: Master Data Sync<br/>26 entity types]
P15[Step 1.5: Assessment Questions]
P16[Step 1.6: Orphan Cleanup]
P2[Step 2: Push Farmers/Clients]
P25[Step 2.5: Push Production Cycles]
P3[Step 3: Pull Scope-Aware Data]
P4[Step 4: Push Crop Demands/Photos/Docs]
P5[Step 5: Push Assessments]
P6[Step 6: Push Stock Losses]
P7[Step 7: Push Stock Transfers]
P76[Step 7.6: Push Survey Responses]
P77[Step 7.7: Push Survey Files]
P8[Step 8: Retry Failed Items]
end
subgraph Background Services
OQP[OfflineQueueProcessor<br/>15s timer / batch 5]
RES[RetryExecutorService<br/>30s polling]
EBS[ExponentialBackoffService<br/>30s to 5h]
end
T1 --> AP
T2 --> SM
T3 --> SQ
T4 --> SM
T5 --> AP
T6 --> AP
T7 --> SM
SM --> SS
SS --> FS
SS --> IS
SS --> SP
FS --> P0 --> P1 --> P15 --> P16 --> P2 --> P25 --> P3 --> P4 --> P5 --> P6 --> P7 --> P76 --> P77 --> P8
IS --> P1 --> P15
SP --> P2 --> P25 --> P3
SM -.-> OQP
SM -.-> RES
RES -.-> EBS
Strategy Comparison Matrix
| Capability | FullSyncStrategy | InitialSyncStrategy | SubmitPrioritySyncStrategy |
|---|---|---|---|
| Refresh Agent Status | Yes | No | No |
| Sync Master Data | Yes | Yes | No |
| Sync Assessment Questions | Yes | Yes | No |
| Cleanup Orphans | Yes | No | No |
| Bidirectional Sync | Yes | No | Yes |
| Run Retries | Yes | No | No |
| Push Priority | No | No | Yes |
Source: mobile/mon_jardin/lib/data/services/sync/strategies/sync_strategy.dart
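The matrix above maps directly onto a set of boolean capability flags, one per strategy. A minimal sketch in Python (the real classes live in sync_strategy.dart; the run-mode keys and flag names here are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SyncStrategy:
    """Capability flags mirroring the strategy comparison matrix."""
    refresh_agent_status: bool
    sync_master_data: bool
    sync_assessment_questions: bool
    cleanup_orphans: bool
    bidirectional_sync: bool
    run_retries: bool
    push_priority: bool

FULL = SyncStrategy(True, True, True, True, True, True, False)
INITIAL = SyncStrategy(False, True, True, False, False, False, False)
SUBMIT_PRIORITY = SyncStrategy(False, False, False, False, True, False, True)

# Hypothetical mapping from a SyncRunMode value to a strategy instance
STRATEGIES = {
    "full": FULL,
    "initialSync": INITIAL,
    "submitPriority": SUBMIT_PRIORITY,
}
```

The sync manager then only needs to consult the flags of the selected strategy before entering each phase, which is what keeps the phase pipeline linear.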
Master Data Entity Types (26 total)
The master data layer syncs the following reference entities using a "fetch then replace" strategy:
| # | Entity Type | Category |
|---|---|---|
| 1 | provinces | Geography |
| 2 | territories | Geography |
| 3 | villages | Geography |
| 4 | crops | Agriculture |
| 5 | cropCategories | Agriculture |
| 6 | seedSources | Agriculture |
| 7 | fertilizerTypes | Agriculture |
| 8 | pesticideTypes | Agriculture |
| 9 | measurementUnits | Agriculture |
| 10 | irrigationSources | Agriculture |
| 11 | soilTypes | Agriculture |
| 12 | waterAccessTypes | Agriculture |
| 13 | farmTools | Agriculture |
| 14 | terrainTypes | Agriculture |
| 15 | landOwnershipTypes | Agriculture |
| 16 | storageFacilityTypes | Agriculture |
| 17 | transportMeans | Infrastructure |
| 18 | electricitySources | Infrastructure |
| 19 | electricityReliabilities | Infrastructure |
| 20 | incomeSources | Demographics |
| 21 | monthlyIncomeRanges | Demographics |
| 22 | maritalStatuses | Demographics |
| 23 | educationLevels | Demographics |
| 24 | genders | Demographics |
| 25 | contactMethods | Communication |
| 26 | financialServiceTypes | Financial |
Source: mobile/mon_jardin/lib/data/models/master_data_sync_event.dart (enum MasterDataEntityType)
Workflow Catalog
WF-SYNC-01: Sync Manager Initialization
Trigger: App startup (SyncManager.initialize())
Frequency: Once per app session
Offline Support: Yes -- initializes background services that activate when connectivity returns
Workflow Diagram
graph LR
A((App Startup)) --> B[Reset In-Progress<br/>Queue Items]
B --> C[Migrate Production Cycles<br/>SharedPreferences to SQLite]
C --> D[Register Retry Executors<br/>8 entity types]
D --> E[Start RetryExecutorService<br/>30s auto-retry]
E --> F[Initialize OfflineQueueProcessor<br/>15s interval / batch 5]
F --> G[Subscribe to<br/>Connectivity Stream]
G --> H{Currently Online?}
H -->|Yes| I[Schedule Initial<br/>Auto-Pull after 5s]
H -->|No| J[Wait for<br/>Connectivity Event]
I --> K[Start Foreground<br/>Heartbeat Timer 15m]
J --> K
K --> L[Bind SignalR<br/>DataChanged Handler]
L --> M[Mark Initialized]
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Trigger | SyncManager.initialize() | Entry point on app startup | Guard: returns early if already initialized |
| 2 | Function | SyncQueueService.resetInProgressItems() | Recovers stuck items from prior crash | Logs warning, continues on failure |
| 3 | Function | ProductionCycleService.migrateFromSharedPreferences() | One-time migration to SQLite | Logs warning, continues on failure |
| 4 | Function | _initializeRetryExecutors() | Registers executors for stockLoss, stockTransfer, farmer, client, cropDemand, clientDocument, productionCycle, surveyResponse | N/A -- registration only |
| 5 | Function | RetryExecutorService.startAutoRetry() | Starts 30s polling for retryable items | N/A |
| 6 | Function | _initializeQueueProcessor() | Registers 12 processors, initializes with OfflineQueueConfig(processingInterval: 15s, batchSize: 5) | N/A |
| 7 | Listener | ConnectivityService.connectionStream.listen() | Fires on connectivity change; applies 5s stabilization delay before sync | Checks cooldown, verifies still connected after delay |
| 8 | Timer | Timer.periodic(autoPullForegroundInterval) | 15-minute heartbeat for pull-only refresh | Cooldown-gated |
| 9 | Listener | PermissionSyncService.onDataChanged | SignalR real-time push notification triggers immediate pull | Force bypasses cooldown |
Data Transformations
- Input: Raw app startup state with potentially stale queue items
- Processing: Crash recovery (reset in-progress to pending), migration, service registration
- Output: Fully initialized sync engine with all background processors running
Error Handling
- Queue reset failure is logged but does not block initialization
- Production cycle migration failure is logged but does not block initialization
- Connectivity subscription handles rapid on/off toggling via stabilization delay
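The stabilization-delay behavior can be modeled as a small gate. This is an illustrative Python sketch of the logic described above, not the Dart implementation; the 2-minute cooldown window is an assumed value:

```python
from datetime import datetime, timedelta

STABILIZATION_DELAY_S = 5              # wait 5s after a connectivity event
SYNC_COOLDOWN = timedelta(minutes=2)   # assumed cooldown window (illustrative)

class ConnectivityTriggerGate:
    """Decides whether a connectivity-restore event should start a sync."""
    def __init__(self):
        self.last_sync_at = None

    def should_sync(self, still_connected: bool, now: datetime) -> bool:
        # Invoked after the stabilization delay has elapsed
        if not still_connected:
            return False               # connection flapped during the delay
        if self.last_sync_at is not None and now - self.last_sync_at < SYNC_COOLDOWN:
            return False               # cooldown-gated
        self.last_sync_at = now
        return True
```

Re-checking connectivity after the delay is what absorbs rapid on/off toggling: a flap that does not survive the 5-second window never reaches the sync manager.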
Cross-References
- Triggers: WF-SYNC-02, WF-SYNC-04, WF-SYNC-08, WF-SYNC-09, WF-SYNC-17
- Triggered by: App lifecycle (main.dart)
WF-SYNC-02: Full Sync Strategy
Trigger: Manual sync request / connectivity restore with pending local changes
Frequency: On-demand
Offline Support: No -- requires connectivity (skips sync if offline)
Workflow Diagram
graph TD
A((Manual Sync /<br/>Connectivity Restore)) --> B[Acquire _syncLock]
B --> C{Connected?}
C -->|No| D[Emit localDataChanged<br/>and return]
C -->|Yes| E[Set SyncState.syncing]
E --> F[Select FullSyncStrategy]
F --> G[Step 0: Refresh<br/>Agent Status]
G --> H[Step 1: Sync Master Data<br/>via MasterDataSyncOrchestrator]
H --> I[Step 1.5: Sync Assessment<br/>Questions]
I --> J[Step 1.6: Cleanup<br/>Orphaned Pending Records]
J --> K[Resolve PullScope<br/>via PullScopeResolver]
K --> L[Ensure Pull Scope Safety<br/>user-switch detection]
L --> M[Step 2: Push Farmers<br/>then Push Clients]
M --> N[Step 2.5: Push<br/>Production Cycles]
N --> O{Yield to Submit<br/>Priority Sync?}
O -->|Yes| P[Skip Pull Phase]
O -->|No| Q[Step 3: Pull Farmers<br/>Clients / CropDemands /<br/>Production / Orders /<br/>Campaigns / Assessments / Stock]
Q --> R[Step 4: Push Crop Demands<br/>Photos / Documents]
P --> R
R --> S[Step 5: Push Assessments]
S --> T[Step 6: Push Stock Losses<br/>+ Photos]
T --> U[Step 7: Push Stock Transfers]
U --> V[Step 7.6: Push Survey<br/>Responses]
V --> W[Step 7.7: Push Survey Files]
W --> X[Step 8: Execute Ready<br/>Retries]
X --> Y[Update Agent<br/>Last Sync Timestamp]
Y --> Z[Set SyncState.completed]
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Gate | _syncLock.synchronized() | Prevents concurrent sync operations | Queues next sync if already running |
| 2 | Switch | ConnectivityService.isConnected | Connectivity check | Returns early if offline |
| 3 | HTTP | AgentStatusService.checkAgentStatus(forceRefresh: true) | Detects agent deactivation | Logs warning, continues |
| 4 | Function | MasterDataSyncOrchestrator.sync() | Version-checked master data pull | Continues with cached data on failure |
| 5 | Function | AssessmentQuestionsSyncHelper.syncAllQuestions() | Freshness-gated assessment pull | Isolated; failure does not block other phases |
| 6 | Function | SyncService.cleanupOrphanedPendingFarmers/Clients() | Reconciles records already on server | Logs warning, continues |
| 7 | Function | PullScopeResolver.resolve(forceRefresh: true) | Determines role-based data scope | Falls back to PullRole.unknown |
| 8 | Function | SyncCoordinator.pushFarmers() / pushClients() | Uploads pending local records | Per-entity error isolation with analytics |
| 9 | Function | ProductionCycleSyncService.pushPendingCycles() | Pushes pending cycles before the pull phase so local changes are not overwritten | Logs warning, continues |
| 10 | Function | SyncCoordinator.pullFarmersForScope() / pullClientsForScope() | Downloads scoped records from backend | Per-entity error isolation |
| 11 | Function | RetryExecutorService.executeReadyRetries() | Processes items whose backoff has elapsed | Logs warning, continues |
Data Transformations
- Input: Local pending records + stale cached data
- Processing: Push local changes first, then pull fresh server data (push-before-pull pattern)
- Output: Synchronized local database with analytics history entry
Error Handling
- Each sync phase is individually try/caught -- one failure does not block subsequent phases
- _shouldYieldToSubmitPrioritySync() allows a full sync's pull phase to be interrupted if a user submits a form mid-sync
- History tracking records SyncHistoryResult.partialSuccess if any phase has failures
- On critical error, state transitions to SyncState.failed with error details persisted
Cross-References
- Triggers: WF-SYNC-05 (master data), WF-SYNC-06 (farmer sync), WF-SYNC-07 (client sync), WF-SYNC-08 (queue processor paused during), WF-SYNC-09 (retries at Step 8), WF-SYNC-13 (stock pull), WF-SYNC-14 (production cycle), WF-SYNC-15 (survey sync), WF-SYNC-16 (assessment sync)
- Triggered by: WF-SYNC-01 (connectivity listener), manual user action
WF-SYNC-03: Initial Sync Strategy
Trigger: First login (SyncRunMode.initialSync)
Frequency: Once per user per device
Offline Support: No -- requires connectivity for master data download
Workflow Diagram
graph LR
A((First Login)) --> B[Select InitialSyncStrategy]
B --> C[Acquire _syncLock]
C --> D{Connected?}
D -->|No| E[Return - use empty cache]
D -->|Yes| F[Step 1: Sync Master Data<br/>includeClientMasterData: false]
F --> G[Step 1.5: Sync Assessment<br/>Questions]
G --> H[Mark Initial Sync<br/>Complete]
H --> I[Set SyncState.completed]
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Trigger | requestSync(runMode: SyncRunMode.initialSync) | Called after first authentication | N/A |
| 2 | Function | MasterDataSyncOrchestrator.sync(includeClientMasterData: false) | Syncs only core configs (provinces, crops, etc.) to minimize download | Continues with empty cache on failure |
| 3 | Function | AssessmentQuestionsSyncHelper.syncAllQuestions() | Pulls assessment question templates | Isolated from master data |
| 4 | Return | Early return before bidirectional sync | Skips agent status, orphan cleanup, push/pull, retries | N/A |
Data Transformations
- Input: Empty local database after first login
- Processing: Download reference data only (no entity sync)
- Output: Populated dropdown data (provinces, territories, crops, etc.) for offline form filling
Error Handling
- Master data failure leaves user with empty dropdowns; subsequent full sync will populate
- Assessment question failure is non-blocking
Cross-References
- Triggers: WF-SYNC-05 (master data subset)
- Triggered by: Login flow (splash screen routing)
WF-SYNC-04: Submit-Priority Sync Strategy
Trigger: Form submission (farmer registration, client creation, assessment, etc.)
Frequency: On each form submission
Offline Support: Partial -- queues locally if offline, pushes when connectivity returns
Workflow Diagram
graph TD
A((Form Submission)) --> B[requestSubmitSync]
B --> C{Already Draining?}
C -->|Yes| D[Coalesce: set<br/>_submitSyncQueued = true]
C -->|No| E[_drainSubmitSyncQueue]
E --> F[Select SubmitPrioritySyncStrategy]
F --> G[Set _submitSyncUrgent = true]
G --> H[Acquire _syncLock]
H --> I[Step 2: Push Farmers<br/>Clients / Production Cycles]
I --> J[Step 3: Pull Scope-Aware Data<br/>farmers/clients/cropDemands/<br/>production/orders/campaigns/<br/>assessments/stock]
J --> K[Step 4-7.7: Push<br/>CropDemands/Photos/Docs/<br/>Assessments/StockLoss/<br/>Transfers/Surveys/Files]
K --> L{More Queued?}
L -->|Yes| E
L -->|No| M[Schedule Auto-Pull<br/>after 1s delay]
M --> N[Clear _submitSyncUrgent]
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Trigger | requestSubmitSync(source: ...) | Fire-and-forget sync trigger from UI | Coalesces multiple rapid calls |
| 2 | Gate | _submitSyncDrainScheduled | Prevents concurrent drain loops | Coalesces into next loop iteration |
| 3 | Loop | _drainSubmitSyncQueue() | While loop processes all coalesced requests | Finally block resets drain flag |
| 4 | Function | _performSync(runMode: SyncRunMode.submitPriority) | Push-first strategy, skips master data and agent status | Same error handling as WF-SYNC-02 |
| 5 | Timer | requestAutoPull(trigger: SyncTrigger.formSubmit, force: true) | 1-second delayed pull after push completes | Force bypasses cooldown |
Data Transformations
- Input: Newly submitted form data saved to SQLite with sync_status = 'pending'
- Processing: Push pending records immediately, then pull latest data
- Output: Submitted records confirmed by server; fresh pull data cached locally
Error Handling
- Coalescing prevents sync flooding from rapid multiple submissions
- _shouldYieldToSubmitPrioritySync() in full sync and auto-pull contexts yields to this strategy
- Auto-pull is scheduled after the drain completes to catch any server-side changes
Cross-References
- Triggers: WF-SYNC-06, WF-SYNC-07, WF-SYNC-14, WF-SYNC-15, WF-SYNC-17 (auto-pull after drain)
- Triggered by: UI form submission handlers
WF-SYNC-05: Master Data Sync Orchestration
Trigger: Called by SyncManager.syncMasterData() during full or initial sync
Frequency: Per sync cycle (freshness-gated at 15 minutes, or version-gated)
Offline Support: No -- requires connectivity; preserves cached data on failure
Workflow Diagram
graph TD
A((syncMasterData)) --> B{Connected?}
B -->|No| C[Return false]
B -->|Yes| D{Force Sync?}
D -->|Yes| G[Start Sync]
D -->|No| E[Fetch Remote Version<br/>GET /api/masterdata/sync/version]
E --> F{Version Changed?}
F -->|No| C
F -->|Yes| G
G --> H{Include Client<br/>Master Data?}
H -->|Yes| I[Sync Client Master Data<br/>currencies / businessTypes /<br/>paymentMethods / etc.]
H -->|No| J[Skip Client Master Data]
I --> K[Sync Core Master Data<br/>26 entity types]
J --> K
K --> L{Bulk Delta<br/>Endpoint Available?}
L -->|Yes| M[Single Bulk API Call<br/>with modifiedSince]
L -->|No| N[Per-Entity Sync<br/>provinces -> territories -><br/>villages -> crops -> ...]
M --> O[Persist Global Version]
N --> O
O --> P[Update SyncEntityType.masterData<br/>timestamp]
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | HTTP | MasterDataSyncOrchestrator._fetchGlobalVersion() | GET /api/masterdata/sync/version -- checks for version change | Returns null on failure; falls back to freshness check |
| 2 | Switch | Version comparison | Compares localVersion from SharedPreferences to remote | Skips sync if unchanged |
| 3 | Function | ClientMasterDataRepositoryImpl.syncClientMasterData() | Syncs ~10 client-facing reference tables | Records partial failure, continues |
| 4 | Function | MasterDataRepositoryImpl.syncMasterData() -> MasterDataSyncHelper.syncAll() | Syncs ~26 core reference tables | Records partial failures per entity (FR6) |
| 5 | Switch | _tryBulkDeltaSync() | Prefers single bulk endpoint; falls back to per-entity | Transparent fallback |
Data Transformations
- Input: Locally cached reference data (potentially stale or empty)
- Processing: Fetch-then-replace strategy per entity type; delta sync using modifiedSince timestamps
- Output: Fresh reference data in SQLite; global version hash persisted in SharedPreferences
Error Handling
- Empty provinces table forces sync regardless of freshness (prevents stuck dropdowns on first install)
- Per-entity failure isolation (FR6): one entity's failure does not block others
- MasterDataSyncNotifier emits progress events for UI consumption
- Analytics tracking via SyncAnalyticsService
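The decision of whether to run a master data sync combines the force flag, the version check, the freshness window, and the empty-provinces guard. A Python sketch of that gate, with illustrative parameter names and an assumed ordering of the fallbacks:

```python
from datetime import datetime, timedelta
from typing import Optional

FRESHNESS_WINDOW = timedelta(minutes=15)

def should_sync_master_data(*, force: bool,
                            local_version: Optional[str],
                            remote_version: Optional[str],
                            last_synced_at: Optional[datetime],
                            provinces_count: int,
                            now: datetime) -> bool:
    """Decision gate sketched from WF-SYNC-05 (parameter names illustrative)."""
    if provinces_count == 0:
        return True                    # empty reference data: always sync
    if force:
        return True
    if remote_version is not None:
        return remote_version != local_version       # version-gated
    # Version endpoint failed: fall back to the 15-minute freshness gate
    if last_synced_at is None:
        return True
    return now - last_synced_at > FRESHNESS_WINDOW
```

The empty-provinces guard sits first so that a fresh install can never be starved of dropdown data by a stale freshness timestamp.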
Cross-References
- Triggers: N/A (leaf workflow)
- Triggered by: WF-SYNC-02 (Step 1), WF-SYNC-03 (Step 1), WF-SYNC-17 (auto-pull)
WF-SYNC-06: Bidirectional Farmer Sync
Trigger: Called during Step 2 (push) and Step 3 (pull) of full/submit-priority sync
Frequency: Per sync cycle
Offline Support: Yes -- farmers are created locally with sync_status = 'pending'
Workflow Diagram
graph TD
A((Farmer Sync Phase)) --> B[Get Farmer Summary<br/>pending + failed counts]
B --> C{Pending Farmers > 0?}
C -->|No| F[Skip Push]
C -->|Yes| D[SyncCoordinator.pushFarmers<br/>via FarmersSync.pushPendingFarmers]
D --> E[Track Analytics:<br/>entity=farmers, type=push]
E --> F
F --> G{Pull Phase<br/>Active?}
G -->|No| H[End]
G -->|Yes| I[PullScopeResolver<br/>determines scope]
I --> J[SyncCoordinator.pullFarmersForScope<br/>delta sync via modifiedSince]
J --> K[Track Analytics:<br/>entity=farmers, type=pull]
K --> L[Upload Pending<br/>Farmer Photos]
L --> H
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | SyncCoordinator.farmerSummary() | Counts pending + failed farmers | N/A |
| 2 | Function | FarmersSync.pushPendingFarmers() | Pushes each pending farmer with retry priority ordering | Returns SyncResult with success/failure counts |
| 3 | Function | FarmersSync.pullFarmersForScope(scope) | Delta pull using modifiedSince with role-based scope | Returns SyncResult |
| 4 | Function | FarmersSync.uploadPendingPhotos() | Uploads profile and crop photos for synced farmers | Returns map of uploaded/failed counts |
Data Transformations
- Push: Local SQLite farmers table rows with sync_status IN ('pending', 'failed') -> API POST /farmers
- Pull: API GET /farmers?modifiedSince=...&agentId=... -> local SQLite upsert with conflict resolution
- Photos: Local file paths -> cloud storage URLs -> update farmer record
Error Handling
- Push uses getRetryableFarmersByPriority(), which orders by retry eligibility
- Photo upload failure does not block farmer sync
- Analytics tracked per push/pull operation
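The delta pull itself is a small loop: fetch only records modified since the last pull and upsert them locally. A simplified Python sketch (conflict resolution per record, described in WF-SYNC-10, is elided; `fetch` stands in for the scoped GET request):

```python
def pull_farmers_for_scope(fetch, local_db, scope, modified_since):
    """Delta pull: fetch records modified since the last pull, upsert locally."""
    records = fetch(modified_since=modified_since, agent_id=scope.get("agentId"))
    for rec in records:
        local_db[rec["serverId"]] = rec    # upsert keyed by server id
    return len(records)
```

Keying the upsert by the server-assigned id is what lets repeated pulls stay idempotent while modifiedSince keeps each pull incremental.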
Cross-References
- Triggers: N/A (leaf workflow)
- Triggered by: WF-SYNC-02 (Steps 2, 3), WF-SYNC-04, WF-SYNC-12 (login sync)
WF-SYNC-07: Bidirectional Client Sync
Trigger: Called during Step 2 (push) and Step 3 (pull) of full/submit-priority sync
Frequency: Per sync cycle
Offline Support: Yes -- clients are created locally with sync_status = 'pending'
Workflow Diagram
graph TD
A((Client Sync Phase)) --> B[Count Pending Clients]
B --> C{Pending > 0<br/>AND Online?}
C -->|No| F[Skip Push]
C -->|Yes| D[SyncCoordinator.pushClients<br/>via ClientsSync.pushPendingClients]
D --> E[Track Analytics:<br/>entity=clients, type=push]
E --> F
F --> G{Pull Phase<br/>Active?}
G -->|No| H[Continue to Documents]
G -->|Yes| I[SyncCoordinator.pullClientsForScope<br/>delta sync with scope]
I --> J[Track Analytics:<br/>entity=clients, type=pull]
J --> K[Pull Client Crop Demands<br/>for Scope]
K --> H
H --> L[Step 4: Push Pending<br/>Crop Demands]
L --> M[Upload Crop Demand<br/>Photos]
M --> N[Sync Pending Client<br/>Documents]
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | SyncCoordinator.clientsPendingCount() | Counts pending clients | N/A |
| 2 | Function | ClientsSync.pushPendingClients() | Pushes each pending client | Skipped if offline to avoid DNS freezes |
| 3 | Function | ClientsSync.pullClientsForScope(scope) | Delta pull with role-based filtering | Returns SyncResult |
| 4 | Function | ClientsSync.pullClientCropDemandsForScope(scope) | Pulls crop demand assignments | Returns count pulled |
| 5 | Function | CropDemandsSync.syncPendingCropDemands() | Pushes local crop demand drafts | Blocked if client push had failures |
| 6 | Function | CropDemandsSync.uploadPendingPhotos() | Uploads crop demand photos | Returns uploaded/failed counts |
| 7 | Function | ClientsSync.syncPendingDocuments() | Uploads pending client documents | Returns count synced |
Data Transformations
- Push: Local clients table with sync_status = 'pending' -> API
- Pull: API -> local SQLite upsert
- Crop demand push is gated on client push success (FK dependency)
- Document sync uploads file attachments to cloud storage
Error Handling
- Client push is skipped when offline to prevent DNS lookup freezes
- Crop demand push is blocked if client push had any failures (prevents orphaned demands)
- Document sync errors are logged but do not block the sync cycle
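The FK gating between client push and crop demand push amounts to a simple guard. An illustrative Python sketch (function names hypothetical):

```python
def push_clients_then_crop_demands(is_online, push_clients, push_crop_demands):
    """Push clients first; crop demands are blocked if any client push failed,
    since demands reference client records (FK dependency)."""
    if not is_online():
        # Skipped entirely when offline to avoid DNS lookup freezes
        return {"clients": "skipped", "cropDemands": "skipped"}
    client_result = push_clients()           # e.g. {"pushed": 3, "failed": 1}
    if client_result.get("failed", 0) > 0:
        return {"clients": client_result, "cropDemands": "blocked"}
    return {"clients": client_result, "cropDemands": push_crop_demands()}
```

Blocking on any client failure is deliberately conservative: a crop demand whose parent client never reached the server would be orphaned on the backend.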
Cross-References
- Triggers: N/A (leaf workflow)
- Triggered by: WF-SYNC-02 (Steps 2, 3, 4), WF-SYNC-04, WF-SYNC-12 (login sync)
WF-SYNC-08: Offline Queue Processor
Trigger: Connectivity available + 15-second timer tick
Frequency: Every 15 seconds when online (configurable via OfflineQueueConfig.processingInterval)
Offline Support: Yes -- this IS the offline support mechanism; queues items when offline, drains when online
Workflow Diagram
graph TD
A((Timer Tick / <br/>Connectivity Restore)) --> B{Connected?}
B -->|No| C[State: waiting]
B -->|Yes| D{Already Processing?}
D -->|Yes| E[Skip]
D -->|No| F[State: processing]
F --> G[Get Items To Process<br/>limit: 5, priority-ordered]
G --> H{Items Empty?}
H -->|Yes| I[State: idle]
H -->|No| J[For Each Item]
J --> K{Processor<br/>Registered?}
K -->|No| L[Skip Item]
K -->|Yes| M[Mark In-Progress]
M --> N[Execute Processor]
N --> O{Success?}
O -->|Yes| P[Mark Completed<br/>with Server Ack ID]
O -->|No| Q[Mark Failed<br/>with Backoff]
Q --> R[Log Next Retry Time]
P --> S{More Items?}
R --> S
L --> S
S -->|Yes| T{Still Connected?}
T -->|Yes| J
T -->|No| U[Stop Batch]
S -->|No| V[Emit Progress:<br/>batch complete]
U --> V
V --> I
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Timer | Timer.periodic(_config.processingInterval) | 15-second polling when online | Paused during main sync |
| 2 | Function | SyncQueueService.getItemsToProcess(limit: 5) | Fetches batch ordered by priority then FIFO | Returns empty list if nothing ready |
| 3 | Function | _processors[item.entityType] | Entity-specific processor callback | Skips item if no processor registered |
| 4 | Function | SyncQueueService.markInProgress(id) | Sets status to in_progress | N/A |
| 5 | Function | Registered processor | Executes actual sync operation | Throws on failure |
| 6 | Function | SyncQueueService.markCompleted(id, serverAckId) | Marks item synced with server ack | N/A |
| 7 | Function | SyncQueueService.markFailed(id, error) | Increments retry count, calculates next backoff | Uses ExponentialBackoffService |
Data Transformations
- Input: sync_queue SQLite table rows with status = 'pending' and next_retry_at <= now
- Processing: Entity-specific sync via registered processor callbacks
- Output: Items marked completed (with server ack) or failed (with next retry timestamp)
Error Handling
- Connectivity check between each item prevents hanging on lost connection
- Exponential backoff prevents thundering herd on repeated failures
- Processor is paused during the main SyncManager._performSync() to prevent conflicts
- Crash recovery: resetInProgressItems() at startup moves stuck items back to pending
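The batch loop above can be condensed to a few lines. A minimal Python model of the processor (statuses and the between-item connectivity re-check match the diagram; backoff scheduling is elided), ending with a short demo run:

```python
class OfflineQueueSketch:
    """Minimal model of the OfflineQueueProcessor batch loop (batch of 5)."""
    def __init__(self, queue, processors, is_connected, batch_size=5):
        self.queue = queue              # dicts, already priority/FIFO ordered
        self.processors = processors    # entityType -> callable(item) -> ack id
        self.is_connected = is_connected
        self.batch_size = batch_size

    def process_batch(self):
        if not self.is_connected():
            return "waiting"
        batch = [i for i in self.queue if i["status"] == "pending"][:self.batch_size]
        if not batch:
            return "idle"
        for item in batch:
            if not self.is_connected():      # re-check between items
                break                        # stop batch on lost connection
            processor = self.processors.get(item["entityType"])
            if processor is None:
                continue                     # no processor registered: skip
            item["status"] = "in_progress"
            try:
                item["serverAckId"] = processor(item)
                item["status"] = "completed"
            except Exception as exc:
                item["status"] = "failed"    # backoff scheduling elided
                item["error"] = str(exc)
        return "idle"

# Demo: one item with a registered processor, one without
demo_queue = [{"entityType": "farmer", "status": "pending"},
              {"entityType": "unknown", "status": "pending"}]
demo = OfflineQueueSketch(demo_queue, {"farmer": lambda item: "ack-1"}, lambda: True)
demo_state = demo.process_batch()
```

Skipping (rather than failing) items without a registered processor keeps them in the queue so a later release that registers the processor can still drain them.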
Cross-References
- Triggers: Entity-specific sync services (farmer, client, stock loss, etc.)
- Triggered by: WF-SYNC-01 (initialization), connectivity restore
WF-SYNC-09: Exponential Backoff Retry
Trigger: Queue item failure
Frequency: Per failure event
Offline Support: Yes -- backoff timers are persisted to SQLite and survive app restarts
Workflow Diagram
graph LR
A((Item Failure)) --> B[Get Retry Count]
B --> C[Calculate Delay<br/>min 30s * 2^retryCount, 5h]
C --> D[Add Jitter<br/>+/- 10%]
D --> E[Set next_retry_at<br/>= now + delay]
E --> F[Persist to<br/>sync_queue table]
F --> G[RetryExecutorService<br/>polls every 30s]
G --> H{next_retry_at<br/><= now?}
H -->|No| I[Skip - not ready]
H -->|Yes| J[Execute Retry]
J --> K{Success?}
K -->|Yes| L[Mark Completed]
K -->|No| M[Increment retryCount<br/>Recalculate Delay]
M --> F
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | ExponentialBackoffService.calculateDelay(retryCount) | min(30s * 2^retryCount, 5h) | N/A -- pure calculation |
| 2 | Function | ExponentialBackoffService.calculateDelayWithJitter(retryCount) | Adds +/-10% random jitter to prevent thundering herd | N/A |
| 3 | Function | ExponentialBackoffService.isReadyForRetry(nextRetryAt) | Checks if current time exceeds scheduled retry | Returns true if nextRetryAt is null |
| 4 | Timer | RetryExecutorService._retryTimer | 30-second polling interval | Prevents concurrent cycles via _isExecuting flag |
Data Transformations
- Backoff schedule: 30s -> 1m -> 2m -> 4m -> 8m -> 16m -> 32m -> 64m -> 128m -> 256m -> 5h (capped)
- Jitter range: +/-10% of calculated delay
- Unlimited retries (SyncConfig.maxRetryAttempts = -1) -- never gives up on data
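The backoff formula and jitter are easy to state precisely. A Python sketch matching min(30s * 2^retryCount, 5h) with +/-10% jitter (the Dart service is ExponentialBackoffService; this is an illustrative port):

```python
import random

BASE_DELAY_S = 30            # first retry after 30 seconds
MAX_DELAY_S = 5 * 60 * 60    # capped at 5 hours
JITTER_FRACTION = 0.10       # +/-10%

def calculate_delay(retry_count: int) -> int:
    """Pure exponential backoff: min(30s * 2^retryCount, 5h)."""
    return min(BASE_DELAY_S * (2 ** retry_count), MAX_DELAY_S)

def calculate_delay_with_jitter(retry_count: int, rng: random.Random) -> float:
    """Adds +/-10% random jitter to spread retries across devices."""
    delay = calculate_delay(retry_count)
    return delay + delay * JITTER_FRACTION * (2 * rng.random() - 1)
```

Without jitter, every device that went offline during the same outage would schedule its next retry at the same instant; the +/-10% spread breaks that synchronization.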
Error Handling
- Jitter prevents all devices retrying simultaneously after outage
- RetryExecutorService is stopped during the main sync to prevent conflicts
- Each entity type has its own registered executor callback
Cross-References
- Triggers: N/A (leaf workflow)
- Triggered by: WF-SYNC-08 (queue processor failures), WF-SYNC-02 (Step 8 retry phase)
WF-SYNC-10: Conflict Resolution
Trigger: Pull operation finds a local record that was also modified on the server
Frequency: Per conflicting record during pull
Offline Support: N/A -- conflict resolution only occurs during active sync
Workflow Diagram
graph TD
A((Pull Finds<br/>Existing Record)) --> B[Detect Conflicting Fields<br/>skip metadata fields]
B --> C{Any Conflicts?}
C -->|No| D[Use Server Data<br/>no-conflict path]
C -->|Yes| E{Resolution<br/>Strategy?}
E -->|serverWins| F[Server Data Wins]
E -->|localWins| G[Local Data Wins]
E -->|latestTimestamp| H{Compare<br/>Timestamps}
H -->|Server Newer| F
H -->|Local Newer| G
H -->|Both Null| F
F --> I[Return ConflictResolutionResult<br/>winner=server]
G --> J[Return ConflictResolutionResult<br/>winner=local]
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | ConflictResolver.detectConflictingFields() | Compares all non-metadata fields between local and server | Skips: id, serverId, createdAt, updatedAt, syncStatus, syncError, etc. |
| 2 | Switch | ConflictResolutionStrategy enum | Routes to serverWins / localWins / latestTimestamp | Default: serverWins |
| 3 | Function | ConflictResolver._resolveByTimestamp() | Compares updatedAt timestamps | Falls back to server-wins if timestamps unavailable |
| 4 | Function | ConflictResolver.isServerNewer() | Convenience method for simple timestamp check | Assumes server authoritative if no timestamps |
Data Transformations
- Input: Two maps (localData, serverData) with optional timestamps
- Processing: Field-by-field diff excluding metadata; strategy-based resolution
- Output: ConflictResolutionResult<Map<String, dynamic>> with winning data, strategy used, winner source, and conflicting field list
Error Handling
- Default strategy is serverWins -- safest for data integrity
- Handles type mismatches via string comparison fallback
- Handles nested maps and lists recursively
- When both timestamps are null, defaults to server-wins
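The resolution logic can be sketched compactly. A Python model of the three strategies (the metadata field list is abbreviated; the real ConflictResolver also handles nested maps and type-mismatch fallbacks):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Any, Dict, List, Optional

# Sync metadata excluded from conflict detection (abbreviated field list)
METADATA_FIELDS = {"id", "serverId", "createdAt", "updatedAt",
                   "syncStatus", "syncError"}

@dataclass
class ConflictResolutionResult:
    winner: str                      # "server" or "local"
    data: Dict[str, Any]
    conflicting_fields: List[str]

def detect_conflicting_fields(local, server):
    """Field-by-field diff of the two maps, skipping metadata."""
    keys = (set(local) | set(server)) - METADATA_FIELDS
    return sorted(k for k in keys if local.get(k) != server.get(k))

def resolve(local, server, local_ts: Optional[datetime],
            server_ts: Optional[datetime],
            strategy: str = "latestTimestamp") -> ConflictResolutionResult:
    conflicts = detect_conflicting_fields(local, server)
    if not conflicts or strategy == "serverWins":
        return ConflictResolutionResult("server", server, conflicts)
    if strategy == "localWins":
        return ConflictResolutionResult("local", local, conflicts)
    # latestTimestamp: newer side wins; server wins when timestamps are missing
    if local_ts is not None and (server_ts is None or local_ts > server_ts):
        return ConflictResolutionResult("local", local, conflicts)
    return ConflictResolutionResult("server", server, conflicts)
```

Note the asymmetry: when in doubt (no conflicts detected, unknown strategy inputs, or missing timestamps) the sketch falls through to server-wins, mirroring the "safest for data integrity" default above.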
Cross-References
- Triggers: N/A (leaf workflow)
- Triggered by: WF-SYNC-06 (farmer pull), WF-SYNC-07 (client pull)
WF-SYNC-11: Pull Scope Resolution
Trigger: Before any pull operation during sync
Frequency: Per sync cycle (cached for duration of cycle)
Offline Support: Uses cached identity from AgentContext
Workflow Diagram
graph TD
A((Resolve Pull Scope)) --> B[Get Identity<br/>from AgentContext]
B --> C[Normalize Roles<br/>lowercase, strip symbols]
C --> D[Resolve Agent ID<br/>identity -> cached fallback]
D --> E[Get Active Dashboard Role<br/>from ActiveRoleManager]
E --> F{Explicit Active<br/>Role Set?}
F -->|Admin Role| G[PullRole.admin<br/>full data access]
F -->|Agent Role| H{Has Agent ID?}
H -->|Yes| I[PullRole.agent<br/>scoped to agentId]
H -->|No| K[Capability Fallback]
F -->|Warehouse Role| J[PullRole.warehouse<br/>scoped to centerId]
F -->|No| K
K --> L{Is Admin?}
L -->|Yes| G
L -->|No| M{Is Warehouse?}
M -->|Yes| N{Has Center ID?}
N -->|Yes| J
N -->|No| O{Also Agent?}
O -->|Yes| I
O -->|No| J
M -->|No| P{Is Agent?}
P -->|Yes| I
P -->|No| Q[PullRole.unknown]
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | AgentContext.getIdentity(forceRefresh) | Resolves current user identity with roles | Cached from last successful fetch |
| 2 | Function | _normalizeRole(role) | toLowerCase().replaceAll(RegExp(r'[^a-z0-9]'), '') | Handles all casing and symbol variations |
| 3 | Function | ActiveRoleManager.getPersistedActiveRole() | Gets explicitly selected dashboard role | Returns null if not set (multi-role hub) |
| 4 | Function | _resolveAgentId() | Identity agentId -> cached getStoredAgentId() | Returns null if both unavailable |
| 5 | Function | _resolveWarehouseScope() | Fetches assignedCollectionCenterId from auth | Returns scope even if centerId is null |
Data Transformations
- Input: JWT roles, agent identity, active dashboard selection
- Processing: Role normalization and priority-based resolution
- Output: PullScope with role, roles, agentId, assignedCenterId, and signature
Error Handling
- Multi-role users without explicit active role follow fallback chain: admin > warehouse > agent > unknown
- Warehouse without center assignment falls back to agent scope if user also has agent role
- PullScope.signature enables scope change detection between sessions
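The normalization and fallback chain can be sketched as follows, in Python (role strings and helper names are illustrative; the Dart resolver is PullScopeResolver):

```python
import re
from typing import List, Optional

def normalize_role(role: str) -> str:
    """Lowercase and strip non-alphanumerics: 'Warehouse-Manager' -> 'warehousemanager'."""
    return re.sub(r"[^a-z0-9]", "", role.lower())

def resolve_pull_role(roles: List[str], active_role: Optional[str],
                      agent_id: Optional[str], center_id: Optional[str]) -> str:
    """Priority resolution: explicit dashboard role, then capability fallback."""
    norm = {normalize_role(r) for r in roles}
    if active_role:                          # explicit dashboard selection wins
        a = normalize_role(active_role)
        if a == "admin":
            return "admin"
        if a == "agent" and agent_id:
            return "agent"
        if a == "warehouse":
            return "warehouse"
        # agent role without an agent id falls through to the chain below
    # capability fallback chain: admin > warehouse > agent > unknown
    if "admin" in norm:
        return "admin"
    if "warehouse" in norm:
        if center_id:
            return "warehouse"
        return "agent" if "agent" in norm else "warehouse"
    if "agent" in norm:
        return "agent"
    return "unknown"
```

The warehouse-without-center branch matches the diagram: a multi-role user missing a center assignment degrades to agent scope only when an agent role is actually present.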
Cross-References
- Triggers: N/A (leaf workflow)
- Triggered by: WF-SYNC-02 (Step 3), WF-SYNC-04, WF-SYNC-17 (auto-pull)
WF-SYNC-12: Login Sync / Post-Login Data Fetch
Trigger: Successful authentication
Frequency: Once per login
Offline Support: Partial -- detects fresh install and forces full pull; times out gracefully
Workflow Diagram
graph TD
A((Successful Login)) --> B{Login Sync<br/>Enabled?}
B -->|No| C[Return success]
B -->|Yes| D[Resolve Agent ID]
D --> E{Fresh Install?<br/>0 local farmers}
E -->|Yes| F[Clear Sync Metadata<br/>force full pull]
E -->|No| G[Keep Metadata<br/>delta sync]
F --> H[Start LoginSyncNotifier]
G --> H
H --> I[Race: Sync vs<br/>3-minute Timeout]
I --> J{Timeout?}
J -->|Yes| K[Return timedOut<br/>use cached data]
J -->|No| L[Execute Sync Sequence]
L --> M[Phase 1-3: Push in Parallel<br/>farmers + clients + cropDemands]
M --> N[Phase 4-5: Pull in Parallel<br/>farmers + clients]
N --> O[Phase 6: Cleanup<br/>Duplicate Clients]
O --> P[Return LoginSyncResult]
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Switch | SyncConfig.loginSyncEnabled | Feature toggle for login sync | Returns success if disabled |
| 2 | Function | FarmerLocalService.getAllFarmers() | Detects fresh install (0 farmers) | On error, clears all metadata for safety |
| 3 | Function | SyncMetadataService.clearSyncMetadata() | Forces full pull by removing delta timestamps | N/A |
| 4 | Parallel | Future.wait([pushFarmers, pushClients, pushCropDemands]) | Pushes all pending data in parallel | Per-phase error isolation |
| 5 | Parallel | Future.wait([pullFarmers, pullClients]) | Pulls fresh data in parallel | Per-phase error isolation |
| 6 | Function | ClientDraftService.cleanupDuplicateClients() | Removes duplicates from past race conditions | Logs warning on error |
| 7 | Race | Future.any([syncSequence, timeout]) | 3-minute overall timeout (SyncConfig.loginSyncTimeout) | Returns partial results on timeout |
Data Transformations
- Input: Authenticated user with potential pending local data
- Processing: Parallel push-then-pull with timeout boundary
- Output: LoginSyncResult with per-phase summaries, error list, and timeout indicator
Error Handling
- 3-minute timeout prevents indefinite login blocking
- Fresh install detection ensures first-time users get all data
- Each phase (push farmers, push clients, pull farmers, pull clients) is independently try/caught
- LoginSyncNotifier provides real-time progress for login UI
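The timeout race described above (Dart's Future.any against a 3-minute timer) can be sketched in TypeScript. This is a minimal illustration, not the actual login_sync_service.dart code; runLoginSync and the phase callbacks are invented names, and only the 3-minute constant and phase ordering come from the doc.

```typescript
interface LoginSyncResult { timedOut: boolean; errors: string[] }

const LOGIN_SYNC_TIMEOUT_MS = 3 * 60 * 1000; // SyncConfig.loginSyncTimeout

async function runLoginSync(
  phases: { push: Array<() => Promise<void>>; pull: Array<() => Promise<void>> },
  timeoutMs = LOGIN_SYNC_TIMEOUT_MS,
): Promise<LoginSyncResult> {
  const errors: string[] = [];
  // Per-phase error isolation: one failed phase never aborts the others.
  const guard = (p: Promise<void>, name: string) =>
    p.catch((e) => { errors.push(`${name}: ${e}`); });
  const sequence = (async () => {
    // Phases 1-3: push farmers + clients + crop demands in parallel.
    await Promise.all(phases.push.map((f, i) => guard(f(), `push[${i}]`)));
    // Phases 4-5: pull farmers + clients in parallel.
    await Promise.all(phases.pull.map((f, i) => guard(f(), `pull[${i}]`)));
    return { timedOut: false, errors };
  })();
  const timeout = new Promise<LoginSyncResult>((resolve) =>
    setTimeout(() => resolve({ timedOut: true, errors }), timeoutMs));
  // Whichever settles first wins; on timeout the UI proceeds with cached data.
  return Promise.race([sequence, timeout]);
}
```

Because the timeout resolves (rather than rejects) with the errors gathered so far, the caller always gets partial results instead of an exception.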
Cross-References
- Triggers: WF-SYNC-06 (farmer push/pull), WF-SYNC-07 (client push/pull)
- Triggered by: Authentication flow (jwt_auth_service.dart)
WF-SYNC-13: Stock Sync
Trigger: Pull phase of full sync or auto-pull
Frequency: Per sync cycle (when stock pull is not yielded to submit-priority)
Offline Support: Yes -- stock data is cached in SQLite for offline viewing
Workflow Diagram
graph TD
A((Stock Sync Phase)) --> B[Phase 1: Sync Centers<br/>GET /stock/centers]
B --> C[Cache center summaries<br/>to center_stock_cache]
C --> D[Phase 2: Sync Aggregation<br/>categories and crop types]
D --> E[Cache aggregation data<br/>to stock_aggregation_cache]
E --> F[Phase 3: Sync Batches<br/>individual batch details]
F --> G[Cache batch data<br/>to stock_batch_cache]
G --> H[Update SyncEntityType.stock<br/>timestamp]
H --> I[Emit StockSyncProgress<br/>phase: completed]
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | HTTP | StockRepository.getAllCentersStock() | Fetches center-level stock summaries | Returns cached data on failure |
| 2 | Database | center_stock_cache table | Caches center stock for offline | SQLite transaction |
| 3 | HTTP | Stock aggregation endpoints | Hierarchical: categories -> crop types | Per-center error isolation |
| 4 | Database | stock_aggregation_cache table | Caches aggregation data | SQLite transaction |
| 5 | HTTP | Batch detail endpoints | Individual batch records | Per-batch error isolation |
| 6 | Database | stock_batch_cache table | Caches batch details | SQLite transaction |
Data Transformations
- Input: Server stock data (centers, aggregation, batches)
- Processing: Three-phase progressive caching (center -> aggregation -> batches)
- Output: Locally cached stock data for offline warehouse operations
Error Handling
- Phase-based progress tracking via StockSyncProgress stream
- Each phase can fail independently
- StockSyncPhase enum tracks: idle -> syncingCenters -> syncingAggregation -> syncingBatches -> completed/failed
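The three-phase progression and its independent failure handling can be sketched like this. The phase names mirror the StockSyncPhase enum above; the fetcher callbacks, cache map, and syncStock function are illustrative stand-ins for the real repository calls and SQLite cache tables.

```typescript
enum StockSyncPhase {
  idle, syncingCenters, syncingAggregation, syncingBatches, completed, failed,
}

async function syncStock(
  fetchers: {
    centers: () => Promise<unknown[]>;
    aggregation: () => Promise<unknown[]>;
    batches: () => Promise<unknown[]>;
  },
  cache: Map<string, unknown[]>,           // stands in for the three cache tables
  onProgress: (p: StockSyncPhase) => void, // stands in for the progress stream
): Promise<StockSyncPhase> {
  const phases: Array<[StockSyncPhase, keyof typeof fetchers]> = [
    [StockSyncPhase.syncingCenters, "centers"],
    [StockSyncPhase.syncingAggregation, "aggregation"],
    [StockSyncPhase.syncingBatches, "batches"],
  ];
  let failed = false;
  for (const [phase, key] of phases) {
    onProgress(phase);
    try {
      cache.set(key, await fetchers[key]()); // progressive caching per phase
    } catch {
      failed = true; // each phase can fail independently; keep going
    }
  }
  const final = failed ? StockSyncPhase.failed : StockSyncPhase.completed;
  onProgress(final);
  return final;
}
```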
Cross-References
- Triggers: N/A (leaf workflow)
- Triggered by: WF-SYNC-02 (Step 3 pull phase), WF-SYNC-17 (auto-pull)
WF-SYNC-14: Production Cycle Sync
Trigger: Step 2.5 (push) and Step 3 (pull) of full sync
Frequency: Per sync cycle
Offline Support: Yes -- production cycles are created locally with sync_status = 'pending'
Workflow Diagram
graph TD
A((Production Cycle<br/>Sync Phase)) --> B[Push Phase: Step 2.5]
B --> C[ProductionCycleSyncService<br/>.pushPendingCycles]
C --> D[For Each Pending Cycle]
D --> E[Normalize Status<br/>aliases to canonical values]
E --> F[Upload Photo<br/>if present]
F --> G[POST to API<br/>with inputs + harvests]
G --> H{Success?}
H -->|Yes| I[Mark synced locally]
H -->|No| J[Mark failed,<br/>increment retry]
I --> K[Pull Phase: Step 3]
J --> K
K --> L[SyncService.pullProductionCycles<br/>FromBackend scope]
L --> M[Delta pull using<br/>modifiedSince + scope]
M --> N[Upsert to local<br/>production_cycles table]
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | ProductionCycleSyncService.pushPendingCycles() | Pushes locally created cycles | Returns ProductionCycleSyncResult |
| 2 | Function | Status normalization | Maps aliases (planned/planifie/planifie to Planned, etc.) | Extensive alias table for FR/EN variants |
| 3 | HTTP | ChunkedUploadService / UploadService | Photo upload for production evidence | Skipped if no photo |
| 4 | HTTP | Pull endpoint with scope | Delta sync with role-based filtering | Returns count pulled |
Data Transformations
- Push: Local production_cycles + production_inputs + production_harvests -> API
- Pull: API -> local SQLite upsert (preserves unsynced local records)
- Status normalization handles 30+ aliases across EN/FR and enum formats
Error Handling
- Push before pull to prevent confusion with locally-pending cycles
- GUID validation on cycle IDs
- Unsynced production work is preserved across user/scope transitions
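The alias-to-canonical mapping described above can be sketched as follows. The alias list here is a small illustrative subset (the real table covers 30+ FR/EN and enum-format variants), and normalizeCycleStatus is an invented name for the normalization step in production_cycle_sync.dart.

```typescript
// Canonical values keyed by normalized alias (subset for illustration).
const STATUS_ALIASES: Record<string, string> = {
  planned: "Planned", planifie: "Planned", planifiee: "Planned",
  inprogress: "InProgress", encours: "InProgress",
  completed: "Completed", termine: "Completed", terminee: "Completed",
};

function normalizeCycleStatus(raw: string): string {
  // Strip accents, case, and separators so "Planifié", "EN COURS", and
  // "in_progress" all hit the alias table with the same key.
  const key = raw
    .normalize("NFD").replace(/[\u0300-\u036f]/g, "")
    .toLowerCase().replace(/[^a-z0-9]/g, "");
  return STATUS_ALIASES[key] ?? raw; // unknown values pass through unchanged
}
```

Normalizing before lookup keeps the alias table small: one entry covers every casing, accent, and separator variant of a status.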
Cross-References
- Triggers: N/A (leaf workflow)
- Triggered by: WF-SYNC-02 (Steps 2.5, 3), WF-SYNC-04
WF-SYNC-15: Survey Response & File Sync
Trigger: Steps 7.6 and 7.7 of full sync, or via queue processor
Frequency: Per sync cycle
Offline Support: Yes -- responses saved locally, files queued for upload
Workflow Diagram
graph TD
A((Survey Sync Phase)) --> B[Step 7.6: Push Responses]
B --> C[Get Pending Responses<br/>from campaign_local_datasource]
C --> D{Any Pending?}
D -->|No| E[Skip to Files]
D -->|Yes| F[Batch Submit<br/>POST /api/surveys/responses/batch]
F --> G[For Each Result]
G --> H{Item Success?}
H -->|Yes| I[Mark Response Synced]
H -->|No| J[Mark Response Failed]
I --> E
J --> E
E --> K[Step 7.7: Push Files]
K --> L[Query survey_file_queue<br/>status = pending]
L --> M{Any Pending Files?}
M -->|No| N[End]
M -->|Yes| O[For Each File]
O --> P[Upload to Cloud Storage<br/>folder: survey-files]
P --> Q{Upload Success?}
Q -->|Yes| R[Mark File Uploaded<br/>update response answer URL]
Q -->|No| S[Log failure]
R --> N
S --> N
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | SurveyResponseSyncService.pushPendingResponses() | Batch-submits all pending responses in single API call | Returns SurveyResponseSyncResult |
| 2 | HTTP | CampaignRemoteDataSource.batchSubmitResponses() | POST /api/surveys/responses/batch | Per-response status in batch result |
| 3 | Function | SurveyFileSyncService.pushPendingFiles() | Uploads pending file attachments | Returns SurveyFileSyncResult |
| 4 | HTTP | CampaignRemoteDataSource.uploadFile() | Uploads file to cloud storage | Returns remote URL on success |
| 5 | Function | _updateResponseFileAnswer() | Replaces local path with remote URL in response data | N/A |
Data Transformations
- Responses: Local survey_responses table -> batch API payload -> server; local records marked synced
- Files: Local file path -> cloud upload -> remote URL; response answer field updated with URL
- Files MUST be uploaded AFTER responses (files reference server-side response IDs)
Error Handling
- Batch submission provides per-item success/failure status
- File upload failures do not affect response sync status
- Queue item processing (syncQueueItem) supports both create and update operations
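The responses-then-files ordering can be sketched as below. All names here (pushSurveyData, submitBatch, uploadFile) are illustrative stand-ins for the services named in the table, assuming a batch result keyed by response ID.

```typescript
interface PendingResponse { id: string; synced: boolean }
interface PendingFile { responseId: string; localPath: string; url?: string }

async function pushSurveyData(
  responses: PendingResponse[],
  files: PendingFile[],
  api: {
    submitBatch: (ids: string[]) => Promise<Record<string, boolean>>;
    uploadFile: (path: string) => Promise<string>; // returns remote URL
  },
): Promise<void> {
  // Step 7.6: one batch call; per-item status decides synced vs failed.
  if (responses.length > 0) {
    const result = await api.submitBatch(responses.map((r) => r.id));
    for (const r of responses) r.synced = result[r.id] === true;
  }
  // Step 7.7: files go AFTER responses -- they reference server-side IDs.
  for (const f of files) {
    try {
      f.url = await api.uploadFile(f.localPath); // replaces the local path
    } catch {
      // A failed upload is logged and retried later; it does not flip the
      // response's synced status.
    }
  }
}
```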
Cross-References
- Triggers: N/A (leaf workflow)
- Triggered by: WF-SYNC-02 (Steps 7.6, 7.7), WF-SYNC-08 (queue processor for surveyResponse entity type)
WF-SYNC-16: Assessment Sync
Trigger: Step 1.5 (question pull) and Step 5 (assessment push) of full sync
Frequency: Per sync cycle (questions freshness-gated at 24 hours)
Offline Support: Yes -- questions cached for offline use; assessments queued locally
Workflow Diagram
graph TD
A((Assessment Sync)) --> B[Step 1.5: Pull Questions]
B --> C{Cache Fresh?<br/>< 24h since last sync}
C -->|Yes| D[Skip Pull]
C -->|No| E[Ensure Tables Exist]
E --> F[Get Last Sync Time]
F --> G{First Sync?}
G -->|Yes| H[Full Pull: GET all questions<br/>clear + replace cache]
G -->|No| I[Delta Pull: GET questions<br/>modifiedSince=lastSync]
H --> J[Save Questions Locally]
I --> J
J --> K[Update sync_metadata<br/>assessment_questions timestamp]
K --> L[End Pull Phase]
L --> M[Step 5: Push Assessments]
M --> N[AssessmentSyncIntegration<br/>.syncPendingAssessments]
N --> O[For Each Pending]
O --> P[POST to API]
P --> Q{Success?}
Q -->|Yes| R[Mark Synced]
Q -->|No| S[Mark Failed]
R --> T[Update SyncEntityType<br/>.assessments timestamp]
S --> T
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Switch | SyncConfig.isAssessmentQuestionsFresh(lastSync) | 24-hour freshness check | Forces sync if forceRefresh |
| 2 | HTTP | AssessmentRemoteDataSource.getAllQuestions(modifiedSince) | Bulk question fetch with optional delta | N/A |
| 3 | Database | AssessmentLocalDataSource.saveQuestions() | Upsert for delta, clear+save for full | Ensures tables exist first |
| 4 | Function | AssessmentSyncIntegration.syncPendingAssessments() | Pushes completed quality assessments | Returns success/failure counts |
Data Transformations
- Question pull: API -> local cache with delta sync support
- Assessment push: Local pending assessments -> API POST
- Questions are used offline for quality assessment forms
Error Handling
- Question sync failures are fully isolated -- do not block farmer/client sync
- ensureTablesExist() handles first-run or migration scenarios
- Pending assessment push also runs during auto-pull to upload captures from offline sessions
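The freshness gate and full-vs-delta decision above can be sketched like this. The 24-hour constant comes from the doc; pullQuestions and the fetch callback are invented names standing in for SyncConfig.isAssessmentQuestionsFresh and the remote data source.

```typescript
const FRESHNESS_MS = 24 * 60 * 60 * 1000; // 24-hour freshness window

async function pullQuestions(
  lastSync: Date | null,
  now: Date,
  forceRefresh: boolean,
  fetch: (modifiedSince: Date | null) => Promise<string[]>,
): Promise<{ skipped: boolean; fullPull: boolean; questions: string[] }> {
  const fresh =
    lastSync !== null && now.getTime() - lastSync.getTime() < FRESHNESS_MS;
  if (fresh && !forceRefresh) {
    return { skipped: true, fullPull: false, questions: [] }; // cache is fresh
  }
  // First sync (no timestamp) => full pull with clear+replace; otherwise a
  // delta pull of questions modified since the last sync.
  const fullPull = lastSync === null;
  const questions = await fetch(fullPull ? null : lastSync);
  return { skipped: false, fullPull, questions };
}
```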
Cross-References
- Triggers: N/A (leaf workflow)
- Triggered by: WF-SYNC-02 (Steps 1.5, 5), WF-SYNC-03 (Step 1.5), WF-SYNC-17 (auto-pull)
WF-SYNC-17: Auto-Pull Heartbeat
Trigger: Connectivity restore, app startup, foreground heartbeat timer, SignalR DataChanged, app resume
Frequency: Cooldown-gated (30s for connectivity, 45s for resume, 15m for heartbeat)
Offline Support: N/A -- only runs when online
Workflow Diagram
graph TD
A((Auto-Pull Trigger)) --> B{Initialized?}
B -->|No| C[Skip]
B -->|Yes| D{Submit Sync<br/>Pending?}
D -->|Yes| C
D -->|No| E{Already<br/>Auto-Pulling?}
E -->|Yes| C
E -->|No| F{On Cooldown?<br/>per-trigger tracking}
F -->|Yes| C
F -->|No| G[Acquire _syncLock]
G --> H{Full Sync<br/>Running?}
H -->|Yes| C
H -->|No| I[Check Connectivity]
I --> J{Connected?}
J -->|No| C
J -->|Yes| K[Set _isAutoPulling = true]
K --> L[Resolve PullScope]
L --> M[Ensure Pull Scope Safety]
M --> N{Yield to<br/>Submit Sync?}
N -->|Yes| O[Stop Early]
N -->|No| P[Push Pending Assessments]
P --> Q[Pull Master Data<br/>freshness-gated]
Q --> R[Pull Assessment Questions<br/>freshness-gated]
R --> S[Pull Farmers for Scope]
S --> T[Pull Clients for Scope]
T --> U[Pull Crop Demands]
U --> V[Pull Production Cycles]
V --> W[Pull Orders]
W --> X[Pull Campaigns]
X --> Y[Pull Assessments]
Y --> Z[Pull Stock Data]
Z --> AA[Update Agent Last Sync]
AA --> AB{Failures?}
AB -->|0| AC[Reset failure counter]
AB -->|>0| AD[Increment<br/>_consecutiveAutoPullFailures]
AD --> AE{>= 3 consecutive?}
AE -->|Yes| AF[Emit UX Warning]
AE -->|No| AG[End]
AC --> AG
AF --> AG
O --> AG
Node Descriptions
| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Gate | Cooldown check via _lastAutoPullByTrigger map | Per-trigger cooldown enforcement | Skip if on cooldown |
| 2 | Gate | _syncLock.synchronized() | Prevents concurrent sync | Skip if full sync running |
| 3 | Function | PullScopeResolver.resolve(forceRefresh: true) | Fresh scope for each auto-pull | Falls back to unknown |
| 4 | Function | _ensurePullScopeSafety(scope) | Detects user/scope changes; clears stale caches | Clears entity caches on user switch |
| 5 | Function | syncMasterData(force: false) | Freshness-gated master data pull | Non-blocking |
| 6 | Function | Entity-specific pull operations | Pull farmers, clients, demands, cycles, orders, campaigns, assessments, stock | Each independently try/caught |
| 7 | Counter | _consecutiveAutoPullFailures | Tracks repeated failures | Warns after 3 consecutive failures (SyncConfig.autoPullFailureAlertThreshold) |
Data Transformations
- Input: Stale local cache from last sync
- Processing: Pull-only refresh (no push) with freshness gating per entity
- Output: Updated local cache; failure counter for UX warnings
Error Handling
- Each pull operation is independently try/caught -- partial success is tracked
- _shouldYieldToSubmitPrioritySync() allows auto-pull to yield immediately to user-submitted data
- After 3 consecutive failures, emits UX warning via UnifiedSyncStatusService
- Failure counter resets on first successful pull
- Per-trigger cooldown prevents excessive auto-pulls:
- Connectivity restore: 30s cooldown
- App resume: 45s cooldown
- Foreground heartbeat: 15m interval
- Connectivity restore with pending local changes triggers requestSubmitSync() instead
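The per-trigger cooldown gate above can be sketched as a small map keyed by trigger name. The trigger names and cooldown values come from the doc; AutoPullGate is an illustrative stand-in for the _lastAutoPullByTrigger bookkeeping in SyncManager.

```typescript
const COOLDOWN_MS: Record<string, number> = {
  connectivity: 30_000, // connectivity restore: 30s
  resume: 45_000,       // app resume: 45s
  heartbeat: 900_000,   // foreground heartbeat: 15m
};

class AutoPullGate {
  private lastByTrigger = new Map<string, number>();

  // Returns true (and records the attempt) only when the trigger is off
  // cooldown; otherwise this auto-pull is skipped.
  tryAcquire(trigger: string, nowMs: number): boolean {
    const last = this.lastByTrigger.get(trigger);
    const cooldown = COOLDOWN_MS[trigger] ?? 0;
    if (last !== undefined && nowMs - last < cooldown) return false;
    this.lastByTrigger.set(trigger, nowMs);
    return true;
  }
}
```

Tracking cooldowns per trigger (rather than one global timestamp) means a heartbeat pull does not block a connectivity-restore pull that arrives moments later.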
Cross-References
- Triggers: WF-SYNC-05, WF-SYNC-06 (pull only), WF-SYNC-07 (pull only), WF-SYNC-13, WF-SYNC-14, WF-SYNC-16
- Triggered by: WF-SYNC-01 (connectivity listener, heartbeat timer, SignalR handler, startup)
Cross-Workflow Dependency Graph
graph TD
WF01[WF-SYNC-01<br/>Initialization] --> WF02[WF-SYNC-02<br/>Full Sync]
WF01 --> WF04[WF-SYNC-04<br/>Submit Priority]
WF01 --> WF08[WF-SYNC-08<br/>Queue Processor]
WF01 --> WF17[WF-SYNC-17<br/>Auto-Pull]
WF02 --> WF05[WF-SYNC-05<br/>Master Data]
WF02 --> WF06[WF-SYNC-06<br/>Farmer Sync]
WF02 --> WF07[WF-SYNC-07<br/>Client Sync]
WF02 --> WF09[WF-SYNC-09<br/>Backoff Retry]
WF02 --> WF10[WF-SYNC-10<br/>Conflict Resolution]
WF02 --> WF11[WF-SYNC-11<br/>Pull Scope]
WF02 --> WF13[WF-SYNC-13<br/>Stock Sync]
WF02 --> WF14[WF-SYNC-14<br/>Production Cycle]
WF02 --> WF15[WF-SYNC-15<br/>Survey Sync]
WF02 --> WF16[WF-SYNC-16<br/>Assessment Sync]
WF03[WF-SYNC-03<br/>Initial Sync] --> WF05
WF04 --> WF06
WF04 --> WF07
WF04 --> WF14
WF04 --> WF17
WF08 --> WF09
WF12[WF-SYNC-12<br/>Login Sync] --> WF06
WF12 --> WF07
WF17 --> WF05
WF17 --> WF06
WF17 --> WF07
WF17 --> WF11
WF17 --> WF13
WF17 --> WF14
WF17 --> WF16
Key Source Files Reference
| File | Purpose |
|---|---|
| mobile/mon_jardin/lib/data/services/sync_manager.dart | Central orchestrator -- SyncManager singleton, strategy selection, all sync phases |
| mobile/mon_jardin/lib/data/services/sync/sync_coordinator.dart | Delegates to FarmersSync, ClientsSync, CropDemandsSync |
| mobile/mon_jardin/lib/data/services/sync/strategies/sync_strategy.dart | Abstract strategy interface (7 boolean flags) |
| mobile/mon_jardin/lib/data/services/sync/strategies/full_sync_strategy.dart | All phases enabled |
| mobile/mon_jardin/lib/data/services/sync/strategies/initial_sync_strategy.dart | Master data + assessment questions only |
| mobile/mon_jardin/lib/data/services/sync/strategies/submit_priority_strategy.dart | Push-priority with bidirectional, skips master data |
| mobile/mon_jardin/lib/data/services/sync/master_data_sync_orchestrator.dart | Version-gated master data sync with analytics |
| mobile/mon_jardin/lib/data/services/sync/master_data_sync.dart | MasterDataSyncHelper -- 26 entity types, fetch-then-replace |
| mobile/mon_jardin/lib/data/services/sync/offline_queue_processor.dart | OfflineQueueProcessor -- 15s timer, batch 5, 12 entity processors |
| mobile/mon_jardin/lib/data/services/sync/exponential_backoff_service.dart | 30s-5h backoff with +/-10% jitter |
| mobile/mon_jardin/lib/data/services/sync/conflict_resolver.dart | Server-wins / local-wins / timestamp strategies |
| mobile/mon_jardin/lib/data/services/sync/pull_scope_resolver.dart | Role-based scope: admin / agent / warehouse / unknown |
| mobile/mon_jardin/lib/data/services/sync/pull_scope.dart | PullScope value object with signature for change detection |
| mobile/mon_jardin/lib/data/services/sync/login_sync_service.dart | Post-login sync with 3-minute timeout and parallel phases |
| mobile/mon_jardin/lib/data/services/sync/farmers_sync.dart | Farmer push/pull with photo upload |
| mobile/mon_jardin/lib/data/services/sync/clients_sync.dart | Client push/pull with document sync |
| mobile/mon_jardin/lib/data/services/sync/crop_demands_sync.dart | Crop demand draft sync with photo upload |
| mobile/mon_jardin/lib/data/services/sync/stock_sync.dart | Three-phase stock sync: centers -> aggregation -> batches |
| mobile/mon_jardin/lib/data/services/sync/production_cycle_sync.dart | Production cycle push/pull with status normalization |
| mobile/mon_jardin/lib/data/services/sync/survey_response_sync.dart | Batch survey response push |
| mobile/mon_jardin/lib/data/services/sync/survey_file_sync.dart | Survey file upload with URL replacement |
| mobile/mon_jardin/lib/data/services/sync/assessment_questions_sync.dart | Bulk question pull with 24h freshness cache |
| mobile/mon_jardin/lib/data/services/sync/retry_executor_service.dart | 30s polling, entity-specific executors |
| mobile/mon_jardin/lib/data/services/connectivity_service.dart | DNS-verified connectivity with cancelable operations |
| mobile/mon_jardin/lib/core/config/sync_config.dart | All timing constants, freshness durations, feature toggles |
| mobile/mon_jardin/lib/data/models/master_data_sync_event.dart | 26-member MasterDataEntityType enum |
| mobile/mon_jardin/lib/data/local/models/sync_history_entry.dart | SyncTrigger enum: manual, connectivity, startup, resume, foregroundHeartbeat, formSubmit, scheduled, retry |
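The backoff parameters listed for exponential_backoff_service.dart (30s base, 5h cap, +/-10% jitter) can be sketched as delay doubling per attempt, capped, then jittered. The exact formula in the Dart service may differ; backoffDelay is an illustrative name.

```typescript
const BASE_MS = 30_000;          // 30s initial delay
const CAP_MS = 5 * 60 * 60_000;  // 5h ceiling

function backoffDelay(attempt: number, rand: () => number = Math.random): number {
  // Exponential growth capped at 5 hours...
  const capped = Math.min(BASE_MS * 2 ** attempt, CAP_MS);
  // ...then +/-10% jitter so retries from many devices don't align.
  const jitter = 1 + (rand() * 2 - 1) * 0.1;
  return Math.round(capped * jitter);
}
```

With unlimited retries, the 5-hour cap keeps a long-offline device from pushing its next attempt arbitrarily far into the future, and jitter prevents a fleet of devices regaining connectivity together from hammering the backend in lockstep.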