
02 - Data Sync Engine

Domain: Offline-First Bidirectional Synchronization
Workflows: WF-SYNC-01 through WF-SYNC-17
Primary Source: mobile/mon_jardin/lib/data/services/sync_manager.dart
Pattern: Kobo-style event-driven sync with exponential backoff


Domain Introduction

The Data Sync Engine is the backbone of the Almafrica mobile application's offline-first architecture. It ensures that field agents operating in areas with intermittent connectivity can capture farmer registrations, client data, quality assessments, production cycles, survey responses, and stock operations without data loss. The engine synchronizes all data bidirectionally with the backend when connectivity is restored.

The sync engine is built around three core principles:

  1. Never lose data -- All local writes are queued and retried with exponential backoff (unlimited retries, capped at 5-hour intervals)
  2. Event-driven, not polling -- Sync triggers on connectivity restore, form submission, and foreground heartbeat rather than continuous polling (Kobo pattern)
  3. Strategy-based orchestration -- Three distinct sync strategies (FullSyncStrategy, InitialSyncStrategy, SubmitPrioritySyncStrategy) control which phases execute, allowing the engine to optimize for different scenarios
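The strategy-based orchestration can be sketched as a set of capability flags that gate which phases run. A minimal Python sketch for illustration (the app itself is Dart; all names here are hypothetical, not the real `SyncStrategy` API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SyncStrategy:
    """Capability flags gating sync phases (hypothetical names)."""
    refresh_agent_status: bool
    sync_master_data: bool
    bidirectional: bool
    run_retries: bool

FULL = SyncStrategy(True, True, True, True)
INITIAL = SyncStrategy(False, True, False, False)
SUBMIT_PRIORITY = SyncStrategy(False, False, True, False)

def phases_for(strategy: SyncStrategy) -> list[str]:
    """Expand a strategy's flags into the ordered phase list it will run."""
    phases = []
    if strategy.refresh_agent_status:
        phases.append("agent_status")
    if strategy.sync_master_data:
        phases.append("master_data")
    if strategy.bidirectional:
        phases.extend(["push", "pull"])
    if strategy.run_retries:
        phases.append("retries")
    return phases
```

This mirrors the strategy comparison matrix: the full strategy runs every phase, the initial strategy only pulls reference data, and the submit-priority strategy runs only the bidirectional push/pull.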

Architecture Summary

graph TD
subgraph Triggers
T1((App Startup))
T2((Connectivity Restore))
T3((Form Submit))
T4((Manual Sync))
T5((Foreground Heartbeat))
T6((SignalR DataChanged))
T7((First Login))
end

subgraph Strategy Selection
SS{Select Strategy}
FS[FullSyncStrategy]
IS[InitialSyncStrategy]
SP[SubmitPrioritySyncStrategy]
end

subgraph Sync Manager
SM[SyncManager._performSync]
AP[requestAutoPull]
SQ[requestSubmitSync]
end

subgraph Sync Phases
P0[Step 0: Agent Status Refresh]
P1[Step 1: Master Data Sync<br/>26 entity types]
P15[Step 1.5: Assessment Questions]
P16[Step 1.6: Orphan Cleanup]
P2[Step 2: Push Farmers/Clients]
P25[Step 2.5: Push Production Cycles]
P3[Step 3: Pull Scope-Aware Data]
P4[Step 4: Push Crop Demands/Photos/Docs]
P5[Step 5: Push Assessments]
P6[Step 6: Push Stock Losses]
P7[Step 7: Push Stock Transfers]
P76[Step 7.6: Push Survey Responses]
P77[Step 7.7: Push Survey Files]
P8[Step 8: Retry Failed Items]
end

subgraph Background Services
OQP[OfflineQueueProcessor<br/>15s timer / batch 5]
RES[RetryExecutorService<br/>30s polling]
EBS[ExponentialBackoffService<br/>30s to 5h]
end

T1 --> AP
T2 --> SM
T3 --> SQ
T4 --> SM
T5 --> AP
T6 --> AP
T7 --> SM

SM --> SS
SS --> FS
SS --> IS
SS --> SP

FS --> P0 --> P1 --> P15 --> P16 --> P2 --> P25 --> P3 --> P4 --> P5 --> P6 --> P7 --> P76 --> P77 --> P8
IS --> P1 --> P15
SP --> P2 --> P25 --> P3

SM -.-> OQP
SM -.-> RES
RES -.-> EBS

Strategy Comparison Matrix

| Capability | FullSyncStrategy | InitialSyncStrategy | SubmitPrioritySyncStrategy |
|---|---|---|---|
| Refresh Agent Status | Yes | No | No |
| Sync Master Data | Yes | Yes | No |
| Sync Assessment Questions | Yes | Yes | No |
| Cleanup Orphans | Yes | No | No |
| Bidirectional Sync | Yes | No | Yes |
| Run Retries | Yes | No | No |
| Push Priority | No | No | Yes |

Source: mobile/mon_jardin/lib/data/services/sync/strategies/sync_strategy.dart

Master Data Entity Types (26 total)

The master data layer syncs the following reference entities using a "fetch then replace" strategy:

| # | Entity Type | Category |
|---|---|---|
| 1 | provinces | Geography |
| 2 | territories | Geography |
| 3 | villages | Geography |
| 4 | crops | Agriculture |
| 5 | cropCategories | Agriculture |
| 6 | seedSources | Agriculture |
| 7 | fertilizerTypes | Agriculture |
| 8 | pesticideTypes | Agriculture |
| 9 | measurementUnits | Agriculture |
| 10 | irrigationSources | Agriculture |
| 11 | soilTypes | Agriculture |
| 12 | waterAccessTypes | Agriculture |
| 13 | farmTools | Agriculture |
| 14 | terrainTypes | Agriculture |
| 15 | landOwnershipTypes | Agriculture |
| 16 | storageFacilityTypes | Agriculture |
| 17 | transportMeans | Infrastructure |
| 18 | electricitySources | Infrastructure |
| 19 | electricityReliabilities | Infrastructure |
| 20 | incomeSources | Demographics |
| 21 | monthlyIncomeRanges | Demographics |
| 22 | maritalStatuses | Demographics |
| 23 | educationLevels | Demographics |
| 24 | genders | Demographics |
| 25 | contactMethods | Communication |
| 26 | financialServiceTypes | Financial |

Source: mobile/mon_jardin/lib/data/models/master_data_sync_event.dart (enum MasterDataEntityType)


Workflow Catalog


WF-SYNC-01: Sync Manager Initialization

Trigger: App startup (SyncManager.initialize())
Frequency: Once per app session
Offline Support: Yes -- initializes background services that activate when connectivity returns

Workflow Diagram

graph LR
A((App Startup)) --> B[Reset In-Progress<br/>Queue Items]
B --> C[Migrate Production Cycles<br/>SharedPreferences to SQLite]
C --> D[Register Retry Executors<br/>8 entity types]
D --> E[Start RetryExecutorService<br/>30s auto-retry]
E --> F[Initialize OfflineQueueProcessor<br/>15s interval / batch 5]
F --> G[Subscribe to<br/>Connectivity Stream]
G --> H{Currently Online?}
H -->|Yes| I[Schedule Initial<br/>Auto-Pull after 5s]
H -->|No| J[Wait for<br/>Connectivity Event]
I --> K[Start Foreground<br/>Heartbeat Timer 15m]
J --> K
K --> L[Bind SignalR<br/>DataChanged Handler]
L --> M[Mark Initialized]

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Trigger | SyncManager.initialize() | Entry point on app startup | Guard: returns early if already initialized |
| 2 | Function | SyncQueueService.resetInProgressItems() | Recovers stuck items from prior crash | Logs warning, continues on failure |
| 3 | Function | ProductionCycleService.migrateFromSharedPreferences() | One-time migration to SQLite | Logs warning, continues on failure |
| 4 | Function | _initializeRetryExecutors() | Registers executors for stockLoss, stockTransfer, farmer, client, cropDemand, clientDocument, productionCycle, surveyResponse | N/A -- registration only |
| 5 | Function | RetryExecutorService.startAutoRetry() | Starts 30s polling for retryable items | N/A |
| 6 | Function | _initializeQueueProcessor() | Registers 12 processors, initializes with OfflineQueueConfig(processingInterval: 15s, batchSize: 5) | N/A |
| 7 | Listener | ConnectivityService.connectionStream.listen() | Fires on connectivity change; applies 5s stabilization delay before sync | Checks cooldown, verifies still connected after delay |
| 8 | Timer | Timer.periodic(autoPullForegroundInterval) | 15-minute heartbeat for pull-only refresh | Cooldown-gated |
| 9 | Listener | PermissionSyncService.onDataChanged | SignalR real-time push notification triggers immediate pull | Force bypasses cooldown |

Data Transformations

  • Input: Raw app startup state with potentially stale queue items
  • Processing: Crash recovery (reset in-progress to pending), migration, service registration
  • Output: Fully initialized sync engine with all background processors running

Error Handling

  • Queue reset failure is logged but does not block initialization
  • Production cycle migration failure is logged but does not block initialization
  • Connectivity subscription handles rapid on/off toggling via stabilization delay

Cross-References

  • Triggers: WF-SYNC-02, WF-SYNC-04, WF-SYNC-08, WF-SYNC-09, WF-SYNC-17
  • Triggered by: App lifecycle (main.dart)

WF-SYNC-02: Full Sync Strategy

Trigger: Manual sync request / connectivity restore with pending local changes
Frequency: On-demand
Offline Support: No -- requires connectivity (skips sync if offline)

Workflow Diagram

graph TD
A((Manual Sync /<br/>Connectivity Restore)) --> B[Acquire _syncLock]
B --> C{Connected?}
C -->|No| D[Emit localDataChanged<br/>and return]
C -->|Yes| E[Set SyncState.syncing]
E --> F[Select FullSyncStrategy]
F --> G[Step 0: Refresh<br/>Agent Status]
G --> H[Step 1: Sync Master Data<br/>via MasterDataSyncOrchestrator]
H --> I[Step 1.5: Sync Assessment<br/>Questions]
I --> J[Step 1.6: Cleanup<br/>Orphaned Pending Records]
J --> K[Resolve PullScope<br/>via PullScopeResolver]
K --> L[Ensure Pull Scope Safety<br/>user-switch detection]
L --> M[Step 2: Push Farmers<br/>then Push Clients]
M --> N[Step 2.5: Push<br/>Production Cycles]
N --> O{Yield to Submit<br/>Priority Sync?}
O -->|Yes| P[Skip Pull Phase]
O -->|No| Q[Step 3: Pull Farmers<br/>Clients / CropDemands /<br/>Production / Orders /<br/>Campaigns / Assessments / Stock]
Q --> R[Step 4: Push Crop Demands<br/>Photos / Documents]
P --> R
R --> S[Step 5: Push Assessments]
S --> T[Step 6: Push Stock Losses<br/>+ Photos]
T --> U[Step 7: Push Stock Transfers]
U --> V[Step 7.6: Push Survey<br/>Responses]
V --> W[Step 7.7: Push Survey Files]
W --> X[Step 8: Execute Ready<br/>Retries]
X --> Y[Update Agent<br/>Last Sync Timestamp]
Y --> Z[Set SyncState.completed]

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Gate | _syncLock.synchronized() | Prevents concurrent sync operations | Queues next sync if already running |
| 2 | Switch | ConnectivityService.isConnected | Connectivity check | Returns early if offline |
| 3 | HTTP | AgentStatusService.checkAgentStatus(forceRefresh: true) | Detects agent deactivation | Logs warning, continues |
| 4 | Function | MasterDataSyncOrchestrator.sync() | Version-checked master data pull | Continues with cached data on failure |
| 5 | Function | AssessmentQuestionsSyncHelper.syncAllQuestions() | Freshness-gated assessment pull | Isolated; failure does not block other phases |
| 6 | Function | SyncService.cleanupOrphanedPendingFarmers/Clients() | Reconciles records already on server | Logs warning, continues |
| 7 | Function | PullScopeResolver.resolve(forceRefresh: true) | Determines role-based data scope | Falls back to PullRole.unknown |
| 8 | Function | SyncCoordinator.pushFarmers() / pushClients() | Uploads pending local records | Per-entity error isolation with analytics |
| 9 | Function | ProductionCycleSyncService.pushPendingCycles() | Pushes cycles before pull to avoid confusion | Logs warning, continues |
| 10 | Function | SyncCoordinator.pullFarmersForScope() / pullClientsForScope() | Downloads scoped records from backend | Per-entity error isolation |
| 11 | Function | RetryExecutorService.executeReadyRetries() | Processes items whose backoff has elapsed | Logs warning, continues |

Data Transformations

  • Input: Local pending records + stale cached data
  • Processing: Push local changes first, then pull fresh server data (push-before-pull pattern)
  • Output: Synchronized local database with analytics history entry

Error Handling

  • Each sync phase is individually try/caught -- one failure does not block subsequent phases
  • _shouldYieldToSubmitPrioritySync() allows a full sync's pull phase to be interrupted if a user submits a form mid-sync
  • History tracking records SyncHistoryResult.partialSuccess if any phase has failures
  • On critical error, state transitions to SyncState.failed with error details persisted
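The per-phase isolation described above can be sketched as a small runner that try/catches each phase independently and reports partial success. This is an illustrative Python sketch (the app is Dart; the `run_phases` shape and status strings are hypothetical, loosely modeled on `SyncHistoryResult`):

```python
def run_phases(phases):
    """Run each sync phase in isolation; one failure never blocks the rest.

    `phases` maps phase name -> zero-arg callable (hypothetical shape).
    Returns ('success' | 'partialSuccess' | 'failed', {phase: error}).
    """
    errors = {}
    for name, phase in phases.items():
        try:
            phase()
        except Exception as exc:  # each phase is individually try/caught
            errors[name] = str(exc)
    if not errors:
        return "success", errors
    if len(errors) == len(phases):
        return "failed", errors
    return "partialSuccess", errors
```

A failed pull phase, for example, still lets the later push phases run, and the cycle is recorded as a partial success rather than a total failure.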

Cross-References

  • Triggers: WF-SYNC-05 (master data), WF-SYNC-06 (farmer sync), WF-SYNC-07 (client sync), WF-SYNC-08 (queue processor paused during), WF-SYNC-09 (retries at Step 8), WF-SYNC-13 (stock pull), WF-SYNC-14 (production cycle), WF-SYNC-15 (survey sync), WF-SYNC-16 (assessment sync)
  • Triggered by: WF-SYNC-01 (connectivity listener), manual user action

WF-SYNC-03: Initial Sync Strategy

Trigger: First login (SyncRunMode.initialSync)
Frequency: Once per user per device
Offline Support: No -- requires connectivity for master data download

Workflow Diagram

graph LR
A((First Login)) --> B[Select InitialSyncStrategy]
B --> C[Acquire _syncLock]
C --> D{Connected?}
D -->|No| E[Return - use empty cache]
D -->|Yes| F[Step 1: Sync Master Data<br/>includeClientMasterData: false]
F --> G[Step 1.5: Sync Assessment<br/>Questions]
G --> H[Mark Initial Sync<br/>Complete]
H --> I[Set SyncState.completed]

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Trigger | requestSync(runMode: SyncRunMode.initialSync) | Called after first authentication | N/A |
| 2 | Function | MasterDataSyncOrchestrator.sync(includeClientMasterData: false) | Syncs only core configs (provinces, crops, etc.) to minimize download | Continues with empty cache on failure |
| 3 | Function | AssessmentQuestionsSyncHelper.syncAllQuestions() | Pulls assessment question templates | Isolated from master data |
| 4 | Return | Early return before bidirectional sync | Skips agent status, orphan cleanup, push/pull, retries | N/A |

Data Transformations

  • Input: Empty local database after first login
  • Processing: Download reference data only (no entity sync)
  • Output: Populated dropdown data (provinces, territories, crops, etc.) for offline form filling

Error Handling

  • Master data failure leaves user with empty dropdowns; subsequent full sync will populate
  • Assessment question failure is non-blocking

Cross-References

  • Triggers: WF-SYNC-05 (master data subset)
  • Triggered by: Login flow (splash screen routing)

WF-SYNC-04: Submit-Priority Sync Strategy

Trigger: Form submission (farmer registration, client creation, assessment, etc.)
Frequency: On each form submission
Offline Support: Partial -- queues locally if offline, pushes when connectivity returns

Workflow Diagram

graph TD
A((Form Submission)) --> B[requestSubmitSync]
B --> C{Already Draining?}
C -->|Yes| D[Coalesce: set<br/>_submitSyncQueued = true]
C -->|No| E[_drainSubmitSyncQueue]
E --> F[Select SubmitPrioritySyncStrategy]
F --> G[Set _submitSyncUrgent = true]
G --> H[Acquire _syncLock]
H --> I[Step 2: Push Farmers<br/>Clients / Production Cycles]
I --> J[Step 3: Pull Scope-Aware Data<br/>farmers/clients/cropDemands/<br/>production/orders/campaigns/<br/>assessments/stock]
J --> K[Step 4-7.7: Push<br/>CropDemands/Photos/Docs/<br/>Assessments/StockLoss/<br/>Transfers/Surveys/Files]
K --> L{More Queued?}
L -->|Yes| E
L -->|No| M[Schedule Auto-Pull<br/>after 1s delay]
M --> N[Clear _submitSyncUrgent]

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Trigger | requestSubmitSync(source: ...) | Fire-and-forget sync trigger from UI | Coalesces multiple rapid calls |
| 2 | Gate | _submitSyncDrainScheduled | Prevents concurrent drain loops | Coalesces into next loop iteration |
| 3 | Loop | _drainSubmitSyncQueue() | While loop processes all coalesced requests | Finally block resets drain flag |
| 4 | Function | _performSync(runMode: SyncRunMode.submitPriority) | Push-first strategy, skips master data and agent status | Same error handling as WF-SYNC-02 |
| 5 | Timer | requestAutoPull(trigger: SyncTrigger.formSubmit, force: true) | 1-second delayed pull after push completes | Force bypasses cooldown |

Data Transformations

  • Input: Newly submitted form data saved to SQLite with sync_status = 'pending'
  • Processing: Push pending records immediately, then pull latest data
  • Output: Submitted records confirmed by server; fresh pull data cached locally

Error Handling

  • Coalescing prevents sync flooding from rapid multiple submissions
  • _shouldYieldToSubmitPrioritySync() in full sync and auto-pull contexts yields to this strategy
  • Auto-pull scheduled after drain completes to catch any server-side changes
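The coalescing drain loop can be sketched as follows. This Python sketch is illustrative only (the app is Dart; `SubmitSyncDrainer` and `sync_once` are hypothetical stand-ins for `_drainSubmitSyncQueue()` and `_performSync`):

```python
class SubmitSyncDrainer:
    """Coalesces rapid submit-sync requests into a single drain loop (sketch)."""

    def __init__(self, sync_once):
        self._sync_once = sync_once  # performs one submit-priority sync
        self._draining = False
        self._queued = False
        self.runs = 0

    def request_submit_sync(self):
        if self._draining:
            # A drain is already running: coalesce into one more iteration.
            self._queued = True
            return
        self._drain()

    def _drain(self):
        self._draining = True
        try:
            while True:
                self._queued = False
                self._sync_once()
                self.runs += 1
                if not self._queued:
                    break
        finally:
            self._draining = False  # reset even if a sync throws
```

However many submissions arrive while a drain is in flight, they collapse into at most one extra sync iteration, which is what prevents sync flooding.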

Cross-References

  • Triggers: WF-SYNC-06, WF-SYNC-07, WF-SYNC-14, WF-SYNC-15, WF-SYNC-17 (auto-pull after drain)
  • Triggered by: UI form submission handlers

WF-SYNC-05: Master Data Sync Orchestration

Trigger: Called by SyncManager.syncMasterData() during full or initial sync
Frequency: Per sync cycle (freshness-gated at 15 minutes, or version-gated)
Offline Support: No -- requires connectivity; preserves cached data on failure

Workflow Diagram

graph TD
A((syncMasterData)) --> B{Connected?}
B -->|No| C[Return false]
B -->|Yes| D{Force Sync?}
D -->|Yes| G[Start Sync]
D -->|No| E[Fetch Remote Version<br/>GET /api/masterdata/sync/version]
E --> F{Version Changed?}
F -->|No| C
F -->|Yes| G
G --> H{Include Client<br/>Master Data?}
H -->|Yes| I[Sync Client Master Data<br/>currencies / businessTypes /<br/>paymentMethods / etc.]
H -->|No| J[Skip Client Master Data]
I --> K[Sync Core Master Data<br/>26 entity types]
J --> K
K --> L{Bulk Delta<br/>Endpoint Available?}
L -->|Yes| M[Single Bulk API Call<br/>with modifiedSince]
L -->|No| N[Per-Entity Sync<br/>provinces -> territories -><br/>villages -> crops -> ...]
M --> O[Persist Global Version]
N --> O
O --> P[Update SyncEntityType.masterData<br/>timestamp]

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | HTTP | MasterDataSyncOrchestrator._fetchGlobalVersion() | GET /api/masterdata/sync/version -- checks for version change | Returns null on failure; falls back to freshness check |
| 2 | Switch | Version comparison | Compares localVersion from SharedPreferences to remote | Skips sync if unchanged |
| 3 | Function | ClientMasterDataRepositoryImpl.syncClientMasterData() | Syncs ~10 client-facing reference tables | Records partial failure, continues |
| 4 | Function | MasterDataRepositoryImpl.syncMasterData() -> MasterDataSyncHelper.syncAll() | Syncs ~26 core reference tables | Records partial failures per entity (FR6) |
| 5 | Switch | _tryBulkDeltaSync() | Prefers single bulk endpoint; falls back to per-entity | Transparent fallback |

Data Transformations

  • Input: Locally cached reference data (potentially stale or empty)
  • Processing: Fetch-then-replace strategy per entity type; delta sync using modifiedSince timestamps
  • Output: Fresh reference data in SQLite; global version hash persisted in SharedPreferences

Error Handling

  • Empty provinces table forces sync regardless of freshness (prevents stuck dropdowns on first install)
  • Per-entity failure isolation (FR6): one entity's failure does not block others
  • MasterDataSyncNotifier emits progress events for UI consumption
  • Analytics tracking via SyncAnalyticsService
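The version gate with freshness fallback can be sketched as a small decision function. This is an illustrative Python sketch (the app is Dart; the function and parameter names are hypothetical, modeled on `_fetchGlobalVersion()` and the 15-minute freshness window):

```python
def should_sync_master_data(local_version, fetch_remote_version,
                            last_sync_age_minutes, force=False,
                            freshness_minutes=15):
    """Decide whether to run a master data sync (sketch).

    Returns (do_sync, remote_version). If the version endpoint fails
    (fetch returns None), fall back to the freshness check so an
    unreachable endpoint cannot permanently block syncing.
    """
    if force:
        return True, None
    remote = fetch_remote_version()
    if remote is None:
        # Endpoint unreachable: sync only if cached data is stale.
        return last_sync_age_minutes >= freshness_minutes, None
    return remote != local_version, remote
```

An unchanged version hash skips the download entirely, which keeps per-cycle sync traffic minimal on slow field connections.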

Cross-References

  • Triggers: N/A (leaf workflow)
  • Triggered by: WF-SYNC-02 (Step 1), WF-SYNC-03 (Step 1), WF-SYNC-17 (auto-pull)

WF-SYNC-06: Bidirectional Farmer Sync

Trigger: Called during Step 2 (push) and Step 3 (pull) of full/submit-priority sync
Frequency: Per sync cycle
Offline Support: Yes -- farmers are created locally with sync_status = 'pending'

Workflow Diagram

graph TD
A((Farmer Sync Phase)) --> B[Get Farmer Summary<br/>pending + failed counts]
B --> C{Pending Farmers > 0?}
C -->|No| F[Skip Push]
C -->|Yes| D[SyncCoordinator.pushFarmers<br/>via FarmersSync.pushPendingFarmers]
D --> E[Track Analytics:<br/>entity=farmers, type=push]
E --> F
F --> G{Pull Phase<br/>Active?}
G -->|No| H[End]
G -->|Yes| I[PullScopeResolver<br/>determines scope]
I --> J[SyncCoordinator.pullFarmersForScope<br/>delta sync via modifiedSince]
J --> K[Track Analytics:<br/>entity=farmers, type=pull]
K --> L[Upload Pending<br/>Farmer Photos]
L --> H

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | SyncCoordinator.farmerSummary() | Counts pending + failed farmers | N/A |
| 2 | Function | FarmersSync.pushPendingFarmers() | Pushes each pending farmer with retry priority ordering | Returns SyncResult with success/failure counts |
| 3 | Function | FarmersSync.pullFarmersForScope(scope) | Delta pull using modifiedSince with role-based scope | Returns SyncResult |
| 4 | Function | FarmersSync.uploadPendingPhotos() | Uploads profile and crop photos for synced farmers | Returns map of uploaded/failed counts |

Data Transformations

  • Push: Local SQLite farmers table rows with sync_status IN ('pending', 'failed') -> API POST /farmers
  • Pull: API GET /farmers?modifiedSince=...&agentId=... -> local SQLite upsert with conflict resolution
  • Photos: Local file paths -> cloud storage URLs -> update farmer record

Error Handling

  • Push uses getRetryableFarmersByPriority() which orders by retry eligibility
  • Photo upload failure does not block farmer sync
  • Analytics tracked per push/pull operation
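The delta-pull half of this workflow can be sketched as a fetch-and-upsert loop that advances a `modifiedSince` watermark. Illustrative Python sketch only (the app is Dart; `fetch_page` and the dict-based `local_table` are hypothetical stand-ins for the API client and SQLite upsert):

```python
def delta_pull(fetch_page, local_table, last_sync_at):
    """Delta pull sketch: fetch rows changed since last sync, upsert locally.

    `fetch_page(modified_since=...)` returns server rows newer than the
    watermark; returns the new watermark to persist for the next pull.
    """
    newest = last_sync_at
    for row in fetch_page(modified_since=last_sync_at):
        local_table[row["id"]] = row       # upsert keyed by server id
        if row["updatedAt"] > newest:
            newest = row["updatedAt"]      # advance the watermark
    return newest
```

Persisting the returned watermark is what makes the next cycle a cheap delta rather than a full re-download.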

Cross-References

  • Triggers: N/A (leaf workflow)
  • Triggered by: WF-SYNC-02 (Steps 2, 3), WF-SYNC-04, WF-SYNC-12 (login sync)

WF-SYNC-07: Bidirectional Client Sync

Trigger: Called during Step 2 (push) and Step 3 (pull) of full/submit-priority sync
Frequency: Per sync cycle
Offline Support: Yes -- clients are created locally with sync_status = 'pending'

Workflow Diagram

graph TD
A((Client Sync Phase)) --> B[Count Pending Clients]
B --> C{Pending > 0<br/>AND Online?}
C -->|No| F[Skip Push]
C -->|Yes| D[SyncCoordinator.pushClients<br/>via ClientsSync.pushPendingClients]
D --> E[Track Analytics:<br/>entity=clients, type=push]
E --> F
F --> G{Pull Phase<br/>Active?}
G -->|No| H[Continue to Documents]
G -->|Yes| I[SyncCoordinator.pullClientsForScope<br/>delta sync with scope]
I --> J[Track Analytics:<br/>entity=clients, type=pull]
J --> K[Pull Client Crop Demands<br/>for Scope]
K --> H
H --> L[Step 4: Push Pending<br/>Crop Demands]
L --> M[Upload Crop Demand<br/>Photos]
M --> N[Sync Pending Client<br/>Documents]

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | SyncCoordinator.clientsPendingCount() | Counts pending clients | N/A |
| 2 | Function | ClientsSync.pushPendingClients() | Pushes each pending client | Skipped if offline to avoid DNS freezes |
| 3 | Function | ClientsSync.pullClientsForScope(scope) | Delta pull with role-based filtering | Returns SyncResult |
| 4 | Function | ClientsSync.pullClientCropDemandsForScope(scope) | Pulls crop demand assignments | Returns count pulled |
| 5 | Function | CropDemandsSync.syncPendingCropDemands() | Pushes local crop demand drafts | Blocked if client push had failures |
| 6 | Function | CropDemandsSync.uploadPendingPhotos() | Uploads crop demand photos | Returns uploaded/failed counts |
| 7 | Function | ClientsSync.syncPendingDocuments() | Uploads pending client documents | Returns count synced |

Data Transformations

  • Push: Local clients table with sync_status = 'pending' -> API
  • Pull: API -> local SQLite upsert
  • Crop demand push is gated on client push success (FK dependency)
  • Document sync uploads file attachments to cloud storage

Error Handling

  • Client push is skipped when offline to prevent DNS lookup freezes
  • Crop demand push is blocked if client push had any failures (prevents orphaned demands)
  • Document sync errors are logged but do not block the sync cycle

Cross-References

  • Triggers: N/A (leaf workflow)
  • Triggered by: WF-SYNC-02 (Steps 2, 3, 4), WF-SYNC-04, WF-SYNC-12 (login sync)

WF-SYNC-08: Offline Queue Processor

Trigger: Connectivity available + 15-second timer tick
Frequency: Every 15 seconds when online (configurable via OfflineQueueConfig.processingInterval)
Offline Support: Yes -- this IS the offline support mechanism; queues items when offline, drains when online

Workflow Diagram

graph TD
A((Timer Tick / <br/>Connectivity Restore)) --> B{Connected?}
B -->|No| C[State: waiting]
B -->|Yes| D{Already Processing?}
D -->|Yes| E[Skip]
D -->|No| F[State: processing]
F --> G[Get Items To Process<br/>limit: 5, priority-ordered]
G --> H{Items Empty?}
H -->|Yes| I[State: idle]
H -->|No| J[For Each Item]
J --> K{Processor<br/>Registered?}
K -->|No| L[Skip Item]
K -->|Yes| M[Mark In-Progress]
M --> N[Execute Processor]
N --> O{Success?}
O -->|Yes| P[Mark Completed<br/>with Server Ack ID]
O -->|No| Q[Mark Failed<br/>with Backoff]
Q --> R[Log Next Retry Time]
P --> S{More Items?}
R --> S
L --> S
S -->|Yes| T{Still Connected?}
T -->|Yes| J
T -->|No| U[Stop Batch]
S -->|No| V[Emit Progress:<br/>batch complete]
U --> V
V --> I

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Timer | Timer.periodic(_config.processingInterval) | 15-second polling when online | Paused during main sync |
| 2 | Function | SyncQueueService.getItemsToProcess(limit: 5) | Fetches batch ordered by priority then FIFO | Returns empty list if nothing ready |
| 3 | Function | _processors[item.entityType] | Entity-specific processor callback | Skips item if no processor registered |
| 4 | Function | SyncQueueService.markInProgress(id) | Sets status to in_progress | N/A |
| 5 | Function | Registered processor | Executes actual sync operation | Throws on failure |
| 6 | Function | SyncQueueService.markCompleted(id, serverAckId) | Marks item synced with server ack | N/A |
| 7 | Function | SyncQueueService.markFailed(id, error) | Increments retry count, calculates next backoff | Uses ExponentialBackoffService |

Data Transformations

  • Input: sync_queue SQLite table rows with status = 'pending' and next_retry_at <= now
  • Processing: Entity-specific sync via registered processor callbacks
  • Output: Items marked completed (with server ack) or failed (with next retry timestamp)

Error Handling

  • Connectivity check between each item prevents hanging on lost connection
  • Exponential backoff prevents thundering herd on repeated failures
  • Processor is paused during main SyncManager._performSync() to prevent conflicts
  • Crash recovery: resetInProgressItems() at startup moves stuck items back to pending
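One batch pass of the queue processor can be sketched as follows. Illustrative Python sketch only (the app is Dart; `process_batch`, `mark`, and the item dicts are hypothetical simplifications of `SyncQueueService` and the registered processor callbacks):

```python
def process_batch(items, processors, is_connected, mark):
    """One queue-processor pass over a priority-ordered batch (sketch).

    `processors` maps entity type -> callable returning a server ack id;
    `mark(item_id, status)` records the outcome; `is_connected()` is
    checked between items so a dropped connection stops the batch early.
    """
    for item in items:
        if not is_connected():
            break                          # stop batch on lost connectivity
        processor = processors.get(item["entityType"])
        if processor is None:
            continue                       # no processor registered: skip
        mark(item["id"], "in_progress")
        try:
            ack = processor(item)
            mark(item["id"], f"completed:{ack}")
        except Exception:
            mark(item["id"], "failed")     # backoff scheduling happens here
```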

Cross-References

  • Triggers: Entity-specific sync services (farmer, client, stock loss, etc.)
  • Triggered by: WF-SYNC-01 (initialization), connectivity restore

WF-SYNC-09: Exponential Backoff Retry

Trigger: Queue item failure
Frequency: Per failure event
Offline Support: Yes -- backoff timers are persisted to SQLite; survive app restarts

Workflow Diagram

graph LR
A((Item Failure)) --> B[Get Retry Count]
B --> C[Calculate Delay<br/>min 30s * 2^retryCount, 5h]
C --> D[Add Jitter<br/>+/- 10%]
D --> E[Set next_retry_at<br/>= now + delay]
E --> F[Persist to<br/>sync_queue table]
F --> G[RetryExecutorService<br/>polls every 30s]
G --> H{next_retry_at<br/><= now?}
H -->|No| I[Skip - not ready]
H -->|Yes| J[Execute Retry]
J --> K{Success?}
K -->|Yes| L[Mark Completed]
K -->|No| M[Increment retryCount<br/>Recalculate Delay]
M --> F

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | ExponentialBackoffService.calculateDelay(retryCount) | min(30s * 2^retryCount, 5h) | N/A -- pure calculation |
| 2 | Function | ExponentialBackoffService.calculateDelayWithJitter(retryCount) | Adds +/-10% random jitter to prevent thundering herd | N/A |
| 3 | Function | ExponentialBackoffService.isReadyForRetry(nextRetryAt) | Checks if current time exceeds scheduled retry | Returns true if nextRetryAt is null |
| 4 | Timer | RetryExecutorService._retryTimer | 30-second polling interval | Prevents concurrent cycles via _isExecuting flag |

Data Transformations

  • Backoff schedule: 30s -> 1m -> 2m -> 4m -> 8m -> 16m -> 32m -> 64m -> 128m -> 256m -> 5h (capped)
  • Jitter range: +/-10% of calculated delay
  • Unlimited retries (SyncConfig.maxRetryAttempts = -1) -- never gives up on data
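The delay formula can be sketched directly from the documented parameters (30-second base, 5-hour cap, +/-10% jitter). Illustrative Python sketch, not the Dart `ExponentialBackoffService` itself:

```python
import random

BASE_SECONDS = 30
CAP_SECONDS = 5 * 60 * 60  # 5 hours

def backoff_delay(retry_count: int) -> int:
    """Implements min(30s * 2^retryCount, 5h)."""
    return min(BASE_SECONDS * (2 ** retry_count), CAP_SECONDS)

def backoff_with_jitter(retry_count: int, rng=random.random) -> float:
    """Add +/-10% jitter so devices do not retry in lockstep after an outage."""
    delay = backoff_delay(retry_count)
    return delay * (0.9 + 0.2 * rng())
```

Because retries are unlimited, the cap matters: every item eventually settles into a steady retry every five hours until it succeeds.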

Error Handling

  • Jitter prevents all devices retrying simultaneously after outage
  • RetryExecutorService is stopped during main sync to prevent conflicts
  • Each entity type has its own registered executor callback

Cross-References

  • Triggers: N/A (leaf workflow)
  • Triggered by: WF-SYNC-08 (queue processor failures), WF-SYNC-02 (Step 8 retry phase)

WF-SYNC-10: Conflict Resolution

Trigger: Pull operation finds local record that was also modified on server
Frequency: Per conflicting record during pull
Offline Support: N/A -- conflict resolution only occurs during active sync

Workflow Diagram

graph TD
A((Pull Finds<br/>Existing Record)) --> B[Detect Conflicting Fields<br/>skip metadata fields]
B --> C{Any Conflicts?}
C -->|No| D[Use Server Data<br/>no-conflict path]
C -->|Yes| E{Resolution<br/>Strategy?}
E -->|serverWins| F[Server Data Wins]
E -->|localWins| G[Local Data Wins]
E -->|latestTimestamp| H{Compare<br/>Timestamps}
H -->|Server Newer| F
H -->|Local Newer| G
H -->|Both Null| F
F --> I[Return ConflictResolutionResult<br/>winner=server]
G --> J[Return ConflictResolutionResult<br/>winner=local]

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | ConflictResolver.detectConflictingFields() | Compares all non-metadata fields between local and server | Skips: id, serverId, createdAt, updatedAt, syncStatus, syncError, etc. |
| 2 | Switch | ConflictResolutionStrategy enum | Routes to serverWins / localWins / latestTimestamp | Default: serverWins |
| 3 | Function | ConflictResolver._resolveByTimestamp() | Compares updatedAt timestamps | Falls back to server-wins if timestamps unavailable |
| 4 | Function | ConflictResolver.isServerNewer() | Convenience method for simple timestamp check | Assumes server authoritative if no timestamps |

Data Transformations

  • Input: Two maps (localData, serverData) with optional timestamps
  • Processing: Field-by-field diff excluding metadata; strategy-based resolution
  • Output: ConflictResolutionResult<Map<String, dynamic>> with winning data, strategy used, winner source, and conflicting field list

Error Handling

  • Default strategy is serverWins -- safest for data integrity
  • Handles type mismatches via string comparison fallback
  • Handles nested maps and lists recursively
  • When both timestamps are null, defaults to server-wins
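The resolution flow can be sketched as a field diff plus a strategy switch. Illustrative Python sketch (the app is Dart; the metadata field set is abbreviated and the function names are simplified stand-ins for `ConflictResolver`):

```python
METADATA_FIELDS = {"id", "serverId", "createdAt", "updatedAt",
                   "syncStatus", "syncError"}  # abbreviated set

def detect_conflicting_fields(local: dict, server: dict) -> list:
    """Field-by-field diff, skipping sync metadata."""
    keys = (set(local) | set(server)) - METADATA_FIELDS
    return sorted(k for k in keys if local.get(k) != server.get(k))

def resolve(local: dict, server: dict, strategy: str = "serverWins"):
    """Return (winner, winning_data); server is the safe default."""
    if not detect_conflicting_fields(local, server):
        return "server", server           # no-conflict path: use server data
    if strategy == "localWins":
        return "local", local
    if strategy == "latestTimestamp":
        lt, st = local.get("updatedAt"), server.get("updatedAt")
        if lt is not None and (st is None or lt > st):
            return "local", local
        return "server", server           # server wins ties / null timestamps
    return "server", server               # serverWins default
```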

Cross-References

  • Triggers: N/A (leaf workflow)
  • Triggered by: WF-SYNC-06 (farmer pull), WF-SYNC-07 (client pull)

WF-SYNC-11: Pull Scope Resolution

Trigger: Before any pull operation during sync
Frequency: Per sync cycle (cached for duration of cycle)
Offline Support: Uses cached identity from AgentContext

Workflow Diagram

graph TD
A((Resolve Pull Scope)) --> B[Get Identity<br/>from AgentContext]
B --> C[Normalize Roles<br/>lowercase, strip symbols]
C --> D[Resolve Agent ID<br/>identity -> cached fallback]
D --> E[Get Active Dashboard Role<br/>from ActiveRoleManager]
E --> F{Explicit Active<br/>Role Set?}
F -->|Admin Role| G[PullRole.admin<br/>full data access]
F -->|Agent Role| H{Has Agent ID?}
H -->|Yes| I[PullRole.agent<br/>scoped to agentId]
H -->|No| K
F -->|Warehouse Role| J[PullRole.warehouse<br/>scoped to centerId]
F -->|No| K[Capability Fallback]
K --> L{Is Admin?}
L -->|Yes| G
L -->|No| M{Is Warehouse?}
M -->|Yes| N{Has Center ID?}
N -->|Yes| J
N -->|No| O{Also Agent?}
O -->|Yes| I
O -->|No| J
M -->|No| P{Is Agent?}
P -->|Yes| I
P -->|No| Q[PullRole.unknown]

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | AgentContext.getIdentity(forceRefresh) | Resolves current user identity with roles | Cached from last successful fetch |
| 2 | Function | _normalizeRole(role) | toLowerCase().replaceAll(RegExp(r'[^a-z0-9]'), '') | Handles all casing and symbol variations |
| 3 | Function | ActiveRoleManager.getPersistedActiveRole() | Gets explicitly selected dashboard role | Returns null if not set (multi-role hub) |
| 4 | Function | _resolveAgentId() | Identity agentId -> cached getStoredAgentId() | Returns null if both unavailable |
| 5 | Function | _resolveWarehouseScope() | Fetches assignedCollectionCenterId from auth | Returns scope even if centerId is null |

Data Transformations

  • Input: JWT roles, agent identity, active dashboard selection
  • Processing: Role normalization and priority-based resolution
  • Output: PullScope with role, roles, agentId, assignedCenterId, and signature

Error Handling

  • Multi-role users without explicit active role follow fallback chain: admin > warehouse > agent > unknown
  • Warehouse without center assignment falls back to agent scope if user also has agent role
  • PullScope.signature enables scope change detection between sessions
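The capability fallback chain can be sketched in Python (the app is Dart; explicit active-role selection, which runs first in the real flow, is omitted here for brevity, and the agent-id requirement in the warehouse-to-agent fallback is an assumption of this sketch):

```python
import re

def normalize_role(role: str) -> str:
    """Lowercase and strip non-alphanumerics, per _normalizeRole."""
    return re.sub(r"[^a-z0-9]", "", role.lower())

def resolve_pull_role(roles, agent_id=None, center_id=None):
    """Capability fallback chain: admin > warehouse > agent > unknown."""
    normalized = {normalize_role(r) for r in roles}
    if "admin" in normalized:
        return "admin"
    if "warehouse" in normalized:
        if center_id is not None:
            return "warehouse"
        if "agent" in normalized and agent_id is not None:
            return "agent"   # no center assignment: fall back to agent scope
        return "warehouse"
    if "agent" in normalized:
        return "agent"
    return "unknown"
```

Normalization is what lets "Ware-House", "warehouse", and "WAREHOUSE" all resolve to the same role.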

Cross-References

  • Triggers: N/A (leaf workflow)
  • Triggered by: WF-SYNC-02 (Step 3), WF-SYNC-04, WF-SYNC-17 (auto-pull)

WF-SYNC-12: Login Sync / Post-Login Data Fetch

Trigger: Successful authentication
Frequency: Once per login
Offline Support: Partial -- detects fresh install and forces full pull; times out gracefully

Workflow Diagram

graph TD
A((Successful Login)) --> B{Login Sync<br/>Enabled?}
B -->|No| C[Return success]
B -->|Yes| D[Resolve Agent ID]
D --> E{Fresh Install?<br/>0 local farmers}
E -->|Yes| F[Clear Sync Metadata<br/>force full pull]
E -->|No| G[Keep Metadata<br/>delta sync]
F --> H[Start LoginSyncNotifier]
G --> H
H --> I[Race: Sync vs<br/>3-minute Timeout]
I --> J{Timeout?}
J -->|Yes| K[Return timedOut<br/>use cached data]
J -->|No| L[Execute Sync Sequence]
L --> M[Phase 1-3: Push in Parallel<br/>farmers + clients + cropDemands]
M --> N[Phase 4-5: Pull in Parallel<br/>farmers + clients]
N --> O[Phase 6: Cleanup<br/>Duplicate Clients]
O --> P[Return LoginSyncResult]

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Switch | SyncConfig.loginSyncEnabled | Feature toggle for login sync | Returns success if disabled |
| 2 | Function | FarmerLocalService.getAllFarmers() | Detects fresh install (0 farmers) | On error, clears all metadata for safety |
| 3 | Function | SyncMetadataService.clearSyncMetadata() | Forces full pull by removing delta timestamps | N/A |
| 4 | Parallel | Future.wait([pushFarmers, pushClients, pushCropDemands]) | Pushes all pending data in parallel | Per-phase error isolation |
| 5 | Parallel | Future.wait([pullFarmers, pullClients]) | Pulls fresh data in parallel | Per-phase error isolation |
| 6 | Function | ClientDraftService.cleanupDuplicateClients() | Removes duplicates from past race conditions | Logs warning on error |
| 7 | Race | Future.any([syncSequence, timeout]) | 3-minute overall timeout (SyncConfig.loginSyncTimeout) | Returns partial results on timeout |

Data Transformations

  • Input: Authenticated user with potential pending local data
  • Processing: Parallel push-then-pull with timeout boundary
  • Output: LoginSyncResult with per-phase summaries, error list, and timeout indicator

Error Handling

  • 3-minute timeout prevents indefinite login blocking
  • Fresh install detection ensures first-time users get all data
  • Each phase (push farmers, push clients, pull farmers, pull clients) is wrapped in its own try/catch
  • LoginSyncNotifier provides real-time progress for login UI
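
The timeout race above can be sketched in Python's asyncio; the production code uses Dart futures (`Future.any`), and `run_login_sync` and its return shape are illustrative names, not the real LoginSyncService API:

```python
import asyncio

LOGIN_SYNC_TIMEOUT = 180  # seconds; mirrors SyncConfig.loginSyncTimeout (3 minutes)

async def run_login_sync(sync_sequence, timeout=LOGIN_SYNC_TIMEOUT):
    """Race the sync sequence against a timeout boundary."""
    try:
        result = await asyncio.wait_for(sync_sequence, timeout=timeout)
        return {"timed_out": False, "result": result}
    except asyncio.TimeoutError:
        # Login proceeds with cached data instead of blocking indefinitely.
        return {"timed_out": True, "result": None}

async def demo():
    async def fast_sync():
        await asyncio.sleep(0.01)
        return "synced"
    async def slow_sync():
        await asyncio.sleep(10)
    fast = await run_login_sync(fast_sync(), timeout=1.0)
    slow = await run_login_sync(slow_sync(), timeout=0.05)
    return fast, slow

fast_outcome, slow_outcome = asyncio.run(demo())
```

One caveat of this sketch: `asyncio.wait_for` cancels the losing task on timeout, whereas a plain `Future.any` race leaves the sync sequence running in the background.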

Cross-References

  • Triggers: WF-SYNC-06 (farmer push/pull), WF-SYNC-07 (client push/pull)
  • Triggered by: Authentication flow (jwt_auth_service.dart)

WF-SYNC-13: Stock Sync

Trigger: Pull phase of full sync or auto-pull Frequency: Per sync cycle (when stock pull is not yielded to submit-priority) Offline Support: Yes -- stock data is cached in SQLite for offline viewing

Workflow Diagram

graph TD
A((Stock Sync Phase)) --> B[Phase 1: Sync Centers<br/>GET /stock/centers]
B --> C[Cache center summaries<br/>to center_stock_cache]
C --> D[Phase 2: Sync Aggregation<br/>categories and crop types]
D --> E[Cache aggregation data<br/>to stock_aggregation_cache]
E --> F[Phase 3: Sync Batches<br/>individual batch details]
F --> G[Cache batch data<br/>to stock_batch_cache]
G --> H[Update SyncEntityType.stock<br/>timestamp]
H --> I[Emit StockSyncProgress<br/>phase: completed]

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | HTTP | StockRepository.getAllCentersStock() | Fetches center-level stock summaries | Returns cached data on failure |
| 2 | Database | center_stock_cache table | Caches center stock for offline | SQLite transaction |
| 3 | HTTP | Stock aggregation endpoints | Hierarchical: categories -> crop types | Per-center error isolation |
| 4 | Database | stock_aggregation_cache table | Caches aggregation data | SQLite transaction |
| 5 | HTTP | Batch detail endpoints | Individual batch records | Per-batch error isolation |
| 6 | Database | stock_batch_cache table | Caches batch details | SQLite transaction |

Data Transformations

  • Input: Server stock data (centers, aggregation, batches)
  • Processing: Three-phase progressive caching (center -> aggregation -> batches)
  • Output: Locally cached stock data for offline warehouse operations

Error Handling

  • Phase-based progress tracking via StockSyncProgress stream
  • Each phase can fail independently
  • StockSyncPhase enum tracks: idle -> syncingCenters -> syncingAggregation -> syncingBatches -> completed/failed
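
The phase progression can be sketched as a small state machine. This is an illustrative model, not the real StockSync implementation; it assumes, per the error-handling notes above, that a failed phase is isolated and the remaining phases still run:

```python
from enum import Enum, auto

class StockSyncPhase(Enum):
    # Phase names mirror the enum described above.
    IDLE = auto()
    SYNCING_CENTERS = auto()
    SYNCING_AGGREGATION = auto()
    SYNCING_BATCHES = auto()
    COMPLETED = auto()
    FAILED = auto()

PHASE_ORDER = [StockSyncPhase.SYNCING_CENTERS,
               StockSyncPhase.SYNCING_AGGREGATION,
               StockSyncPhase.SYNCING_BATCHES]

def run_stock_sync(steps, emit):
    """Run the three caching phases in order.

    `steps` maps each phase to a zero-arg callable that fetches and caches
    that layer (centers -> aggregation -> batches); `emit` mimics the
    StockSyncProgress stream. Each phase may fail without stopping the
    others; the terminal state is FAILED if any phase failed.
    """
    failed = []
    for phase in PHASE_ORDER:
        emit(phase)
        try:
            steps[phase]()
        except Exception:
            failed.append(phase)  # isolate the failure, continue syncing
    terminal = StockSyncPhase.FAILED if failed else StockSyncPhase.COMPLETED
    emit(terminal)
    return terminal
```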

Cross-References

  • Triggers: N/A (leaf workflow)
  • Triggered by: WF-SYNC-02 (Step 3 pull phase), WF-SYNC-17 (auto-pull)

WF-SYNC-14: Production Cycle Sync

Trigger: Step 2.5 (push) and Step 3 (pull) of full sync Frequency: Per sync cycle Offline Support: Yes -- production cycles are created locally with sync_status = 'pending'

Workflow Diagram

graph TD
A((Production Cycle<br/>Sync Phase)) --> B[Push Phase: Step 2.5]
B --> C[ProductionCycleSyncService<br/>.pushPendingCycles]
C --> D[For Each Pending Cycle]
D --> E[Normalize Status<br/>aliases to canonical values]
E --> F[Upload Photo<br/>if present]
F --> G[POST to API<br/>with inputs + harvests]
G --> H{Success?}
H -->|Yes| I[Mark synced locally]
H -->|No| J[Mark failed,<br/>increment retry]
I --> K[Pull Phase: Step 3]
J --> K
K --> L[SyncService.pullProductionCycles<br/>FromBackend scope]
L --> M[Delta pull using<br/>modifiedSince + scope]
M --> N[Upsert to local<br/>production_cycles table]

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | ProductionCycleSyncService.pushPendingCycles() | Pushes locally created cycles | Returns ProductionCycleSyncResult |
| 2 | Function | Status normalization | Maps aliases (planned/planifie/planifié to Planned, etc.) | Extensive alias table for FR/EN variants |
| 3 | HTTP | ChunkedUploadService / UploadService | Photo upload for production evidence | Skipped if no photo |
| 4 | HTTP | Pull endpoint with scope | Delta sync with role-based filtering | Returns count pulled |

Data Transformations

  • Push: Local production_cycles + production_inputs + production_harvests -> API
  • Pull: API -> local SQLite upsert (preserves unsynced local records)
  • Status normalization handles 30+ aliases across EN/FR and enum formats

Error Handling

  • Push runs before pull so that locally pending cycles are not confused with freshly pulled server records
  • GUID validation on cycle IDs
  • Unsynced production work is preserved across user/scope transitions
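
Status normalization can be sketched with a small excerpt of the alias table. Only "Planned" and the planned/planifie aliases appear in the document; the other canonical values and aliases below are illustrative assumptions standing in for the real 30+ entry table:

```python
# Illustrative excerpt -- the real service maps 30+ EN/FR and enum-style
# spellings; entries other than the "Planned" row are assumptions.
_STATUS_ALIASES = {
    "planned": "Planned", "planifie": "Planned", "planifié": "Planned",
    "inprogress": "InProgress", "en cours": "InProgress",
    "harvested": "Harvested", "recolte": "Harvested", "récolté": "Harvested",
}

def normalize_status(raw: str) -> str:
    """Map a free-form status string to its canonical value.

    Unknown values pass through unchanged so the server can reject them
    explicitly rather than having the client silently coerce them.
    """
    key = raw.strip().lower()
    return _STATUS_ALIASES.get(key, raw)
```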

Cross-References

  • Triggers: N/A (leaf workflow)
  • Triggered by: WF-SYNC-02 (Steps 2.5, 3), WF-SYNC-04

WF-SYNC-15: Survey Response & File Sync

Trigger: Steps 7.6 and 7.7 of full sync, or via queue processor Frequency: Per sync cycle Offline Support: Yes -- responses saved locally, files queued for upload

Workflow Diagram

graph TD
A((Survey Sync Phase)) --> B[Step 7.6: Push Responses]
B --> C[Get Pending Responses<br/>from campaign_local_datasource]
C --> D{Any Pending?}
D -->|No| E[Skip to Files]
D -->|Yes| F[Batch Submit<br/>POST /api/surveys/responses/batch]
F --> G[For Each Result]
G --> H{Item Success?}
H -->|Yes| I[Mark Response Synced]
H -->|No| J[Mark Response Failed]
I --> E
J --> E
E --> K[Step 7.7: Push Files]
K --> L[Query survey_file_queue<br/>status = pending]
L --> M{Any Pending Files?}
M -->|No| N[End]
M -->|Yes| O[For Each File]
O --> P[Upload to Cloud Storage<br/>folder: survey-files]
P --> Q{Upload Success?}
Q -->|Yes| R[Mark File Uploaded<br/>update response answer URL]
Q -->|No| S[Log failure]
R --> N
S --> N

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Function | SurveyResponseSyncService.pushPendingResponses() | Batch-submits all pending responses in single API call | Returns SurveyResponseSyncResult |
| 2 | HTTP | CampaignRemoteDataSource.batchSubmitResponses() | POST /api/surveys/responses/batch | Per-response status in batch result |
| 3 | Function | SurveyFileSyncService.pushPendingFiles() | Uploads pending file attachments | Returns SurveyFileSyncResult |
| 4 | HTTP | CampaignRemoteDataSource.uploadFile() | Uploads file to cloud storage | Returns remote URL on success |
| 5 | Function | _updateResponseFileAnswer() | Replaces local path with remote URL in response data | N/A |

Data Transformations

  • Responses: Local survey_responses table -> batch API payload -> server; local records marked synced
  • Files: Local file path -> cloud upload -> remote URL; response answer field updated with URL
  • Files MUST be uploaded AFTER responses (files reference server-side response IDs)

Error Handling

  • Batch submission provides per-item success/failure status
  • File upload failures do not affect response sync status
  • Queue item processing (syncQueueItem) supports both create and update operations
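
The ordering constraint and per-item error isolation can be sketched as follows. The `api` interface and all field names are illustrative assumptions, not the real CampaignRemoteDataSource:

```python
def push_survey_data(pending_responses, pending_files, api):
    """Push responses first, then files.

    Files reference server-side response IDs, so they must upload only
    after the batch submit; and a failed file upload is logged without
    flipping the response's sync status.
    """
    synced, failed = [], []
    if pending_responses:
        # One batch call; the result carries per-item success/failure.
        for item in api.batch_submit(pending_responses):
            (synced if item["ok"] else failed).append(item["local_id"])
    uploaded = []
    for f in pending_files:
        url = api.upload_file(f["path"])  # remote URL, or None on failure
        if url is not None:
            # Replace the local path in the response answer with the URL.
            uploaded.append((f["local_id"], url))
    return {"synced": synced, "failed": failed, "uploaded": uploaded}
```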

Cross-References

  • Triggers: N/A (leaf workflow)
  • Triggered by: WF-SYNC-02 (Steps 7.6, 7.7), WF-SYNC-08 (queue processor for surveyResponse entity type)

WF-SYNC-16: Assessment Sync

Trigger: Step 1.5 (question pull) and Step 5 (assessment push) of full sync Frequency: Per sync cycle (questions freshness-gated at 24 hours) Offline Support: Yes -- questions cached for offline use; assessments queued locally

Workflow Diagram

graph TD
A((Assessment Sync)) --> B[Step 1.5: Pull Questions]
B --> C{Cache Fresh?<br/>< 24h since last sync}
C -->|Yes| D[Skip Pull]
C -->|No| E[Ensure Tables Exist]
E --> F[Get Last Sync Time]
F --> G{First Sync?}
G -->|Yes| H[Full Pull: GET all questions<br/>clear + replace cache]
G -->|No| I[Delta Pull: GET questions<br/>modifiedSince=lastSync]
H --> J[Save Questions Locally]
I --> J
J --> K[Update sync_metadata<br/>assessment_questions timestamp]
K --> L[End Pull Phase]
L --> M[Step 5: Push Assessments]
M --> N[AssessmentSyncIntegration<br/>.syncPendingAssessments]
N --> O[For Each Pending]
O --> P[POST to API]
P --> Q{Success?}
Q -->|Yes| R[Mark Synced]
Q -->|No| S[Mark Failed]
R --> T[Update SyncEntityType<br/>.assessments timestamp]
S --> T

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Switch | SyncConfig.isAssessmentQuestionsFresh(lastSync) | 24-hour freshness check | Forces sync if forceRefresh |
| 2 | HTTP | AssessmentRemoteDataSource.getAllQuestions(modifiedSince) | Bulk question fetch with optional delta | N/A |
| 3 | Database | AssessmentLocalDataSource.saveQuestions() | Upsert for delta, clear+save for full | Ensures tables exist first |
| 4 | Function | AssessmentSyncIntegration.syncPendingAssessments() | Pushes completed quality assessments | Returns success/failure counts |

Data Transformations

  • Question pull: API -> local cache with delta sync support
  • Assessment push: Local pending assessments -> API POST
  • Questions are used offline for quality assessment forms

Error Handling

  • Question sync failures are fully isolated -- do not block farmer/client sync
  • ensureTablesExist() handles first-run or migration scenarios
  • Pending assessments are also pushed during auto-pull, so captures from offline sessions get uploaded
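
The 24-hour freshness gate can be sketched like this; the function signature is illustrative, modeled loosely on SyncConfig.isAssessmentQuestionsFresh:

```python
from datetime import datetime, timedelta, timezone

QUESTIONS_FRESHNESS = timedelta(hours=24)  # mirrors the 24h gate in SyncConfig

def is_assessment_questions_fresh(last_sync, now=None, force_refresh=False):
    """Return True when cached questions are fresh and the pull can be skipped."""
    if force_refresh or last_sync is None:
        return False  # never synced, or caller demands a refresh
    now = now or datetime.now(timezone.utc)
    return (now - last_sync) < QUESTIONS_FRESHNESS
```

A `last_sync` of None corresponds to the "First Sync?" branch in the diagram, which triggers a full pull instead of a delta pull.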

Cross-References

  • Triggers: N/A (leaf workflow)
  • Triggered by: WF-SYNC-02 (Steps 1.5, 5), WF-SYNC-03 (Step 1.5), WF-SYNC-17 (auto-pull)

WF-SYNC-17: Auto-Pull Heartbeat

Trigger: Connectivity restore, app startup, foreground heartbeat timer, SignalR DataChanged, app resume Frequency: Cooldown-gated (30s for connectivity, 45s for resume, 15m for heartbeat) Offline Support: N/A -- only runs when online

Workflow Diagram

graph TD
A((Auto-Pull Trigger)) --> B{Initialized?}
B -->|No| C[Skip]
B -->|Yes| D{Submit Sync<br/>Pending?}
D -->|Yes| C
D -->|No| E{Already<br/>Auto-Pulling?}
E -->|Yes| C
E -->|No| F{On Cooldown?<br/>per-trigger tracking}
F -->|Yes| C
F -->|No| G[Acquire _syncLock]
G --> H{Full Sync<br/>Running?}
H -->|Yes| C
H -->|No| I[Check Connectivity]
I --> J{Connected?}
J -->|No| C
J -->|Yes| K[Set _isAutoPulling = true]
K --> L[Resolve PullScope]
L --> M[Ensure Pull Scope Safety]
M --> N{Yield to<br/>Submit Sync?}
N -->|Yes| O[Stop Early]
N -->|No| P[Push Pending Assessments]
P --> Q[Pull Master Data<br/>freshness-gated]
Q --> R[Pull Assessment Questions<br/>freshness-gated]
R --> S[Pull Farmers for Scope]
S --> T[Pull Clients for Scope]
T --> U[Pull Crop Demands]
U --> V[Pull Production Cycles]
V --> W[Pull Orders]
W --> X[Pull Campaigns]
X --> Y[Pull Assessments]
Y --> Z[Pull Stock Data]
Z --> AA[Update Agent Last Sync]
AA --> AB{Failures?}
AB -->|0| AC[Reset failure counter]
AB -->|>0| AD[Increment<br/>_consecutiveAutoPullFailures]
AD --> AE{>= 3 consecutive?}
AE -->|Yes| AF[Emit UX Warning]
AE -->|No| AG[End]
AC --> AG
AF --> AG
O --> AG

Node Descriptions

| # | n8n Node Type | Component | Function | Error Handling |
|---|---|---|---|---|
| 1 | Gate | Cooldown check via _lastAutoPullByTrigger map | Per-trigger cooldown enforcement | Skip if on cooldown |
| 2 | Gate | _syncLock.synchronized() | Prevents concurrent sync | Skip if full sync running |
| 3 | Function | PullScopeResolver.resolve(forceRefresh: true) | Fresh scope for each auto-pull | Falls back to unknown |
| 4 | Function | _ensurePullScopeSafety(scope) | Detects user/scope changes; clears stale caches | Clears entity caches on user switch |
| 5 | Function | syncMasterData(force: false) | Freshness-gated master data pull | Non-blocking |
| 6 | Function | Entity-specific pull operations | Pull farmers, clients, demands, cycles, orders, campaigns, assessments, stock | Each wrapped in its own try/catch |
| 7 | Counter | _consecutiveAutoPullFailures | Tracks repeated failures | Warns after 3 consecutive failures (SyncConfig.autoPullFailureAlertThreshold) |

Data Transformations

  • Input: Stale local cache from last sync
  • Processing: Pull-only refresh (no push) with freshness gating per entity
  • Output: Updated local cache; failure counter for UX warnings

Error Handling

  • Each pull operation is wrapped in its own try/catch -- partial success is tracked
  • _shouldYieldToSubmitPrioritySync() allows auto-pull to yield immediately to user-submitted data
  • After 3 consecutive failures, emits UX warning via UnifiedSyncStatusService
  • Failure counter resets on first successful pull
  • Per-trigger cooldown prevents excessive auto-pulls:
    • Connectivity restore: 30s cooldown
    • App resume: 45s cooldown
    • Foreground heartbeat: 15m interval
  • Connectivity restore with pending local changes triggers requestSubmitSync() instead
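
The per-trigger cooldown gate can be sketched as follows, mirroring the _lastAutoPullByTrigger map described above; the trigger keys and function name are illustrative:

```python
from datetime import datetime, timedelta, timezone

# Cooldowns from the document; the string keys are illustrative names.
COOLDOWNS = {
    "connectivity": timedelta(seconds=30),
    "resume": timedelta(seconds=45),
    "heartbeat": timedelta(minutes=15),
}

_last_auto_pull_by_trigger = {}  # trigger name -> last accepted pull time

def should_auto_pull(trigger, now):
    """Gate an auto-pull request on its trigger-specific cooldown.

    A rejected request does NOT reset the clock, so a burst of triggers
    collapses into at most one pull per cooldown window per trigger.
    """
    last = _last_auto_pull_by_trigger.get(trigger)
    if last is not None and (now - last) < COOLDOWNS[trigger]:
        return False  # still on cooldown for this trigger
    _last_auto_pull_by_trigger[trigger] = now
    return True
```

Because each trigger has its own entry, a heartbeat pull does not suppress a connectivity-restore pull, and vice versa.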

Cross-References

  • Triggers: WF-SYNC-05, WF-SYNC-06 (pull only), WF-SYNC-07 (pull only), WF-SYNC-13, WF-SYNC-14, WF-SYNC-16
  • Triggered by: WF-SYNC-01 (connectivity listener, heartbeat timer, SignalR handler, startup)

Cross-Workflow Dependency Graph

graph TD
WF01[WF-SYNC-01<br/>Initialization] --> WF02[WF-SYNC-02<br/>Full Sync]
WF01 --> WF04[WF-SYNC-04<br/>Submit Priority]
WF01 --> WF08[WF-SYNC-08<br/>Queue Processor]
WF01 --> WF17[WF-SYNC-17<br/>Auto-Pull]

WF02 --> WF05[WF-SYNC-05<br/>Master Data]
WF02 --> WF06[WF-SYNC-06<br/>Farmer Sync]
WF02 --> WF07[WF-SYNC-07<br/>Client Sync]
WF02 --> WF09[WF-SYNC-09<br/>Backoff Retry]
WF02 --> WF10[WF-SYNC-10<br/>Conflict Resolution]
WF02 --> WF11[WF-SYNC-11<br/>Pull Scope]
WF02 --> WF13[WF-SYNC-13<br/>Stock Sync]
WF02 --> WF14[WF-SYNC-14<br/>Production Cycle]
WF02 --> WF15[WF-SYNC-15<br/>Survey Sync]
WF02 --> WF16[WF-SYNC-16<br/>Assessment Sync]

WF03[WF-SYNC-03<br/>Initial Sync] --> WF05

WF04 --> WF06
WF04 --> WF07
WF04 --> WF14
WF04 --> WF17

WF08 --> WF09

WF12[WF-SYNC-12<br/>Login Sync] --> WF06
WF12 --> WF07

WF17 --> WF05
WF17 --> WF06
WF17 --> WF07
WF17 --> WF11
WF17 --> WF13
WF17 --> WF14
WF17 --> WF16

Key Source Files Reference

| File | Purpose |
|---|---|
| mobile/mon_jardin/lib/data/services/sync_manager.dart | Central orchestrator -- SyncManager singleton, strategy selection, all sync phases |
| mobile/mon_jardin/lib/data/services/sync/sync_coordinator.dart | Delegates to FarmersSync, ClientsSync, CropDemandsSync |
| mobile/mon_jardin/lib/data/services/sync/strategies/sync_strategy.dart | Abstract strategy interface (7 boolean flags) |
| mobile/mon_jardin/lib/data/services/sync/strategies/full_sync_strategy.dart | All phases enabled |
| mobile/mon_jardin/lib/data/services/sync/strategies/initial_sync_strategy.dart | Master data + assessment questions only |
| mobile/mon_jardin/lib/data/services/sync/strategies/submit_priority_strategy.dart | Push-priority with bidirectional sync; skips master data |
| mobile/mon_jardin/lib/data/services/sync/master_data_sync_orchestrator.dart | Version-gated master data sync with analytics |
| mobile/mon_jardin/lib/data/services/sync/master_data_sync.dart | MasterDataSyncHelper -- 26 entity types, fetch-then-replace |
| mobile/mon_jardin/lib/data/services/sync/offline_queue_processor.dart | OfflineQueueProcessor -- 15s timer, batches of 5, 12 entity processors |
| mobile/mon_jardin/lib/data/services/sync/exponential_backoff_service.dart | 30s-5h backoff with +/-10% jitter |
| mobile/mon_jardin/lib/data/services/sync/conflict_resolver.dart | Server-wins / local-wins / timestamp strategies |
| mobile/mon_jardin/lib/data/services/sync/pull_scope_resolver.dart | Role-based scope: admin / agent / warehouse / unknown |
| mobile/mon_jardin/lib/data/services/sync/pull_scope.dart | PullScope value object with signature for change detection |
| mobile/mon_jardin/lib/data/services/sync/login_sync_service.dart | Post-login sync with 3-minute timeout and parallel phases |
| mobile/mon_jardin/lib/data/services/sync/farmers_sync.dart | Farmer push/pull with photo upload |
| mobile/mon_jardin/lib/data/services/sync/clients_sync.dart | Client push/pull with document sync |
| mobile/mon_jardin/lib/data/services/sync/crop_demands_sync.dart | Crop demand draft sync with photo upload |
| mobile/mon_jardin/lib/data/services/sync/stock_sync.dart | Three-phase stock sync: centers -> aggregation -> batches |
| mobile/mon_jardin/lib/data/services/sync/production_cycle_sync.dart | Production cycle push/pull with status normalization |
| mobile/mon_jardin/lib/data/services/sync/survey_response_sync.dart | Batch survey response push |
| mobile/mon_jardin/lib/data/services/sync/survey_file_sync.dart | Survey file upload with URL replacement |
| mobile/mon_jardin/lib/data/services/sync/assessment_questions_sync.dart | Bulk question pull with 24h freshness cache |
| mobile/mon_jardin/lib/data/services/sync/retry_executor_service.dart | 30s polling, entity-specific executors |
| mobile/mon_jardin/lib/data/services/connectivity_service.dart | DNS-verified connectivity with cancelable operations |
| mobile/mon_jardin/lib/core/config/sync_config.dart | All timing constants, freshness durations, feature toggles |
| mobile/mon_jardin/lib/data/models/master_data_sync_event.dart | 26-member MasterDataEntityType enum |
| mobile/mon_jardin/lib/data/local/models/sync_history_entry.dart | SyncTrigger enum: manual, connectivity, startup, resume, foregroundHeartbeat, formSubmit, scheduled, retry |