AI Copilot for mobile app navigation

NOVASOLUTIONS.TECHNOLOGY develops, supports, and maintains iOS, Android, and PWA mobile applications. We have extensive experience and expertise in publishing mobile applications on popular marketplaces such as Google Play, the App Store, Amazon Appstore, AppGallery, and others.
Development and support of all types of mobile applications:
Information and entertainment mobile applications
News apps, games, reference guides, online catalogs, weather apps, fitness and health apps, travel apps, educational apps, social networks and messengers, quizzes, blogs and podcasts, forums, aggregators.
E-commerce mobile applications
Online stores, B2B apps, marketplaces, online exchanges, cashback services, dropshipping platforms, loyalty programs, food and goods delivery, payment systems.
Business process management mobile applications
CRM systems, ERP systems, project management, sales team tools, financial management, production management, logistics and delivery management, HR management, data monitoring systems.
Electronic services mobile applications
Classified ads platforms, online schools, online cinemas, electronic service platforms, cashback platforms, video hosting, thematic portals, online booking and scheduling platforms, online trading platforms.

These are just some of the types of mobile applications we work with; each may have its own features and functionality, tailored to the specific needs and goals of the client.

AI-Powered Navigation Copilot for Mobile Applications

Complex mobile applications—banking platforms, ERP systems, healthcare solutions—lose users not due to missing features, but because finding the right function is too difficult. Traditional solutions like onboarding tours and help sections work poorly: users complete the tour at first launch and forget it within days. An AI navigation Copilot solves this by understanding natural language requests and guiding users to the right location.

Navigation Copilot Capabilities

Not a chat interface, not an FAQ bot. Three concrete scenarios:

Deep link navigation. User writes "I want to transfer money to a card"—the assistant opens the required screen. Technically: an NLU model classifies the intent, maps it to a deep link, and the app executes programmatic navigation.
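
A minimal sketch of the intent-to-deep-link mapping (the intent names and the myapp:// URL scheme here are hypothetical):

// iOS: intent-to-deep-link mapping (sketch; intents and routes are illustrative)
import UIKit

enum NavigationIntent: String {
    case transferToCard = "transfer_to_card"
    case openStatement  = "open_statement"
}

let deepLinks: [NavigationIntent: URL] = [
    .transferToCard: URL(string: "myapp://transfer?type=card")!,
    .openStatement:  URL(string: "myapp://statement")!
]

func handle(intent: NavigationIntent) {
    guard let url = deepLinks[intent] else { return }
    // Routes through the app's existing deep link handling
    UIApplication.shared.open(url)
}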

Contextual suggestions. User has been stuck on a screen for three minutes without taking action—Copilot offers help. Not generic "need help?", but contextual: "You're on a payment screen. Would you like me to explain the difference between transferring via phone number versus account details?"
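
A sketch of the idle trigger, assuming a three-minute threshold; the offerHelp entry point mentioned below is hypothetical:

// iOS: idle detection that fires a contextual suggestion (sketch)
import Foundation

final class IdleDetector {
    private var timer: Timer?
    private let threshold: TimeInterval = 180  // 3 minutes without interaction

    var onIdle: (() -> Void)?

    // Call on every user interaction: tap, scroll, text input.
    // Must run on a thread with an active run loop (in practice, the main thread).
    func registerActivity() {
        timer?.invalidate()
        timer = Timer.scheduledTimer(withTimeInterval: threshold, repeats: false) { [weak self] _ in
            self?.onIdle?()
        }
    }
}

Wire-up: detector.onIdle = { copilot.offerHelp(for: currentScreen) }, where offerHelp builds the contextual message from the current screen's metadata.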

Guided task execution. Multi-step tasks like "apply for a mortgage"—12 steps scattered across three sections. Copilot leads step-by-step, tracks progress, and explains each screen.
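
One way to model such a flow is an ordered list of steps with a progress cursor; a sketch, where the step structure is an assumption:

// iOS: guided flow as an ordered list of steps with progress tracking (sketch)
struct FlowStep {
    let screenId: String     // screen the user must reach
    let instruction: String  // explanation shown on arrival
}

final class GuidedFlow {
    let steps: [FlowStep]
    private(set) var currentIndex = 0

    init(steps: [FlowStep]) { self.steps = steps }

    var currentStep: FlowStep? {
        currentIndex < steps.count ? steps[currentIndex] : nil
    }

    var progress: Double {
        steps.isEmpty ? 1 : Double(currentIndex) / Double(steps.count)
    }

    // Advance once the user completes the current screen
    func advance() {
        if currentIndex < steps.count { currentIndex += 1 }
    }
}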

Architecture: NLU → Intent → Action

The trickiest part is mapping a user request to a specific app action. There are two approaches:

Classifier-based. Predefined intent set (50–200 for a typical app), trained classifier. Fast, predictable, cheap at runtime. Struggles with non-standard phrasings.
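
On iOS the classifier can run fully on-device; a sketch using the NaturalLanguage framework, where IntentClassifier is a hypothetical Core ML text classifier trained on labeled user phrasings:

// iOS: on-device intent classification via Core ML + NaturalLanguage (sketch)
import CoreML
import NaturalLanguage

func classifyIntent(_ utterance: String) -> String? {
    // IntentClassifier is a hypothetical code-generated Core ML model class
    guard let mlModel = try? IntentClassifier(configuration: MLModelConfiguration()).model,
          let nlModel = try? NLModel(mlModel: mlModel) else { return nil }
    return nlModel.predictedLabel(for: utterance)  // e.g. "transfer_to_card"
}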

LLM + function calling. Describe all screens and actions as a set of functions; the LLM selects the right one based on the user's request:

// iOS — navigation functions description for LLM
let navigationTools: [ChatCompletionTool] = [
    ChatCompletionTool(
        type: .function,
        function: ChatCompletionToolFunction(
            name: "navigate_to_screen",
            description: "Opens app screen by identifier",
            parameters: NavigationParameters.schema  // {screen_id: string, params: object}
        )
    ),
    ChatCompletionTool(
        type: .function,
        function: ChatCompletionToolFunction(
            name: "highlight_element",
            description: "Highlights UI element on current screen with explanation",
            parameters: HighlightParameters.schema
        )
    ),
    ChatCompletionTool(
        type: .function,
        function: ChatCompletionToolFunction(
            name: "start_guided_flow",
            description: "Starts step-by-step guide for multi-step task",
            parameters: FlowParameters.schema
        )
    )
]

// Request with function calling
let request = ChatCompletionRequest(
    model: "gpt-4o-mini",
    messages: [systemMessage, userMessage],
    tools: navigationTools,
    toolChoice: .auto
)

The LLM returns tool_calls with the function name and parameters, and the app executes the navigation.
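
A sketch of that dispatch, reusing the hypothetical SDK types from the request above; NavigationAction is assumed Codable, and AppRouter is shown in the next section:

// iOS: dispatching the tool_call returned by the LLM (sketch)
import Foundation

func handleToolCall(_ call: ChatCompletionToolCall, router: AppRouter) {
    // Function arguments arrive as a JSON string
    guard let data = call.function.arguments.data(using: .utf8) else { return }
    switch call.function.name {
    case "navigate_to_screen":
        if let action = try? JSONDecoder().decode(NavigationAction.self, from: data) {
            router.executeNavigationAction(action)
        }
    case "highlight_element", "start_guided_flow":
        break  // handled analogously by the highlight overlay and the flow engine
    default:
        break  // unknown function: ignore rather than guess
    }
}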

Programmatic Navigation in iOS and Android

On iOS (SwiftUI)—via NavigationPath or a custom router:

import SwiftUI

class AppRouter: ObservableObject {
    @Published var path = NavigationPath()

    func navigate(to screen: AppScreen, params: [String: Any] = [:]) {
        switch screen {
        case .transfer:
            path.append(TransferRoute(params: params))
        case .loanApplication:
            path.append(LoanApplicationRoute(params: params))
        // ...
        }
    }

    // Called from AI Copilot
    func executeNavigationAction(_ action: NavigationAction) {
        DispatchQueue.main.async {
            self.navigate(to: action.screen, params: action.params)
        }
    }
}

On Android (Compose)—via NavController:

import androidx.navigation.NavController

fun handleCopilotAction(action: NavigationAction, navController: NavController) {
    when (action.screenId) {
        "transfer" -> navController.navigate(
            "transfer?amount=${action.params["amount"] ?: ""}"
        )
        "loan_application" -> navController.navigate("loan/application")
        // ...
    }
}

UI Element Highlighting

Guided mode with element highlighting is technically more complex than plain navigation: it requires a way to identify elements that does not depend on their position on screen.

On iOS: a tag system via accessibilityIdentifier. The Copilot knows element names; when needed, an overlay layer draws a highlight animation over the target element.
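
A UIKit sketch of the overlay: walk the view hierarchy for the identifier, then draw a pulsing border over the element's frame (the styling and timing values are arbitrary):

// iOS: find a view by accessibilityIdentifier and highlight it with an overlay (sketch)
import UIKit

func findView(withIdentifier id: String, in root: UIView) -> UIView? {
    if root.accessibilityIdentifier == id { return root }
    for subview in root.subviews {
        if let match = findView(withIdentifier: id, in: subview) { return match }
    }
    return nil
}

func highlight(identifier: String, in window: UIWindow) {
    guard let target = findView(withIdentifier: identifier, in: window) else { return }
    // Convert the target frame into window coordinates and pad it slightly
    let frame = target.convert(target.bounds, to: window).insetBy(dx: -4, dy: -4)
    let overlay = UIView(frame: frame)
    overlay.layer.borderColor = UIColor.systemYellow.cgColor
    overlay.layer.borderWidth = 2
    overlay.layer.cornerRadius = 8
    overlay.isUserInteractionEnabled = false  // taps pass through to the real element
    window.addSubview(overlay)
    UIView.animate(withDuration: 0.6, delay: 0, options: [.autoreverse, .repeat]) {
        overlay.alpha = 0.3
    }
    DispatchQueue.main.asyncAfter(deadline: .now() + 4) { overlay.removeFromSuperview() }
}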

On Android: a similar approach via contentDescription or custom tags, plus ViewTreeObserver to obtain element coordinates at runtime.

Contextual Awareness

Copilot must know where the user is right now. Current screen, completed steps, unfilled fields—this context is injected into the system prompt:

func buildCopilotContext(currentScreen: AppScreen, formState: FormState?) -> String {
    var context = "Current screen: \(currentScreen.name).\n"
    if let form = formState {
        context += "Filled fields: \(form.completedFields.joined(separator: ", ")).\n"
        context += "Missing required fields: \(form.missingRequired.joined(separator: ", ")).\n"
    }
    return context
}
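
The assembled string then goes into the system message on every request; a sketch, where ChatMessage mirrors the hypothetical SDK above, and basePrompt, router.currentScreen, and activeFormState are assumed to exist:

// iOS: injecting fresh context into the system message before each request (sketch)
let systemMessage = ChatMessage(
    role: .system,
    content: basePrompt + "\n" + buildCopilotContext(
        currentScreen: router.currentScreen,
        formState: activeFormState
    )
)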

Common Implementation Errors

Main mistake: the Copilot performs destructive actions without confirmation. The rule: navigation executes immediately; any data change (form submission, payment creation) requires explicit user confirmation, regardless of what the Copilot said.
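
A sketch of that gating, with hypothetical action categories; confirm is expected to present a dialog and invoke the closure only on explicit acceptance:

// iOS: destructive tool calls pass through explicit confirmation (sketch)
import Foundation

enum CopilotAction {
    case navigate(NavigationAction)      // safe: executes immediately
    case submitForm(screenId: String)    // destructive: needs confirmation
    case createPayment(amount: Decimal)  // destructive: needs confirmation
}

func execute(_ action: CopilotAction,
             router: AppRouter,
             confirm: (String, @escaping () -> Void) -> Void) {
    switch action {
    case .navigate(let nav):
        router.executeNavigationAction(nav)
    case .submitForm(let screenId):
        confirm("Submit the form on \(screenId)?") { /* perform submission */ }
    case .createPayment(let amount):
        confirm("Create a payment of \(amount)?") { /* create the payment */ }
    }
}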

Second mistake: putting full descriptions of all 80 screens into the system prompt. This inflates it to thousands of tokens. The solution is vector search over a screen catalog before the LLM request: of the 80 screens, only the 5–10 most relevant enter the prompt.
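
A sketch of the retrieval step, assuming each screen description has a precomputed embedding (the embedding model choice is up to the team):

// iOS: pick the top-k screens by cosine similarity before building the prompt (sketch)
struct ScreenEntry {
    let id: String
    let description: String  // goes into the prompt if selected
    let embedding: [Float]   // precomputed from the description
}

func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    let normA = a.reduce(0) { $0 + $1 * $1 }.squareRoot()
    let normB = b.reduce(0) { $0 + $1 * $1 }.squareRoot()
    return dot / (normA * normB + 1e-9)
}

func relevantScreens(for queryEmbedding: [Float],
                     in catalog: [ScreenEntry],
                     k: Int = 8) -> [ScreenEntry] {
    catalog
        .map { (entry: $0, score: cosineSimilarity($0.embedding, queryEmbedding)) }
        .sorted { $0.score > $1.score }
        .prefix(k)
        .map { $0.entry }
}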

Development Process

Inventory all screens and actions → design intent schema → implement NLU (classifier or LLM function calling) → build programmatic navigation system → UI overlay for element highlighting → guided flow engine → A/B test with task completion rate metrics.

Timeframe Estimates

MVP with LLM function calling and basic navigation—2–3 weeks. Complete system with guided flows, element highlighting, and contextual suggestions—3–5 weeks. Iterative improvements based on analytics—ongoing.