The Two Sides of the Coin: Understanding Arche AI's Deception Score vs. Confidence Level

Unpacking the science and technology powering AI-driven deception detection

At Arche AI, our mission is to provide you with the most nuanced and actionable intelligence possible. A key part of that is understanding the different layers of our analysis. One of the most common questions we get is about the "confidence level" you see in your results.

Let's clear this up: the confidence level is NOT the deception score. Understanding the difference is the key to unlocking the full power of the Arche AI platform.

Layer 1: The Core Deception Score (The "What")

First, it's important to remember that our system performs multiple, distinct analyses that build on one another. The first and most fundamental of these comes from our primary deception model. When you see a segment marked as "Deception Detected," it has already passed this first test with a score greater than 60%.

This primary score is generated by analyzing the spectral data of the voice itself—the non-linguistic, subconscious vocal biomarkers that indicate cognitive load. This is the core "what" of our analysis; it's the raw signal that flags a moment for a closer look.
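To make that flow concrete, here is a minimal sketch of the first-pass filter in Python. The names (flag_segments, score_segment, DECEPTION_THRESHOLD) are illustrative assumptions rather than the Arche AI API; the only detail taken from this article is the 60% threshold.

    # Illustrative sketch only: flag_segments and score_segment are hypothetical
    # names, not part of the Arche AI platform.
    DECEPTION_THRESHOLD = 0.60  # segments above 60% are marked "Deception Detected"

    def flag_segments(segments, score_segment):
        """Keep only the segments whose spectral deception score clears the threshold."""
        flagged = []
        for segment in segments:
            score = score_segment(segment)  # assumed 0.0-1.0 output of the spectral model
            if score > DECEPTION_THRESHOLD:
                flagged.append({"segment": segment, "deception_score": score})
        return flagged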

Layer 2: The Confidence Level (The "Why")

Once a segment is flagged, our Large Language Model (LLM) acts like a detective to analyze the language and context. The confidence level you see is a measure of the LLM's certainty in its own linguistic analysis of that specific segment.

This confidence score, a value from 0-100%, tells you three main things:

  • How certain the AI is about its interpretation of what might be deceptive.
  • The reliability of the linguistics-based analysis for that segment.
  • The quality of the linguistic evidence supporting the deception determination.

Our AI evaluates the strength of linguistic indicators, the clarity of the surrounding conversation, and the consistency of speech patterns to assign its confidence.
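As a purely illustrative toy model, you can picture that evaluation as blending those three factors into a single percentage. In practice the confidence value comes directly from the LLM's own self-assessment; the factor names and weights below are assumptions made for the sake of the example.

    # Toy model only: the factors and weights are illustrative assumptions;
    # Arche AI's real confidence value is produced by the LLM itself.
    def sketch_confidence(indicator_strength, context_clarity, pattern_consistency):
        """Blend three 0.0-1.0 estimates into a 0-100% confidence value."""
        weights = (0.5, 0.3, 0.2)  # assumed relative importance, not real parameters
        factors = (indicator_strength, context_clarity, pattern_consistency)
        return round(100 * sum(w * f for w, f in zip(weights, factors)))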

What the Different Levels Mean

Here's a quick guide to interpreting the confidence levels (a short code sketch of this mapping follows the list):

  • High Confidence (>80%): The AI is very certain about its linguistic interpretation. The context is clear, and the verbal patterns are consistent. This is a strong signal to investigate further.
  • Medium Confidence (50-80%): The AI has moderate certainty. There may be some ambiguity in the context or mixed signals in the speech patterns.
  • Low Confidence (<50%): The AI is less certain. This could be due to limited context, unclear speech, or conflicting linguistic indicators.
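In code, that mapping is straightforward. A minimal sketch, assuming the confidence value is expressed as a percentage (the function name is our own, not the platform's):

    # Band thresholds taken from the guide above; the function name is illustrative.
    def confidence_band(confidence_pct):
        """Map a 0-100% confidence value to High, Medium, or Low."""
        if confidence_pct > 80:
            return "High"
        if confidence_pct >= 50:
            return "Medium"
        return "Low"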

The Most Important Takeaway

You could have a segment with a 90% deception score but only a 60% confidence level.

What does this mean? It tells you that while our primary spectral analysis strongly suggests deception (the 90% deception score), our LLM is only moderately confident in its own linguistic assessment of why (the 60% confidence level), perhaps due to a lack of surrounding context in the conversation.

This dual-metric system is designed to help you prioritize. A high deception score with low confidence doesn't invalidate the finding; it simply tells you that more human investigation is needed to truly understand the context behind the signal.
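Put together, a triage step might look like the sketch below. This is a hypothetical helper based only on the guidance in this article, not platform output: the spectral score decides whether a segment is flagged at all, and the confidence level decides how much human review the flag needs.

    # Hypothetical triage helper based on the guidance in this article,
    # not part of the Arche AI platform.
    def triage(deception_score, confidence_pct):
        """Suggest a next step using both metrics together."""
        if deception_score <= 0.60:
            return "Not flagged by the primary spectral model."
        if confidence_pct > 80:
            return "Investigate first: strong signal with clear linguistic support."
        if confidence_pct >= 50:
            return "Review soon: strong signal, moderate linguistic certainty."
        return "Still a valid flag: human review needed to recover the missing context."

For the 90%/60% example above, triage(0.90, 60) lands in the middle branch: a strong signal with moderate linguistic certainty, worth reviewing alongside its surrounding context.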

By using both the deception score and the confidence level, you can more effectively pinpoint which moments in a conversation deserve the most attention, turning complex analysis into clear, actionable intelligence.

Ready to experience AI-powered insight?

Start your journey with Arche AI today and discover how our dual-metric system can help you make more informed decisions.

Try Arche AI Now