Negent — Evidence-first

No guessing
Only evidence.

Negent Intelligence turns your corpus into operational capability — semantic search, sourced assistant, reconstructed dossiers, auto-generated records. Every answer traces back to a source. Not a probability.

Requires Negent Clean — or an already structured corpus
Negent Intelligence — Sourced Answer Mode
🔍
"What are the termination conditions in the Acme France contract?"
The Acme France contract [1] requires a 90-day notice period by registered mail. The amendment dated March 12, 2024 [2] reduced this period to 60 days for SaaS modules only. Early termination penalties are defined in Annex C [3]: 15% of the remaining contract value.
Verified sources
[1]
Contrat_Acme_France_signe_v3.docx
Art. 12.3
[2]
Avenant_2024-03_valide.docx
§4
[3]
Annexe_C_penalites.pdf
p.4
✓  Grounded answer — 3 concordant sources
01 Hybrid retrieval · 02 Systematic citations · 03 Reconstructed dossier · 04 Useful no-answer · 05 Native ACL · 06 Auto-generated record · 07 Version comparisons · 08 Observability
Why Intelligence

Finding isn't the hard part anymore.
Proving is.

Even when everything is "in there somewhere," information stays out of reach when it matters. Teams lose time, take risks, and can't trust what AI tells them.

🔍

Classic search finds files.

Not answers. You find "something that looks right" — but never the synthesis, never the full context of a dossier.

🧩

Information is fragmented

A real answer depends on a complete dossier: master document + annexes + amendments + decisions + emails. Those connections don't exist anywhere in a usable form.

🎲

Low confidence

You find something quickly — but is it the right version? Does it hold up? The doubt persists and slows every decision.

🤖

AI answers without proof

An assistant without a citation mechanism can answer correctly without being able to demonstrate it — or answer wrongly with full confidence. Either way, trust collapses.

🔓

Information leakage is possible

Without native ACLs in the retrieval layer, "intelligent" search becomes a vulnerability. A user can surface content they were never authorized to see.

📊

No path to scale

Demos work. Production fails. Without metrics, observability, and an improvement loop, an assistant stays an uncontrollable POC.

5 modes

Not a chatbot.
Five purpose-built tools.

Intelligence adapts to how you actually work. Each mode is designed for a specific need — from rapid search to complete dossier reconstruction.

🔍 Search
💬 Sourced answer
📁 Dossier
📋 Record
⚖ Comparison

Everything you're allowed to see.
Nothing you're not.

Unified search across your entire authorized corpus. Semantic + lexical + metadata filters. Results ranked by relevance — not by who uploaded last.

Hybrid search: semantic + keywords + metadata filters
Filters: project, entity, country, type, date, status, BU
Relevant passage previews — not just filenames
Native ACL enforcement: every result respects access rights
Graph expansion: related documents surface automatically
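Conceptually, the blend of signals can be sketched in a few lines of Python. This is a toy illustration, not Negent's implementation: the field names, the keyword-overlap scoring, and the `alpha` weighting are all assumptions standing in for real BM25 and embedding similarity.

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    doc_id: str
    text: str
    meta: dict = field(default_factory=dict)

def lexical_score(query: str, doc: Doc) -> float:
    # Toy lexical signal: fraction of query terms present in the document.
    terms = query.lower().split()
    hits = sum(1 for t in terms if t in doc.text.lower())
    return hits / len(terms) if terms else 0.0

def hybrid_search(query, docs, semantic_scores, filters=None, alpha=0.5):
    """Blend semantic and lexical signals, honoring metadata filters first."""
    results = []
    for doc in docs:
        # Metadata filters (project, entity, type, date, status...) narrow
        # the corpus before any scoring happens.
        if filters and any(doc.meta.get(k) != v for k, v in filters.items()):
            continue
        score = alpha * semantic_scores.get(doc.doc_id, 0.0) \
              + (1 - alpha) * lexical_score(query, doc)
        results.append((score, doc))
    return [d for _, d in sorted(results, key=lambda x: -x[0])]
```

In production the semantic scores would come from an embedding index and the lexical side from an inverted index; the point is that both contribute to one ranked list, after the metadata filter has already narrowed the corpus.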
Sample result
📄

Contrat_Acme_France_v3.docx

...termination conditions are defined in article 12, with a 90-day notice period...

📎

Avenant_2024-03_valide.docx

...notice period reduced to 60 days for SaaS modules as of April 1, 2024...

📑

Annexe_C_penalites.pdf

...early termination penalty: 15% of remaining contract value...

No answer
without proof.

Ask in plain language. Intelligence finds the evidence, assembles it, and generates a structured response — with every source cited and linked to the exact passage.

Generation grounded exclusively in retrieved evidence
Systematic citations: source ID + passage + clickable link
Honest uncertainty: "I cannot conclude" + actionable next step
Structured output: summary + key points + exceptions + decisions
Context retained across the full conversation
Uncertainty Handling
⚠️

Partial answer available

Financial terms are in Annex D — not found in the indexed corpus. Here is what I have: ...

Grounded answer confirmed

3 concordant sources. Last modified: amendment dated 03/12/2024 by M. Laurent.

🚫

Out of scope

You do not have access to the Germany BU contract. Please contact your administrator.

Every piece.
One dossier.

Intelligence automatically reconstructs a complete dossier from the relationship graph: contract + amendments + annexes + history + decisions. One click — the entire context is navigable.

Rebuilt from Clean's relationship graph
Full version timeline and key decision history
Master document + all linked files (amendments, annexes, emails)
Completeness indicator: "Annex D is missing"
One-click dossier export
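The reconstruction logic amounts to a graph traversal with a completeness check. The sketch below is illustrative: the graph shape and document IDs are invented, and the real relationship graph lives in Clean.

```python
from collections import deque

# Hypothetical relationship graph: doc_id -> list of (relation, doc_id).
GRAPH = {
    "contract_v3": [("amendment", "avenant_2024_03"), ("annex", "annexe_c")],
    "avenant_2024_03": [("references", "annexe_d")],
    "annexe_c": [],
}

# Documents actually present in the index.
INDEXED = {"contract_v3", "avenant_2024_03", "annexe_c"}

def build_dossier(master: str):
    """Walk the relationship graph breadth-first and flag missing pieces."""
    dossier, missing = [], []
    queue, seen = deque([master]), {master}
    while queue:
        doc = queue.popleft()
        if doc not in INDEXED:
            # Completeness indicator: a referenced but unindexed document
            # surfaces as "Annex D is missing" instead of silently vanishing.
            missing.append(doc)
            continue
        dossier.append(doc)
        for _, linked in GRAPH.get(doc, []):
            if linked not in seen:
                seen.add(linked)
                queue.append(linked)
    return dossier, missing
```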
Dossier Acme France
📋

Master agreement (canonical reference)

Signed 06/15/2022 — current version v3 — 47 pages

📎

3 amendments identified

Jan 2023 · Sep 2023 · Mar 2024 (latest authoritative version)

⚠️

Annex D missing

Referenced in the March 2024 amendment — not indexed

Auto-generated records.
Never outdated.

Intelligence generates and maintains structured records for every key entity: projects, contracts, assets, partners. When sources change, records follow — automatically.

Project / contract / entity / asset records
Source-fed — zero manual input
Live updates through Clean's delta loop
Every field traced to its source, clickable
Exportable and embeddable in your business portals
Contract Record — Acme France
📅

Key dates

Signed: 06/15/2022 — Renewal: 06/15/2025 — Notice: 90 days prior

💶

Contract value

€480,000/year (Mar 2024 amendment) — +12% vs. initial contract

👤

Key contacts

Internal: M. Laurent — Client: S. Petit (CEO)

Two versions.
Every difference that matters.

Compare documents, contract versions, clauses, or full dossiers. Intelligence surfaces substantive differences — not just edited words — and traces every change to its source.

Version comparison: v2 vs v3, before/after amendment
Clause comparison: article by article, with semantic delta
Dossier comparison: two entities, two BUs, two projects
Substantive differences highlighted — not just textual changes
Use cases: legal, procurement, compliance, due diligence
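Underneath, a clause-level comparison can be thought of as classifying each article across two versions. The sketch below uses `difflib` string similarity as a crude stand-in for the real semantic delta; the threshold and clause keys are illustrative assumptions.

```python
import difflib

def compare_clauses(v2: dict, v3: dict, threshold: float = 0.95) -> dict:
    """Classify each clause as unchanged, modified, added, or removed."""
    report = {}
    for art in sorted(set(v2) | set(v3)):
        old, new = v2.get(art), v3.get(art)
        if old is None:
            report[art] = "added"
        elif new is None:
            report[art] = "removed"
        else:
            # Stand-in for semantic comparison: plain string similarity.
            ratio = difflib.SequenceMatcher(None, old, new).ratio()
            report[art] = "unchanged" if ratio >= threshold else "modified"
    return report
```

A real semantic delta would also distinguish substantive changes (a shorter notice period) from cosmetic ones (renumbering, typography), which pure string similarity cannot do.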
Contract v2 vs v3 — Changes
🔴

Art. 12.3 — Termination

v2: 90 days → v3: 60 days (SaaS modules). Substantive change.

🟡

Art. 8 — SLA

v2: 99.5% → v3: 99.9% guaranteed uptime. Client-favorable improvement.

🟢

Unchanged clauses

14 articles identical between v2 and v3.

Pipeline

Evidence-first.
Every step of the way.

Every answer follows a structured pipeline — from intent parsing to grounded generation, through hybrid retrieval and rigorous evidence selection.

01

Intent parsing

Intent, language, scope, internal acronyms. The query is structured before any retrieval begins.

02

ACL & identity

Strict filtering by user rights. ACL is a core retrieval constraint — not a post-filter.

03

Hybrid retrieval

Semantic + lexical + filters + graph expansion. Maximum recall and precision.

04

Evidence selection

Reranking, deduplication, diversity. A compact, justifiable evidence pack.

05

Grounded generation

Response built solely from evidence. Citations, managed uncertainty, useful no-answer.
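Stripped to its skeleton, the five steps compose like this. Everything here is a deliberately naive stand-in (keyword matching for retrieval, top-3 truncation for evidence selection, concatenation for generation); only the shape of the pipeline, including the ACL constraint and the useful no-answer, reflects the design described above.

```python
def parse_intent(query: str) -> dict:
    # 01: intent parsing (a real version handles language, scope, acronyms)
    return {"terms": query.lower().split()}

def answer(query, user, corpus):
    """Evidence-first pipeline: parse, ACL-filter, retrieve, select, generate."""
    intent = parse_intent(query)
    allowed = [d for d in corpus if user in d["acl"]]         # 02: ACL constraint
    candidates = [d for d in allowed                          # 03: retrieval (toy)
                  if any(t in d["text"].lower() for t in intent["terms"])]
    evidence = candidates[:3]                                 # 04: evidence selection
    if not evidence:                                          # useful no-answer
        return {"answer": "I cannot conclude", "sources": []}
    return {"answer": " ".join(d["text"] for d in evidence),  # 05: grounded generation
            "sources": [d["id"] for d in evidence]}
```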

What makes Intelligence different

Beyond the chatbot.
A proof engine.

1

Evidence-first

Every answer is grounded in selected evidence, cited with clickable links. When proof falls short, Intelligence says "I cannot conclude" — and points to the next step. No hallucination. No bluffing.

Citations & useful no-answer
2

Hybrid retrieval + graph

Semantic + lexical + metadata filters + graph expansion. Not just vector search: a retrieval layer that reconstructs the full dossier context automatically.

Hybrid search
3

ACL native to retrieval

Access rights aren't a post-filter — they're a hard retrieval constraint. Zero leakage by design. Full audit trail on every query.

Enterprise-grade security
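The difference between a post-filter and a retrieval constraint is easy to show in code. In this sketch (illustrative names and log format), unauthorized documents are excluded before any matching or scoring happens, and every query leaves an audit entry.

```python
from datetime import datetime, timezone

AUDIT_LOG = []

def acl_filtered_search(query_terms, user_groups, index):
    """ACL as a hard retrieval constraint: unauthorized docs never enter scoring."""
    # Filter BEFORE matching: a document the user cannot see is simply
    # not part of the search space, so it can never leak into a response.
    visible = [d for d in index if d["acl"] & user_groups]
    hits = [d for d in visible
            if any(t in d["text"].lower() for t in query_terms)]
    # Audit trail: every query is logged with the permissions applied.
    AUDIT_LOG.append({"at": datetime.now(timezone.utc).isoformat(),
                      "groups": sorted(user_groups),
                      "returned": [d["id"] for d in hits]})
    return hits
```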
4

Observability & continuous improvement

Usage, quality, latency, cost dashboards. User feedback routed to Clean or to the retrieval layer. A system that gets better — not a POC you're afraid to touch.

Industrialization
Negent Architecture
Source Systems
SharePoint · Drive · ECM · Network
↓ Truth Layer — Canonical references, graph, ACLs
Negent Clean
Trusted corpus · AI-ready index · Business graph
See Clean ↗
↓ grounded evidence · native ACLs · embeddings
Negent Intelligence
Search · Answers · Dossiers · Records · Comparison
You are here ↗
↓ robust · governed · traceable foundation
Negent Agentic
Automated actions · Workflows · Agents
Coming soon ⋯

Intelligence is more powerful
with Clean.

Intelligence works on any indexed corpus. But with Clean in place, the gap is fundamental: canonical references carry authority, business relationships are mapped, contradictions don't exist.

The difference between "finding something relevant" and "answering with the right version, from the right dossier, for the right user."

What Clean unlocks
Answer precision: promoted references take priority
Dossiers rebuilt directly from the business graph
Zero contradiction: one truth per document family
Records kept current through Clean's continuous delta
FAQ

What you're going to
ask us.

Does Intelligence work without Clean?
+
Yes. Intelligence can run on any indexed corpus — including an existing one (SharePoint, another search engine, vector database). But performance is significantly higher when Clean is in place: no contradictions, trusted references, usable relationship graph. Clean is the recommended foundation — not a hard prerequisite.
How does Intelligence handle hallucinations?
+
Through the evidence-first principle: the model only generates a response from evidence selected within your corpus. If evidence is insufficient or ambiguous, Intelligence responds "I cannot conclude with certainty" and suggests a next step (e.g. "here is what's missing / where to find it"). This mechanism eliminates fabrication — the model cannot invent what isn't in the selected evidence.
Are ACLs truly enforced down to the document level?
+
Yes. User identity and permissions are evaluated before every query. Filtering is applied at the retrieval level — not as a fragile post-processing step. This means a user without access to a document will never see any passage from that document in a response — not even indirectly. Every query is logged with the permissions applied, enabling a complete audit trail.
How is answer quality measured over time?
+
Through the observability dashboard: response rate, no-answer rate, clicked citations, user feedback (useful / not useful / wrong source). Recurring evaluations on a set of business questions detect regressions early. Each feedback signal is routed to the right component — Clean if the issue originates from the source, the retrieval layer if it comes from evidence selection.
Contact

Let's see what your corpus
can really answer.

Request a demo on your own scope. We show you Intelligence on your actual data — with a real, difficult business question.

🕐
Response within 24 hours
A Negent expert will reach out to scope your needs and propose a demo on your environment.
🎯
Demo on your data
Not a generic demo — we work from a sample of your corpus to show you Intelligence in your own context.
🔒
NDA available
For sensitive discussions, a confidentiality agreement can be signed from the very first contact.

Your data already has
the answers.

You're just missing an engine capable of finding them, proving them, and delivering them to the right person.

Negent Agentic

This module is currently in development. Agentic will enable automated actions across your information systems, directly from a trusted, governed document foundation.

Leave your contact details to be notified first.