Markdown Knowledge Graph for Humans and Agents

Dmytro Halichenko

Mar 21, 2026 · 5 min read · Dev.to
Source: https://dev.to/gimalay/markdown-knowledge-graph-for-humans-and-agents-43c4

You accumulate knowledge constantly — notes, docs, project decisions, things you'll need to remember later. AI agents could help you work with all of this. But how do you give them access to what you know?

There's a growing industry around "agent memory" — vector databases, embedding pipelines, retrieval systems. But for personal and project knowledge, the answer might be simpler: plain Markdown files.

The Problem with Agent Memory

The amount of knowledge and context we need to work with keeps growing. Codebases expand. Documentation multiplies. Every project accumulates decisions, patterns, and tribal knowledge that's hard to keep in your head — or fit in a context window.

AI agents are supposed to help. Every framework now ships with some form of memory management. LangChain has memory modules. CrewAI has knowledge sources. AutoGPT writes to files. The common pattern: agents need persistent, structured storage that survives beyond a single conversation.

The dominant approach uses vector embeddings. Store memories as embeddings, retrieve via semantic similarity, inject into context. It works, but it creates a problem: the agent's knowledge becomes opaque.

When your agent "remembers" something, where does that memory live? In a vector database you can't easily read. In embeddings you can't edit by hand. The agent has knowledge, but you can't see it, verify it, or share it.

A Different Approach

What if your notes and your agent shared the same knowledge base?

This is the idea behind IWE — a tool that treats Markdown as a knowledge graph accessible to both you and your AI agents. You edit in your preferred text editor with full LSP support. Your agent queries the same files through a CLI. Same source of truth, no sync.

How It Works

IWE consists of two components:

  1. An LSP server (iwes) that integrates with VS Code, Neovim, Zed, and Helix
  2. A CLI (iwe) for programmatic access — the part AI agents use

The core insight: your text editor already has a protocol for structured document access. The Language Server Protocol gives you completions, go-to-definition, references, and code actions. IWE implements LSP for Markdown knowledge bases.

The CLI as Agent Interface

The iwe CLI exposes the same knowledge graph to command-line tools:

iwe find "authentication"                  # search the knowledge base for a term

iwe retrieve -k docs/auth-flow             # fetch a document by its key

iwe retrieve -k docs/auth-flow --depth 2   # also inline child documents, two levels deep

iwe retrieve -k docs/auth-flow --dry-run   # report document count and size without fetching

An AI agent using Claude Code, Cursor, or any tool that can execute shell commands gets structured access to your knowledge base. No embeddings. No vector database. Just Markdown files with a query interface.
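Hooking this into an agent can be as small as a subprocess wrapper. A minimal sketch — the `build_retrieve_cmd` and `iwe_retrieve` helpers are hypothetical glue, not part of IWE; only the `iwe retrieve -k … --depth` invocation comes from the article:

```python
import subprocess

def build_retrieve_cmd(key: str, depth: int = 0) -> list[str]:
    """Assemble an `iwe retrieve` invocation for a document key."""
    cmd = ["iwe", "retrieve", "-k", key]
    if depth:
        cmd += ["--depth", str(depth)]
    return cmd

def iwe_retrieve(key: str, depth: int = 0) -> str:
    """Run the CLI and return the structured Markdown it prints."""
    result = subprocess.run(
        build_retrieve_cmd(key, depth),
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# An agent framework would register iwe_retrieve as a shell-capable tool:
# context = iwe_retrieve("docs/auth-flow", depth=2)
```

Because the tool just prints Markdown to stdout, any agent that can run shell commands — no SDK required — gets the same access.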

Key flags:

Flag        Description
--depth N   Follow inclusion links N levels deep
-c N        Include N levels of parent context
-e KEY      Exclude already-loaded documents
--dry-run   Check document count and size before fetching

The --depth flag is particularly useful. It follows inclusion links and inlines child documents, giving the agent transitive context in a single retrieval call.
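Conceptually, a depth-limited retrieval is a deterministic graph walk. A sketch of the idea — the graph dict and function here are illustrative, not IWE's implementation:

```python
def retrieve_with_depth(graph: dict[str, list[str]], key: str, depth: int) -> list[str]:
    """Collect a document plus its children, following inclusion
    links at most `depth` levels, visiting each document once."""
    seen: set[str] = set()
    order: list[str] = []

    def walk(k: str, d: int) -> None:
        if k in seen:
            return
        seen.add(k)
        order.append(k)
        if d > 0:
            for child in graph.get(k, []):
                walk(child, d - 1)

    walk(key, depth)
    return order

# Hypothetical graph: parent -> children declared by inclusion links.
graph = {
    "docs/auth": ["docs/auth/flow", "docs/auth/tokens"],
    "docs/auth/flow": ["docs/auth/flow/oauth"],
}
print(retrieve_with_depth(graph, "docs/auth", 2))
# ['docs/auth', 'docs/auth/flow', 'docs/auth/flow/oauth', 'docs/auth/tokens']
```

Note there is no scoring step anywhere: the result set is fully determined by the links you wrote and the depth you asked for.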

Inclusion Links: Structure Without Folders

What makes graph traversal work is a simple concept: inclusion links.

An inclusion link is a markdown link placed on its own line:

# Photography

[Composition](composition.md)

[Lighting](lighting.md)

[Post-Processing](post-processing.md)

When a link appears on its own line, it defines structure: "Photography" becomes the parent of the linked documents. Unlike folder hierarchies, a document can have multiple parents:

Frontend Development
├── React Fundamentals
├── Vue.js Guide
└── Performance Optimization

Backend Topics
├── Database Design
└── Performance Optimization  ← same document, multiple parents

This is polyhierarchy — structure without the limitations of folders. Context flows from parent to child. When you retrieve a document with depth, IWE follows these links to pull in child content.
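The rule is mechanical enough to sketch: a link counts as an inclusion link only when it is the sole content of its line. An illustrative parser — not IWE's actual code:

```python
import re

# Matches a line whose entire content is a single Markdown link
# (assumption based on the "link on its own line" rule above).
INCLUSION = re.compile(r"^\[([^\]]+)\]\(([^)]+)\)\s*$")

def parse_inclusions(markdown: str) -> list[str]:
    """Return child document paths declared by inclusion links."""
    children = []
    for line in markdown.splitlines():
        m = INCLUSION.match(line.strip())
        if m:
            children.append(m.group(2))
    return children

doc = """# Photography

[Composition](composition.md)
See [this inline link](other.md), which is not an inclusion link.
[Lighting](lighting.md)
"""
print(parse_inclusions(doc))  # ['composition.md', 'lighting.md']
```

Running this per file and inverting the result gives each document its (possibly multiple) parents — the polyhierarchy described above.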

Why this matters compared with the alternatives:

  • Folders: Force single placement. "Performance Optimization" can't live in both frontend and backend directories.
  • Tags: No structure, no ordering, no hierarchy within categories.
  • Inclusion links: Multiple parents, explicit ordering, annotations alongside links.

What This Enables

This approach gives you context engineering — control over exactly what enters the context window.

When an agent needs to understand your authentication system:

iwe retrieve -k docs/auth --depth 2

It gets back structured Markdown containing:

  • The auth document itself
  • Child documents expanded inline
  • Parent context and backlinks

This is deterministic retrieval. No embedding similarity thresholds. No "maybe relevant" results. The agent gets exactly the documents in your knowledge graph that connect to the topic.

Additional benefits:

  • Version-controlled knowledge — Git tracks every change
  • Transitive context in one command — no recursive API calls
  • Readable, editable, portable — it's just Markdown

The Tradeoff

IWE isn't a replacement for every memory approach:

Best for:

  • Structured knowledge (technical docs, project specs, reference material, task management)
  • Developer workflows with text editor/CLI comfort
  • Knowledge you want to read, edit, and version control

The key insight: this isn't "agent memory" bolted onto your workflow. It's your knowledge base — the one you already maintain for yourself — made accessible to agents when you want their help.

You remain in control. The files are yours, readable and editable. Agents become collaborators that can navigate your knowledge, not black boxes that store it.

Getting Started

IWE is open source and available on GitHub. See the Get Started guide for installation and setup instructions.
