Welcome to the final chapter of our journey building a production-grade Mermaid analyzer and fixer. Throughout this guide, we’ve focused on correctness, performance, and best practices. Now, as we approach deployment, it’s crucial to consider the long-term aspects: how to keep our tool reliable, performant, and adaptable to future needs.
In this chapter, we will delve into critical topics such as monitoring the tool’s performance, establishing robust maintenance strategies, and exploring avenues for future extensibility. We’ll integrate structured logging, set up performance benchmarks, design a conceptual plugin system, discuss WebAssembly (WASM) compilation, and demonstrate CI/CD integration. By the end of this chapter, you will have a comprehensive understanding of how to ensure the mermaid-tool remains a valuable asset for years to come, with a clear path for its evolution.
Planning & Design
Maintaining a production-ready tool involves continuous effort in several key areas. We need mechanisms to observe its behavior (monitoring), processes to keep it updated and bug-free (maintenance), and an architecture that allows it to grow and adapt (extensibility).
Monitoring & Logging Strategy
For a CLI tool, monitoring primarily involves robust logging and performance benchmarking. We’ll adopt tracing for structured, context-rich logging, which is a modern Rust standard. This allows us to emit diagnostic information that can be easily consumed by log aggregation systems if the tool were to run in a server environment, or simply provide clearer output for local debugging.
Maintenance Practices
Maintenance encompasses:
- Dependency Management: Regularly updating Cargo.toml dependencies to patch security vulnerabilities and leverage new features.
- Code Quality: Continuous integration checks, linting, and formatting.
- Bug Fixing & Releases: A clear process for identifying, fixing, and releasing new versions.
- Documentation: Keeping user and developer documentation up-to-date.
Future Extensibility Architecture
To ensure the mermaid-tool can evolve, we’ll design for extensibility in several key areas:
- Plugin System for Custom Rules: This will allow users or third-party developers to contribute their own validation or formatting rules without modifying the core tool. We'll define a Rule trait that external crates can implement.
- WebAssembly (WASM) Target: Compiling the core lexer, parser, and validator logic to WASM will enable the mermaid-tool to run in web browsers, Node.js, or other WASM-compatible environments, opening doors for web-based editors or integrations.
- VS Code Extension Integration: Leveraging a WASM build, or directly calling the CLI, a VS Code extension could provide real-time linting, formatting on save, and quick fixes directly within the editor.
- CI/CD Integration: Automating the validation and fixing of Mermaid diagrams in a continuous integration pipeline ensures that diagrams remain consistent and correct across a project.
The following Mermaid diagram illustrates the conceptual architecture of our mermaid-tool and its potential extensibility points:
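A sketch of that architecture is shown below. The component names are illustrative, mirroring the modules used throughout this guide; the exact shape of your diagram may differ.

```mermaid
graph LR
    subgraph Core
        LX[Lexer] --> PS[Parser]
        PS --> VD[Validator]
        VD --> RE[Rule Engine]
        RE --> FM[Formatter]
    end
    CLI[CLI Frontend] --> Core
    WASM[WASM Bindings] --> Core
    PLUG[Custom Rule Plugins] -.-> RE
    VSC[VS Code Extension] --> WASM
    CI[CI/CD Pipeline] --> CLI
```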
Step-by-Step Implementation
1. Structured Logging with tracing
We’ll replace any basic println! statements with tracing macros to provide structured, configurable logging.
a) Setup/Configuration
Add tracing and tracing-subscriber to your Cargo.toml.
# Cargo.toml
[dependencies]
# ... existing dependencies ...
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "fmt"] }
b) Core Implementation
First, initialize the tracing subscriber in src/main.rs. This sets up how logs are collected and displayed. We’ll configure it to read log levels from the RUST_LOG environment variable.
// src/main.rs
// Add these at the top of the file
use tracing::{info, error, debug, warn, trace};
use tracing_subscriber::{EnvFilter, fmt};
// ... existing code ...
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize tracing subscriber
// This allows controlling log levels via the RUST_LOG environment variable
// e.g., RUST_LOG=info cargo run
// e.g., RUST_LOG=debug cargo run
tracing_subscriber::fmt()
    .with_env_filter(EnvFilter::from_default_env())
    .init();
info!("Mermaid Tool started.");
// ... existing CLI parsing logic ...
let args = cli::parse_args();
debug!("Parsed arguments: {:?}", args);
// Example of using tracing in the main logic
if let Some(input_path) = &args.input {
info!("Processing input file: {}", input_path.display());
// ... file reading logic ...
} else {
info!("Processing input from stdin.");
}
// Replace existing println! or eprintln! with tracing macros
// For example, in your processing loop:
// ...
// let result = process_mermaid_code(&code, &args);
// match result {
// Ok(processed_code) => {
// info!("Successfully processed Mermaid code.");
// // ... output logic ...
// },
// Err(e) => {
// error!("Failed to process Mermaid code: {}", e);
// // ... error handling ...
// }
// }
// ...
info!("Mermaid Tool finished.");
Ok(())
}
Now, integrate tracing macros throughout your application logic, especially in src/lexer.rs, src/parser.rs, src/validator.rs, and src/rule_engine.rs.
Example in src/lexer.rs:
// src/lexer.rs
use tracing::{debug, trace, error}; // Add this import
// ... existing imports ...
pub struct Lexer<'a> {
input: Chars<'a>,
// ...
}
impl<'a> Lexer<'a> {
pub fn new(input: &'a str) -> Self {
debug!("Initializing lexer with input of length {}", input.len());
Lexer {
input: input.chars(),
// ...
}
}
pub fn tokenize(&mut self) -> Result<Vec<Token>, LexerError> {
let mut tokens = Vec::new();
while let Some(token) = self.next_token()? {
trace!("Tokenized: {:?}", token); // Trace each token
tokens.push(token);
}
debug!("Lexing complete. {} tokens generated.", tokens.len());
Ok(tokens)
}
fn peek(&self) -> Option<char> {
self.input.clone().next()
}
fn consume(&mut self) -> Option<char> {
let c = self.input.next();
trace!("Consumed char: {:?}", c); // Trace character consumption
c
}
// ... rest of the lexer implementation ...
}
c) Testing This Component
To test logging, run your mermaid-tool with different RUST_LOG environment variables:
# Run with the default filter (only errors are shown unless RUST_LOG is set)
cargo run -- input.mmd
# Run with debug level logs
RUST_LOG=debug cargo run -- input.mmd
# Run with trace level logs (very verbose)
RUST_LOG=trace cargo run -- input.mmd
# Filter logs for a specific module (e.g., only lexer debug logs)
RUST_LOG="mermaid_tool::lexer=debug" cargo run -- input.mmd
You should observe the info!, debug!, and trace! messages appearing in your console output based on the RUST_LOG setting.
2. Performance Benchmarking with criterion
For a performance-critical tool like a linter/formatter, benchmarks are essential to prevent regressions and identify optimization opportunities.
a) Setup/Configuration
Add criterion as a dev-dependency in Cargo.toml.
# Cargo.toml
[dev-dependencies]
criterion = { version = "0.5", features = ["html_reports"] }
[[bench]]
name = "lexer_parser_benchmarks" # Must match the benchmark file name, without the .rs extension
harness = false
Create a new benches directory in your project root, and add a file for your benchmarks, e.g., benches/lexer_parser_benchmarks.rs.
b) Core Implementation
We’ll create benchmarks for the lexer, parser, and the full processing pipeline.
// benches/lexer_parser_benchmarks.rs
use criterion::{criterion_group, criterion_main, Criterion};
use std::fs;
use mermaid_tool::lexer::Lexer;
use mermaid_tool::parser::Parser;
use mermaid_tool::validator::Validator;
use mermaid_tool::rule_engine::{RuleEngine, rules::all_rules};
use mermaid_tool::cli::CliArgs; // Assuming CliArgs for context, adjust if needed
// A dummy Mermaid diagram for benchmarking
const SMALL_DIAGRAM: &str = r#"
graph TD
A[Christmas] -->|Get money| B(Go shopping)
B --> C{Let me think}
C -->|One| D[Laptop]
C -->|Two| E[iPhone]
C -->|Three| F[fa:fa-car Car]
"#;
// A larger diagram for more realistic benchmarks
const LARGE_DIAGRAM: &str = r#"
graph TD
A[Start] --> B{Process Input};
B --> C{Validate Data};
C -->|Valid| D[Transform Data];
C -->|Invalid| E[Log Error];
D --> F[Store Result];
E --> F;
F --> G[Generate Report];
G --> H[End];
subgraph Data Flow
I[Source] --> J[Extract]
J --> K[Load]
K --> L[Transform]
L --> M[Destination]
end
subgraph User Interaction
N[User Request] --> O[Auth]
O -->|Success| P[API Call]
O -->|Failure| Q[Error Response]
P --> R[Business Logic]
R --> S[Database Access]
S --> T[Return Data]
T --> N
end
subgraph Async Operations
U[Event Trigger] --> V(Queue Message)
V --> W[Worker Service]
W --> X{Process Task}
X -->|Success| Y[Notify Success]
X -->|Failure| Z[Retry Logic]
end
A -- "Initial state" --> I;
I -- "Data pipeline" --> N;
N -- "User action" --> U;
classDef default fill:#f9f,stroke:#333,stroke-width:2px;
classDef important fill:#afa,stroke:#333,stroke-width:2px;
class A,G,H,I,M,N,U,Y,Z important;
"#;
fn benchmark_lexer(c: &mut Criterion) {
let mut group = c.benchmark_group("Lexer Performance");
group.sample_size(1000); // Increase sample size for more accurate results
group.bench_function("lexer_small_diagram", |b| {
b.iter(|| {
let mut lexer = Lexer::new(SMALL_DIAGRAM);
lexer.tokenize().unwrap();
})
});
group.bench_function("lexer_large_diagram", |b| {
b.iter(|| {
let mut lexer = Lexer::new(LARGE_DIAGRAM);
lexer.tokenize().unwrap();
})
});
group.finish();
}
fn benchmark_parser(c: &mut Criterion) {
let mut group = c.benchmark_group("Parser Performance");
group.sample_size(500);
let small_tokens = Lexer::new(SMALL_DIAGRAM).tokenize().unwrap();
let large_tokens = Lexer::new(LARGE_DIAGRAM).tokenize().unwrap();
group.bench_function("parser_small_diagram", |b| {
b.iter(|| {
let mut parser = Parser::new(&small_tokens);
parser.parse().unwrap();
})
});
group.bench_function("parser_large_diagram", |b| {
b.iter(|| {
let mut parser = Parser::new(&large_tokens);
parser.parse().unwrap();
})
});
group.finish();
}
fn benchmark_full_pipeline(c: &mut Criterion) {
let mut group = c.benchmark_group("Full Pipeline Performance");
group.sample_size(100); // Full pipeline is heavier, so fewer samples
let _args = CliArgs::default(); // Context placeholder; underscore avoids an unused-variable warning
group.bench_function("full_pipeline_small_diagram", |b| {
b.iter(|| {
let mut lexer = Lexer::new(SMALL_DIAGRAM);
let tokens = lexer.tokenize().unwrap();
let mut parser = Parser::new(&tokens);
let mut ast = parser.parse().unwrap();
let mut validator = Validator::new();
let mut diagnostics = Vec::new(); // Collect diagnostics
validator.validate_ast(&ast, &mut diagnostics);
let mut rule_engine = RuleEngine::new(all_rules());
// Benchmark with fixes enabled to exercise the full pipeline
let _fixed_ast = rule_engine.apply_rules(&mut ast, &mut diagnostics, true);
})
});
group.bench_function("full_pipeline_large_diagram", |b| {
b.iter(|| {
let mut lexer = Lexer::new(LARGE_DIAGRAM);
let tokens = lexer.tokenize().unwrap();
let mut parser = Parser::new(&tokens);
let mut ast = parser.parse().unwrap();
let mut validator = Validator::new();
let mut diagnostics = Vec::new();
validator.validate_ast(&ast, &mut diagnostics);
let mut rule_engine = RuleEngine::new(all_rules());
let _fixed_ast = rule_engine.apply_rules(&mut ast, &mut diagnostics, true);
})
});
group.finish();
}
criterion_group!(benches, benchmark_lexer, benchmark_parser, benchmark_full_pipeline);
criterion_main!(benches);
c) Testing This Component
Run the benchmarks from your project root:
cargo bench
Criterion will compile your benchmarks and then run them, producing statistical reports in target/criterion/report. You can open these HTML reports in your browser to view detailed performance graphs and comparisons.
3. CI/CD Integration with GitHub Actions
Automating the mermaid-tool in your CI/CD pipeline ensures that all Mermaid diagrams in your repository are always valid and correctly formatted.
a) Setup/Configuration
Create a GitHub Actions workflow file: .github/workflows/mermaid_lint.yml.
b) Core Implementation
This workflow will check out your code, install Rust, build mermaid-tool, and then run it against all .mmd files in your repository. It will fail the build if any errors are found.
# .github/workflows/mermaid_lint.yml
name: Mermaid Diagram Lint and Fix
on:
push:
branches:
- main
- master
pull_request:
branches:
- main
- master
jobs:
lint_mermaid:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- name: Build mermaid-tool
run: cargo build --release
- name: Find Mermaid files
id: find_files
run: |
MERMAID_FILES=$(find . -name "*.mmd" -o -name "*.mermaid" | tr '\n' ' ')
echo "mermaid_files=$MERMAID_FILES" >> "$GITHUB_OUTPUT"
# This step finds all .mmd and .mermaid files and stores them in an output variable.
# It's important to handle cases where no files are found.
- name: Run mermaid-tool in lint mode
if: ${{ steps.find_files.outputs.mermaid_files != '' }} # Only run if files were found
run: |
echo "Running mermaid-tool lint on:"
echo "${{ steps.find_files.outputs.mermaid_files }}"
./target/release/mermaid-tool lint ${{ steps.find_files.outputs.mermaid_files }}
# The 'lint' command will report errors and exit with a non-zero code on failure,
# causing the CI job to fail.
- name: Run mermaid-tool in fix mode (and check for changes)
if: ${{ steps.find_files.outputs.mermaid_files != '' }}
run: |
echo "Running mermaid-tool fix and checking for uncommitted changes..."
./target/release/mermaid-tool fix ${{ steps.find_files.outputs.mermaid_files }} --inplace
git diff --exit-code ${{ steps.find_files.outputs.mermaid_files }}
# This step first applies fixes in-place.
# Then, `git diff --exit-code` checks if any files were changed by the fix.
# If changes exist, it means the files were not correctly formatted/linted
# before the commit/PR, and the step will fail.
# This ensures all Mermaid files are always formatted correctly.
c) Testing This Component
- Commit the .github/workflows/mermaid_lint.yml file to your main or master branch.
- Push a new commit that includes an invalid Mermaid file (e.g., missing a graph TD declaration).
- Observe the GitHub Actions workflow execution in your repository. It should fail at the "Run mermaid-tool in lint mode" step.
- Push another commit with a valid but unformatted Mermaid file (e.g., inconsistent spacing).
- Observe the GitHub Actions workflow. It should pass the lint step but fail at the fix-mode step, because git diff --exit-code will detect changes.
- Finally, commit a correctly formatted and valid Mermaid file. The workflow should pass successfully.
4. Plugin System for Custom Rules (Conceptual Design)
Implementing a full dynamic plugin system in Rust (using libloading) can be complex due to ABI stability issues across Rust versions. For a comprehensive guide, we’ll outline the design and provide a simplified conceptual implementation where external rules are just structs implementing a trait within the main crate, but the principle extends to dynamic loading.
a) Setup/Configuration
No new dependencies for the conceptual design. We’ll modify src/rule_engine/mod.rs and src/rule_engine/rules.rs.
b) Core Implementation
First, define the Rule trait in src/rule_engine/mod.rs if you haven’t already.
// src/rule_engine/mod.rs
// ... existing imports ...
use crate::ast::Ast;
use crate::diagnostics::{Diagnostic, Severity};
use tracing::debug; // Needed for the debug! call in apply_rules
pub mod rules; // Make sure this is public
/// A trait for defining custom validation and formatting rules.
pub trait Rule: Send + Sync + 'static { // Added Send + Sync + 'static for thread safety and static lifetime
/// A unique identifier for the rule (e.g., "M001").
fn code(&self) -> &'static str;
/// A human-readable name for the rule.
fn name(&self) -> &'static str;
/// A description of what the rule checks for.
fn description(&self) -> &'static str;
/// The default severity of the rule (e.g., Warning, Error).
fn severity(&self) -> Severity;
/// Checks the AST for violations of this rule.
/// If `apply_fixes` is true, the rule should attempt to modify the AST
/// to fix the issue, if a safe fix is possible.
/// Returns a list of diagnostics generated by this rule.
fn check_and_fix(&self, ast: &mut Ast, apply_fixes: bool) -> Vec<Diagnostic>;
}
pub struct RuleEngine {
rules: Vec<Box<dyn Rule>>,
}
impl RuleEngine {
pub fn new(rules: Vec<Box<dyn Rule>>) -> Self {
RuleEngine { rules }
}
pub fn apply_rules(&mut self, ast: &mut Ast, diagnostics: &mut Vec<Diagnostic>, apply_fixes: bool) -> Ast {
let mut current_ast = ast.clone(); // Clone to allow rules to modify
for rule in &self.rules {
debug!("Applying rule: {} ({})", rule.name(), rule.code());
let rule_diagnostics = rule.check_and_fix(&mut current_ast, apply_fixes);
diagnostics.extend(rule_diagnostics);
}
current_ast // Return the potentially modified AST
}
}
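To make the trait-object design concrete, here is a self-contained sketch. Everything in it (StubAst, the trimmed-down Rule trait, LegacyRule) is a stand-in for the crate's real types, not its actual API; it only demonstrates how boxed rules flow through a Vec<Box<dyn Rule>> the way all_rules() wires them into the engine.

```rust
// Self-contained sketch: stub types stand in for the crate's real Ast/Diagnostic.
#[derive(Debug, Clone, PartialEq)]
enum Severity {
    Warning,
}

#[derive(Debug)]
struct Diagnostic {
    code: &'static str,
    severity: Severity,
    message: String,
}

// Minimal stand-in for the real AST: just a list of node labels.
struct StubAst {
    labels: Vec<String>,
}

// Simplified version of the chapter's Rule trait.
trait Rule: Send + Sync + 'static {
    fn code(&self) -> &'static str;
    fn check_and_fix(&self, ast: &mut StubAst, apply_fixes: bool) -> Vec<Diagnostic>;
}

// A "plugin" rule: flags labels containing "Legacy".
struct LegacyRule;

impl Rule for LegacyRule {
    fn code(&self) -> &'static str {
        "M999"
    }

    fn check_and_fix(&self, ast: &mut StubAst, _apply_fixes: bool) -> Vec<Diagnostic> {
        ast.labels
            .iter()
            .filter(|label| label.contains("Legacy"))
            .map(|label| Diagnostic {
                code: self.code(),
                severity: Severity::Warning,
                message: format!("Node '{}' contains 'Legacy'.", label),
            })
            .collect()
    }
}

fn main() {
    // Registration mirrors all_rules(): rules are boxed trait objects.
    let rules: Vec<Box<dyn Rule>> = vec![Box::new(LegacyRule)];
    let mut ast = StubAst {
        labels: vec!["Start".to_string(), "Legacy System".to_string()],
    };
    let diagnostics: Vec<Diagnostic> = rules
        .iter()
        .flat_map(|rule| rule.check_and_fix(&mut ast, false))
        .collect();
    for d in &diagnostics {
        println!("{:?} {}: {}", d.severity, d.code, d.message);
    }
}
```

Because the engine only sees Box<dyn Rule>, a new rule plugs in by pushing one more boxed value into the vector; no engine code changes.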
Now, let’s create a placeholder for an external rule within our rules module, demonstrating how it would look. In a real plugin system, this would be in a separate crate and dynamically loaded.
// src/rule_engine/rules/mod.rs
pub mod missing_graph_declaration;
pub mod node_label_quotes;
// ... other existing rules ...
pub mod custom_example_rule; // Add this line
// Re-export all rules
pub use missing_graph_declaration::MissingGraphDeclarationRule;
pub use node_label_quotes::NodeLabelQuotesRule;
// ... other existing rules ...
pub use custom_example_rule::CustomExampleRule; // Re-export it
use super::Rule; // Import the trait
pub fn all_rules() -> Vec<Box<dyn Rule>> {
vec![
Box::new(MissingGraphDeclarationRule),
Box::new(NodeLabelQuotesRule),
// ... other existing rules ...
Box::new(CustomExampleRule), // Add your custom rule here
]
}
Now, create src/rule_engine/rules/custom_example_rule.rs for our example.
// src/rule_engine/rules/custom_example_rule.rs
use crate::ast::{Ast, Node, Edge, DiagramType};
use crate::diagnostics::{Diagnostic, Severity, Span, DiagnosticBuilder};
use crate::rule_engine::Rule;
use tracing::{warn, debug};
/// M999: Custom Example Rule - Warns if a node label contains "Legacy"
pub struct CustomExampleRule;
impl Rule for CustomExampleRule {
fn code(&self) -> &'static str {
"M999"
}
fn name(&self) -> &'static str {
"Custom Legacy Node Detector"
}
fn description(&self) -> &'static str {
"Flags nodes that contain the word 'Legacy' in their label, suggesting potential refactoring."
}
fn severity(&self) -> Severity {
Severity::Warning
}
fn check_and_fix(&self, ast: &mut Ast, _apply_fixes: bool) -> Vec<Diagnostic> {
let mut diagnostics = Vec::new();
// This rule only applies to flowcharts and sequence diagrams for simplicity
if !matches!(ast.diagram_type, DiagramType::Flowchart | DiagramType::Sequence) {
return diagnostics;
}
for node in ast.get_nodes() {
if let Some(label) = &node.label {
if label.contains("Legacy") {
warn!("Found 'Legacy' in node label: '{}'", label);
diagnostics.push(
DiagnosticBuilder::new(self.code(), self.name(), self.description(), self.severity())
.with_span(node.span)
.with_message(format!("Node '{}' contains 'Legacy' in its label. Consider refactoring.", label))
.with_help("Rename legacy components or mark them for deprecation.")
.build()
);
}
}
}
diagnostics
}
}
c) Testing This Component
To test this conceptual plugin system, simply run your mermaid-tool with input containing a “Legacy” node.
Create test.mmd:
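A minimal input that trips the rule might look like the following. This exact content is an assumption reconstructed from the diagnostic shown below; the important part is a node label containing "Legacy" on line 3.

```mermaid
graph TD
    A[Start] --> B[Process]
    B --> C[Legacy System Integration]
```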
Then run:
cargo run -- lint test.mmd
You should see a warning diagnostic similar to:
warning: M999: Custom Legacy Node Detector
--> test.mmd:3:9
|
3 | B --> C[Legacy System Integration]
| ^ The word 'Legacy' in label: 'Legacy System Integration'.
| |
= Node 'Legacy System Integration' contains 'Legacy' in its label. Consider refactoring.
= Help: Rename legacy components or mark them for deprecation.
5. WebAssembly (WASM) Build (Setup)
Compiling your core logic to WASM allows it to run in web environments. We’ll set up the project for this.
a) Setup/Configuration
Install wasm-pack:
cargo install wasm-pack
Modify Cargo.toml to create a cdylib target for WASM. This is a separate library target that wasm-pack will use.
# Cargo.toml
# ... existing [lib] section ...
[lib]
crate-type = ["cdylib", "rlib"] # Add "cdylib" for WASM
# ... existing [dependencies] ...
[dependencies]
# ...
wasm-bindgen = { version = "0.2", features = ["serde-serialize"], optional = true } # serde-serialize enables JsValue::from_serde used in src/lib.rs
# Add any other dependencies that need to be WASM-compatible or are core logic
[features]
default = []
wasm = ["wasm-bindgen"]
b) Core Implementation
Create a src/lib.rs (if you don’t have one, or modify your existing one) that exposes the core mermaid-tool functionality via wasm_bindgen.
// src/lib.rs
#[cfg(feature = "wasm")]
use wasm_bindgen::prelude::*;
#[cfg(feature = "wasm")]
use tracing::{debug, error, info}; // Use tracing here too for consistency
// Re-export core modules
pub mod lexer;
pub mod parser;
pub mod ast;
pub mod validator;
pub mod rule_engine;
pub mod diagnostics;
pub mod formatter;
pub mod cli; // Though CLI itself won't be WASM, its components might be used
// A simple function to initialize tracing for WASM environments
// This would typically be called once when the WASM module is loaded in JS
#[cfg(feature = "wasm")]
#[wasm_bindgen(start)]
pub fn main_wasm() {
// When debugging, uncomment this line to see console logs from Rust
// console_error_panic_hook::set_once();
// tracing_wasm::set_as_global_default(); // Uses tracing_wasm for browser console output
info!("Mermaid Tool WASM module initialized.");
}
/// Parses and validates Mermaid code, returning diagnostics and optionally a fixed version.
/// This function serves as the primary entry point for WASM.
#[cfg(feature = "wasm")]
#[wasm_bindgen]
pub fn process_mermaid_code_wasm(
input_code: &str,
apply_fixes: bool,
strict_mode: bool,
) -> JsValue {
info!("WASM: Processing Mermaid code ({} chars). Fixes: {}, Strict: {}",
input_code.len(), apply_fixes, strict_mode);
let mut lexer = lexer::Lexer::new(input_code);
let tokens = match lexer.tokenize() {
Ok(t) => t,
Err(e) => {
error!("WASM Lexer error: {:?}", e);
let diag = diagnostics::DiagnosticBuilder::new(
"M000", "Lexer Error", &e.to_string(), diagnostics::Severity::Error)
.build();
return JsValue::from_serde(&serde_json::json!({
"diagnostics": vec![diag],
"fixed_code": input_code.to_string(), // Return original on lexer error
"changes_applied": false,
"error": e.to_string(),
})).unwrap();
}
};
debug!("WASM: Lexing complete. {} tokens.", tokens.len());
let mut parser = parser::Parser::new(&tokens);
let mut ast = match parser.parse() {
Ok(a) => a,
Err(e) => {
error!("WASM Parser error: {:?}", e);
let diag = diagnostics::DiagnosticBuilder::new(
"M000", "Parser Error", &e.to_string(), diagnostics::Severity::Error)
.build();
return JsValue::from_serde(&serde_json::json!({
"diagnostics": vec![diag],
"fixed_code": input_code.to_string(), // Return original on parser error
"changes_applied": false,
"error": e.to_string(),
})).unwrap();
}
};
debug!("WASM: Parsing complete. AST created.");
let mut diagnostics = Vec::new();
let mut validator = validator::Validator::new();
validator.validate_ast(&ast, &mut diagnostics);
debug!("WASM: Validation complete. {} diagnostics.", diagnostics.len());
let original_ast_json = serde_json::to_string(&ast).unwrap();
let mut rule_engine = rule_engine::RuleEngine::new(rule_engine::rules::all_rules());
let fixed_ast = rule_engine.apply_rules(&mut ast, &mut diagnostics, apply_fixes);
let fixed_ast_json = serde_json::to_string(&fixed_ast).unwrap();
let changes_applied = original_ast_json != fixed_ast_json;
debug!("WASM: Rule engine applied. Changes: {}", changes_applied);
if strict_mode && diagnostics.iter().any(|d| d.severity == diagnostics::Severity::Error) {
error!("WASM: Strict mode enabled and errors found.");
return JsValue::from_serde(&serde_json::json!({
"diagnostics": diagnostics,
"fixed_code": input_code.to_string(), // In strict mode, don't return fixes if errors
"changes_applied": false,
"error": "Strict mode: errors found, no fixes applied.".to_string(),
})).unwrap();
}
let fixed_code = if apply_fixes && changes_applied {
let mut formatter = formatter::Formatter::new();
match formatter.format_ast(&fixed_ast) {
Ok(code) => {
info!("WASM: Formatted fixed AST.");
code
},
Err(e) => {
error!("WASM Formatter error: {:?}", e);
// If formatter fails, return original or partly fixed code
input_code.to_string()
}
}
} else {
input_code.to_string() // No fixes applied or no changes
};
JsValue::from_serde(&serde_json::json!({
"diagnostics": diagnostics,
"fixed_code": fixed_code,
"changes_applied": changes_applied,
"error": null,
})).unwrap()
}
c) Testing This Component
To build the WASM module:
# Build for web
wasm-pack build --target web --out-dir wasm-pkg --features wasm
# Build for Node.js
wasm-pack build --target nodejs --out-dir wasm-pkg-node --features wasm
This will generate a wasm-pkg (or wasm-pkg-node) directory containing the WASM module (mermaid_tool_bg.wasm) and JavaScript bindings (mermaid_tool.js). You can then import this module into a web project or Node.js script.
Example Node.js usage (test_wasm.js):
// test_wasm.js
const { process_mermaid_code_wasm } = require('./wasm-pkg-node');
async function testWasm() {
const mermaidCode = `
graph TD
A[Start] --> B(Process)
B --> C{Decision}
C -->|Yes| D[End]
C -->|No| B
`;
console.log("--- Lint Mode ---");
let resultLint = process_mermaid_code_wasm(mermaidCode, false, false);
console.log(JSON.stringify(resultLint, null, 2));
const invalidMermaid = `
graph TD
A[Node without closing bracket
`;
console.log("\n--- Invalid Mermaid ---");
let invalidResult = process_mermaid_code_wasm(invalidMermaid, false, false);
console.log(JSON.stringify(invalidResult, null, 2));
const fixableMermaid = `
graph TD
A --> B
C --> D[Node without quotes]
`;
console.log("\n--- Fix Mode ---");
let resultFix = process_mermaid_code_wasm(fixableMermaid, true, false);
console.log(JSON.stringify(resultFix, null, 2));
console.log("\n--- Strict Mode (with error) ---");
const strictErrorMermaid = `
graph TD
A[Start] --> B
B --> C[Undeclared node]
`;
let resultStrictError = process_mermaid_code_wasm(strictErrorMermaid, true, true);
console.log(JSON.stringify(resultStrictError, null, 2));
console.log("\n--- Strict Mode (no error) ---");
let resultStrictOk = process_mermaid_code_wasm(mermaidCode, true, true);
console.log(JSON.stringify(resultStrictOk, null, 2));
}
testWasm();
Run it:
node test_wasm.js
You should see JSON output containing diagnostics and fixed code, demonstrating the WASM module’s functionality.
6. VS Code Extension Integration (Conceptual)
A VS Code extension would typically:
- Use the WASM build (or call the mermaid-tool CLI directly).
- Provide document linting (red squiggles for errors/warnings).
- Offer “Format Document” functionality.
- Implement “Quick Fixes” for suggested corrections.
The architecture would involve:
- A TypeScript/JavaScript extension host script.
- Loading the mermaid-tool WASM module.
- Registering a diagnostic collection (for the squiggles), a DocumentFormattingEditProvider, and a CodeActionProvider.
- Calling process_mermaid_code_wasm on document changes to get diagnostics and apply fixes.
While a full VS Code extension is beyond this guide’s scope, the WASM compilation provides the necessary foundation.
Production Considerations
- Logging Levels & Rotation: In production, RUST_LOG=info or warn is typical to avoid excessive log volume. For long-running processes (if mermaid-tool were integrated into a service), consider log rotation tools like logrotate to manage disk space.
- Performance Baseline & Regression: Regularly run cargo bench and compare results to a baseline. Integrate this into CI/CD to catch performance regressions early.
- Security of Plugins: If a truly dynamic plugin system (loading .so/.dll files) were implemented, strict security measures would be paramount:
  - Signed Plugins: Only load plugins signed by trusted authorities.
  - Sandboxing: Run plugins in a sandboxed environment (e.g., a WebAssembly micro-runtime, or OS-level sandboxing) to limit their access to system resources.
  - Auditing: Thoroughly audit all third-party plugins. For our current trait-based "plugin" system, this is less of a concern, as rules are compiled directly into the binary.
- Automated Dependency Updates: Use tools like Dependabot (for GitHub) or Renovate to automatically create pull requests for dependency updates, helping to keep your project secure and up-to-date with minimal manual effort.
- Error Reporting: For internal usage, consider integrating an error reporting service (e.g., Sentry, Bugsnag) if mermaid-tool were to run as part of a larger service and encountered panics or unhandled errors. For a CLI, clear error! logs are usually sufficient.
Code Review Checkpoint
At this stage, we have enhanced our mermaid-tool with crucial production-readiness features:
- Structured Logging: Integrated tracing for better observability and debugging.
- Performance Benchmarks: Added criterion benchmarks to monitor performance and prevent regressions.
- CI/CD Integration: Provided a GitHub Actions workflow to automate linting and fixing of Mermaid diagrams in a repository.
- Extensibility Foundation: Designed a conceptual plugin system and prepared the core for WASM compilation.
Files Created/Modified:
- Cargo.toml: Added tracing, tracing-subscriber, criterion, and wasm-bindgen (optional, behind the wasm feature).
- src/main.rs: Initialized the tracing subscriber; replaced println! with tracing macros.
- src/lexer.rs, src/parser.rs, src/validator.rs, src/rule_engine/mod.rs: Added tracing macros for debug output.
- benches/lexer_parser_benchmarks.rs: New file for criterion benchmarks.
- .github/workflows/mermaid_lint.yml: New file for GitHub Actions CI/CD.
- src/rule_engine/mod.rs: Modified RuleEngine::apply_rules to accept &mut Ast.
- src/rule_engine/rules/mod.rs: Added custom_example_rule to all_rules().
- src/rule_engine/rules/custom_example_rule.rs: New file for a conceptual custom rule.
- src/lib.rs: Added a #[cfg(feature = "wasm")] block with a process_mermaid_code_wasm function using wasm_bindgen.
This chapter significantly elevates the mermaid-tool from a functional prototype to a robust, maintainable, and extensible production-grade utility.
Common Issues & Solutions
Issue: tracing logs not appearing, or too verbose.
- Problem: The RUST_LOG environment variable is not set correctly, or is too restrictive/permissive.
- Solution: Ensure RUST_LOG is set before running cargo run. Use RUST_LOG=info cargo run for general info, RUST_LOG=debug cargo run for more detail, RUST_LOG=trace cargo run for maximum verbosity, or RUST_LOG="mermaid_tool::lexer=debug,mermaid_tool::parser=info" cargo run to filter by module.
- Prevention: Document expected RUST_LOG usage in your project's README.md.
Issue: cargo bench fails or reports no benchmarks.
- Problem: Incorrect Cargo.toml configuration for criterion, or benchmark files are not in the benches/ directory.
- Solution: Verify that [dev-dependencies] includes criterion and that the [[bench]] section is correct (its name must match the benchmark file name). Ensure harness = false is set for [[bench]], and check that the benchmark file (benches/*.rs) uses the criterion_group! and criterion_main! macros correctly.
- Prevention: Always follow the criterion documentation for setup.
Issue: WASM build fails with linker errors or missing symbols.
- Problem: Some Rust crates or features are not compatible with the wasm32-unknown-unknown target, or wasm-pack isn't configured correctly.
- Solution: Review dependencies and ensure everything used in the WASM-exposed functions is WASM-compatible — some libraries require specific features or alternatives. Check that all functions and types exposed to JavaScript are correctly annotated with #[wasm_bindgen], and verify the cdylib crate type and wasm feature setup in Cargo.toml.
- Prevention: Isolate WASM-specific code using #[cfg(feature = "wasm")] and #[cfg(not(feature = "wasm"))] to avoid compiling incompatible code. Test WASM builds frequently.
Issue: CI/CD workflow fails on
git diff --exit-code.- Problem: The
mermaid-tool fixcommand made changes to the Mermaid files, meaning they were not correctly formatted or linted before being committed. - Solution:
- Run
cargo run -- fix <your-mermaid-files> --inplacelocally. - Commit the changes.
- Push the updated, formatted files.
- Run
- Prevention: Encourage developers to run
cargo run -- fix . --inplacelocally before committing, or set up a Git pre-commit hook that runs the fixer.
- Problem: The
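A pre-commit hook along these lines is one way to implement the prevention step. This is an illustrative sketch, not a hardened script; it reuses the `fix` invocation shown above and assumes diagrams use the `.mmd` extension:

```shell
#!/bin/sh
# .git/hooks/pre-commit
# Reformat Mermaid files, then block the commit if anything changed
# so the developer can review and re-stage the fixed files.
cargo run --quiet -- fix . --inplace || exit 1
if ! git diff --exit-code -- '*.mmd'; then
    echo "Mermaid files were reformatted; review and re-stage them." >&2
    exit 1
fi
```

Remember to make the hook executable (`chmod +x .git/hooks/pre-commit`); Git silently skips hooks that are not.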
Testing & Verification
- Logging: Run `cargo run -- lint example.mmd` with `RUST_LOG=debug` and `RUST_LOG=trace`. Verify that the detailed logs appear as expected, showing lexer, parser, validator, and rule engine steps.
- Benchmarking: Execute `cargo bench`. After completion, open the generated HTML reports in `target/criterion/report` in your browser. Examine the performance graphs for the lexer, parser, and the full pipeline. Look for consistent performance and ensure there are no unexpected spikes or regressions.
- CI/CD:
  - Push a new branch with a deliberately malformed Mermaid file (e.g., `graph TD A -- B`). Observe the CI pipeline fail on the `lint` step.
  - Push a new branch with a valid but unformatted Mermaid file (e.g., `graph TD A-->B`). Observe the CI pipeline fail on the `fix` step due to `git diff`.
  - Push a new branch with a perfectly valid and formatted Mermaid file. Observe the CI pipeline pass all steps.
- WASM (Conceptual): Run `wasm-pack build --target nodejs --out-dir wasm-pkg-node --features wasm` and then `node test_wasm.js` (from the example provided). Verify that the JSON output is correct, diagnostics are reported, and fixes are applied when `apply_fixes` is true.
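The CI behavior exercised above corresponds to a workflow roughly like the following sketch. Job names, the toolchain action, and file layout are illustrative assumptions, not the guide's exact workflow:

```yaml
# .github/workflows/mermaid.yml (illustrative sketch)
name: mermaid-checks
on: [push, pull_request]
jobs:
  lint-and-fix:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - name: Lint diagrams
        run: cargo run -- lint .
      - name: Fix diagrams and verify nothing changed
        run: |
          cargo run -- fix . --inplace
          git diff --exit-code
```

The malformed-file branch fails at the `lint` step, the unformatted-file branch fails at `git diff --exit-code`, and a clean branch passes both.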
By performing these checks, you confirm that the monitoring, maintenance, and extensibility foundations are correctly implemented and functioning as intended.
Summary & Next Steps
Congratulations! You have successfully completed the development of a production-grade Mermaid code analyzer and fixer written in Rust. From initial setup to a complete compiler-like pipeline, including lexing, parsing, AST generation, strict validation, a deterministic rule engine, and robust formatting, you’ve built a truly powerful tool. In this final chapter, we’ve equipped it with essential long-term capabilities:
- Enhanced Observability: Integrated `tracing` for structured, configurable logging, making debugging and operational monitoring much easier.
- Performance Assurance: Established `criterion` benchmarks to continuously monitor and ensure the tool's high performance.
- Automated Quality: Implemented CI/CD integration using GitHub Actions to enforce Mermaid diagram quality and consistency across your projects.
- Future-Proofing: Designed a conceptual plugin system for custom rules and laid the groundwork for WebAssembly compilation, opening doors for browser-based integrations and a VS Code extension.
The mermaid-tool is now ready for deployment and continuous use. It stands as a testament to Rust’s capabilities for building high-performance, reliable, and maintainable systems.
What’s next for your mermaid-tool?
While this guide concludes here, your journey with the mermaid-tool can continue. Consider these potential next steps:
- Refine Plugin System: Develop a full dynamic plugin loading mechanism using `libloading` (with careful consideration of ABI stability), or explore alternative plugin architectures.
- Complete WASM Integration: Build a small web application or a Node.js module that fully utilizes the WASM-compiled `mermaid-tool` for real-time diagram validation and formatting.
- Develop a VS Code Extension: Create a VS Code extension that integrates the WASM module for a seamless developer experience, offering real-time feedback, formatting on save, and quick fixes directly in the editor.
- Expand Rule Set: Implement more advanced validation and formatting rules based on the evolving Mermaid specification or specific organizational style guides.
- GUI or Web Interface: Build a simple graphical user interface (GUI) using a framework like egui, or a web frontend using Yew/Dioxus (leveraging WASM), to provide a more interactive experience.
- Community Contributions: Open-source your `mermaid-tool` (if you haven't already!) and invite community contributions to expand its features and rule set.
This project has provided you with deep insights into compiler-style tool development, Rust’s ecosystem, and best practices for creating production-ready software. May your Mermaid diagrams always be perfectly structured and validated!