Genesis Rojas

Applied AI Engineer · Spain / Remote

LLM systems, backend integration and operational discovery for enterprise AI.

Profile

Applied AI Engineer

My work combines LLM systems, backend engineering and operational discovery for enterprise AI. I build systems where the model is only one part of the problem: documents, evidence, business rules, review practices, technical constraints and user trust all affect the design.

Recent projects include investment banking KYC, AI-enabled requirements discovery, agent marketplace infrastructure and long-form transcription pipelines. The common thread is practical AI delivery in environments where outputs need to be structured, traceable and usable by business teams.

Practical view. In enterprise AI, the model is rarely the whole system. The difficult work is often around it: selecting the right source material, linking claims to evidence, deciding where people review outputs and making the result safe enough to use inside an operational process.

Recent domain KYC

Evidence corroboration in an investment banking context.

Agent ecosystem 20+

Agent A2A environment with release coordination.

Performance lift 80 → 90%

Improvement against analyst ground truth.

Discovery corpus 168k+

ServiceNow tickets reviewed for transformation potential.

Selected projects

Enterprise AI work under real operational constraints.

The projects below are anonymised where needed. The emphasis is on the system problem, the technical work and the operating context.

Investment banking KYC A2A / MCP Evidence traceability

KYC Evidence Corroboration Agent

The agent operated inside a larger KYC ecosystem, where generated conclusions were only useful if they could be tied to the correct evidence. The project required more than improving model responses. It required stable links between claims, extracted evidence and business analyst review.

I took over an underperforming system and reworked the prompt and evidence architecture. Jinja2 templates replaced hardcoded prompt blocks, OCR chunking and token budgeting handled long document sets, and tuple-keyed evidence indexing reduced the risk of evidence being attached to the wrong claim during concurrent LLM calls.
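As an illustration of the tuple-keyed indexing idea, here is a minimal stdlib sketch. All names (EvidenceKey, EvidenceIndex, attach, for_claim) are illustrative, not the project's actual code; the point is that a composite key ties each evidence span to exactly one claim and document, so concurrent writes cannot collide on a bare string id.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EvidenceKey:
    """Composite key: binds an evidence span to one claim in one document chunk."""
    claim_id: str
    document_id: str
    chunk_index: int

@dataclass
class EvidenceIndex:
    """Index keyed on (claim, document, chunk) tuples rather than a single id,
    so evidence produced by parallel LLM calls stays attached to the right claim."""
    _store: dict = field(default_factory=dict)

    def attach(self, key: EvidenceKey, text: str) -> None:
        self._store[key] = text

    def for_claim(self, claim_id: str) -> dict:
        """All evidence attached to one claim, across documents and chunks."""
        return {k: v for k, v in self._store.items() if k.claim_id == claim_id}

index = EvidenceIndex()
index.attach(EvidenceKey("claim-1", "doc-A", 3), "supporting passage")
index.attach(EvidenceKey("claim-2", "doc-A", 7), "other passage")
assert len(index.for_claim("claim-1")) == 1
```

A frozen dataclass is hashable, which is what makes the tuple usable as a dictionary key; the same shape works with any key-value store.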

Project note

Fluency was not enough. A response could still fail if the evidence reference was wrong, unsupported or attached to the wrong claim. The main design problem was evidence architecture: document handling, claim-reference mapping, structured validation and release readiness.

Requirements discovery LangGraph MCP ServiceNow

AI-enabled Requirements Discovery Accelerator

The accelerator supported requirements discovery before implementation. ServiceNow tickets, operational documentation and stakeholder input were used to identify candidate areas for transformation and prepare use cases for downstream technical work.

The workflow used LangGraph orchestration, MCP servers for ServiceNow, SharePoint and SQLite, Pydantic structured outputs and human checkpoints before classification, recommendations and client-facing outputs. Validation interfaces supported tower-specific review and structured feedback.
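The checkpoint pattern described above can be sketched briefly. This is a stdlib-only illustration (the project used Pydantic models; a dataclass stands in here), and all names are hypothetical: the essential move is that every AI-generated candidate carries an explicit review status, and only human-approved items pass the gate to client-facing output.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class CandidateUseCase:
    """One AI-proposed transformation candidate awaiting human review."""
    ticket_id: str
    classification: str
    rationale: str
    status: ReviewStatus = ReviewStatus.PENDING

def release_gate(items):
    """Human checkpoint: only explicitly approved items reach downstream outputs."""
    return [item for item in items if item.status is ReviewStatus.APPROVED]

candidates = [
    CandidateUseCase("INC001", "automation", "repetitive reset flow",
                     ReviewStatus.APPROVED),
    CandidateUseCase("INC002", "unclear", "needs workshop input"),
]
assert [c.ticket_id for c in release_gate(candidates)] == ["INC001"]
```

Defaulting new candidates to PENDING means nothing is releasable by accident — approval is an explicit act, not an absence of rejection.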

Project note

The qualitative research background appeared in the workflow design itself: workshop structure, validation interfaces, interpretation checkpoints and the separation between AI exploration and human judgement. The goal was not automatic conclusion generation, but better evidence for transformation decisions.

Banking platform Backend Azure PostgreSQL API

Agent Marketplace Backend

The marketplace treated agents as enterprise assets rather than isolated experiments. The backend needed to represent ownership, metadata, access, publication flows, lifecycle states and operational signals, not only display a catalogue.

The implementation included Pydantic models, SQLAlchemy and Alembic patterns, Azure PostgreSQL, API endpoints, Terraform scripts, Azure Monitor alerting and contributions to event streaming integration.
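The lifecycle-state part of that data model can be sketched in a few lines. This is a simplified, hypothetical version (real platform rules and state names will differ): publication is modelled as an explicit transition table, so an agent cannot jump from draft to published without passing review.

```python
from enum import Enum

class AgentState(Enum):
    DRAFT = "draft"
    IN_REVIEW = "in_review"
    PUBLISHED = "published"
    DEPRECATED = "deprecated"

# Allowed publication-flow transitions (illustrative, not the platform's rules).
TRANSITIONS = {
    AgentState.DRAFT: {AgentState.IN_REVIEW},
    AgentState.IN_REVIEW: {AgentState.DRAFT, AgentState.PUBLISHED},
    AgentState.PUBLISHED: {AgentState.DEPRECATED},
    AgentState.DEPRECATED: set(),
}

def transition(current: AgentState, target: AgentState) -> AgentState:
    """Reject any state change the publication flow does not allow."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"{current.value} -> {target.value} not allowed")
    return target

state = transition(AgentState.DRAFT, AgentState.IN_REVIEW)
state = transition(state, AgentState.PUBLISHED)
```

In a real backend the same table would live behind the ORM layer, so governance rules are enforced in one place rather than scattered across endpoints.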

Project note

A platform for agents is partly a product surface and partly an operating model. The data model has to support discoverability, governance, ownership, permissions, publication status and future usage patterns.

System framing

Core system concerns before implementation.

Applied AI work usually starts with the material around the model: documents, tickets, transcripts, evidence, business rules and review practices. The system design has to preserve those relationships.

Typical structure

Different projects use different tools, but the same concerns appear often: source material, evidence structure, AI workflow, review gates and release path.

01

Source material

Documents, tickets, transcripts, case data, business rules and informal context.

02

Evidence structure

References, chunks, schemas, field mappings and traceability conventions.

03

AI workflow

Prompt systems, agents, orchestration, retrieval, classification or generation paths.

04

Review gates

Ground truth, analyst review, human checkpoints, disqualification rules or confidence signals.

05

Release path

Backend integration, environments, monitoring, documentation and iteration.
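The five concerns above can be read as a pipeline skeleton. The sketch below is purely illustrative (each stage is just a callable; real systems split these into services), but it shows how source material, evidence structure, AI workflow, review gates and release path compose.

```python
def pipeline(sources, extract_evidence, ai_step, review_gate, release):
    """Generic shape of the five-stage structure: each stage is a callable."""
    evidence = extract_evidence(sources)   # 02 evidence structure
    drafts = ai_step(evidence)             # 03 AI workflow
    approved = review_gate(drafts)         # 04 review gates
    return release(approved)               # 05 release path

# Toy run with stub stages (stand-ins for real extraction, models and review).
result = pipeline(
    ["ticket text"],                                        # 01 source material
    extract_evidence=lambda srcs: [s.upper() for s in srcs],
    ai_step=lambda ev: [f"summary of {e}" for e in ev],
    review_gate=lambda drafts: drafts,                      # human approval stub
    release=lambda approved: {"released": approved},
)
assert result["released"] == ["summary of TICKET TEXT"]
```

The value of writing it this way is that the review gate is a first-class stage with its own inputs and outputs, not a manual step bolted on after the model call.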

Working principles

Practical principles for applied AI work.

01

Start with the real workflow.

Understand how work is actually done: documents, decisions, exceptions, user judgement and institutional habits.

02

Make evidence explicit.

Define what counts as evidence, how it is referenced, how it is attached to outputs and how a reviewer can inspect it.

03

Design review into the system.

Review points, schemas, validation interfaces and disqualification rules should be part of the architecture, not added at the end.

04

Build for release, not only demonstration.

AI workflows need backend services, testable paths, environment coordination, documentation and monitoring if they are expected to survive real use.

Technical surface

Tools grouped by role in the system.

The stack matters less as a keyword list than as a working surface. Each group supports a different part of enterprise AI delivery.

Agents and orchestration

Used when a workflow requires state, tool access, intermediate outputs and reviewable transitions rather than a single model call.

GPT-4o · LangGraph · LangChain · MCP / A2A patterns · agent cards · Jinja2 · tiktoken · RAG

Backend and data

Used to make AI workflows testable, integrated and maintainable through typed models, APIs, migrations and persistent data structures.

Python · FastAPI · Pydantic · SQLAlchemy · Alembic · PostgreSQL · Azure PostgreSQL · SQLite · REST APIs

Enterprise systems

Used when AI work has to connect to existing operational platforms, infrastructure, monitoring and delivery processes.

ServiceNow · SharePoint · Kafka · Azure Monitor · GitLab CI · Terraform · AWS Lambda · Step Functions · S3

Evaluation and governance

Used to make outputs reviewable through evidence references, ground-truth checks, schema normalisation, auditability and observability patterns.

Ground-truth evaluation · evidence traceability · schema normalisation · auditability patterns · Langfuse · DSPy · W&B exposure

Foundations

Architecture, anthropology and technical delivery.

Architecture and anthropology shaped habits that still matter in applied AI: reading context, working with constraints, identifying tacit rules and noticing the difference between documented process and actual behaviour.

Useful habits for enterprise AI.

Technical project delivery trained planning, coordination and risk management. Qualitative research trained attention to behaviour, context and interpretation. Together, they support AI work where the system must fit an organisation, not only a technical brief.

Architecture and delivery

Planning, coordination, risk, multidisciplinary teams and delivery across construction and design environments.

Anthropology and research

Ethnographic methods, needs assessment, behavioural observation and interpretation of context-rich data.

Current use in AI

Requirements discovery, stakeholder workshops, validation interfaces and AI systems designed around operational behaviour.

Writing and teaching

Making technical systems easier to understand.

Production AI depends on shared understanding between technical teams, business users and decision-makers. Writing and teaching support that part of the work.

GraphRAG and Knowledge Graphs

Co-authoring a technical book on GraphRAG architectures, knowledge graph construction, retrieval patterns and implementation considerations for enterprise AI systems.

Data Science and AI instruction

Academic delivery for a Data Science and AI cohort, covering applied AI, data strategy, machine learning and project framing across different technical levels.

CV, extended project notes and writing samples available where relevant. Contact: contact@rojasruiz.com.