Lasse Bærland Strand

Core Team

Next-Gen RAG & Transformers

Master's Student

About

Lasse Bærland Strand works at the intersection of intelligent systems, optimization, and real-world deployment. At the Agentic Systems Lab, he is developing AI systems that do more than produce outputs: they diagnose, adapt, and improve themselves under complex constraints. His work spans agentic retrieval systems and high-frequency forecasting models, with a common focus on making advanced machine learning more reliable, efficient, and operationally useful.

Originally from the Norwegian University of Science and Technology and currently at ETH Zürich, Lasse combines strong academic training with unusually hands-on systems experience. Before joining ASL, he spent three years as an IT consultant in a Norwegian startup environment, where he led the development of an optimization system projected to reduce material costs by $10 million annually. He also worked at CERN, contributing software used by major Large Hadron Collider experiments, and won the Norwegian AI Championship. Across these settings, he has consistently worked on technically demanding problems with clear real-world consequences.

Research Areas

01 Agentic RAG systems
02 LLM pipeline optimization
03 Market prediction models

Project 1

Reasoning Agents for Self-Optimizing RAG Systems

This project explores how large language model agents can be used to automatically optimize retrieval-augmented generation pipelines in a more intelligent and diagnostic way. Instead of relying on brute-force search or shallow trial-and-error, the system analyzes retrieval failures, interprets why they occurred, and proposes targeted changes to the pipeline configuration. This includes tuning retrieval components, reformulation strategies, and evaluation mechanisms. A key part of the work is the generation of task-specific benchmark questions directly from the underlying data, allowing the system to evaluate itself on realistic, domain-relevant tests rather than generic proxy metrics.

Scientifically, this project investigates a new direction for automated machine learning in discrete, high-dimensional systems. Traditional optimization methods can identify better-performing settings, but they rarely capture the structural reasons why a pipeline succeeds or fails. Lasse is exploring a reasoning-based alternative sometimes framed as a form of “agentic descent,” where the agent’s diagnostic logic acts as a guide through a search space that has no classical gradient. This creates a cognitive layer on top of pipeline optimization and raises important research questions around explainability, adaptation, and autonomous system improvement in complex LLM workflows.

Many organizations can build a promising RAG demo, but very few can reliably bring it into production without extensive manual iteration, testing, and engineering overhead. A system that continuously improves retrieval quality, adapts to domain-specific data, and reduces the need for expert tuning could dramatically lower the cost of deploying enterprise AI. That makes this work highly relevant for developer tooling, internal knowledge systems, and any setting where dependable AI performance is essential at scale.
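The diagnose-propose-evaluate loop described above can be sketched in miniature. The following is an illustrative toy, not the project's actual implementation: the corpus, the benchmark questions (which the real system would generate with an LLM), and the `diagnose` function (a stand-in for the agent's diagnostic reasoning) are all hypothetical:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class PipelineConfig:
    top_k: int = 1          # documents retrieved per query
    rewrite: bool = False   # whether to apply a query-reformulation step

# Toy corpus: document id -> the keywords it covers.
CORPUS = {
    "d1": {"solar", "generation"},
    "d2": {"battery", "storage"},
    "d3": {"grid", "balancing"},
}

# Benchmark: (query keywords, id of the answering document). In the real
# system these would be generated from the data; here they are hand-written.
BENCHMARK = [
    ({"solar"}, "d1"),
    ({"storage"}, "d2"),
    ({"power"}, "d3"),   # "power" is vocabulary the corpus lacks
]

REWRITES = {"power": "grid"}  # toy query-reformulation dictionary

def retrieve(terms, config):
    """Rank documents by keyword overlap; return the top_k ids."""
    if config.rewrite:
        terms = {REWRITES.get(t, t) for t in terms}
    ranked = sorted(CORPUS, key=lambda d: -len(CORPUS[d] & terms))
    return ranked[: config.top_k]

def evaluate(config):
    """Fraction of benchmark questions whose answer document is retrieved."""
    hits = sum(ans in retrieve(q, config) for q, ans in BENCHMARK)
    return hits / len(BENCHMARK)

def diagnose(config):
    """Stand-in for the agent: inspect failures, propose one targeted change."""
    failures = [(q, a) for q, a in BENCHMARK if a not in retrieve(q, config)]
    known_vocab = set().union(*CORPUS.values())
    for terms, _ in failures:
        if terms - known_vocab:                   # query uses unknown vocabulary
            return replace(config, rewrite=True)  # -> enable reformulation
    if failures:
        return replace(config, top_k=config.top_k + 1)  # likely a recall problem
    return config

def agentic_descent(config, steps=3):
    """Accept a proposed change only if it improves the benchmark score."""
    score = evaluate(config)
    for _ in range(steps):
        candidate = diagnose(config)
        cand_score = evaluate(candidate)
        if cand_score <= score:
            break
        config, score = candidate, cand_score
    return config, score
```

On this toy data the initial configuration misses the "power" query; the diagnostic step attributes the failure to a vocabulary mismatch rather than low recall, enables query rewriting, and the benchmark score rises from 2/3 to 1.0 in a single targeted step — the point being that the proposed change is explained by the failure mode, not found by blind search.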

Project 2

Efficient Transformer Models for Energy Market Price Prediction

This project focuses on adapting transformer-based limit order book models for fast and efficient price trend prediction in energy markets. Building on the TLOB architecture originally developed for financial data, Lasse is investigating how to improve computational efficiency while preserving predictive performance in high-frequency environments. The goal is not only to accelerate inference and training, but also to test whether models that capture fine-grained market microstructure in equities can transfer successfully to energy trading, where patterns of liquidity, volatility, and market behavior differ substantially.

From a scientific perspective, the project addresses an important question in modern time-series modeling: how robust are transformer architectures when moved across asset classes with distinct market dynamics? Energy markets offer a compelling testbed because they are increasingly shaped by renewable generation, storage, and rapid supply-demand fluctuations. By studying how order book signals behave in this setting, the work may reveal broader principles about non-stationary sequence modeling, domain transfer in financial machine learning, and efficient transformer design for real-time decision contexts. It is both a modeling challenge and a contribution to understanding market structure itself.

As electricity systems become more volatile, operators of battery storage and flexible energy assets need better tools to respond to price movements in real time. More accurate and faster forecasting at the order-book level could support better trading, dispatch, and balancing decisions, improving both profitability and grid responsiveness. A model that makes advanced market intelligence usable in operational settings would be valuable for energy traders, storage operators, and infrastructure platforms seeking a more automated and data-driven edge in increasingly dynamic power markets.
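To make the task concrete, here is a minimal sketch of the data preparation common to LOB trend-prediction models of this family: flattening the top levels of the order book into a feature vector, and labeling each time step as up, flat, or down by comparing a smoothed future mid-price to the current one. The feature layout, horizon, and threshold values are illustrative assumptions, not the project's actual settings:

```python
import numpy as np

def mid_price(book):
    """Mid-price from the top of the book: (best bid + best ask) / 2."""
    return (book["bid_px"][0] + book["ask_px"][0]) / 2.0

def lob_features(book, levels=2):
    """Flatten the top `levels` of the book into one feature vector,
    interleaving [ask price, ask size, bid price, bid size] per level —
    an illustrative layout, similar in spirit to LOB transformer inputs."""
    feats = []
    for i in range(levels):
        feats += [book["ask_px"][i], book["ask_sz"][i],
                  book["bid_px"][i], book["bid_sz"][i]]
    return np.array(feats, dtype=float)

def trend_labels(mids, horizon=3, threshold=0.002):
    """Label each step +1 (up), 0 (flat), or -1 (down) by comparing the
    mean mid-price over the next `horizon` steps to the current mid —
    the smoothed labeling commonly used for LOB trend prediction."""
    mids = np.asarray(mids, dtype=float)
    labels = np.zeros(len(mids) - horizon, dtype=int)
    for t in range(len(labels)):
        future = mids[t + 1 : t + 1 + horizon].mean()
        change = (future - mids[t]) / mids[t]
        labels[t] = 1 if change > threshold else (-1 if change < -threshold else 0)
    return labels
```

A transformer then consumes a window of these feature vectors as a sequence and predicts the label; the domain-transfer question is whether the same architecture, trained on equity books, remains predictive when the feature distributions shift to energy-market order flow.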

Other team members

Students

Interested in collaborating?

We are always looking for talented students, researchers, and industry partners.

Get in Touch