Author: Constantine Goltsev

Constantine Goltsev is the Co-founder & CTO of Apolo. With more than 20 years of experience leading tech teams and building AI-driven solutions, he brings deep expertise in machine learning, cloud infrastructure, and digital publishing. He holds a BA in Applied Mathematics from UC Berkeley.

Latest from Constantine Goltsev

The Jagged Frontier: Drop-In Human Replacements or Idiot Savants?

Modern AI dazzles with feats like theorem-proving yet still bungles grade-school logic, creating a “jagged frontier” of uneven skills. This article unpacks new evidence—from Salesforce’s SIMPLE puzzle benchmark, IBM-led Enterprise Bench, and Apple’s controversial “Illusion of Thinking” study—to show why LLM brilliance can hide catastrophic blind spots and what that means for anyone betting their business on AI.

Read post

Lab in the Loop: From Research Tool to Research Leader

AI systems like Robin and Zochi are no longer just tools; they’re emerging as autonomous researchers. From proposing drug treatments to publishing peer-reviewed papers, these multi-agent AI scientists signal a radical shift in how scientific discovery is conducted, accelerating breakthroughs and challenging the role of human researchers.

Read post

Hallucinations in LLMs

Large language model hallucinations—when AI generates false but convincing information—have become a serious real-world problem, impacting fields like law and academia. New research shows these hallucinations stem from specific, traceable neural mechanisms rather than random errors, opening the door to better understanding, prediction, and potential control.

Read post

Reward Modeling in Reinforcement Learning

Modern LLMs use reward models—trained to reflect human preferences—to align their behavior through RLHF. While effective, this approach faces challenges like reward hacking and Goodhart's law. New research offers solutions such as verifiable feedback, constrained optimization, and self-critiquing models to improve alignment and reliability in complex tasks.

Read post

Beyond Transformers: Promising Ideas for Future LLMs

Transformers have powered today’s AI revolution—but limitations around speed, memory, and scalability are becoming clear. This article explores three promising alternatives: diffusion-based LLMs that generate text in parallel for faster, more controllable outputs; Mamba’s state space models, which scale to million-token contexts without quadratic costs; and Titans, a memory-augmented architecture that can learn new information at inference time. Each approach tackles core challenges in latency, context handling, and long-term reasoning—opening new opportunities for businesses to reduce compute costs and deploy smarter, more adaptable AI systems.

Read post

Introducing Apolo: Future-Proof Enterprise AI Infrastructure

As AI evolves toward reasoning models and near-AGI, enterprises need secure, scalable, and compliant infrastructure. Apolo offers an on-prem, future-ready AI stack, built in partnership with data centers, that supports model deployment, fine-tuning, and inference at scale. Designed for privacy, agility, and rapid AI growth, Apolo empowers organizations to stay in control as the AI revolution accelerates.

Read post

The Data Center Evolution: GenAI is Here, and It’s Already Driving Revenue

AI is transforming data centers, enabling businesses across industries to drive real revenue through faster, smarter infrastructure. Apolo’s multi-tenant MLOps platform supports these advancements, allowing companies to unlock the full potential of AI for tangible business outcomes.

Read post