Digital pathology has promised a step change in how slides are analyzed, but for most routine cases, the microscope still dominates. Optical engineer MichaelJohn Fanous thinks that gap comes down to practicality rather than potential.
In this interview, Fanous traces the path from a highly theoretical AI imaging concept to Scanimus – a portable system designed to bring fast, accessible digitization to the pathologist’s desk. Along the way, he reflects on the technical trade-offs, skepticism around AI-driven reconstruction, and what happens when you put a new kind of tool directly into clinicians’ hands.
How did the idea for the portable scanner first take shape?
Scanimus didn’t start as a product idea. It evolved out of a series of academic projects at both the University of Illinois Urbana-Champaign and the University of California, Los Angeles (UCLA), beginning with a very theoretical, mathematics-heavy paper and eventually leading to something called BlurryScope, which we published in npj Digital Medicine last August. At that stage, the goal wasn’t to build a usable medical device – it was to explore what was possible at the intersection of computational imaging and machine learning.
Not long after that paper was published, I realized that with a few targeted upgrades to the components, and some practical additions, the system could be turned into something much more compelling for everyday pathology use.
How does your product differ technically from conventional slide scanners?
It’s not quite a microscope, and it’s not quite a scanner, but it draws from both.
Most conventional scanners use what’s called a “stop-and-stare” approach. The stage moves, stops, an image is captured, and then it repeats. It’s a stepwise process.
What we do is quite different. We scan continuously, recording a video as the stage moves at high speed. The stage only stops at the very edges of the scan. In microscopy terms, we’re talking about speeds of 10 to 20 millimeters per second, which is extremely fast.
The trade-off is that, at those speeds, you introduce motion blur. With typical camera exposure times, the image gets smeared along the scan direction, which suppresses the horizontal spatial frequencies.
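The arithmetic behind that smear is straightforward: blur extent is stage speed multiplied by exposure time. The sketch below works through illustrative numbers (the exposure time and pixel size are assumptions for the sake of the example, not figures from the interview) and models the smear as a one-dimensional box blur along the scan axis, confirming that horizontal detail is attenuated far more than vertical detail.

```python
import numpy as np

# Illustrative numbers: the stage speed is from the interview; the
# exposure time and pixel size are assumptions for this sketch.
stage_speed_um_per_s = 15_000   # 15 mm/s, mid-range of the quoted 10-20 mm/s
exposure_s = 0.001              # hypothetical 1 ms camera exposure
pixel_size_um = 0.5             # hypothetical effective pixel size

blur_extent_um = stage_speed_um_per_s * exposure_s   # 15 um of smear
kernel_px = int(blur_extent_um / pixel_size_um)      # 30 px wide

def box_blur_rows(img, k):
    """Smear each row with a k-pixel box kernel (motion along the scan axis)."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img)

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = box_blur_rows(sharp, kernel_px)

# Variance of pixel-to-pixel differences: small along the blurred
# (horizontal) axis, largely untouched along the vertical axis.
h_var = np.var(np.diff(blurred, axis=1))
v_var = np.var(np.diff(blurred, axis=0))
print(blur_extent_um, h_var < v_var)
```

At these assumed settings a 15 µm smear spans dozens of pixels, which is why the raw fast-scan frames are unusable without the reconstruction step described next.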
How do you address the image quality challenges that come with continuous scanning?
The key is pairing that fast, blurred scan with a reference scan acquired much more slowly. For training, we scan at something like 50 microns per second, which is almost imperceptibly slow. That gives us a high-quality ground truth.
We then use an AI model – specifically an image-to-image translation approach – to learn how to reconstruct a sharp micrograph from a blurred one. So during routine use, the system can infer a crisp image from the fast scan.
Understandably, that raises some concerns. When you say you’re reconstructing detail using AI, people are cautious, and rightly so. But the underlying mathematics is sound, and what we’ve shown is that the reconstruction can be done very reliably.
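The supervision scheme described above can be illustrated at toy scale. The actual system uses a deep image-to-image translation network; the sketch below substitutes a simple linear, per-frequency restoration filter learned from a single (blurred, sharp) training pair, purely to show the paired-data idea: a slow reference scan supplies the ground truth, the fast scan supplies the degraded input, and the learned mapping then deblurs unseen fields of view. Everything here (kernel size, regularizer) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 64, 9      # toy image size and blur width (illustrative values)

def blur_rows(img, k):
    """Circular horizontal box blur, standing in for fast-scan motion smear."""
    kern = np.zeros(img.shape[1])
    kern[:k] = 1.0 / k
    kern = np.roll(kern, -(k // 2))   # center the kernel
    return np.real(np.fft.ifft(np.fft.fft(img, axis=1) * np.fft.fft(kern),
                               axis=1))

# Training pair: slow-scan ground truth and its fast-scan (blurred) twin.
sharp = rng.random((N, N))
blurred = blur_rows(sharp, K)

# "Learn" a per-frequency blur model from the pair, then invert it with a
# Wiener-style regularized filter (a linear stand-in for the deep model).
S, B = np.fft.fft2(sharp), np.fft.fft2(blurred)
eps = 1e-3
H = B * np.conj(S) / (np.abs(S) ** 2 + eps)        # estimated transfer function
restore = np.conj(H) / (np.abs(H) ** 2 + eps)      # regularized inverse

# Apply the learned filter to a new, unseen blurred field of view.
test_sharp = rng.random((N, N))
test_blurred = blur_rows(test_sharp, K)
recon = np.real(np.fft.ifft2(np.fft.fft2(test_blurred) * restore))

err_blurred = np.mean((test_blurred - test_sharp) ** 2)
err_recon = np.mean((recon - test_sharp) ** 2)
print(err_recon < err_blurred)
```

The deep model earns its keep where this linear picture breaks down: real motion blur varies across the field, interacts with noise and focus, and a network trained on many slow/fast pairs can learn those nonlinear corrections.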
Where does the device fit within current pathology workflows?
There are two main ways to examine a slide: the traditional route is the manual microscope, which hasn’t fundamentally changed in centuries, and the alternative is digital pathology, where slides are scanned using specialized equipment.
That second option usually requires a separate operator, often in a different location, working with expensive systems that can be complex to use. It also generates very large volumes of data, much of which may never actually be reviewed in detail.
What we’re trying to do with Scanimus is sit somewhere between those two approaches. It’s a kind of hybrid model that lowers the barrier to digitization and makes it more accessible within routine practice. The immediate use case is triage and assistive review, rather than full replacement of existing workflows.
What problem are you ultimately trying to address?
Even in well-resourced healthcare systems, more than 90 percent of routine biopsies are never digitized. That’s a significant gap, especially given how much emphasis there is on digital pathology and AI.
During my doctoral work and postdoc, I spent a lot of time with pathologists, so I’ve seen first-hand what their workspaces look like and how they operate day to day. There are real constraints in terms of space, time, and cognitive load. What they tend to want is something straightforward. Ideally, you press a button, an image is generated, and any analysis happens in the background without adding friction to their workflow.
So the focus has been on making the system as simple and ergonomic as possible. Something compact, lightweight, and easy to integrate onto a standard desk.
Does the system still allow for more conventional imaging approaches?
Yes, and that’s important. We also offer a slower, stop-and-stare mode, so users can directly compare the two approaches. If you want that more traditional acquisition method, it’s there.
That said, because we’re working with more modest hardware and a smaller camera, it won’t match the throughput of high-end commercial scanners in that mode. The strength of the system really lies in the continuous scanning approach and the computational reconstruction that follows.
What do the robotic elements of the system enable in practice?
I sometimes compare it to a self-driving car. You can operate it manually, or it can guide itself. The interesting part is that the system can learn from how it’s used.
Whether a pathologist is moving the slide physically or navigating via the touchscreen, that interaction can be recorded and modeled. Over time, you can start to capture individual usage patterns, almost like a fingerprint for each user.
That opens up possibilities for personalized workflows and automation that adapts to how different pathologists actually work, which I think is one of the more compelling aspects of the system.
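One way to picture that "fingerprint" is as a small feature vector computed from a session's navigation log. The sketch below is entirely hypothetical (the event structure and feature names are invented for illustration, not taken from the product): it records timestamped stage positions and summarizes a session by speed and dwell statistics that could distinguish one user's habits from another's.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class StageEvent:
    """Hypothetical log entry: one timestamped stage position."""
    t: float   # seconds since session start
    x: float   # stage position, mm
    y: float

def usage_fingerprint(events):
    """Summarize a navigation session as a small per-user feature vector."""
    speeds = []
    for a, b in zip(events, events[1:]):
        dt = b.t - a.t
        dist = ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
        speeds.append(dist / dt if dt > 0 else 0.0)
    return {
        "mean_speed_mm_s": mean(speeds),
        "peak_speed_mm_s": max(speeds),
        # Fraction of intervals spent nearly stationary (dwelling on a region).
        "dwell_fraction": sum(s < 0.1 for s in speeds) / len(speeds),
    }

session = [StageEvent(0.0, 0.0, 0.0), StageEvent(0.5, 2.0, 0.0),
           StageEvent(1.0, 2.0, 0.0), StageEvent(1.5, 2.0, 1.0)]
print(usage_fingerprint(session))
```

Aggregated over many sessions, features like these could drive the kind of adaptive automation described above, such as pre-positioning the stage toward regions a given user tends to dwell on.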
What have been the challenges of developing the system?
Hardware is called “hard” for a reason. When you’re dealing with physical components that have to work in sync with software and firmware, even small changes can ripple through the entire system.
Any modification means re-coordinating everything, which can be frustrating. But at the same time, that’s also what makes it rewarding. You’re building something tangible. You can see it, touch it, interact with it. And when all the pieces finally come together into a working system, that’s immensely satisfying.
How has the funding journey been so far?
We do have some support through UCLA, including a grant via the technology development group and the entrepreneurship program, which has been helpful. But beyond that, I’ve been actively pitching to venture capital firms globally.
There was some initial traction, but it didn’t fully materialize. I think part of the challenge is that this is the kind of technology you really need to see and interact with in person to appreciate. It’s difficult to convey the full potential to people who aren’t directly embedded in pathology or microscopy.
How was the device initially received after launch?
We officially launched in September 2025, mainly through social media and some cold outreach. The response at that stage was fairly modest and a bit mixed.
That changed completely when I brought the device to an international pathology meeting. The reaction there was overwhelming – far beyond what I had anticipated. There was some skepticism, and a bit of negativity, but even that was encouraging. It showed people were engaging seriously with the idea.
When I mentioned the price, the reaction was often surprise, in a positive sense. That suggested to me that the value proposition was landing as intended.
What can the pathology community expect next from the device?
We’ll be unveiling a number of new features at upcoming meetings, which I’m genuinely excited about. The broader vision is to develop this into more of a general platform rather than a single-purpose device.
For example, fluorescence capability is something we’re actively planning, and that came up repeatedly in discussions with users. We’re also expanding into cytology, with models designed to handle applications like blood smears and Pap smears.
The focus will be on getting these new features in front of pathologists, seeing how they perform in real-world settings, and continuing to refine the system based on that feedback. That process of iteration, driven by direct user interaction, is really central to how we’re developing the platform.
