Illuminated: IEEE Photonics Podcast

Pioneering Biomedical Imaging with Computational Tools

IEEE Photonics Society Season 1 Episode 6

Discover how Professor Aydogan Ozcan and his team are revolutionizing global health diagnostics through their journey across the evolving landscape of computational optical biomedical imaging. He peels back the layers of how AI and deep learning are reshaping the future of medical imaging, and explores the profound impact of computational techniques in resource-limited settings.

In this episode, he also discusses with moderator Peter Munro the regulation and ethical considerations of AI, confronting the reality of health inequality and the promise of technology in bridging the gap. Each shares the pivotal moments and mentorship experiences that inspired a continuous cycle of curiosity and innovation in their careers. The conversation will not only impart knowledge to our listeners but also ignite a spark for lifelong learning and discovery.

Host:
Akhil Kallepalli
Leverhulme Early Career Fellow
University of Glasgow, UK

Moderator:
Peter Munro
Professor and Vice Dean of Research
University College London, UK

Expert:
Aydogan Ozcan
Chancellor's Professor at UCLA, USA
Professor at Howard Hughes Medical Institute, USA
Associate Director at California NanoSystems Institute, USA

Have a topic you're interested in hearing about? Let us know!

Speaker 1:

Illuminated by IEEE.

Speaker 2:

Photonics is a podcast series that shines light on the hot topics in Photonics and the subject matter experts advancing technology forward.

Speaker 2:

Hi everyone and welcome to today's episode of Illuminated. My name is Akhil and it's my pleasure to be your host today. I'm a biomedical physicist working at the University of Glasgow as a Leverhulme Early Career Fellow and a research fellow. In my role at the IEEE Photonics Society, I'm supporting and promoting initiatives very much like this podcast to raise the profile of valuable young professionals within various sectors. Within the IEEE Photonics Society, the Young Professionals initiative is for anyone up to 15 years past their first degree. The affinity group within the IEEE Photonics Society is committed to helping you pursue a career in photonics. We're here to help you evaluate your career goals, better understand technical pathways and subject matters, refine skills, and grow your professional networks through mentorship. This podcast is one such avenue. On to our podcast. Now it's my pleasure to introduce the moderator for today, Professor Peter Munro.

Speaker 2:

Peter is a professor of computational optics in the Department of Medical Physics and Biomedical Engineering at University College London and the Vice Dean of Research in the Faculty of Engineering Sciences. Peter's research focuses on the use of computational approaches to study, optimize, interpret and design optical imaging systems. He has worked on a range of techniques, including confocal microscopy, optical coherence tomography, X-ray phase imaging and photoacoustic imaging. He recently released an open-source software package, TDMS (Time Domain Maxwell Solver), which can be used as a platform for simulating a range of optical imaging techniques. He has served for the last three years as the IEEE Photonics Conference Biophotonics and Medical Optics Topic Chair, and it was at the previous iteration of that conference where Peter and I met for the first time. Over to Peter.

Speaker 1:

Thank you, Akhil, and it's my pleasure to introduce the guest speaker that we have today, Professor Aydogan Ozcan. Professor Ozcan is the Chancellor's Professor and the Volgenau Chair for Engineering Innovation at UCLA and an HHMI Professor at the Howard Hughes Medical Institute. He is also the Associate Director of the California NanoSystems Institute. I'm going to give a few of Professor Ozcan's key achievements in a moment, of which there are many, but before getting into that, I just wanted to highlight that he is an expert in fields such as computational imaging, deep learning, optics, microscopy, holography, sensing and biophotonics, and it is around those topics that we will spend much of our time talking.

Speaker 1:

Now, just a few highlights from Professor Ozcan's career.

Speaker 1:

He was elected a fellow of the National Academy of Inventors and holds more than 70 issued patents in those research topics I just mentioned, and he's the author of one book and co-author of more than 1,000 peer-reviewed publications in leading journals and conferences.

Speaker 1:

He has received a long list of awards, including the Presidential Early Career Award for Scientists and Engineers, the International Commission for Optics (ICO) Prize, the Dennis Gabor Award from SPIE, and the Joseph Fraunhofer Award / Robert M. Burley Prize. It is a very long list and I will, with Professor Ozcan's permission, abridge it, but I will direct you to his website where you can get a full rundown. It's worth mentioning that he has been elected a fellow of Optica, AAAS, SPIE, IEEE, AIMBE, RSC, APS and the Guggenheim Foundation, and is a lifetime fellow member of Optica, NAI, AAAS and SPIE, which is a comprehensive collection of fellowships. So with that, I think we will now move on to the technical part of this podcast, and I'm just going to start by asking Aydogan if he could say some words about what motivated him to get into the field of computational optical biomedical imaging.

Speaker 3:

Well, first of all, thanks for having me. It's great to see you, Peter, and so wonderful to be here, spending the next hour talking about science and optics and machine learning, and how they can perhaps change the way that we do microscopy and sensing or, in general, imaging through computation. Going back to your question: when I first started my independent career, I was bombarded with a lot of problems to solve, most of which were around global health. For example, how do you look at blood specimens, tissue specimens or sputum taken from patients in resource-limited settings? How do you bring imaging and microscopy solutions for diagnostics, sensing solutions for diagnostics, that could work in a village where there is no real infrastructure?

Speaker 3:

That was partly shaped by my training at Harvard Medical School, where I was constantly looking around and trying to understand what would be a good direction to apply optics to, and I realized sensing and diagnostics, and the intersection of that field with optics, was a great avenue. I soon realized computation was a wonderful way to enhance the performance of poor devices: mobile devices, inexpensive parts, inexpensive optics, plastic lenses or even no lenses. That's how I started, actually, and I was lucky at that time in the sense that smartphones, or rather not-so-smart phones, were picking up. This was back in 2006, when smartphones didn't really exist as we understand them today. Mobile phones, cell phones, were picking up, and the penetration of cell phones, even to remote parts of the world, was rapidly increasing, and they offered some very interesting platforms to do computational imaging. That's how I started to bring advanced microscopy and sensing solutions through mobile phones into global health settings, where diagnostics and imaging services like microscopy and pathology were not conducted as we understand them in a modern infrastructure.

Speaker 1:

And those initial approaches that you developed, as far as I'm aware, the computational side was principally physics-based. So how did you go about transitioning to AI-based approaches?

Speaker 3:

So for quite a bit of time, between 2006 and perhaps 2015 or 2016, we created all kinds of computational imaging and microscopy systems, all of which used physics as the core principle: coherent wave propagation, solutions to Maxwell's equations, coherent imaging, partially coherent imaging, incoherent imaging and holography. It was fascinating, and we created lots of chip-scale microscopes, where the microscope was as tall as a few centimeters, benefiting from CMOS imagers from mobile phones, 5 or 10 megapixels, creating diffraction-limited imaging across a very large sample volume: needle-in-a-haystack problems, finding pathogens across a large sample volume, good for samples like blood smears and blood-based diagnostics. That was essentially a fantastic journey of a decade, using physics and holography, creating on-chip microscopes installed with mobile phones or standalone handheld systems. But at that time, especially by 2015, machine learning, and specifically deep neural nets, were conquering a lot of the performance milestones in computer science that were previously thought very difficult to achieve, surpassing human-based decision making in some cases, for example in image recognition. I was very lucky in the sense that I had a lot of data that we had generated, literally hundreds of terabytes of data of samples, with ground truth from reconstruction methods in my pocket, driven by physics, and we started to compare, in terms of the fidelity of the reconstruction, having machine learning mimic what physics was doing. This was my baseline, basically, in this comparison.

Speaker 3:

We were fortunate to have a lot of high-quality data, which let us train neural networks to mimic what our physics-based reconstructions were performing, and we understood the two major advantages of machine learning in computational imaging: non-iterative, extremely fast reconstructions. A lot of what I was doing for the first decade of my independent career was trying to speed up algorithms for image reconstruction. Many of them were iterative, and we were increasing the dimensionality of our measurements to make convergence faster and better, and to achieve super-resolution of one kind. Then we realized that with that rich data, neural networks could actually be trained. Yes, it could take a couple of days at that time to train a neural network, but it was a beautiful approximation of what the physics-based solution was doing: faster, non-iterative. That was my baseline, and it was very exciting. And we applied this not just to holography; we applied it to bright-field microscopy to improve its resolution
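For readers who want to see the physics-based side of this concretely: the holographic reconstructions he contrasts with the learned approach are typically built on free-space propagation of a complex field. Below is a minimal NumPy sketch of one such building block, the angular spectrum method; the grid size, wavelength and propagation distance are illustrative values chosen for this sketch, not taken from the work discussed.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field by distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    # Keep propagating components only; evanescent frequencies are masked out.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy usage: propagate a plane wave 1 mm forward and then back;
# the original field should be recovered.
n = 64
field = np.ones((n, n), dtype=complex)
z = 1e-3
back = angular_spectrum_propagate(
    angular_spectrum_propagate(field, 532e-9, 2e-6, z), 532e-9, 2e-6, -z)
```

Iterative phase-retrieval pipelines apply operators like this many times per image, which is exactly the cost a trained network's single forward pass avoids.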

Speaker 3:

and depth of field. With lightweight mobile phone-based microscopes, we created transformations that took care of the color aberrations, distortions and resolution limitations of the inexpensive plastic lenses of mobile phones for microscopy, showed that they can do diffraction-limited imaging, and established, basically, transformations from one form of microscopy to the next.

Speaker 3:

After that baseline, we soon realized there was an even bigger set of opportunities that we could not reach with physics, and that was approximating functions that do not have physical forward models. That opened up a plethora of new opportunities for deep learning in microscopy, in my opinion, which included, for example, virtual staining of tissue: establishing transformations from the autofluorescence contrast of samples into bright-field contrast that also mimics the staining coming from a histology lab; in other words, functions where the physical forward models are beyond our understanding. And that still holds, I think, as one of the biggest advantages of supervised learning in microscopy: achieving transformations that are perhaps too difficult to understand analytically, but through image data you can establish them, and if you can establish those transformations, they're transformative for various applications, including histology and cytology.
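The supervised setup he describes, learning an image-to-image transformation from cross-registered pairs, can be sketched schematically. The toy below stands in for the real pipeline: it replaces the deep network with a pixelwise linear map fitted by gradient descent, and all data and dimensions are synthetic placeholders, not the actual virtual-staining model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for cross-registered image pairs: label-free inputs
# (e.g. autofluorescence patches) and their "chemically stained" ground truth.
x_train = rng.random((200, 16))        # flattened input patches
true_map = rng.random((16, 16))        # unknown transformation to be learned
y_train = x_train @ true_map           # ground-truth output patches

# Supervised learning: fit a mapping that mimics the ground truth by
# minimizing a pixelwise loss, as in the setup described above.
W = np.zeros((16, 16))
for _ in range(2000):
    pred = x_train @ W
    grad = x_train.T @ (pred - y_train) / len(x_train)
    W -= 0.1 * grad                    # gradient-descent step

mse = float(np.mean((x_train @ W - y_train) ** 2))
```

The data-quality challenges he mentions next (registration errors, "garbage" leaking into training) show up here as corruption of the `(x_train, y_train)` pairing: a supervised model can only be as faithful as its paired data.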

Speaker 1:

And so we will come on to talk a little bit more about the challenges, but perhaps you could just talk a little bit about some of the challenges you faced as an early adopter of AI approaches.

Speaker 3:

Well, of course, it starts with supervised learning, which was wonderful. For example, you take a hologram and you can now reconstruct that hologram with bright-field contrast. We call this bright-field holography, but it was establishing a coherent-to-incoherent, single-frequency-to-multi-color transformation. This was enabled by data from two different modalities, cross-registered, fed to a supervised learning model. But that in itself highlights some of the challenges of supervised learning approaches. First of all, you need a lot of data, high-quality data. For AI in computational imaging, the image registration workflow, the cleaning of data, and bringing in domain expertise to make sure that garbage does not leak into your training is a tedious and expensive task. Sometimes access to high-quality data is not available, especially if the specimens are expensive and hard to find. On top of all of that, the potential opaqueness of the model, and of what is learned, is a double-edged sword. I'm opening up a new set of transformations that I couldn't understand before, and now I can perform them and validate their accuracy against the ground truth, but at the same time, I don't have a very good understanding of how they work. What does that mean? That means there can be potential hallucinations. You've got to create watchdogs to constantly monitor your experiments and your model. This sounds scary to a lot of people at first, and I was one of the early adopters, as you said. If you look at seven, eight years ago, when we were presenting these ideas at conferences, yes, there was a lot of skepticism about hallucinations, about the opaqueness of what we're learning. But, to be fair, this opaqueness is actually everywhere in the pipeline.

Speaker 3:

Think of, for example, pathology. You give your biopsy and it goes to a lab for a diagnostic workup, leading to the diagnosis, but within that clinical workflow, a lot of things are actually stochastic and opaque. We just don't know how opaque they are. But if you work with the lab system and you give, for example, hundreds of specimens to a lab, you understand what fraction of those are messed up. The clinical workflow has its own checks and balances to make sure mistakes in the workflow, which you could call hallucinations in a different domain, are carefully filtered out. It's the same thing when AI in computational imaging enters, for example, the pathology workflow. It's still going to be working within the clinical workflow, with different kinds of checks and balances, the first and foremost being the pathologist looking at those images and saying, hey, this is not stained well, stain it again. So then you just directly take more tissue and stain it.

Speaker 3:

So hallucinations are a concern, but I think not a major one; it's not a game stopper for us, because there are different ways of looking at it, regulating it, and creating a workflow that is rigorous to the standards of diagnosing patients with the highest standard you can imagine. Those are some of the challenges, but nowadays I think the community is less skeptical. When we first started this line of research, the discussions were at a different scale. I was doing some polling when I was presenting these things to large crowds, and half of them were very skeptical. I was asking them: would you believe in an image that you see generated by AI? Literally half of them were saying no, I wouldn't. The other half were looking around and raising their hands: yes, I would. That has now changed.

Speaker 1:

So could you perhaps say something about how you mitigate the hallucinations? What do you do to counteract them?

Speaker 3:

So actually, you kill the creativity of the AI model. AI models are wonderful at creating things that, from a distribution point of view, would be believable. That domain is very powerful, because a lot of the applications for these kinds of powerful generative models are, for example, to create new images, new art, new faces: you would believe that, yes, this person must have lived. But that creativity is very dangerous. The more physical insight, the more conditioning you bring to the generative AI to kill that imagination space, the better your models get. For microscopy, we do that by putting structural loss terms into the training phase, where we regulate the generator to be faithful to the micro- and nanostructure of the input fields. A lot of what we do is not garbage in, noise in, and a beautiful image out. It's actually diffraction-limited, high-quality, beautiful input data in, but I want something else at the output. That input microstructure and nanostructure is actually used to regulate the hallucination space, to be faithful to the actual realities of the tissue. In other words, I don't want to create a new patient that 100 different pathologists would look at and say, yes, this is actually a patient's specimen, beautifully stained, let's diagnose this imaginary patient. Nobody wants to pay for an imaginary patient's diagnosis. We must be recreating the image of that patient, and that structural conditioning is very important. We should realize that good-quality, diffraction-limited images at the input are helping us to do this regulation.
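The structural loss terms he mentions can be illustrated with a small example: alongside any adversarial or realism term, a fidelity term penalizes deviations of the output's fine structure from the ground truth, shrinking the space in which the generator can "imagine". The specific terms and weights below are illustrative choices for this sketch, not the published loss.

```python
import numpy as np

def structural_loss(pred, target, grad_weight=0.5):
    """Pixel fidelity plus gradient (edge) fidelity, so the output must match
    the fine structure of the ground truth, not just its overall intensity."""
    mae = np.mean(np.abs(pred - target))
    gx = np.mean(np.abs(np.diff(pred, axis=0) - np.diff(target, axis=0)))
    gy = np.mean(np.abs(np.diff(pred, axis=1) - np.diff(target, axis=1)))
    return mae + grad_weight * (gx + gy)

def generator_loss(pred, target, adv_term, struct_weight=10.0):
    """An adversarial realism term alone lets the generator invent plausible
    tissue; the heavily weighted structural term constrains that freedom."""
    return adv_term + struct_weight * structural_loss(pred, target)

# A faithful output costs only the adversarial term; a structurally
# altered output is penalized heavily.
target = np.zeros((16, 16))
faithful = generator_loss(target.copy(), target, adv_term=0.2)
altered = generator_loss(target + 0.3, target, adv_term=0.2)
```

The design point is the weighting: with `struct_weight` large, an output that looks realistic but departs from the measured structure scores worse than a faithful one.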

Speaker 3:

Another way of bringing regularization to generative AI models, having them not hallucinate but stay bounded within the limits of the physical world, is to actually use physics as a regularizer. One recent work that we did on this, in 2023, used the wave equation derived from Maxwell's equations to regulate the learning of a model. The gist of the idea was this: it was zero-shot learning, meaning that it required no data, no experiments, no experimental setup and no prior knowledge about the sample that you will be imaging. It's actually just the opposite of supervised learning; it's self-supervised learning. We call it GedankenNet, after the Gedanken experiments, the thought experiments, popularized by Einstein. And the idea was this: the wave equation is repeatable. If you base your learning on the repeatability of a physical law, then it actually makes the generator model learn within the bounds of wave propagation. Basically, if I, for example, take a stone in my hand, release it, and repeat this ten times, all ten follow exactly the same trajectory through space and time. It's repeatable.

Speaker 3:

The AI was actually performing Gedanken experiments based on the wave equation and penalizing itself, self-supervising itself, to make its reconstructions consistent with the wave equation, and that proved phenomenal. We trained it with no data and no experiments, just hallucinated samples and Gedanken thought experiments. The first time it saw experimental data, it was able to do quantitative phase imaging and holographic reconstruction of samples. If you actually tried to push it to hallucinate, it would hallucinate only within the bounds of the wave equation and the physics. That was a beautiful lesson for us about the power of physics as a regularizer for the learning of models, in this case without any experimental data: the opposite of supervised learning, learning in a self-supervised manner with physics. All in all, in one sentence: you've got to condition and regulate generative models to be within the bounds of what you believe is the physical world, whether it's the nanostructure of tissue or the way that waves propagate in space.
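The physics-consistency idea he describes can be sketched concretely: given a candidate reconstruction, re-simulate the measurements with the known (repeatable) wave physics and penalize any mismatch, with no ground-truth object anywhere in the loop. The propagation kernels and the intensity-only detector model below are simplified stand-ins for this sketch, not the published GedankenNet architecture.

```python
import numpy as np

def propagate(field, H):
    """Free-space propagation modeled as a fixed, repeatable linear operator."""
    return np.fft.ifft2(np.fft.fft2(field) * H)

def physics_consistency_loss(recon, measurements, kernels):
    """Self-supervised loss: re-simulate each intensity measurement from the
    current reconstruction via the known physics, penalize the mismatch."""
    loss = 0.0
    for meas, H in zip(measurements, kernels):
        resim = np.abs(propagate(recon, H)) ** 2   # detector records intensity
        loss += np.mean((resim - meas) ** 2)
    return loss / len(measurements)

# Toy setup: a phase-only "sample" measured through two propagation kernels.
rng = np.random.default_rng(1)
n = 32
true_field = np.exp(1j * 2 * np.pi * rng.random((n, n)))
kernels = [np.exp(1j * 2 * np.pi * rng.random((n, n))) for _ in range(2)]
measurements = [np.abs(propagate(true_field, H)) ** 2 for H in kernels]

perfect = physics_consistency_loss(true_field, measurements, kernels)
wrong = physics_consistency_loss(np.ones((n, n), complex), measurements, kernels)
```

Training a network against a loss like this rewards only reconstructions that reproduce the physics, which is why inconsistencies can be attributed to the model's mistakes rather than to the physical law.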

Speaker 1:

So I think we could have an entire podcast on many of these individual topics, but just to summarize what you've been talking about: would it be fair to say that you're both using physics to assist AI, and AI to assist physics? You're doing both of those.

Speaker 3:

Absolutely, it's a two-way communication; you're absolutely right. In some domains AI opens up doors where our physical understanding was limiting us, letting us do things that we always wished to do but didn't know how to establish a forward model to solve. That's where AI is really very powerful. But then supervised learning has its own bag of problems, which I summarized. That's where physics can also be very powerful, to regulate it. Every specific domain of the problem that we want to solve must have some bounds dictated by physical laws; for example, for holography and coherent imaging, Maxwell's equations work everywhere, precisely the same, a very strong condition against which the AI can be penalized. We call it the physics-consistency loss, which means physics is consistent: if the AI is going to repeat the same Gedanken experiment a million times, it must be consistent. So any inconsistency with physics is only because of the AI's mistakes. Then learn from your mistakes. That's the gist of Gedanken learning, of self-supervised learning.

Speaker 3:

I think this is a very rich topic, and depending on what you want to do with AI, the answer is still evolving. How you do it, how you execute it, who's going to use it and for what purpose is going to dictate the regulations. And, of course, entities like the FDA are constantly looking at this. It's a push-pull mechanism: startups and companies, large-scale, mid-scale and small-scale, are constantly coming up with ways of using AI for human health, and it's a push-pull with what needs to be done. It really depends on how you're going to use the technology, for what purpose, how the patient is going to benefit from it, and what the risk factors are. I'll give you an example from virtual staining. This will show you the landscape, the maze of opportunities, and how each one is treated differently. Let's take the opportunity we highlighted: virtual staining. You take label-free tissue samples from biopsies and you stain them without any chemistry. It's a very powerful technology, a very good use of AI, because you take the native endogenous contrast, the autofluorescence of tissue, biopsy tissue, let's say, and you transform it, using a generative model conditioned through supervised learning, into an image that mimics exactly what comes out of a histology lab the next day or the next week, but with no chemistry involved: faster, more consistent and cheaper, all the good things that you want to have. Well, how is this going to be used for impact in the world?

Speaker 3:

In the biomedical space, there are many opportunities. If you're talking about the primary diagnosis of a patient, let's say a patient comes in, a biopsy is taken and the pathologist is going to diagnose for the first time what's happening, you need a rigorous approval process for this: a Class III approval from the FDA to use this in the United States. That would mean you would have to work with multiple sites around the world, each with maybe a few hundred cases, and evaluate the equivalency of this against the clinical care for the same patients in the hands of different pathologists, different hospitals, et cetera. It's probably the most rigorous approval process, Class III, that you would have to go through. However, imagine another case, where this technology is going to be used for a second opinion, teleconsultation. Let's say there's a case from Asia or Europe and you're seeking a second opinion. Well, it's not primary diagnosis, and it's not as regulated as a Class III process. It goes through a separate set of approval processes, which follow, in the United States at least, the guidelines the College of American Pathologists has for a new technology. Because it's teleconsultation, it's secondary; there's already a diagnosis made, which means the risk is lower.

Speaker 3:

Another application of this same technology is in animal studies, for example, toxicology studies for different drugs. You have these animal models, and every pharma company has to get these reports submitted to the FDA. Because it's animal tissue, and you apply virtual staining to it to generate evidence for the FDA, it goes through a separate regulatory process. All of these, depending on who is using the technology and for what purpose, need to be well regulated. Of course, in the research domain it's an entirely different game. Take, say, cancer research at a university using a virtual staining product for their own research: that's a different risk factor, a different level of approval processes you have to go through. So it all depends on who's using it, for what purpose, and what the risks involved in the final outcomes are.

Speaker 1:

Okay, and changing topic a little bit: you've done a lot of work throughout your career on solutions for low-resource settings, so I wondered if you could say something about the implications of AI-based computational imaging for health inequality.

Speaker 3:

I think it's a great question, and it has multiple facets. In terms of the cost of equipment and access to advanced measurement tools, I think AI is going to revolutionize access to advanced systems. I'll give you an example, since we were talking about pathology. That's a very expensive domain. Digital pathology in general is a very expensive technology: for a hospital to convert their samples from glass-based storage of biopsy tissue to a digital storage and scanning system, one of those scanning microscopes is about $300,000. These pathology scanners are Ferraris in the sense that they're rapid; they have an enormous throughput, scanning giant amounts of tissue with diffraction-limited resolution. Hey, but they are $300,000, and for a digital pathology infrastructure for a hospital, you're talking about $5 million to $6 million, and these numbers have probably increased because of inflation; these are older numbers. So access to advanced devices, sensors and microscopes is, I think, going to be revolutionized with AI, because we've shown this repeatedly: you can take inexpensive, lower-throughput, lower-resolution, aberrated optics and transform their output by training models against the ground truth that the expensive instrument provides, and you can actually have beautiful models that generalize very well from device to device. An inexpensive optical instrument can work for you as if it were an expensive one, for research, for clinical use, for microbiology labs, for histology labs. Globally, I think this is going to democratize access to advanced equipment, which means there will be new companies that understand the market and try to produce inexpensive tools that can be distributed and used in different parts of the world.
I always mention commercialization here, because the impact is not just the creation of IP or of a prototype. You can donate prototypes of a new technology, but if there's no real product, there's no service, and once something breaks, it's not replaced. That's why impact cannot really come from donated smart equipment: it will soon be broken, and if there's no infrastructure or commercial model, it won't be sustainable. That's why I'm always interested in the commercialization of technology, because that's the aspect of impact where a technology really moves the needle in terms of inequality, for example in access to health care and the advanced instruments that come with advanced health care systems.

Speaker 3:

The same is true with sensing. I was talking a lot about histology and microscopes, but the same exact message applies to sensing, especially point-of-care sensing. Point-of-care sensing, especially for the developing world, is very important. A lot of the time, when a health care professional sees a patient, that's your only opportunity to diagnose and to administer a treatment. The chance of seeing the patient again is very slim.

Speaker 3:

That's why you need technologies that are mobile, cost-effective and portable, that can access the patient and immediately, within 15 to 20 minutes, diagnose the condition from saliva, from a finger-prick blood sample, urine, et cetera. That's where point-of-care diagnostics, and AI-powered point-of-care diagnostic solutions, will be very important for reaching developing countries where patient return is an issue: how do you get that patient back? Unfortunately, the same problem also exists in a different form in the United States and other parts of the Western world, especially for treating, for example, homeless patients. It's another setting where you need mobile clinics, where, when you see the problem, you can actually start the treatment, because there's no guarantee that you will be able to see the same patient again when you're doing your surveillance.

Speaker 1:

That's really interesting. I mean, you've gone into what I was about to ask you about, which is what the future might hold for computational imaging, and if I ask you that question directly, it may be very difficult to give a concise answer. So I'm going to help you a little bit; of course, you can add to it. Supposing you met the president, your president, and the president said, right, I've got this money to give you, why should I give it to you? What would you say?

Speaker 3:

Well, everything I've told you so far is something that I'm very passionate about, but there's something else that I didn't tell you, which I'm trying to scale up, and that's the push-pull between AI and optics and physics, but this time from the AI side toward the optical design of optical systems. This is a little bit different from what we've been discussing so far, and if I were to meet the president, or any rich donor, I would seek funding to build optical processors designed by AI. Let me give you a little bit of context for what I mean by this. Today's computational imaging, by and large, of course there are exceptions, is driven by human perception, and the instrumentation that we use is mostly built for humans to enjoy, humans to define the ground truth, humans to perceive and understand. So that's the language that we built, with these beautiful, spatially invariant point spread functions that we've been engineering, at different scales, for centuries. But from the perspective of AI, the new kid on the block next to humans, that's not the case, because AI doesn't come with the eye that we have, the brain that we have, and the perception that we have. As a result, there must be a new language between the scene, the object, the analog world represented by waves at different scales and at different parts of the spectrum, and AI. And I think our research is pointing more and more to the amazing richness of AI-designed optical instruments.

Speaker 3:

We call them all-optical processors, a front end to a digital system. These are systems where AI can program the physical domain through its understanding of light-matter interactions. AI can optimize a physical embodiment that achieves an arbitrary set of spatially varying point spread functions, to build computational cameras that establish a new language between the analog wave world represented by photons and the digital back end. I call this a new dictionary, so to speak, written by programmed diffraction, and that, in my opinion, is very exciting, because it takes the human out and asks: hey, if I want to have a robotic system with its own, let's say, back end, with a power consumption requirement, a speed requirement, a mobility requirement, a cloud access requirement, find me the dictionary that that AI model, with all these engineering constraints, should establish with the world. That's where those programmed light-matter interactions come in handy, because these are jointly optimized systems, where the digital AI will say: hey, this is all I can have. I cannot have more than a few microwatts per inference, I don't have a megapixel, I only have 0.5 megapixels, I don't have access to the cloud, and my battery is this much. Well, with all that, what is the language that I should establish with the surrounding world represented by photons? That's where the AI dictates its own dictionary that it likes to speak, and that dictionary is embodied by programmed matter: a material that has sub-wavelength features on it in 3D. I call them diffractive networks, diffractive optical networks or diffractive optical processors, used interchangeably. That, in my opinion, is creating a plethora of opportunities at different parts of the spectrum, all the way from terahertz to infrared, to visible, and maybe even shorter wavelengths.
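The forward model of such a diffractive processor can be sketched in a few lines: light alternately diffracts between layers and is phase-modulated by each passive layer, and a detector reads out intensity at the end. The FFT-based propagation kernel and random (untrained) phase masks below are simplifications for this sketch; a real design would use a wavelength- and spacing-dependent diffraction kernel and optimize the phase values by training.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64

# A fixed toy "free-space" kernel and three passive phase-only layers.
# The layer phase values are what training would optimize; here random.
PROP_KERNEL = np.exp(1j * 2 * np.pi * rng.random((n, n)))
layers = [np.exp(1j * 2 * np.pi * rng.random((n, n))) for _ in range(3)]

def fft_propagate(field):
    """Stand-in for diffraction between layers (a real model would use an
    angular-spectrum kernel tied to wavelength and layer spacing)."""
    return np.fft.ifft2(np.fft.fft2(field) * PROP_KERNEL)

def diffractive_forward(field):
    """All-optical inference: alternate diffraction and phase modulation,
    then read out intensity at the detector plane."""
    for mask in layers:
        field = fft_propagate(field) * mask
    return np.abs(fft_propagate(field)) ** 2

out = diffractive_forward(np.ones((n, n), dtype=complex))
```

Note that every element here is passive and unit-modulus, so the optical front end consumes no power and conserves the input energy; the computation happens as the light passes through, which is the "green", picosecond-scale property he highlights.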

Speaker 3:

However, we weren't accustomed to thinking with the concept of spatially varying point spread functions. That was actually a nuisance term for previous generations of optical designs. Now, all of a sudden, I can ask the question: hey, here is a set of spatially varying point spread functions; AI says, for its own reasons, that this is very good for it. How do you approximate that, and what do you do with it?

Speaker 3:

Well, what you do with it is this: you take a certain computational task for a robotic system, you divide part of it to be all-optically computed and part of it to be digitally computed, and they are jointly trained to recognize each other. The communication, the currency between them, is AI-generated and has nothing to do with human perception or human understanding, which means humans would just be looking at garbage-looking images that are wonderful for AI. What do you gain out of this? Faster speed of operation, compact modules that do not require many pixels or a lot of power, and ultimately these are green technologies, because the optical front end is passive. I'm talking about light-matter interactions that you can think of as a transparent stamp, something that looks like frosted glass but is actually computing as the light penetrates through it, in a few picoseconds or tens of picoseconds. That is what I would raise funding for from my president, if I were given the opportunity.
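[Editor's note: for listeners who want a feel for the forward model behind the diffractive networks described above, here is a minimal numerical sketch. It is not Professor Ozcan's actual code; it simply illustrates the standard idea of modeling each diffractive layer as a thin phase-only mask followed by free-space propagation via the angular spectrum method, with a detector recording intensity at the end. The grid size, wavelength, spacing and random mask values are arbitrary illustrative choices.]

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    # kz for propagating waves; evanescent components (arg <= 0) are discarded.
    arg = (1.0 / wavelength) ** 2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)       # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_forward(field, phase_masks, wavelength, dx, z):
    """Pass a field through a stack of phase masks: one thin 'diffractive layer' each,
    separated by free-space propagation. In a trained diffractive network these
    phase values would be optimized; here they are random placeholders."""
    for phi in phase_masks:
        field = field * np.exp(1j * phi)      # phase-only modulation by the layer
        field = angular_spectrum_propagate(field, wavelength, dx, z)
    return np.abs(field) ** 2                 # the detector measures intensity

# Toy example: a square aperture through three random phase layers.
rng = np.random.default_rng(0)
n = 64
wavelength, dx, z = 0.75e-6, 0.5e-6, 40e-6    # illustrative values only
aperture = np.zeros((n, n))
aperture[24:40, 24:40] = 1.0
masks = [rng.uniform(0.0, 2 * np.pi, (n, n)) for _ in range(3)]
intensity = diffractive_forward(aperture.astype(complex), masks, wavelength, dx, z)
```

In a real design, the phase masks would be the trainable parameters, optimized jointly with a digital back end so that the intensity pattern reaching the sensor is whatever "dictionary" the downstream AI model finds most useful, rather than a human-interpretable image.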

Speaker 1:

That's very interesting, thank you. Now, that was my final technical question, but I have one last question, and you might call this moderator's prerogative. I noticed in your bio that you are the recipient of a National Geographic Emerging Explorer Award, and that piqued my interest, because I thought of you as an explorer in the field of optical imaging. Maybe you could just explain how you came to receive that award.

Speaker 3:

Well, first of all, I'm grateful to the National Geographic Society, NGS, for recognizing some of the earlier things I did. At that time I was obsessed with mobile phones, mobile phones as the primary platform for advanced optical measurements in resource-limited settings, and this had huge implications for medicine in Africa and in different parts of the world where infrastructure is broken. They recognized the power of this story. At that time they had this program selecting young scholars, young explorers of different kinds, and they had also started to open their doors and embrace scientists like myself. Perhaps I was in one of the first generations of engineers and optical physicists to be admitted to this cohort of explorers, to share how we can actually use science, engineering and the STEM fields for broader impact, and to explore scientific disciplines to move the needle for our climate and our healthcare and, in general, to democratize access to good things that unfortunately are still not democratized globally, for different reasons. That's essentially the story of it, and I learned a lot from NGS, the National Geographic Society, in terms of storytelling.

Speaker 3:

You know, when we were first admitted to the program, they told us about NGS, and that was the first time that I understood the power of storytelling. These are things that we as scientists, during our PhDs, were not accustomed to thinking about. The power of storytelling is amazing, and I think National Geographic is one of the pioneers in that domain. They take very difficult concepts rooted in science, ecology for example, and explain them to the masses, shaping public opinion about important problems that we face across the whole set of countries that suffer from them. That was essentially the story, and I'm grateful, of course, to NGS.

Speaker 1:

Well, thank you for explaining that, and thank you also for this very interesting discussion. I've thoroughly enjoyed it, and I'm sure the listeners will too. So with that, I would like to hand over to Akhil. I know Akhil is very keen to ask you some further questions, so over to you, Akhil.

Speaker 2:

I've just been sitting here enjoying the conversation. It's been very interesting, everything that's been discussed, the AI and the non-AI related alike. I thought I'd pitch in with a few questions, and the questions are for both you and Peter, so we can make this a discussion between the three of us. My questions are quite general and relate to things like career transitions, experiences along the way, the journey that both of you have had through everything you've done up until this point, let's put it that way. Your career transitions were extremely interesting to me, and I've seen through the conversation, and obviously from reading about you beforehand, that you've had quite a few transitions and quite a few research interests along the way. How did you find that journey, both personally and professionally, and what do you think was the most important skill set you developed along the way?

Speaker 3:

So, you know, the key element that I believe helped me navigate the different stages of my career is that when I'm passionate about something, I get obsessed with it, and all of a sudden it's no longer just a research question or a research field for me. When I like things, I really dive deep into them and try to cover all aspects until my curiosity is satisfied, and that makes it enjoyable, because it transforms at least part of the professional work that I do into my hobby. I'm just enjoying myself, and I'm being paid to follow my curiosity; passion drives it.

Speaker 3:

So that makes it easier to navigate, I think. It also gives me the fuel to insist on difficult problems until I have a better grasp of how to solve them, and it gives me the patience and the motivation to stay with a problem. I think that's very important: you've got to stay with difficult problems for a long time, because there's a reason why they're difficult, and to surpass them, to create a new field that overcomes the major limitations of previous decades, you've got to stay with the problem quite a bit. I like that phase of having a bunch of things that I don't understand and being obsessed with trying to understand them. That's, I think, what I do.

Speaker 2:

Excellent. And, Peter, I've read that the IEEE Photonics Society ran a webinar series during COVID, and David Sampson was on it. I can see that you've worked with him before, and there's a bit about the time you spent in Australia as well. So how have you found the journey, and what do you think is one of the key skill sets along the way?

Speaker 1:

So actually much of what I would say is quite similar to Aydogan.

Speaker 1:

And one of the things that I reflect on is, especially in the early stages of my career and during my PhD, giving myself the opportunity to really go deep into problems and really understand the fundamentals of the tools that we use, and often that felt indulgent, you know.

Speaker 1:

It felt like maybe there wasn't an endpoint to it, but actually that was crucial in generating ideas and understanding for later in my career. What was then useful was that, after a period of time, when I started working in a more applied role, I began working with colleagues who were solving, let's say, real-world problems, and I found that I had the tools and the capability to actually solve some of those problems. I became more aware of the need and was able to really focus in on some of them. But I think it's really important, especially these days, as we seem to be getting busier and busier, to just allocate time for trying to solve problems, as Aydogan was also saying, because that's how curiosity grows and how we generate ideas, and those ideas are, in a way, the currency that we work with.

Speaker 2:

That's quite interesting, isn't it? Because I'm fairly early in my career, and both of you have been in your fields for a while now. It's quite interesting to see that you get the ideas, you hold onto them for a while, wait for the right opportunity, and make sure that by that point you've gained the skills you can apply, so you end up holding on to an idea for much longer than it takes to actually apply it.

Speaker 1:

What I find now, in working with and mentoring PhD students and postdocs, is that my PhD students are coming back to me with these ideas. I don't feel like I've got quite as much time as I did for exploring and curiosity, but now, working with staff and PhD students, that cycle is continuing, and I think that's why it's very important for academics to invest in the people they supervise, as well as for those people themselves, because it supports this cycle of idea generation.

Speaker 2:

That is quite interesting. I've supervised a few undergraduate students in recent times and we've come across the same cycle, where they always go, oh, we've had another idea, and you sit back and think: this is obviously the same few papers you've read to come to a conclusion I reached maybe two or three years ago, not any more than that. But that quite nicely brings me to the next question: you have all of these ideas, and you're obviously working with somebody at every stage of your career. Have you had any interactions along the way with people you consider to be your mentors?

Speaker 1:

Ah, why don't you have a first go at that?

Speaker 3:

Of course, many, right? Along our undergraduate work, graduate work and postdoctoral work, many mentors here and there contributed enormously to our growth during that phase when we were acquiring the fundamentals. It was all this push-pull between our mentors and us as students and trainees. If I could single out one, it would be the transition I made from a school of engineering, where I got my PhD, to a medical school, where I was literally doing biophotonics as part of a hospital; it was a unique setting in Boston. That transition, for me, was the transition of my mentor from being an engineer to being a physician-scientist, an MD-PhD. That was striking, because that's where the feedback I got was so direct about the difference between engineering novelty and impact for the patient.

Speaker 3:

If you're doing biomedical research, all of a sudden the ideas that are, in my opinion, most brilliant and innovative from an engineer's perspective face the question: hey, what is the impact for the patient?

Speaker 3:

I remember when I first got this question from my mentor at that time, Gary Tearney from Harvard. He asked me: what is the impact of your innovation for the patient? That was the first time that question had been put to me directly, and I wasn't ready for it. The answer I leaned on was: hey, it's just different, it's so innovative, we've got to do this because it's so clever, it's engineering-wise beautiful and never done before. But he was unsatisfied, and I think it was just the question itself, nothing else; it was just a five-minute discussion. That was an eye-opening experience for me, and it helped me position my research portfolio at the intersection of engineering novelty and impact. I understood that novelty in itself falls short if it doesn't meet impact, whatever you do, biomedical or otherwise; that intersection is actually a very powerful place to operate in. And that was just a five-minute lesson from my mentor, which was priceless.

Speaker 1:

Similarly for me: I've had multiple mentors along the way and continue to have a mentor, but one thing that comes to mind immediately is when one of my mentors encouraged me to trust my judgment. Now, I know some people are perhaps better at this than others, but I realized that I was spending a lot of time evaluating, am I making the right decision? I think there comes a point when you just have to say: you know what, I've thought about this, I've done the research, I've got the evidence, this is my judgment, and I'm going to trust it, take the decision and move forward from there. Because prior to that, and I know not everyone does this, I was at times essentially stalling and wasting time because I wanted to be sure I'd made the right decision. In actual fact, sometimes you just need to trust your judgment.

Speaker 2:

I had one final question for the both of you, and, funnily enough, it's one of those questions you set for students in examinations: it has a part A and a part B, and each of those has its own part A and part B, so I'm going to break it down. The first part is technical. From where you are right now, with your experience and your overview of what's happening in the field, what do you think students, early career researchers and people interested in the general field should be looking at and considering when deciding how to build their careers? That's the technical part. And if you could also factor in the fact that the two of you work on opposite sides of the pond, that would be great as well.

Speaker 3:

So I guess this is advice to students. First of all, the fields are getting fuzzier and fuzzier in terms of their boundaries, and the required set of information and skills that you need to have is constantly expanding. All in all, I think the self-learning and self-training skills of students, of us, of everybody, will be very important going forward. We've got to constantly update ourselves, as students, as trainees, as mentors, as PIs, and that's never going to stop; if it stops, we fall behind, because the amount of new fields, new subfields and new information being generated has never been this high in the history of mankind. We're seeing exponential growth in the amount of information we're posting to arXiv, as an example. So self-learning skills will be very important, in my opinion. And second, for students and everybody, and this is true for early career researchers, postdocs, junior faculty and seniors alike:

Speaker 3:

I think, in general, if everybody is looking right, perhaps some people should look left. It's very important to understand game-theory-style thinking, because if there is a popular thing that 99.9% of your colleagues in a given field are selecting, perhaps you should reflect on your strengths, interests and passions and see if you can look left, to see if there's something there. That is going to protect you, because I think being lost in the crowd could be a reason why we sometimes lose our way or leave no trace. I always do the same thing: if I judge that something is getting too crowded, a little too hyped at times, I try to look left and follow a passion of mine, because to me that's a good way to secure the freshness and excitement of what I work on, and that's where a lot of my interests lie. I want to see things at a very early stage, not a late stage. That's me personally, but don't just look right because everybody else is looking right instead of left.

Speaker 1:

What I would say is somewhat similar to that. I agree that there is an expanding skill set that is needed, because increasingly what we do is interdisciplinary: not just multidisciplinary, but actually crossing boundaries.

Speaker 1:

So I still think that the fundamental building blocks of what we do are the classical subjects: maths, physics, the various forms of biology, biochemistry, chemistry, neuroscience, that type of thing. They are all great things to study, and that is not a definitive list, by the way. But what I think is really important is that students and undergraduates should be aiming to really understand what they're studying, because that understanding feeds the self-learning that Aydogan was mentioning. By developing a really good understanding of something fundamental, you greatly aid that self-learning later on, and it means the understanding can always be applied. If you stay at the surface, and I think this is a real challenge as we ask younger people to learn more and more, it's harder to attain that depth. So the challenge is to really try to obtain a conceptual understanding, because that is going to pay great benefits later on.

Speaker 2:

Brilliant, thank you very much. I'll try and end this on a relatively optimistic note, as we always like to do, and I'm going to ask you a question that I will answer first myself, making it harder for you in case you have the same answer. I've worked with people at Glasgow for a long time; one of the most influential people in my career is Mike, my PI, the person that I work with the most, and I once asked him a question.

Speaker 2:

If you had to give a piece of advice to anyone at this stage, thinking about what to do, where to go, who to work with, what would it be? His suggestion is something that I now try to follow as best I can: the people that we work with are sometimes more important than the actual topic that we're trying to solve. If we go to work every single day enjoying the company around us and the people we work with, the solutions we can come up with are incredible. So there was prominence and value given to teamwork, but also to identifying a good work environment. So, if you had to leave us with one piece of advice to end today, what would it be? It could be something that somebody once told you, which would be a very nice way of carrying that advice forward as well.

Speaker 3:

I would say anyone should follow their passion; passion is a very powerful fuel for success. As Peter was mentioning, with the amazing emergence of new information, going deep into the fundamentals across the different parts of your research portfolio and interests can be really challenging, with a lot of distraction from social media, politics and other things where you can just spend time without getting anything useful. With all of that, it's going to be challenging, and your defense against those distractions is your passion. If you're passionate about something, it can be for various reasons, and I'm okay with any motivation, but that passion is going to be your energy to overcome some of those hurdles, work with your team and succeed together in whatever you want to achieve. I always try to do the same.

Speaker 3:

I don't work on things that I'm really not passionate about. Passion helps motivate me and keeps me awake, as opposed to spending time on social media and politics, which creates a lot of problems and a lot of things to worry about. Focus on what you believe will move the needle for science, for your research, for improving the world through science and engineering. That's what I would say.

Speaker 2:

Excellent. And, Peter?

Speaker 1:

I think that's very good advice. What I would say is that, as humans, we are all different. It's very tempting to come into your institution, look at the leaders there and think, I need to be just like them, and there is some value in that. However, what I think is really important is for every person to look at themselves, understand what their strengths are, where their areas for growth are and what their passions are, and really think, and talk to others, about how they can develop to be the best they can be and get the most out of their career, both in terms of success and in terms of enjoyment. Because everyone is different; everyone has a different set of abilities and limitations, and working in this type of field, I think it's really important for people to be reflective about their individual strengths and areas for improvement, and to not be afraid to really examine that.

Speaker 2:

Excellent. Well, I couldn't think of a better note to end on. So thank you very much, Peter, and thank you very much, Aydogan, for joining us today for this podcast episode. It's been a pleasure talking about AI, and a pleasure talking about biomedical imaging; we've covered so much, and this has been a fantastic conversation from start to end. Thank you for joining us, and see you in the next one.
