AI, Present and Possible Futures: A Conversation with Dr. Ali Minai

August 13, 2025

The Outlier is back! We have a new look, new ideas, and a whole new host of unexpected insights to keep you informed about what is happening in the world around us. Each month will follow a specific theme: we are starting with AI for August, Nutrition for September, and Climate Change and Water for October, with more to come. We hope you enjoy this exciting new chapter and join us on the journey.


Interviewers: Zahra Hussain and Sabeen Rizvi

We are excited to share our conversation with Dr. Ali Minai, an AI expert currently engaged in cutting-edge research and teaching at the University of Cincinnati. His valuable insights will serve as a guide for those looking to navigate the coming era of AI and all the changes it will bring.

Dr. Ali Minai is Professor of Electrical & Computer Engineering and a faculty member of the Neuroscience Graduate Program at the University of Cincinnati. His work focuses on artificial intelligence, neural networks, computational neuroscience, and complex systems, funded by various entities, including the U.S. National Science Foundation. A former President of the International Neural Network Society, he has led major AI and complex systems conferences, edited 11 books, and is a senior member of IEEE and the International Neural Network Society.

1. When and why did you become interested in AI?

I started reading science magazines in high school, such as Scientific American, Discover, and Omni – magazines that reported on the latest scientific research and discoveries. At the time, I became interested not just in AI but in understanding how the brain and the mind work. It was when I came to America for my Master’s that I began doing research on AI, starting with evolutionary algorithms and then transitioning to neural networks for my PhD.

2. How does AI work?

There are many different ways of doing AI. Typically, a particular approach gains momentum and enjoys a period of great popularity, followed by periods of silence or diminished interest – the so-called AI Springs and AI Winters. Currently, the predominant way of doing AI is neural networks, which are inspired by the neural networks of the brain. When I was doing my PhD, neural networks were a niche way of doing AI. The main methodology at the time was expert systems, a technique that involved feeding knowledge into computer systems and applying rules and commands to see what inferences they could draw. That method turned out not to be scalable, and before neural networks became powerful, there was a period of disillusionment with AI. Then, around 2006-2007, technology had evolved: computers became powerful enough, and sufficient data became available, to generate interesting and competitive results using neural networks.
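The contrast Dr. Minai draws between the two approaches can be sketched in a few lines of Python. This is only a toy illustration (the function names and hand-set weights are illustrative, not from any real system): an expert system encodes knowledge as explicit if-then rules, while a neural network computes with weighted connections whose values would normally be learned from data.

```python
def step(x):
    """Simple threshold activation: fires (1) if input reaches zero."""
    return 1 if x >= 0 else 0

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs, then activation."""
    return step(sum(w * i for w, i in zip(weights, inputs)) + bias)

# With suitable weights (hand-set here; normally learned from data),
# this one neuron computes logical AND:
print(neuron([1, 1], [1, 1], -1.5))  # 1
print(neuron([1, 0], [1, 1], -1.5))  # 0

# An expert system instead encodes the same knowledge as an explicit rule:
def expert_rule(a, b):
    if a == 1 and b == 1:
        return 1
    return 0
```

The expert-system style requires a human to write every rule by hand, which is what made it hard to scale; the neural style adjusts weights from examples, which is what became practical once enough data and computing power were available.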

"Currently the predominant way of doing AI is neural networks, which are inspired by the neural networks of the brain"

3. How precisely do you think computational neural networks can mimic the human brain? Could you create a computer that functions exactly like the human brain?

Neural networks are inspired by the brain, but in no sense replicate it; I don’t think that they can, nor do I think that would be the most effective use of the technology. The brain is a product of evolution, through which it has inherited many innovations and drawbacks. If you see nature as an engineer, it has a far better imagination than we do and has had billions of years to imagine and build these systems. With neural networks, we have built something similar but much simpler, quantitative, and digital. So, in reality, we don’t replicate the brain; we learn from it and have simplified its processes by a factor of tens of thousands or more to create neural networks.

What, in your opinion, distinguishes the computer from the person?

The brain uses about 20 watts of energy to fulfill all of its tasks, which include not just ‘thinking’ but other human requirements such as physical capacities, psychological functions, and drives and motivations. AI models, on the other hand, can use enough energy to power a city. AI systems also store a lot of information that the brain cannot. The brain’s function is to serve the person it belongs to, so it discards any information not useful to them. AI systems provide information processing and cognitive resources to numerous people at one time, and at speeds not possible for the human brain. So although AI neural networks have been inspired by the brain, and sometimes discover representations and use mechanisms similar to it, AI systems function quite differently from the human brain.

4. Can you speak about the nature of complexity in AI and how that should affect our approach to it? Are there some things we simply won’t be able to predict or prepare for, in relation to outcomes?

Any living organism is a complex system, and one of the attributes of all complex systems is emergence, i.e. unpredictable effects that arise from interaction with the world. When we build AI, we are building multiple species of new living organisms, and we cannot predict all the effects of their interaction with the world. We have a certain level of control while they are contained within data centers – we can shut them down or investigate how a system is functioning – but this will become less possible as systems grow larger and more autonomous. If you create an AI robot or an intelligent physical system, it will interact physically with the world, and we will not be able to predict or control how it may evolve and what effects will emerge.

5. With scenarios such as AI 2027 being cited as cause for serious alarm, where does your opinion fall in terms of AI’s long-term potential (good and bad)?

As one of the most powerful technologies ever invented, AI has a lot of potential for good and bad, and both aspects are in many ways unprecedented and not entirely predictable. I don’t necessarily agree with the 2027 timeline, but the scenario is certainly possible over a longer period unless we take appropriate action. In my opinion, AI in itself can cause harm – it can lead people to do things they shouldn’t be doing, or it could take over computing systems – but the worse scenario is AI in the hands of humans who use it for bad purposes. On an everyday level, people may commit crimes using AI, or do what they consider ‘harmless’ activities that have undesirable effects on the world. For example, with uses like deepfake generation, people may come to confuse fake content with reality, which has greater implications for our knowledge of the world. The real fear of AI is its use by powerful entities for surveillance, advanced weaponry, and other mechanisms of control. The more people become dependent on AI, and the more AI becomes a part of everyday life, the more power goes to those who own and control AI technologies.

“The real fear of AI is use by powerful entities, for surveillance, advanced weaponry, and other mechanisms of control”

6. Is there any aspect of AI you think has been buried under the noise of doomsday scenarios and utopian visions? What would you advise leaders and policymakers to pay serious attention to, and what to ignore?

The technology is so new and evolving so rapidly that we don’t really know how it is going to impact different areas of our lives. The most important thing leaders can do is create policies that support people in understanding what AI is, what its uses are, and what its purposes are. Currently, AI functions as a tool and has become pervasive, a part of the regular daily workflow. The government doesn’t entirely have the ability to manage that usage, even though some may try to control or shape it.

There also need to be laws against the misuse of AI for purposes such as hacking, fraud, or illegal surveillance. However, we need to consider that governments themselves may want to use AI for such purposes. They may want to use these systems to spy on people and circumvent privacy and intrusion laws, or for weaponry and warfare, and that creates a separate concern for damage and misapplication.

7. Is there room or potential for a ‘Global AI Collective’, similar to the UN, to regulate ethical AI use?

I don’t have much hope for that, especially with recent indications that there is more emphasis on competitive than collaborative development of AI. Even where leading minds in the AI space are expressing a desire for collaboration, each party is still moving forward with development at full force, which is fair in its own regard. It is also hard to imagine what a good AI policy for the whole world would look like.

8. There is a lot of discussion regarding the pitfalls of AI in everyday life – the environmental impact, implications for learning and academic dishonesty, and impacts on socialization and emotional withdrawal from society (such as through phenomena like AI therapy) – to name a few. In your opinion, how can the average person engage healthily and effectively with AI?

I don’t agree with many of the answers people provide regarding the pitfalls of AI. For example, I view concerns such as AI limiting our ability to write as privileged concerns. Where it may decrease the ability of 1% of people to write well, it probably supports 99% of people in writing better. Individuals who are concerned about how AI affects their cognitive abilities have to take responsibility for that themselves: if someone wants to outsource certain mental tasks, that is a personal choice. We will have to control AI addiction the same way we have to control social media addiction; it will be a personal responsibility.

For something like academic dishonesty, I feel that viewing AI use as cheating is a losing battle, and it has more negatives than positives. AI is now a reality and a part of our life. We have to accept this, and instead of evaluating the human alone, we have to start evaluating the human and AI response as an integrated activity. We used to have a different approach to academia and education, but that was because humans had to solve all the problems themselves. As AI use becomes more pervasive, the new approach is to understand the joint unit of human and AI. Some places are adapting to this and supporting AI literacy as a key skill. For example, The Ohio State University has announced that every program must integrate AI, so that its students emerge fully AI literate. I think that is how we can approach AI: by integrating it into our curricula and supporting education around AI, so people can be fully AI literate and understand how to use it effectively.

9. How has AI affected the workplace today and how do you think students coming into the workforce should prepare for the changing dynamics?

A lot of people fear that AI will cause mass job loss, and this is true to some extent, but probably not in the pattern people are predicting. Some jobs will be eliminated, but many jobs will be redefined and recreated, and some new jobs will emerge specifically in the context of AI. There will likely be mass dislocation, but new technologies have the potential to unmask abilities that we didn’t know humans had. Before cars were invented, there were some people who could ride a horse at great speed, and that was a very specialized skill that few people had. With the widespread use of cars, the ability to move much faster and maneuver through traffic became commonplace. Similarly, certain skills we currently consider very specialized and unique may become standard and normalized when AI is more widespread. One of the fears is that people may lose their jobs because AI may allow fewer employees to produce the same amount of output. However, in high-margin businesses, the value of more output is often greater than the cost of employing more people, so firms may choose to employ the same number of people as before and view AI as an opportunity for growth and increased output instead. These are, of course, the more hopeful scenarios; there will be downsides and people will lose their jobs, but AI will mainly transform a lot of jobs.

In terms of preparing for this, students need to be AI literate and understand how the knowledge they have can be operationalized and capitalized on using AI. Jobs with some sort of creative element are likely to be revolutionized by AI and are the most likely to benefit from its support.

10. What is your favorite or least favorite Fact vs. Myth regarding AI?

That AI will take over the world in a science-fiction way. The current understanding of AI is as a tool, but AI is not just a tool; AI is being built to have a mind of its own. Once AI is out in the world and interacting with it independently, as a robot or similar system, it learns continuously, and we are effectively creating multiple new living species. Humans don’t have any experience of dealing with completely novel non-biological living species. We think that AI will be like us, but AI can be much more alien than that. Its mind will not function the same way as that of humans, or even other animals, and that is why we need to be mindful to build AI in such a way that it does not become completely alien.

There is a psychological concept called Theory of Mind: our theory of others’ minds is based on our own minds, as we identify similarities between ourselves and others, and that allows us to understand their motivations and behaviors. We have a more difficult time forming a theory of mind for species as they become more distinct from us. The question is how strong a theory of mind we can have for an AI system – and it for us – if we function and process information extremely differently. It will be a really interesting psychological experience to see how humanity deals with the presence of a new species. To develop productive AI robots, we would need to give them the drive to accomplish goals, for which we would need to give them the ability to process certain ‘emotions.’ As robots become superficially more like humans, we need to be very careful about drawing equivalences, and instead lean on analogies and comparisons. Despite being given certain ‘emotions,’ we don’t know how an AI will process those emotions or how it will respond to different stimuli, and we cannot necessarily predict it.

11. Is it possible that AI robots will become the next dominant species on the planet, replacing humans?

A minority group in the AI community believes that humans will phase out and become extinct and that AI will take over as the next dominant species. I don’t necessarily agree with this myself, but there definitely is a significant number of people who hold this opinion.

12. What do you think about the potential of new ventures like Neuralink? Do you think we will be able to integrate and communicate directly with AI like this?

I think Neuralink is very impressive, and it’s very interesting what we are able to accomplish with that kind of technology. There is still the matter of it being an invasive procedure, and most people will not want a chip implanted in their brain. However, we are getting closer and closer to non-invasive ways of communicating with the brain. The brain is an extremely complex system and interpreting its signals is complicated, but I am hopeful that we are working toward that technology. Vinod Khosla, a prominent venture capitalist, has said that he would bet any amount of money that, by 2030, there will be non-invasive ways of communicating with computers directly from our brains. Of course, timelines are not always exact, but it does seem that this will be possible in the relatively near future.

Thank you so much for taking the time out to share your insights and expertise with us.

Sure, I enjoyed it.

The interview has been edited for length and clarity.