
Is Instinct Enough, or Is Adaptability the Future of AI?

By Anish Sinha, Founder of xLyfe


The Limits of Instinct

My pet parrot, Mitthoo, has never lived in the wild and would most likely not survive if he ever had to. He does not know how to forage for food, navigate great distances, or protect himself from predators. He does, however, still instinctually identify potential predators.

 

Safe inside the house, Mitthoo will scream if he spies a large bird (e.g., a crow) perched outside, but he is completely unfazed when a neighbor’s pet dog or cat passes by. Instinctually, he knows he can always fly away from a dog or cat, but a big bird can fly too, so he’s more scared of birds.

 

In his suburban reality, however, he should do the opposite. Indoors, Mitthoo is far more likely to run into a pet dog or cat than a big outdoor bird. His instinct tells him otherwise, because that instinct was forged over millions of years of ancestral parrots being hunted in, and surviving, the jungles of South America. It is trained on volumes of past data now ingrained in his biological neural network. Though useful, it is not trained for his current environment.

 

This captures the core limitation of training only on historical data: it lacks immediate adaptability in new contexts. That limitation has major implications for the way we train LLMs today and in the future.

 

Humans Use Less Energy to Learn than LLMs

LLMs are currently trained on large quantities of human data in massive training centers. Every day, AI companies announce larger and larger training centers with ever-increasing GPU counts and power consumption (e.g., announcements of 1 GW and even 5 GW facilities).

 

Keeping with the theme of this blog post, I asked an LLM how much energy a human consumes from birth to about age 25 (when the prefrontal cortex is fully developed). It estimated that humans consume roughly 70 gigajoules (GJ), or about 19,400 kilowatt-hours (kWh). OpenAI used roughly 1,300 MWh of energy to train GPT-3, and the figure is only higher for frontier models in 2025. Why did OpenAI need a massive training center that consumed 67x as much energy as a human does in 25 years to create an LLM that is arguably worse than the average 25-year-old? Because a human and an LLM start from fundamentally different points.
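For the curious, here is the back-of-the-envelope arithmetic behind that 67x figure, using only the two estimates quoted above:

```python
# Back-of-the-envelope comparison of human vs. GPT-3 training energy.
human_energy_gj = 70                                # ~70 GJ, birth to age 25 (estimate above)
human_energy_kwh = human_energy_gj * 1e9 / 3.6e6    # 1 kWh = 3.6 million joules
gpt3_energy_mwh = 1_300                             # ~1,300 MWh reported for GPT-3 training

print(f"Human, birth to 25: {human_energy_kwh:,.0f} kWh")            # ~19,444 kWh
print(f"GPT-3 training:     {gpt3_energy_mwh * 1_000:,.0f} kWh")     # 1,300,000 kWh
print(f"Ratio: {gpt3_energy_mwh * 1_000 / human_energy_kwh:.0f}x")   # ~67x
```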

 

A human baby is born preloaded with a neural network full of information and knowledge. We call this instinct. LLMs start with randomized weights (i.e., from complete scratch): tabula rasa, as John Locke put it. The energy consumed in LLM training is the cost of humans trying to compress into a few months what took nature and evolution billions of years to create: preloaded, foundational knowledge, or “Synthetic Instinct”.
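To make the tabula rasa point concrete, here is a minimal PyTorch sketch: an untrained network begins as random noise, while a “preloaded” one starts from weights learned elsewhere. The pretrained values below are illustrative stand-ins, not real pretrained weights.

```python
import torch
import torch.nn as nn

# Tabula rasa: a freshly constructed network's weights are just random numbers.
blank_slate = nn.Linear(4, 2)
print(blank_slate.weight)  # random values; no knowledge encoded yet

# "Synthetic Instinct": the same architecture, but preloaded with weights
# produced by an earlier (expensive) pretraining run. These constants are
# placeholders standing in for real pretrained weights.
pretrained_state = {"weight": torch.ones(2, 4) * 0.5, "bias": torch.zeros(2)}
preloaded = nn.Linear(4, 2)
preloaded.load_state_dict(pretrained_state)
```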

 

Once an LLM has Synthetic Instinct, it is then fine-tuned and adapted to the current world. Fine-tuning still takes around 100 MWh, an order of magnitude less than pretraining but still vastly more than a human needs. That is because the brain is the most energy-efficient learning machine we know of today. I am curious whether the energy consumed to train a frontier AI model totals the same amount of energy it took nature to build instinct into human brains over hundreds of millions of years of evolution.

 

AI, Learning from Nature

Learning to build advanced LLMs and multimodal models has been a necessary step in our understanding of how to build artificial intelligence. The experience has taught us how to construct the minimum threshold of synthetic instinct needed for a generalizable neural network, surpassing the specialized networks of the 2010s. As we continue to take inspiration from nature, it becomes painfully obvious that our current methods of producing frontier LLMs will not produce the next iteration of AI models, because current models lack one important trait: adaptation.

 

Instinct is not enough; animals must also adapt. Adaptation works by learning new information and integrating it with existing knowledge, continuously. That last word is very important: continuously. Currently, to embed new information into an LLM, you must perform the complete fine-tuning process again. To embed significant amounts of new information, you must start from scratch and repeat the entire pretraining and training process, an extremely energy- and resource-inefficient task, as the numbers above show. If biological creatures had to repeat that same process every time they learned new information, life would never have survived. The future of artificial intelligence is continually learning algorithms (or adaptive learning algorithms): algorithms that allow neural networks to learn new information while preserving and building on top of previously learned information, dynamically and continuously (a minimal sketch follows below). This models the neural plasticity that allows biological brains, human and animal alike, to learn in dynamic, ever-changing contexts and environments.
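As a rough illustration of the idea (not any specific production system), here is a minimal continual-learning loop in PyTorch using experience replay, one of the simplest ways to integrate new examples while rehearsing old knowledge. All names and data here are hypothetical stand-ins:

```python
import random
import torch
import torch.nn as nn

model = nn.Linear(8, 1)                     # stand-in for a real network
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
replay_buffer = []                          # small memory of past examples

def learn_continually(x, y, replay_size=4):
    """Update on a new example while rehearsing a few old ones,
    so new knowledge is added without overwriting the old."""
    replay_buffer.append((x, y))
    batch = [(x, y)] + random.sample(replay_buffer, min(replay_size, len(replay_buffer)))
    opt.zero_grad()
    loss = sum(loss_fn(model(xi), yi) for xi, yi in batch) / len(batch)
    loss.backward()
    opt.step()

# New information arrives one example at a time; no full retraining cycle.
for _ in range(100):
    x, y = torch.randn(8), torch.randn(1)   # placeholder stream of new data
    learn_continually(x, y)
```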

 

An important nuance when designing adaptive AIs is determining the right amount of preloaded information, or synthetic instinct, to bake into the artificial neural network. Looking at nature: though humans share roughly 98.8% of their DNA with chimpanzees, humans have surpassed chimpanzees in nearly every cognitive domain. This may be attributed in part to the fact that humans are born preloaded with less information than chimpanzees. According to Cornell researchers, “this prolonged period of immaturity and helplessness [in humans] … is actually an evolutionary advantage”. Great instinct is important, but too much instinct leads to less learning overall. The human brain is instead optimized for prolonged learning over a long developmental window, which makes humans excellent at adapting, experimenting, creating, and pushing beyond mere survival. Discovering the Goldilocks amount of Synthetic Instinct to preload into artificial neural networks will give them the capacity to learn continuously and adapt effectively to the real world. Once you can learn from your environment, you don’t have to reinvent the wheel.

 

The Benefits of Adaptive Learning

Continually learning algorithms will overcome three challenges that frontier LLMs suffer from today: high energy consumption, lack of real-time information, and bias in training data.

 

Because AI training and fine-tuning require significant amounts of energy, training is costly and accessible only to big companies with deep resources. Adaptive learning models, however, will be able to learn new information and embed it directly into their weights without undergoing complete retraining cycles. This will let any company or organization customize its own models without consuming massive amounts of energy and computational resources, reducing upfront financial investment and making AI model creation more accessible.

 

LLMs also suffer from knowledge cutoff dates, handicapping their ability to answer accurately in ever-changing contexts (e.g., news, finance, research). Because training currently happens in batches, companies and research labs must commit to a specific date range of high-quality data before the next training cycle begins. By the time training is complete, the LLM, no matter how frontier, is already out of date. Adaptive learning models, by contrast, would thrive in ever-changing environments, continuously integrating new information and always providing the most up-to-date answers.

 

Finally, adaptive learning will help address an issue that has plagued machine learning since the beginning: bias. AI bias results from many factors, including the availability of data, the type of data included in the training set, and subconscious human biases programmed into the machine-learning pipeline itself. Once a bias is discovered, designers must “fix” the training data and redo the entire training cycle in the hope of reducing it. Finding quality data that is relatively unbiased is such a difficult and time- and resource-consuming problem that entire billion-dollar startups exist just to solve it. With adaptive learning algorithms, once a bias is discovered it is easier to course-correct early, feeding the model new data before the bias spirals and propagates.

 

The Future is Almost Here

Adaptive learning algorithms are not easy to develop. Current challenges include catastrophic forgetting (the model forgets old information as it learns new information), overfitting (to either old or new data), scalability (models need more computational resources as they accumulate information), and resource constraints (continual training requires efficient compute and memory management). With that said, we are already starting to see early implementations. Though not complete adaptive learning systems, both Perplexity and Grok (xAI) have been able to simulate similar adaptive behaviors in very distinct ways.
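Before looking at those two, it is worth sketching how research attacks the hardest problem on that list. One published approach to catastrophic forgetting is elastic weight consolidation (EWC; Kirkpatrick et al., 2017), which penalizes changes to the weights that mattered most for earlier learning. The PyTorch sketch below is a minimal illustration; the weight snapshot and Fisher importances are placeholders, not values from a real model:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)

# After finishing an old task, snapshot its weights and an importance
# estimate (the diagonal of the Fisher information) for each parameter.
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # placeholder importances

def ewc_penalty(model, lam=10.0):
    """Quadratic penalty discouraging drift from weights the old task relied on."""
    return lam / 2 * sum(
        (fisher[n] * (p - old_params[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )

# While learning the new task: total_loss = new_task_loss + ewc_penalty(model)
```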

 

Perplexity implements a Retrieval-Augmented Generation (RAG) architecture: it crawls the web in real time and provides that information to an LLM, which generates the response. This leverages both the crawler’s ability to find up-to-date information and the LLM’s ability to generate a cohesive, human-understandable answer.
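In outline, a RAG pipeline of this kind looks roughly like the sketch below; the `retrieve` and `llm` callables are hypothetical stand-ins, not Perplexity’s actual components:

```python
def answer_with_rag(question, retrieve, llm):
    """Minimal RAG loop: fetch fresh documents, then let the LLM
    ground its answer in them."""
    documents = retrieve(question)          # e.g., a live web crawl or search index
    context = "\n\n".join(documents)
    prompt = (
        "Answer the question using only the sources below.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)                      # response grounded in the retrieved text
```

Note that the model’s weights never change here; the freshness comes entirely from retrieval, which is why this simulates adaptive behavior rather than being true adaptive learning.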

 

Grok takes this one step further and performs reinforcement-based learning in production, using a combination of human feedback, self-play, synthetic data generation, and automated reasoning routines with human oversight. This allows Grok to be “smarter than Grok a few days ago” (Elon Musk).

 

Conclusion

Though my parrot cannot yet accurately rank the threat levels of dogs, cats, and large birds, with enough real-life experience Mitthoo will adapt to his new context and build on top of his existing intuition. This is the future of AI.

 

 


Pictures of Mitthoo