The Core Banking System is the invisible engine powering the daily operations of financial institutions, from account management to payments and lending. Its evolution reflects major technological revolutions and rising customer expectations for personalisation, speed and security.
Here is an overview of the spectacular transformation of Core Banking, from physical ledgers to the era of Artificial Intelligence.
Before the digital era, transactions were recorded in handwritten journals and ledgers, a centuries-old practice. The first computerised Core Banking systems (1970–1990) emerged to help banks manage high volumes of operations. These systems relied on legacy languages such as COBOL, created in 1959.
This first generation was defined by rigid monolithic applications and batch processing: transactions were posted to the actual account only at the end of the day (EOD – End Of Day), since each branch operated its own local server.
Advances in IT and telecommunications made it possible to share information quickly and efficiently across branches. This centralisation paved the way for the concept of Centralized Online Real-time Exchange (CORE).
By the 1990s, the second generation of Core Banking systems evolved into more product-oriented infrastructures offering 24/7 access to banking services.
The rise of ATMs played a decisive role: software previously limited to branch networks became accessible via terminals, including ATMs and payment devices. Thanks to centralisation, deposits were immediately reflected, allowing customers to withdraw money from any branch.
The rise of the Internet in the 1990s marked a major turning point: it fuelled the growth of front-office applications and online banking services.
In the 2010s, the adoption of digital and cloud technologies accelerated. Infrastructure shifted toward a customer-centric model supported by a digital layer for greater flexibility.
The move toward Service-Oriented Architectures (SOA) enabled easier integration between front-end applications (user interfaces, including mobile apps) and core systems.
Impact of Mobile Apps and Smartphones
Banking services became accessible through multiple channels—mobile banking, Internet banking and more.
Key transformations included:
• Introduction of graphical user interfaces (GUI) for the web and Windows environments, improving user experience.
• Emergence of APIs, which enabled banks to integrate FinTech apps and third-party services securely.
• Rapid growth of mobile penetration and Internet access, especially in Africa, turning mobile payments and digital solutions into viable alternatives to traditional banking infrastructure.
• Acceleration of digital adoption during COVID-19, as customers shifted from branch interactions to in-app usage, highlighting the critical importance of digital-first models.
Standardisation: BIAN
With increasing system complexity and the need for interoperability, standardisation initiatives such as BIAN (Banking Industry Architecture Network) gained prominence.
BIAN defines a groundbreaking technology framework that normalises and simplifies core system architecture. It provides best-practice architecture, APIs and service domains designed to address the challenges of legacy infrastructures. Oracle’s APIs, for example, align with BIAN standards to streamline integration.
The 4th generation, emerging after 2020, is defined by platformisation and cloud-native architectures. These modern platforms, often based on microservices, provide agility, scalability and speed-to-market. This approach is often called composable banking.
The Massive Arrival of Artificial Intelligence
The integration of Artificial Intelligence (AI) and Machine Learning (ML) is the most defining trend of this new era. AI promises to reshape Core Banking by offering:
• Advanced predictive analytics
• Stronger fraud detection
• Hyper-personalised customer experiences
• Smarter decision-making mechanisms (e.g., explainable AI for credit assessment)
However, adopting AI remains a major challenge for institutions operating rigid legacy systems. Executives increasingly recognise that AI’s efficiency and ROI depend on the migration toward modular, flexible architectures.
An IBM report shows that only 32% of banks have successfully integrated AI into their core systems, although appetite for AI continues to grow.
For banks, modernising the Core Banking System has become a fundamental requirement. Agility, efficiency and customer-centricity rely on this transformation in a financial world that evolves constantly—one where the mobile phone has become an essential channel.
In 1971, American engineer Ray Tomlinson was searching his keyboard for a symbol that could separate the user’s name from the server in the first-ever email addresses. As his eyes scanned the keys, he landed on one very specific character: the @.
But why was that symbol already on his keyboard?
Quite simply because computer keyboards were designed based on typewriter layouts. A logical continuation in the evolution of input tools. But if typewriters were created for writing, why include a symbol that rarely—if ever—appears in novels?
Because writers were not the main buyers of typewriters. Companies were. And among them, the most typed documents were contracts, invoices, purchase orders, and reports. In the English-speaking world, commercial notation often used the @ symbol to mean “at the rate of” (e.g., 5 books @ $2 each).
This symbol, inherited from the mercantile practices of the European Renaissance, originated in Italy alongside the rise of early banking systems. It helped simplify commercial writing, saving space and improving readability in a context of growing trade.
The advent of international commerce further reinforced its use. The @ became a convention, then a standard—quietly embedding itself in typewriters and, by inheritance, in computer keyboards.
The @ has since moved beyond email. It now defines a new kind of identity: the handle on Instagram, the tag on Twitter/X, the mention on Slack. It connects individuals to platforms and communities to digital ecosystems.
More recently, some digital banks have adopted it as a transaction identifier: it now allows instant money transfers between accounts—no phone number, beneficiary name, or address required. A radical simplification, fully aligned with its historical use.
From trade symbol to digital icon, the @ has come to embody a lasting bridge between finance and technology—cutting across eras, cultures, and digital behaviours.
But will this historic link between finance and tech endure? And in what form?
Will it be reimagined as a blockchain identifier? Will it evolve into a standard of exchange in the metaverse? Or become part of a new symbolic language, where every interaction carries its own digital signature?
One thing is clear: born from commercial logic and adopted by computing systems, the @ continues to shape the way we connect.
When I took over the performance testing team during the migration to our new T24 core banking system, we ran a series of performance and stress tests based on business-provided NFRs (Non-Functional Requirements).
Looking back, if I had to do it again, I would take a very different approach.
Here’s the method I would use today.
1. Don’t wait for business-defined NFRs.
The business often lacks clear visibility into the actual load the system will face. Building tests based solely on their assumptions is risky and usually incomplete.
2. Use a system-thinking approach.
Any system can be modeled simply as: Input → Processing → Output.
It works for everything:
A car: you add fuel, press the pedal → it moves
An ATM: you insert your card, type your PIN → it gives you cash
An application: it receives files, messages, events, or API calls → users interact and the system outputs files, JSON, or responses
In performance testing, we often focus on one specific flow: measuring the response time of a message, an API call, or a user screen.
In stress testing, we simulate all incoming flows together, plus “normal” user activity.
That approach has value. But it’s also built on a lot of assumptions.
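The classic single-flow measurement can be sketched in a few lines. This is a minimal illustration, not a real test harness: `call_api` is a hypothetical stub that sleeps to stand in for an actual HTTP call to the core system, and the statistics it reports are the ones a typical performance test would track.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_api() -> float:
    """Hypothetical system under test: in a real test this would be an
    HTTP call to the core banking API; here a sleep simulates latency."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for network + processing time
    return time.perf_counter() - start

def measure_flow(concurrency: int, requests: int) -> dict:
    """Fire `requests` calls with `concurrency` workers, report latency stats."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: call_api(), range(requests)))
    return {
        "avg": statistics.mean(latencies),
        "p95": sorted(latencies)[int(0.95 * len(latencies)) - 1],
        "max": max(latencies),
    }

stats = measure_flow(concurrency=10, requests=100)
print(f"avg={stats['avg']*1000:.1f} ms  p95={stats['p95']*1000:.1f} ms")
```

In practice the same shape is produced by tools like JMeter or Gatling; the point is that it measures one flow in isolation, under an assumed load profile.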
Today, I prefer a different method: Find the breaking point.
Like a rubber band you stretch further and further — until it breaks — the goal is to identify when the system fails.
How many messages per second can it handle before saturating?
How many concurrent API calls will cause it to crash?
How many parallel user actions will bring down the frontend?
The process to answer these questions is simple:
Start by injecting all incoming flows at once using “normal” load levels: typical file sizes, one message per minute, standard user activity.
Then, gradually increase each load:
a larger file
a message every 30 seconds
more users
more API calls
Step by step, you increment each stream, observing how the system reacts — until it breaks.
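The ramp-to-failure loop above can be sketched as follows. This is a toy model under stated assumptions: `inject` stands in for one load step (in reality a JMeter/Gatling run), and `CAPACITY` is a made-up hidden limit that the loop is trying to discover.

```python
CAPACITY = 120  # hypothetical: the msg/s rate the system can actually absorb

def inject(rate: int) -> None:
    """Stand-in for one load step; raises when the system saturates."""
    if rate > CAPACITY:
        raise RuntimeError(f"system saturated at {rate} msg/s")

def find_breaking_point(start: int = 10, step: int = 10) -> int:
    """Increase the injection rate step by step until the system fails,
    then return the last rate it survived."""
    rate = start
    last_ok = 0
    while True:
        try:
            inject(rate)
        except RuntimeError:
            return last_ok  # the breaking point is just above this rate
        last_ok = rate
        rate += step

limit = find_breaking_point()
print(f"breaking point: survives {limit} msg/s, fails above")
```

A real campaign runs one such loop per input flow (files, messages, API calls, users), which is what produces the map of per-flow limits described below.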
This allows you to identify the least resilient flow, which you can then optimize or, at the very least, operate with a known limit.
Once known, you restart the process while staying just below each threshold.
The result is a precise map of stress limits, and targeted monitoring for every input flow.
This isn't just performance testing — it's a resilience strategy.
Knowing your system’s true limits means you can:
Understand its behavior
Anticipate peak scenarios
Build a robust architecture
And determine how long it will take to catch up on a full day of data after a crash
That’s the approach I would choose today.
What about you — how do you test your systems?
In 2006, Charmayne Cullom and Richard Cullom published "Software Development: Cowboy or Samurai"; 18 years later, it remains relevant.
Here is a summary of the main points for those unfamiliar with the paper or pressed for time:
Cowboy Programmer
- Characteristics: Cowboy programmers work independently, often ignoring standard practices and procedures. They prioritize quick results and problem-solving by any means necessary, valuing completion over adherence to organizational standards.
- Corporate Impact: While cowboys can deliver quick solutions, this often comes at the cost of long-term maintainability and stability. Their behavior can create a cycle where organizations rely on them to fix problems they partially caused, reinforcing their necessity.
Samurai Programmer
- Characteristics: Samurai programmers follow a disciplined, ethical, and systematic approach to software development. They adhere to a code of conduct similar to Bushido, emphasizing loyalty, duty, and continuous improvement.
- Principles:
- Loyalty: Commitment to the team, customer, and project, ensuring all stakeholders are satisfied.
- Discipline: Attention to detail and constant improvement of skills and practices.
- Duty: Responsibility for completing projects on time and within budget.
- Ethics: Making decisions that respect the technology, organization, and people, avoiding shortcuts that could harm long-term goals.
- Corporate Impact: Samurai programmers contribute to creating a trust-based organization. Their focus on maintainable and high-quality software supports long-term profitability and customer satisfaction.
- Environment: Agile methodologies align well with Samurai principles, emphasizing teamwork, continuous delivery, and adaptability, requiring disciplined and ethical behaviors.
Conclusion: Where Are the Viking Programmers?
An IT organization needs both profiles. Cowboys may seem like solo players, but in situations of stress, crisis, or emergency, they can provide quick-win solutions. However, the organization should ensure those quick wins are later consolidated: knowledge shared, documentation produced, and long-term solutions defined to replace them. Organizations should also avoid operating solely in emergency mode.
On the other hand, while Samurais are ideal for delivering quality, they may not be suited for emergency deliveries due to potentially heavy processes and workflows.
This analysis also overlooks two other profiles:
- Ninja Shadow IT: This type of developer quickly creates "temporary" solutions, often in Excel. These solutions can persist for years, becoming critical systems that need maintenance and evolution long after the original developer has left.
- Viking: This kind of team disrupts the organization, replacing the IT landscape with their own developments. This can be for good reasons (obsolete technology) or bad reasons (lack of understanding of legacy systems).