
Evolution in Action: Software Architecture as Systems Dissolve

The way we build systems is changing. From our history of monolithic systems, through distributed systems, to Internet-connected systems, we are now entering the era of cloud-hosted, microservice-based, pay-for-usage system development. What does the history of software architecture tell us about the challenges of this new environment? And how does our approach to software architecture need to evolve in order to meet them?

Introduction (0:00)

Hello, I’m Eoin Woods. Most of my talks are how-to talks for giving architects a better understanding of how to get a system to production safely. This talk is more philosophical and less how-to, but hopefully still interesting.

Over the last couple of years I’ve stepped back from designing systems every day. Now I look at systems being designed. Some fundamental things are changing in the way we need to build systems, but if you look at the historical context, we shouldn’t be surprised by that.

I’m the CTO of Endava, a mid-size technology services company. I spent about 10 years in product development, so I come from a product engineering background. I helped to build the Sybase database, the Tuxedo Transaction Processing Monitor, and some of the foundational digital-rights management systems that we used in the early days of the Internet.

Prior to Endava, I spent about 10 years working in capital markets for UBS and BGI. I’ve always been a software engineer by profession, and I’ve spent a lot of time on architectural work. I’ve also done all the other jobs at some point.

About two years ago I joined Endava as their CTO. That’s given me a slightly different perspective now. Although I do get involved in projects, I’m no longer personally responsible for the architecture of a system every day.

History of Architecture (2:51)

What I have observed in the last few years is that systems are dissolving. We are now creating systems out of less and less tightly-bound pieces. This observation comes from a certain amount of history.

I’ve been in this industry for some time, and I have actually worked on a mainframe. I was part of the disruptive UNIX revolution. We started off building systems that were very tightly packed, where everything was bundled up together, with procedure calls as our inter-component communication mechanism. Of course, we had databases and other things, and there was some networking.

Fundamentally, software developers thought about stuff as being pretty firmly packed together through the age of the mainframe, the minicomputer, and the standalone PC. Then we started breaking those systems apart, because we hit all kinds of practical problems in dealing with very large monoliths. They were all very expensive, and even though things were moving apart, everything was still pretty firmly wired.

If you look at first-generation distributed systems, client-server-type systems, you know what the pieces are. They’re not going to vary too much at runtime. In fact, varying’s a big deal. You’ve only got a couple of types of pieces, so you know where all the connections are.

Today an architect or a designer or a team might say, “That’s my system, and here’s the hard boundary.” This is getting difficult because actually we’re composing systems out of pieces that are selected at runtime. I’m alluding to things like microservices.

Ages of Software Systems (4:33)

In general, the code that we’re relying on at runtime now is much more loosely bound together. I think that’s causing, if not problems, certainly a reason to pause and think about what good architectural practice looks like. This has happened, I realized, through historical cycles, because we’ve been through five ages of software systems.

We had the monolithic age, where everything’s sort of well-understood, and runs on proprietary stuff, including even the early Unix systems. It’s all very well bound together, all in one place.

After that came the distributed monoliths. This is your classic client-server. It was distributed, but chunky. The bits were pretty big, with very big chunks of code talking to other very big chunks of code. You did not have lots of little pieces.

After distributed monoliths came Internet connected. Many of you came into the industry when the Internet was a given. I remember the excitement of connecting systems to the Internet early on, when we really didn’t know what that meant. Having Internet-connected systems meant we had to solve a lot of problems quickly, because there were all sorts of quality concerns, as in systemic qualities, that we didn’t know how to solve, because we’d never done this before. There was no Google to go and copy, which is what we do today.

How do you solve a problem with scale? A good example from before Google is eBay. The first platform eBay ran on was IBM WebSphere. IBM WebSphere was connected to the Internet to run a large scale auction site. That was iteration one, because that’s all we knew. Admittedly, that decision was changed fairly quickly.

Now, the era I think we’re living in today is where the Internet is the system, with apologies to those of you old enough to have revered Sun Microsystems, where the network was the computer.

I think, today, the Internet is actually part of our systems, and we often connect parts of our systems together using the Internet, sometimes implicitly; many of our systems use the Internet to connect to some portion of the system. It’s no longer some kind of external bastion, with the big, wild Internet out there. We’re using it, and we’re harnessing it, as part of our environment.

I think what we’re moving towards are intelligent connected systems, where it will no longer be good enough to have a highly-scalable, high-quality distributed system available all over the world. We’ll need to have systems that are in some way intelligent.

What do we mean by “intelligent”? And what are we talking about when we talk about cognitive computing and its limits?

Evolution of Architecture (7:48)

What I have observed is that software architecture practice has evolved in conjunction with the historical phases we just discussed, and that’s not an accident. Of all disciplines, software architecture is pretty pragmatic. It develops because there’s a challenge; we get better or we change the way we do architecture to meet the challenge.


Architecture has changed as we’ve gone through those periods. We started off in a monolithic era. Fundamentally, the monolithic era asked: How do you structure computer programs?

When I entered the industry we were starting to talk about systems, but I think the people not much older than me were from an era of having thought mainly about how you structure computer programs. This is the era of languages like FORTRAN, COBOL, and Assembler. If you were looking ahead, you talked about things like Algol and Pascal. The major design problem was: how do we modularize systems effectively?

As an example, imagine I’m working on some project where I have more than a couple of hundred lines of code. There’s more than one or two people working on it in an industrial environment. I need my code to live a long time and I need to constantly change it. How do I modularize large amounts of code?

This is a pretty well-understood problem today. It’s sometimes instructive to look back at where we’ve come from; we have actually solved a lot of problems in a relatively small amount of time. Information hiding, composition, and concurrency all emerged.

There were some pioneers at this point who were not really talking about architecture, but about structuring and solving the problems of their age. It’s people like Nygaard and Dahl working on Simula, which must have seemed like something from outer space, and they introduced it to a world that thought Assembler was the state-of-the-art. There was also Dijkstra and Hoare, working on computational fundamentals and on structuring, and Michael Jackson, whose work was quite fundamental in how we approach the design process.

If you’ve got some primitives for modularization, how do you actually use them? Donald Knuth did a lot of work at the micro level. Pamela Zave, who worked extensively with Michael Jackson, worked in formal methods, requirements, and many other foundational things as well at AT&T.

What I learned from this era was that architecture was a vendor concern. I worked for a vendor at the end of this era, and into the beginning of the next, and it really was our concern. You got your architecture by buying a vendor’s platform, and the architecture turned up with it. If you were an IBM mainframe customer on z/OS - or, as it was then called, MVS - and you bought some IBM products like CICS and DB2, you got the architecture given to you for free by IBM. There was only one way to structure a CICS, IMS DB, or VTAM system.

I worked for Bull, a big French manufacturer, and we had three product lines. We had a big mainframe product line called GCOS8. GCOS intersects with UNIX history through the field in the password file that is still known as the GECOS field. When they developed UNIX, they had a Honeywell Bull mainframe in the labs which they had to communicate with, and they had to store credentials for it in the password file. Bull also produced the SMP version of IBM’s AIX UNIX.

When you thought about architecture, it was implicit. In fact, when people talked about “architecture,” they normally meant the chip architecture. Software architecture really wasn’t something people understood. They had it, but only as a byproduct of buying a vendor platform.

Distributed Monoliths (12:34)

Then the distributed monoliths arrived. It was very exciting, because now, rather than having one kind of thing - a computer with software on it - we had three kinds of things:

  • Clients
  • Servers
  • Databases

This is the classic client-server era. A lot of the batch stuff was starting to be referred to as “online” or “real-time,” meaning interactive, and the major vendors of this period were people like Sybase.

Sybase was very disruptive because they entered the market with the vision that everything didn’t actually have to run on one computer. For enterprise systems, that was quite visionary. This era of languages like Visual Basic and PowerBuilder, which enabled graphical user interfaces, is when software architecture emerged.

If you look at the timelines, when the field of software architecture started to be discussed and when the first books and papers were written, it’s around this time. There’s a famous architecture conference called WICSA, and the first one was held in this period. Software architecture didn’t quite explode yet, but it started being talked about seriously in mainstream industry around this time.

But why? It’s because people now have some architectural concerns. They haven’t got a lot of stuff to play with, but rather than being told by the vendor, “You write your code, you put it here, you compile it, and it’ll all run, trust us,” the vendors are now saying: “Well, we’ve got some bits, so if you want to plug them together, it’ll all be fine. Nothing can go wrong.”

Of course, as soon as you put a bit of wire between two things, stuff can go wrong.

This is what drove the idea of us perhaps needing to think a bit more about how we put a system together at the macro level. The vendors realized they were starting to lose a bit of control, because if you bought the Ingres database, some customers had the audacity not to use Ingres’s 4GL, but to want to use Visual Basic or C++ instead for the client. The vendors realized they no longer controlled the end-to-end, and if they weren’t very careful, some of their customers were going to go elsewhere.

So, they started to try and control what we call the architectural style. The architectural style is the kinds of pieces in the architecture and the rules for how to compose them.

I spent some time working on customer white papers and customer training courses on the client-server style when I was at Sybase. We told clients how to do client-server, and given the way some clients did client-server, it’s just as well we did.

The reason vendors did this, of course, was not purely to make the world a better place, but to keep on controlling the architecture narrative. Where do we fit in the architecture story? So that, as your challenges change, you keep buying our stuff, because buying our stuff is what we’re worried about.

Internet Connected (15:35)

In the early 2000s, the Internet arrived and replaced some of those clunky user interfaces with web-based ones. So we thought, all we have to do now is connect them to the Internet. We thought people will just access us across the Internet and we’ll all be very rich.

Of course, what happened was, we took our distributed monoliths and we just attached them to the Internet. We didn’t really change them very much, and so we got a whole lot of system quality nonfunctional demands suddenly placed on our systems. The whole world actually realized that they were connected to the Internet. Rather than just being online, as in, we need to be there from 8:00 to 6:00 and we need to be quick most of the time, we realized we actually had to be there all the time, which was a bit of a shock for most people who’d been building computer systems up to that point.

If you hadn’t been working in telephone switching, then the idea of never being able to take your system down was a bit of a shock. Most of the system software we were using, like databases, had all been built assuming there’d be periods in the week or the day or the hour when you could take them down for a bit and nobody would mind. There were a whole lot of new challenges, and of course, some of this ended rather badly.

There was a hugely hyped clothing store called boo.com which was going to revolutionize the way the Internet worked, and they would have, if they’d launched a decade later. Unfortunately, they launched in 1999, and after a massive amount of hype and more investor money than had ever been spent, went fully in and out of business in well under two years. They wrote a very interesting book called Boo Hoo, which was about how it went wrong.

(What’s really hilarious is that there is now an online clothing company called boohoo.com. It’s as if the same folks had gone back and could now say, “You see, we were right all along! It works really well! All we needed was a completely different generation of hardware, software, and consumer. Apart from those details, we were absolutely on top of it.”)

Then there was pets.com. They wanted to sell pet food across the Internet from their high-value location in Silicon Valley. That didn’t end well, either.

This is when I really started doing software architecture professionally. There was an explosion of interest in it. There were books written, methods being discussed, lots more conferences, and a real effort to understand the business.

I was working in the Valley at this time, and there was a really strong focus in nearly every engineering team on the non-functionals. If you get those right, you can fix the functions later. If you get them wrong, you won’t be in business to fix the functions. That’s when the whole architecture discipline suddenly developed a fixation on non-functionals, which I think is absolutely correct.

People started realizing that, although these were hard problems, there were repeatable solutions, so styles and architectural patterns started being written down. People started having to write down relatively complicated architecture, so they developed some simple, pragmatic approaches, such as architectural viewpoints.

It was about this time that Philippe Kruchten’s 4+1 model came out. Then people like Rebecca Wirfs-Brock, people like the SEI guys, and people like Nick Rozanski and me, among many others, were thinking about how we help people who don’t want to spend all day thinking about how to do architecture actually do architecture.

The reason for this was that, suddenly, we had a load of problems that were not yet solved by the techniques we today call architecture. It wasn’t random, and it wasn’t a pure academic exercise. The industry had a big problem, and architecture emerged to fill the void.

What were the vendors doing? The vendors, at this point, were mainly trying to fix the non-functionals, because they realized that architecture was key to them staying part of the story.

They were trying to sell you magic solutions - preferably hardware solutions, because those have the best profit margin - for achieving at least one non-functional. Cisco built that business pretty well during the Internet boom on solving non-functional problems which were, fundamentally, scalability and security. Those were the two propositions Cisco used to sell you a whole lot of boxes.

Similarly, Sun made a massive amount of money out of selling a single line of computers: a vertically-scalable system called the E10000. For a short period, you could not buy those for love nor money, until the crash came, and then you couldn’t give them away. I knew engineers who bought them and put them in their basements. Sun just could not make enough E10000s, because the E10000 was the scalability solution for the dot-com boom. It was a vertically-scalable computer that basically brought the power of a mainframe to UNIX processing. Of course, it had a reassuringly expensive price tag; if you’ve just had a massive investment of venture capital, you’ll be needing to spend it on something, so how about a reassuringly expensive set of computers?

Current Era (21:42)

We then moved into the current era, with the Internet as the system. A public API and a mobile user interface are the default ways into systems today. We’ve all got a web interface.

I realized this the other day. I went to the bank website and logged on, because the mobile app doesn’t quite give me the same view of transactions. I realized it was the first time in several months I’d bothered to go to their main website, and they’d redesigned it. “Ooh! Everything’s changed!” That’s nice, but actually I realized how irrelevant that is to me, because I nearly always use the mobile app. “Always on” has become “access from anywhere.” It’s not just about being available 24/7. It’s about being compliant, sensitive, internationalized, and in all these different markets, which is obviously quite difficult.

More dynamic architectural styles are emerging to allow us to deal with change, because there’s a lot more competition. I became interested in where microservices had come from, because in a sense, it’s not a new idea, but suddenly it’s become the thing that absolutely everyone has to do, similar to DevOps.

When I talk to clients about it, they often say, “Well, clearly we’ll be needing a DevOps project right about now, and we’re going to rebuild the whole system in microservices.” Fantastic, because our guys love working in that stuff, and we’re really quite good at it. So I ask them what the business drivers are. They look at me very blankly and go, “Yeah, anyway, about the DevOps, if we can just kind of get that done, that’ll be super.”

We think it’s a good thing, and presumably whoever in your organization signs the check is satisfied that it’s there for a good reason - and I think it is - but I’m not sure people think through that architectural driver.

The vendors, of course, also want to be part of the narrative. They’ve realized that selling bits of stuff, be they bits of software or bits of tin, isn’t working anymore. A number of them have realized they’re about to be disintermediated, partly by the rise of open source, but also by companies’ reducing tolerance for dealing with what you might call “technical accidentals.”

If you’ve got the essence and the accidents, in Fred Brooks’ terms, people know they have to deal with the essence, but they don’t want to deal with the accidental stuff. Vendors realize that actually their value is not in producing the components, but in producing platforms from the components that people can use really easily.

I think the vendors are right in thinking that the value is in assembling platforms, not in selling bits and leaving customers to do very expensive engineering to put the bits together themselves.

Architectural Drivers (25:03)

We ended up here because of some architectural drivers. We need to keep on developing the systems really quite quickly. Quarterly releases don’t work so well anymore. You can even see the big banks starting to realize this.

I was talking to somebody at a huge international asset manager yesterday, and they’re having a new consumer-facing website for investors built by a third party. He talked about needing an on-site team of really good web developers who can work with what the vendor is producing, because for various reasons the vendor is unhappy about shipping new stuff into production for them on a day-to-day basis; they want a longer cadence, along the lines of weeks or months.

They realized that their competition, some fintech and some traditional, are changing their mobile apps and their website on a week-to-week, or even day-to-day basis. He’s going to take a platform from the vendor, who will keep on developing it in the background, but he’s actually going to have close-to-business developers who are putting in small features all the time.

You’ve got to be there all the time, or people will go somewhere else.

User Requirements (26:43)

Traditionally, how do we get our user requirements? We talk to users. We’ve got on-site customers. We’ve got requirements engineering. We’ve got tight integration of IT and the business.

What happens when your users are all over the planet, and they only visit your website once a month? They’re really hard to interact with. What happens if you give them a survey?

I think you’ll find the people who fill out a survey are your fan base. People who are unhappy won’t take the time to fill out a survey. In short, you don’t really know what anyone thinks anymore, and there’s no one to ask.

We need to move into a completely different paradigm where business analysts are as much statistical analysts as they are human analysts. They look at actual behavior as well as running focus groups, doing traditional interaction models, and so on. These unknown users produce unpredictable demand. We can’t predict what happens in the real world, and the real world affects our systems, so we have to have a much more dynamic response to load.
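
To make “analyzing rather than asking” a little more concrete, here is a toy sketch in Python: inferring what users struggle with from recorded behavior rather than from surveys. The events, field names, and features are all invented for illustration.

```python
from collections import Counter

# Invented behavioral events; in a real system these would come from
# a measurement pipeline, not a literal list.
events = [
    {"user": "u1", "action": "viewed", "feature": "statements"},
    {"user": "u2", "action": "viewed", "feature": "payments"},
    {"user": "u1", "action": "abandoned", "feature": "payments"},
    {"user": "u3", "action": "abandoned", "feature": "payments"},
]

# Which features are quietly losing users? No survey required - the
# "business analyst as statistical analyst" queries actual behavior.
abandoned = Counter(e["feature"] for e in events if e["action"] == "abandoned")
print(abandoned.most_common())  # [('payments', 2)]
```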

One of our clients runs a major golf championship. We do the website for that. They’ve been very concerned about Cloud, because some of their golf competitions are their real crown jewels, so if their websites aren’t there, it makes a massive difference in cash flow, but now they’re kind of getting it.

We actually cannot predict how much traffic we’re going to get for a particular golf competition in a particular hour. We should have a much more dynamic platform. We’re part of the Internet, and so, generally speaking, that means we’ve got to be components of the Internet. We’ve got to be consumable by other systems.

That’s where our API requirements come from. Unfortunately, we’re also visible from anywhere. The great thing about the Internet is that it links people together. The only snag is that it means a man in China with too much time on his hands has direct access to your public web interface, so we are constantly being attacked.

Bruce Schneier, the very famous cryptographer, said: when you’re building systems, bear in mind there’s an almost infinite supply of informed, intelligent brainpower which will be ranged against your system. That’s quite a frightening thought, if you’re a security engineer, so we at least partly rely on stupidity some of the time.

New Age Principles (29:12)

Finally, you’re accessed globally, so you’ve got to be compliant everywhere, with regulations you barely understand. That led me to think, what are the principles of this new age, the architectural principles? I think there are six which leap out:

  1. We’ve got to be able to evolve continually. This is something you’ve got to start thinking about early, unfortunately, and most of us don’t think about doing it all the time.
  2. We’ve got to respond to change dynamically. What I really mean there is things like load and scale and security.
  3. We’ve got to get into a period of analyzing rather than asking. When we want to know what people like, what people need, we’re probably going to have to get that from data rather than from individuals.
  4. We’re going to need an API on everything. That’s partly because we’re stuck on the Internet, and people would like to communicate with us by APIs. It’s also because our software is being used in different ways all the time. I’ve seen this in the large enterprise environments in particular. Systems are built, to some degree, that mirror the organizational structure. It’s a bit like Conway’s Law for enterprises. If you’ve got three divisions, you’ll probably end up with three clusters of systems. However, if you then split two of those divisions and combine with another company, you might end up with four-and-a-bit divisions. Suddenly, being able to evolve means being able to use the software in different ways, and that really means being able to attach the functional components to each other differently to form different business processes. That means you’re going to have to put an API on it, because that’s really the only way to do that relatively quickly.
  5. We’ve got to be secure by design. You’ve got to use a static security analyzer, and you need to have formal secure software development training.
  6. We need to internationalize instinctively. Most people do not internationalize instinctively; it’s quite the opposite. We think about our local context, because we’re human beings and that’s what we do. Being able to internationalize in the broad sense, thinking about a very wide possible usage of a piece of software, becomes really important in a corporate environment when you’ve got a lot of M&A going on. At Endava we recently acquired a company that’s based in Holland, and they have operations in Romania, where we are, and they also have operations in Bulgaria, which we have absolutely no knowledge about. They’re going to have to come onto our enterprise platform. Suddenly we have to be able to deal with Bulgaria. In fact, we probably won’t support Bulgarian in the platform; they’ll use English. It’s internationalization in the broad sense of your systems: can they be used in different cultural and legal environments? (There’s a small sketch of this idea just after this list.)
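
As a tiny, concrete illustration of principle six, here is a minimal sketch in Python. The idea is that user-facing text and formats live in an externalized catalog, so supporting a new cultural context is mostly a data change rather than a code change. The locales, messages, and fallback rule are all hypothetical.

```python
from datetime import date

# Hypothetical externalized catalog: adding a locale is a data change,
# not a code change. Real systems would load this from resource files.
CATALOG = {
    "en-GB": {"greeting": "Welcome back", "date_format": "%d %B %Y"},
    "nl-NL": {"greeting": "Welkom terug", "date_format": "%d-%m-%Y"},
    # A new market (Bulgaria, say) can fall back to English until
    # translations exist, as described above.
}
DEFAULT_LOCALE = "en-GB"

def render_banner(locale: str, today: date) -> str:
    """Render a user-facing string from the locale's catalog entry,
    falling back to the default locale when the locale is unknown."""
    messages = CATALOG.get(locale, CATALOG[DEFAULT_LOCALE])
    return f"{messages['greeting']} - {today.strftime(messages['date_format'])}"

print(render_banner("nl-NL", date(2016, 10, 14)))
print(render_banner("bg-BG", date(2016, 10, 14)))  # falls back to English
```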

Principles are all very well, and are very useful things to keep in mind, but what are the actual implications of these?

Implications (32:41)

I think we’ve got to design in continuous delivery from the start. I don’t mean that every system has to deliver to production continuously. That’s not appropriate and completely wrong for some systems. What I really mean is, as architects, if we’re building systems today in this environment, we have a professional responsibility to make sure that we have removed the obstacles that tend to stop continuous delivery working.

For me, they’re nearly always about automation, testing, and deployment. Have we got the ability to automate, the ability to test in a way that allows continuous delivery? Did we accidentally design something into our architecture that’s going to make that really rather awkward?
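
As one small illustration of “the ability to test in a way that allows continuous delivery,” here is a minimal sketch of a post-deployment smoke test that a pipeline could run against every release. The health endpoint URL and the expected payload are hypothetical; real pipelines would run much richer checks.

```python
import json
import sys
from urllib.request import urlopen

# Hypothetical health endpoint exposed by the freshly deployed service.
HEALTH_URL = "http://localhost:8080/health"

def smoke_test() -> bool:
    """Return True if the new deployment reports healthy. A pipeline
    would run this after every deploy and roll back on failure."""
    try:
        with urlopen(HEALTH_URL, timeout=5) as response:
            body = json.load(response)
            return response.status == 200 and body.get("status") == "ok"
    except (OSError, ValueError):
        # Connection refused, timeout, or malformed response: not healthy.
        return False

if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```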

We need to allow modular evolution. By this, I do not mean that every system has to be microservices. I think modular evolution is absolutely possible with something that is currently rudely referred to as a monolith. It’s not always a bad thing to have a monolith. There are certain problems that go away with monoliths, even though monoliths do bring problems of their own.

We are not necessarily talking about microservices in the sense that an esteemed organization like ThoughtWorks would define a proper microservice. Rather, you’re looking for the capabilities of something microservice-like, where you can change a small part and deploy it quickly without wrecking the world. Can you build systems, even if they’re relatively traditional, that allow modular evolution?
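
Here is a minimal sketch, in Python, of what that microservice-like modular evolution can look like even inside a single deployable: callers depend on an explicit interface, so the implementation behind it can be replaced, or later extracted into a separate service, without wrecking the callers. All the names are illustrative.

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """An explicit module boundary: callers depend on this interface,
    never on a concrete implementation."""
    def charge(self, account_id: str, pence: int) -> bool: ...

class InProcessGateway:
    """Today's implementation, living inside the 'monolith'."""
    def charge(self, account_id: str, pence: int) -> bool:
        print(f"charging {account_id} {pence}p locally")
        return True

def checkout(gateway: PaymentGateway, account_id: str, pence: int) -> None:
    # The caller neither knows nor cares which implementation it holds,
    # so the gateway can later become a remote service behind the same
    # interface without this code changing.
    if gateway.charge(account_id, pence):
        print("order confirmed")

checkout(InProcessGateway(), "acct-42", 1999)
```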

I think, today, we have to assume Cloud deployment. By Cloud, I do not mean, you must put your system on Amazon. What I mean is that for Cloud deployment to be successful, there are certain things about it likely to be true. There’s the “cattle not pets” principle. We don’t expect a load of hand-crafted servers and a lot of manual interaction to build environments. We don’t want one-off special cases. We don’t want static config bound into the application. We want it all externalized.

Of course, a good start for this is the 12factor.net advice, which I think is all really good. There’s nothing in there to argue with, but I think there is more to it. I think a lot of it’s at a quite micro level. I was talking to a chap from one of the big banks who was saying, yes, they’ve been looking at 12factor, and their conclusion is that it is necessary but not sufficient. 12factor is really useful advice, but don’t stop there.
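
One of the 12factor.net points, configuration in the environment, is easy to show. In this minimal sketch, nothing environment-specific is baked into the application, so the same artifact can be promoted from test to production unchanged. The variable names and defaults are hypothetical.

```python
import os

def load_config() -> dict:
    """Read deployment-specific settings from the environment, so the
    same build runs unchanged in every environment ('cattle, not pets'
    applies to configuration as well as to servers)."""
    return {
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///local.db"),
        "listen_port": int(os.environ.get("PORT", "8080")),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }

config = load_config()
print(config)
```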

Put measurement into the system. An observation I have of older systems is that if somebody said, “I’d like to do a lot of analytics on the activity going on in this system, and I’ve got half a million lines of code already,” it would not be clear to me how I would give them those analytics, because we didn’t really think about measurement when we were building those systems seven or eight years ago. Yes, we’ve got production measures like CPU and transactions, but we want to know what’s going on in the system, so that we can run analytics on it and no longer have to go and interview users. Would it be possible to do that with your architecture, or would you have to add a significant block to it?
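
A minimal sketch of building measurement in from the start: the application emits structured business events as it works, so analytics can be run later without retrofitting half a million lines of code. The event names and fields are hypothetical, and a production system would send these to a log pipeline rather than stdout.

```python
import json
import sys
import time

def emit_event(name: str, **fields) -> None:
    """Write one structured event per line; in production this would go
    to a log pipeline or event store rather than stdout."""
    record = {"event": name, "ts": time.time(), **fields}
    json.dump(record, sys.stdout)
    sys.stdout.write("\n")

# Instrumentation sits alongside the business logic from day one.
def search(user_id: str, query: str) -> list:
    emit_event("search_performed", user=user_id, query=query)
    results = []  # ... real search would happen here ...
    emit_event("search_results", user=user_id, count=len(results))
    return results

search("u-123", "golf tickets")
```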

Structure around public APIs. I must admit, I’ve believed in this since back in the ’90s. Every API you produce, think about it as if it’s a public API, and secondly, every component you create should have an API. It’s just the easiest way to build systems for the long term, rather than having lots of special cases.

I’ve called it the Amazon pattern because, though it may be an apocryphal story, apparently Jeff Bezos at some point sent a memo around Amazon saying, “If you don’t put a public API that’s potentially accessible by the world on your components, I’m going to fire you.” It seems like he’d have been an awfully busy guy if people hadn’t actually done it; he’d have had to run around firing a lot of people.

Apparently Amazon uses this right through their infrastructure: on everything you build, you stick an API that, in principle, could be exposed to an Amazon customer. It’s apparently one of the ways they get agility in their products, because they don’t expose everything to begin with. They expose a small number of things, but if they find that there’s genuine customer demand for another bit, well, they’ve got the API already. I’m sure they need to tidy it up, think through the security again, and make sure that it works with the others, but they can expose it. It was built to be exposed.
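
As a toy illustration of the Amazon pattern described above, here is a minimal internal component that speaks HTTP and JSON from day one, so exposing it to customers later is a security review rather than a rebuild. The endpoint, port, and payload are all invented.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryAPI(BaseHTTPRequestHandler):
    """Even an internal component speaks HTTP and JSON, so exposing it
    to the outside world later is a decision, not a rebuild."""
    def do_GET(self):
        if self.path == "/stock/widget-7":
            body = json.dumps({"sku": "widget-7", "available": 42}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Bound to localhost here; 'built to be exposed' does not mean
    # 'exposed by default'.
    HTTPServer(("localhost", 8080), InventoryAPI).serve_forever()
```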

Design and build to be securable. I’m not saying that everyone has to do security scanning on every piece of code all the time. You don’t all have to do threat modeling all the time, but development teams need at least a nodding acquaintance with security practice: at least a basic understanding of threats, threat modeling, and risk analysis. In this era, when the Internet is part of the system, these are basic professional skills which should be there along with TDD, clean code, an understanding of network concurrency, being able to use different database models, and continuous integration.

There’s stuff we all expect each other to be able to do. If you’re going to connect to the Internet, there needs to be at least a basic understanding of how to build a secure system.
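
As one example of the basic security skills in question, here is a minimal sketch of why parameterized queries matter, using sqlite3 from Python’s standard library. The schema and data are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Unsafe: string concatenation lets the input rewrite the query.
#   conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")

# Safe: a parameterized query treats the input purely as data.
row = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchone()
print(row)  # None - the injection attempt matches no user
```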

The Future (38:38)

What does the future look like? I think it’s intelligent connected. What I’m really interested in is what the future of the software architect looks like. I think that data and algorithms are going to become quite important for achieving your architectural qualities.

A number of the architectural qualities we’re talking about require some sort of analytics. That suggests data and algorithms suddenly become much more important. My observation, having been a software architect in the previous era, is that we tend to push data and analytics under the rug and say they’re the sort of details the architect shouldn’t have an opinion on, details that should be delegated to the empowered teams who are responsible for pieces of the system.

If you’re going to get meaningful analytics across the system, that’s probably not going to work so well. You’re going to have to take more ownership collectively. I’m not prejudging your architectural model, but when you come to do architecture, there’s going to have to be a bit more thought about what sort of data we have, what sort of algorithms we could meaningfully run on it, and what we need to run on it.

I think architecture’s becoming more and more runtime emergent. This isn’t the emergence that Neal Ford talks about with Agility; that’s emergence at design time. This is emergent at runtime. This is the stuff that Adrian Cockcroft talks about a lot in microservices. You do not know what your architecture is. You know what the principles behind it are. At any point in time, if someone asks you what is connected to what, you have enough instrumentation in place that you could answer the question, but actually, you don’t know until you go and look. The architecture evolves over time at runtime.
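
A minimal sketch of the kind of instrumentation that makes a runtime-emergent architecture answerable: every inter-service call records a (caller, callee) edge, so “what is connected to what” becomes a query over observed behavior rather than a diagram. The service names and traffic are invented, and a real system would record edges in tracing middleware rather than by hand.

```python
from collections import Counter

# Observed call edges: (caller, callee) -> call count.
call_edges: Counter = Counter()

def record_call(caller: str, callee: str) -> None:
    """In a real system this would be emitted by tracing middleware
    (wrapped around HTTP clients, say), not called by hand."""
    call_edges[(caller, callee)] += 1

# Simulated traffic discovered at runtime.
record_call("web", "orders")
record_call("orders", "payments")
record_call("orders", "payments")
record_call("web", "search")

# The 'architecture diagram' is now a query over observed behavior.
for (caller, callee), count in sorted(call_edges.items()):
    print(f"{caller} -> {callee}: {count} calls")
```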

Of course, these vendors aren’t going to go away, and I think you can already see this with IBM’s massive marketing of intelligence on Bluemix. The vendors are going to be selling us intelligent behavior.

You can use SaaS-based services already for machine learning. I think that’s going to get far deeper, and these PaaS platforms are going to have a lot more intelligence built into them, if you feed them the right data. Our “access from anywhere” becomes intelligent assistance.

Intelligent Connected (40:57)

How does this actually affect us day-to-day? I think we’re going to be doing less detailed structural design and more data and algorithm design.

I think our defined structure is going to become more emergent, so we’re going to be only putting principles and patterns in place. I think we’re going to be defining specific structures quite a bit less. That means that we will still be making decisions, but we’ll also be putting a lot of principles, policies, and algorithms in place, rather than point decisions.

Rather than saying, “these three things connect in this way,” we’re going to be saying, “the general interaction patterns that we are going to use are the following, go and use them,” and the people will auto-wire stuff. Maybe software will auto-wire itself; I’m not quite sure. Slightly scary, but I think, perfectly possible.
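
A toy sketch of a policy, rather than a point decision, expressed as code: an autoscaling rule that the platform applies at runtime. All the thresholds and caps here are invented for illustration.

```python
def desired_instances(current: int, cpu_utilization: float) -> int:
    """Toy autoscaling policy: the architect sets the policy; the
    platform decides the actual instance count at runtime.
    Thresholds and caps are invented for illustration."""
    if cpu_utilization > 0.75:
        return min(current * 2, 64)   # scale out under load, capped
    if cpu_utilization < 0.20 and current > 2:
        return current - 1            # scale in gently when quiet
    return current

print(desired_instances(4, 0.90))  # 8
print(desired_instances(4, 0.10))  # 3
```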

Today, we try to deal with a certain world; I think sometimes that certainty is an illusion. We try to reduce a lot of problems down to very scientific, certainty-based trade-offs. I think we’re actually going to be dealing with probability. If you don’t know who your users are, you’re connected to the Internet, and you’re not quite sure what your architecture is at runtime, you’re basically in a probability game rather than a point-decision business.

Today, I always encourage architects to get deeply involved in operations. There was a great quote at JAX: Software is complete when it leaves production, not when it enters production. I’m a great believer in this. Architects need to be deeply embedded in the operational cycle.

However, I think the whole operational world is going to be moving from very structured, defined, up-front processes to much more policy and automation. We will have certain automated ways of changing the production environment, and a lot of policies governing them, and that’s what architects need to get their heads around. You need to step away slightly from detailed process, and understand what’s possible through automated, policy-based technology, which I must admit is something I’m still learning about.

Even at Endava, where we are really quite advanced in our thinking on this, I’m still getting to grips with it myself, and I think most development architects I talk to aren’t actually aware of what’s possible today, and they’re definitely not aware of what’s coming next. We need to get better at that.

Finance isn’t my favorite topic, but today we tend to talk a lot about Capex: I need this much money over this period, and it’ll depreciate like this. It’s all moving to Opex, or operational expense. It’s all consume-as-you-go.

One of our big concerns is, if I’ve got a credit card, I can now buy anything. We’re going to have to be thinking about costing models and justifications and ROIs, which I think is quite important for architects to understand, much more an Opex model than a Capex model. This means making friends with your finance group, because Capex is pretty straightforward and Opex can be quite complicated. Go talk to them. Take a cup of coffee, because it may take some time.
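
To see why Opex justifications need more thought than Capex ones, here is a toy comparison in Python. Every figure is made up; the point is that the pay-for-usage cost depends on a demand forecast, which is exactly the uncertainty the finance conversation is about.

```python
# Toy comparison, all figures hypothetical.
capex_server_cost = 20_000.0           # bought up front
depreciation_years = 4
capex_monthly = capex_server_cost / (depreciation_years * 12)

cloud_hour_rate = 0.50                 # pay-for-usage
hours_per_month = 730

# Capex is a fixed line; Opex depends on how much you actually use.
for utilization in (0.10, 0.50, 1.00):
    opex_monthly = cloud_hour_rate * hours_per_month * utilization
    print(f"utilization {utilization:4.0%}: "
          f"opex {opex_monthly:8.2f} vs capex {capex_monthly:8.2f} per month")
```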

Conclusion (44:11)

It’s worth looking at the past if you want to predict the future, because monolithic computing led us to start fixing structural problems. Distributed computing led us to software architecture. Internet-connected computing led us to the explosion and the maturity of the architecture we’ve got today.

Each era develops solutions to the problems it’s facing. That’s why industrial practice often leads academic practice in software architecture.

I think the Internet as the System era needs continuous delivery, modular evolution, Cloud enablement, analysis, public APIs, and first-class security in your architectural thinking. If you’re missing any one of those six bits, I think now’s the time to be plugging that into your knowledge and making sure that the systems you’re building comply with those.

In the next era, we’re going to have to learn some new stuff, which is fun, but it’s something we’re going to have to start planning for right now. We need to learn more about data and algorithms. We need to think more about what architecture is in a dynamic world: principles, policies, and patterns.

A lot of us, myself included, think in single-instance, concrete terms. We have to think much more about policy, pattern, and replication. We have to think about operation at a huge, varying scale, which will be a lot more policy-based than process-based. There’s quite a big gap for us to step across. There’ll be a new economics of systems, Opex-based economics. We need to know all this to be ready for the next era.

Q&A (46:17)

Q: What has happened to security non-functionals?

A: What has happened is the same thing that has always happened. Senior managers assure us that they’re absolutely critical, and then refuse to pay for people to go on security training courses. That happens repeatedly.

Probably as long as I’ve been in the business, the security engineering business has continued to become Balkanized, so they’ve got their own language, their own priorities, their own approach to problems. There’s very little intersection. There’s a lot of good knowledge, experience, practice and theory out there and it’s often not applied, which is where it goes wrong.

One cause is prioritization, because for some reason people don’t think it’s going to happen to them. But it’s happened to JP Morgan, Target, Sony, and Hacking Team. When it can happen to Hacking Team, a company that produces hacking technology and got turned inside out by another hacking organization, it can happen to you. So, people don’t think it’s going to happen to them, and the security community, to be fair, are not brilliant at coming in and speaking our language to help us actually do the right thing.


About the content

This talk was delivered live in October 2016 at GOTO London. The video was transcribed by Realm and is published here with the permission of the conference organizers.

Eoin Woods

Eoin Woods is CTO at Endava, the European IT services company. He is an author, a conference speaker and an active member of the London software engineering community. His main technical interests are software architecture, distributed systems and computer security.
