Driving Change in the London Insurance Market
by Juan de Castro, COO, Cytora
This is a shortened version of the Making Risk Flow podcast episode “Driving Change in the London Insurance Market”. In this episode, Juan is joined by Sheila Cameron, CEO of the Lloyd’s Market Association, to discuss digitising the London insurance market. Over the course of their discussion, the pair talk through how the London insurance market is changing, ways to drive change in a complex environment, and the importance of standardising data across the industry.
Listen to the full episode here
Juan de Castro: Hello. My name is Juan de Castro, and you’re listening to Making Risk Flow. Every episode, I sit down with industry-leading guests to demystify digital risk flows, share practical knowledge and help you use them to unlock scalability in commercial insurance.
Welcome to another episode of Making Risk Flow. In previous episodes, we have talked about digital risk flows but focused on the general insurance market; we haven’t done any episodes specifically on the progress in the London Market. The London Market is frequently considered the most traditional and old-fashioned insurance market. But as we’ll discuss today, that is changing. And who better to discuss this topic with than Sheila Cameron. Sheila is the CEO of the Lloyd’s Market Association and an old colleague of mine from when we were both on the fifth floor back at Hiscox many, many years ago. So Sheila, first of all, thank you so much for joining me.
Sheila Cameron: Thank you, Juan, an absolute pleasure to be here. Thank you.
Juan de Castro: Let’s start with an overview of your background, your role and what the Lloyd’s Market Association is.
Sheila Cameron: As I said, my name is Sheila Cameron and I’m the CEO of the Lloyd’s Market Association, but I started life in an undergraduate training programme at Accenture. So I started very much in the project management and change management space, specialising in finance and performance management. From there I moved to Xchanging, which is where I learned all about the London Market and the back office of the London Market, and where I was the COO of the claims business. So that is the back end of the process: it is when the claim has to be paid, and it is the mechanics of making sure that the claim gets paid by all of the various parties and is appropriately recorded on the various systems. After a number of years there, I moved to Hiscox, which is where you and I met, Juan. I had various roles at Hiscox, including again change and project management type roles, as well as strategy roles and regulatory roles around overseeing the implementation of Solvency II within Hiscox. So lots of different areas, and a fascinating place to be in a fascinating group, that is Hiscox. At the end of that, I decided I wanted to get back into the Lloyd’s marketplace in particular. Lloyd’s is a bug that bites you and never lets you go, in my experience. So I went and joined Navigators, which is now The Hartford, where I was their Head of International Operations. That was all of their operations across the UK, Continental Europe, LATAM and Asia, which was a fascinating role. But from there I got approached for this role of CEO of the Lloyd’s Market Association, which is the trade body that represents all of the managing agents, or insurers, in Lloyd’s. There are 55 different managing agents, and I see our role as very much trying to make the market a better place. And we do that in three ways. First of all, we do it through our technical expertise. So many of you will probably have seen wordings like LMA 1234 and so on.
We have a team who sit there and crank out those wordings all day long, working together with various market experts on the wordings side of things. Secondly, we have a training academy that we call the LMA Academy, and that academy focuses on technical training in the Lloyd’s marketplace. So, for example, just this week we launched our syndicate business planning course, which is our top-level course aimed at future C-suite leaders. It puts them through setting up a syndicate themselves from scratch and presenting that to Peter Montanaro, who’s the head of the CPG, the Capital Planning Group, pitching their syndicate to him and trying to get it through the Lloyd’s business planning approval process at the end of all of this. It’s a fascinating course. So the first thing we do around making the market a better place is the technical expertise, the second is the training, and the third is bringing communities of interest together. That might be, for example, bringing the CUO community together to figure out what the CUOs want to work on and how they want to improve the marketplace, or maybe the claims community or the operations community; whatever the various communities are, we bring those together and figure out where they would like to improve the marketplace. So our role involves a huge amount of lobbying and influencing, both into the Lloyd’s governance structure and into the regulatory structure. I spend a lot of time with the FCA, the PRA and Westminster, which is a whole different ball game that has been quite challenging, but fascinating all the same. It is a fascinating role from that perspective. But one of the things that the LMA Board has asked me to do over the last couple of years has been focusing on the Future at Lloyd’s programme.
One of the key things that insurers wanted to push forward quite strongly within that programme was around data and data standards: how can we drive towards a digitised, data-driven marketplace? That’s the key thing that we, as insurers, want to see out of the Future at Lloyd’s programme. So as a result, I’ve spent a lot of time working with John Neal, the CEO of Lloyd’s, on how we move this forward and how we bring the market with us. Because we can say whatever the standard is and all the rest of it, and that’s the easy bit, frankly. The hard bit is getting adoption and getting people to actually do it. So as a result, I’ve worked with John to set up the Data Council. That’s been a key objective of ours around the whole Future at Lloyd’s programme, because fundamentally that’s back to our purpose, which is making the Lloyd’s and London marketplaces better, both for our individual members, the insurers, and for the market as a whole.
Juan de Castro: So driving change is always difficult, but in the Lloyd’s context it’s not just about driving change, it’s about orchestrating change across many different managing agents and actors. So let’s focus on the Data Council initiative you just mentioned. What is the goal of the Data Council?
Sheila Cameron: The goal of the Data Council is twofold. First of all, it is to drive digitisation of the London marketplace overall, and that’s across many firms. It’s not just the insurers at Lloyd’s; it’s also the insurers outside of Lloyd’s. So roughly, we’re talking over 100 insurers and over 200 brokers. We want to drive digitisation of the London marketplace as the first step, and the second step is to drive adoption. And we’re driving adoption across five key areas. First of all, it’s around data standards. Let’s agree what our data standard is, and we’ve agreed that it is Acord’s GRLC standard. GRLC stands for Global Reinsurance and Large Commercial; it’s the basis of Rüschlikon as well. We’ve agreed what our standard will be and that we will all drive our data to be in this Acord GRLC standard. Secondly, we will drive adoption of what we’ve called a core data record. Now, the purpose of the core data record is to facilitate back-office processing in terms of accounting and settlement, tax and regulatory reporting, and claims matching. That’s the purpose of the core data record. It’s not to write a risk; it’s to meet those three areas that I’ve outlined, because those three things are things we all have to do no matter what. We all have to figure out how to pay the claim, and to account and settle for the risk and the premium payment itself. We all have to do tax and regulatory reporting, and we all have to be able to match a policy to a claim. Those are the three key things, because otherwise we would end up with an endless list of what data you want to write a risk. Well, if you’re an aviation underwriter versus a marine underwriter, you have a very, very different set of requirements when it comes to data in that respect.
So we’ve focused the core data record on what’s needed at the end of the process: what we all need, all of the time, in terms of accounting and settlement, tax and regulatory reporting, and claims matching. From there, we’re moving on to what we’re calling the computable contract. But where we’re starting is a standardised market reform contract, called an MRC. We just launched MRC version three, which seeks to make the contract both machine- and human-readable. To date, contracts have been very focused on being readable by an individual, and what that has meant is that in the back office we’ve had to have loads of people sitting there, wading through oceans of words to figure out where the actual data elements are. So what we have done is make the contract much more tabular and much more focused on easy extraction of the data. That has been our MRC version three, and later this year we’re going to move on to computable contracts: what do we actually mean by computable contracts, and how do we drive that forward? The fourth step when it comes to adoption is all around the process: who is responsible for submitting what piece of data, to what degree of quality, and at which point in the process. And this is where the rubber really hits the road, because when you’re dealing with, as I said, 100 different carriers and 200 different brokers, all of whom have different views about who’s responsible for what piece of data, when, where and how, it can get quite challenging. This was always going to be the most difficult and challenging part. We had a consultation on those responsibilities earlier this year, called process roles and responsibilities; it has just closed and we’re going through all of the feedback from it.
So we’re just working on the roles and responsibilities, getting that over the line along with some high-level process principles. The fifth and final step around driving adoption will be APIs: what will our API standards be, how will we communicate them, and how will we get them adopted? As I said, we’re going through this as a five-step process. We’ve got the Acord standard over the line, and we’ve got the CDR over the line for open market risks. We will now move on to other areas, including treaty reinsurance, delegated authority and claims, so we need to work through all of those. As I said, we started with MRC version three on the contract side; we’ll move on to computable contracts later this year; and we’ve already started on the process roles and responsibilities, having completed that first consultation. There will be more consultations to follow. I think one of the keys to your original question, one of the key points for me around adoption, has been engaging people in this process. When we did the CDR originally, and the MRC, we had literally thousands of pieces of feedback: no, we don’t think you should do it this way, or we think it should be that way, or you’ve missed this. But we have tried to stay true all the way through to our principle that the CDR only does three things: accounting and settlement, tax and regulatory reporting, and claims matching. We would all love lots of other things, but let’s focus on what our goal is and what we’re trying to achieve with the CDR, which is those three things.
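The idea of a core data record that only serves those three back-office use cases, and that is built up gradually rather than demanded in full at quote stage, can be illustrated with a small sketch. This is purely hypothetical Python: the field names and groupings below are invented for illustration, whereas the real CDR is defined against the Acord GRLC standard and runs to a couple of hundred fields.

```python
# Hypothetical sketch of a core-data-record completeness check.
# Field names are illustrative only; the real CDR follows Acord GRLC.
REQUIRED_AT_BIND = {
    "accounting_settlement": ["umr", "premium_amount", "premium_currency"],
    "tax_regulatory": ["insured_country", "tax_jurisdiction"],
    "claims_matching": ["umr", "policy_inception_date"],
}

def missing_fields(record: dict) -> dict:
    """Return, per use case, the fields still missing from a record.

    The CDR only has to be complete at the point the risk is bound,
    so early in the process many fields may legitimately be absent.
    """
    return {
        use_case: [f for f in fields if not record.get(f)]
        for use_case, fields in REQUIRED_AT_BIND.items()
    }

# At quote stage the tax fields are typically unknown; that is fine.
quote_stage = {"umr": "B0999XYZ123", "premium_amount": 150000,
               "premium_currency": "USD", "policy_inception_date": "2024-01-01"}
gaps = missing_fields(quote_stage)
```

Here `gaps` would flag only the tax and regulatory fields as outstanding, mirroring the point that the record is allowed to grow through the placement process and must be complete only at bind.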
Juan de Castro: That makes total sense. You don’t want to boil the ocean and ask what any and every data point you might need is, but rather start with a use case. You mentioned the three of them, right? Identify what data points you need to support those across the market, and then that schema, that record, will grow over time as you add new use cases. One of the things you touched on was the process. Obviously, in such a complex environment, one of the complexities is that once you’ve agreed on the schema and the structure, who’s the first actor in the chain that starts populating the first few fields? Because I guess that schema gets built up along the process, is that right?
Sheila Cameron: Completely. When you first have a quote, you certainly don’t know the various tax codes. The broker will not know what the tax codes will need to be for a particular risk that comes in. And that’s where the complexity tends to be, particularly around tax and regulatory reporting. But we start slowly, and we build up the data as we work through the process. The complete CDR is not required until the very end of the process, at the point at which the risk is bound, which is when it is needed. At that point, from a contract certainty perspective, from the FCA’s viewpoint, they would expect all contracts to be certain at the point of bind. But there is no way, at the point of quote, that you know what the tax code is going to be for the tiny proportion of your risk that sits in some very small country somewhere very far away. And that’s fine; we will work our way through it. It is not until the point of bind that the data is required, and we’re trying to encourage people to build it up as they go. One of the things that we define as digital processing is submitting a record through to the back office. The back office being the joint venture, what used to be called Xchanging and is now run by DXC; they’re building a gateway for us. What we want to be able to do is submit data to that gateway, and that’s what we’re defining as digital processing. Today, we submit Word-based documents or contracts in various and many formats. It’s not on paper anymore; it generally tends to be scanned and submitted in, but you have to have people there to crawl through that document to figure out where all of the data elements are. What we want to get to is submitting a data record straight to that Gateway, so that it won’t need countless people sitting behind the front doors of DXC to figure out where the individual data items are.
That’s where we want to get to.
Juan de Castro: I guess for those people listening who are not that familiar with the environment, and who hear about the Gateway and about placing platforms like PPL or White Space: can you give us a brief overview of how all of that fits together, with the common thread of the data record flowing through?
Sheila Cameron: So the purpose of the Gateway, as I said, is to validate the data that comes through. It will also enrich it, and it will be a data store, so people will be able to extract data back from the Gateway when it is complete. The Gateway becomes almost the front door to our centralised back-office processing. And because we have multiple brokers and multiple carriers (very often on an individual risk you will have multiple carriers, and usually one broker, but not always), it’s that complexity of connections in the ecosystem that makes it really challenging. So the Gateway becomes the front door to validate and enrich the data that is presented. There are three ways in which the data can be presented. First of all, there is what we call the document-to-data route. I mentioned earlier that we have a new standard template for our contract, the Market Reform Contract version three, or MRC version three. The process is that brokers will create that contract and agree it in that format with the individual insurer. They will then submit that contract through to PPL, and PPL will extract the data and submit it onwards to the Gateway. Because the contract is in the MRC format, the data can be extracted. Of course, White Space is available too, and I’d say PPL and White Space are the two main third-party placement platforms, as we call them. So that’s the document-to-data route. Then what we’re also working on is what I call the data-first route, where the document comes at the end of the process and is still provided to the client. Just like when you buy your home insurance online today and it produces a PDF document at the end, it is based on data initially, in terms of assessing the risk, whether that’s your home insurance or whatever the case may be. There are two data-first routes.
First of all, a carrier or broker could submit data straight to the third-party placement platform, i.e. PPL or White Space, and that platform pushes it on to the Gateway. But ultimately, where we’d love to get to is that the broker and carrier agree the risk themselves and then send the data straight to the Gateway from their own platforms. That’s the dream; that’s where we want to go. At that point we have moved away completely from paper. The contract gets produced at the end of the process for the purposes of the client, but we assess and trade on the basis of data between each other, and the data just gets sent to the Gateway for recording purposes, for what I’ve just described: accounting and settlement, tax and regulatory reporting, and claims matching. That’s where the data will be stored. But ultimately we want to be able to trade on the basis of data. And that’s not to say we’ll never have a face-to-face environment again; that’s not how this works. When I think about major airlines like Lufthansa or BA or whoever, their risks don’t get done on a simple website. These are very complex risks with multiple assets in multiple locations, multiple tax and regulatory jurisdictions, and multiple sanctions regimes that we need to adhere to. So there is huge complexity in some of our risks. When we set up the Data Council, one of our guiding principles was that we would take an 80/20 approach to all of this. We accept that there are some hugely complex risks out there that will never fit into this standardised process, and that’s fine. We’re not trying to remove that from the client, or from the brokers and carriers involved. What we are trying to do is take an 80/20 approach here.
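As a rough sketch of what the data-first route could look like from a broker or carrier system, the snippet below wraps an agreed risk in a structured payload ready for submission to a gateway endpoint. Everything here is an assumption for illustration: the envelope shape, the `standard` label and the field names are invented, not the actual Gateway interface.

```python
import json

def build_gateway_payload(risk: dict) -> str:
    """Wrap an agreed risk as a structured JSON payload.

    In the data-first route the parties trade on data; the
    human-readable contract document is produced at the end of the
    process for the client, not exchanged between systems. The
    envelope fields below are hypothetical.
    """
    envelope = {
        "standard": "ACORD-GRLC",  # the agreed market data standard
        "purpose": ["accounting_settlement",
                    "tax_regulatory_reporting",
                    "claims_matching"],
        "record": risk,
    }
    return json.dumps(envelope)

# Example: a bound risk serialised for submission.
payload = build_gateway_payload({"umr": "B0999XYZ123",
                                 "year_of_account": "2023"})
```

The design point this mirrors is that what flows between systems is a machine-readable record against an agreed standard, with the document produced separately for the client.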
We can make this process much more efficient and much more data-driven, which ultimately will help drive down all of our costs, both broker and carrier costs, with a benefit to the client. At the end of the day, that’s what our aim has to be, and our primary driver has got to be that we prioritise the client’s needs.
Juan de Castro: The common belief is that the London Market is behind general insurance in digitisation, but the fundamentals are very much the same, right? You talked about the data record, which is a data schema, a common characteristic of any digital risk flow. You talked about the Gateway enriching with other data to make sure you’ve got all the data in the schema required to support those use cases. You talked about data validation, making sure early in the process that you’ve got the right data. And ultimately you talked about being a data-first market, right? Right now, as in general insurance, the starting point is often a document and you have to convert from the document into data, but you want to evolve to a model where it’s data first. At the end of the process that data might be represented as a document, but what really flows through the market is data against a schema, right? And that is what enables digitisation across the value chain.
Sheila Cameron: Completely. But the core data record, which just fulfils those three use cases I mentioned, is 230 fields in the London marketplace. Now, I’ve checked this, and we went and looked at Amazon: doing one thing, changing your own home address on your own account on Amazon, takes 30 fields. So even though 230 sounds quite a large number, actually the most complex risk that we can come up with is about 180 fields, and on average it’ll be around the 80-to-100-field mark. We’ve tried to make it as simple as we possibly can, but this is a complex marketplace. If it were easy to solve, it would have been done already, I promise. If it had been easy to digitise this marketplace, it would have been done many, many years ago. The difference this time around, I think, is that we’re not trying to come up with: here’s the solution, you must adopt this platform, or you must put in this new system. We’ve turned it on its head and said: you must adopt these standards; how you execute them is entirely up to you. This is the standard process, this is the standard in terms of data, and this is the standard in terms of the document, the contract, for the client. One of the key things I really want to do through this Data Council is to enable innovation: to define the standards, which is what we will do and what we have been doing, but then let firms execute them in the best way possible for them and their clients. And that was a very clear message I had from the LMA Board and from many of the insurers within the Lloyd’s marketplace: just define the standard. Don’t make us adopt systems A, B, C, D, E, F and all the rest of it. We’ll figure that out ourselves, because our systems are all different; we all have different risk appetites and different standards. Okay, we’ll set the standards, but you guys are going to need to adopt them.
One of the key things I keep saying to firms now is: have you got your data in the Acord standard? Have you gone through that process and made sure your data aligns? Because if it doesn’t, when you set up that API between carrier one and broker one, you won’t be able to talk the same language. The really simple example I always give is the year of account of the contract. If the carrier does it in two digits, as in 23, and the broker does it in four digits, as in 2023, the carrier will receive 20 where the broker meant 23. So can we all just align: this is the standard, this is what it looks like, and in that particular case it is four digits. We’ve never had those standards before, and it’s really simple stuff when you think about it. But it’s hard to execute and it’s hard to adopt. There are so many firms whose data standards are dictated by their group head office, which can be anywhere: Japan, Switzerland, the US. And they will have different data standards because they have different accounting regimes and different regulatory oversight. I get that. But if you want to trade on a data basis within the London Market, you have to adopt this data standard. The data standard we have is Acord GRLC, and it’s already used by reinsurers globally for Rüschlikon. So this is not rocket science. It is perfectly achievable; it has been done. We just need to expand it and run with it. And I have to call out Acord on all of this: they have been brilliant and could not have done more to help us. They helped us when we defined the CDR fields. There are some people at Acord who have spent weeks and weeks helping us, making sure that the CDR meets the GRLC standard: what did we need to change, and how did we need to change it? They’ve been fantastic.
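Sheila’s year-of-account example can be sketched in a few lines: a boundary normaliser that coerces incoming values to the agreed four-digit form. This is a toy illustration of why a shared standard matters, not any real Acord or market code, and the century assumption is deliberately naive.

```python
def normalise_year_of_account(value: str) -> str:
    """Coerce a year of account to the agreed four-digit standard.

    Without a shared standard, a broker's "2023" truncated to two
    digits by a carrier's system becomes "20", a different year
    entirely. Normalising at the boundary avoids the mismatch.
    """
    digits = value.strip()
    if not digits.isdigit() or len(digits) not in (2, 4):
        raise ValueError(f"not a year of account: {value!r}")
    # Naive assumption for illustration: two-digit years are 20xx.
    return "20" + digits if len(digits) == 2 else digits
```

So `normalise_year_of_account("23")` and `normalise_year_of_account("2023")` both land on the same four-digit value, which is the whole point of agreeing the standard once rather than reconciling formats pairwise between every broker and carrier.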
Juan de Castro: Great. You’ve given an overview of the ambition and the different components of the initiative. Where are you today? Where is the market today? And what is next?
Sheila Cameron: We have just launched the CDR and the MRC. So here is what I tell people they can be doing now. For brokers, as they are the ones who create the contracts: very often today those are just created in Word. Okay, here’s your new contract template; adopt it, put it into your systems and start using it. We have set a target that by the end of September this year, all brokers need to start producing client documentation in this standard. Typically in the London Market there’s about a three-month run-up phase, with pre-renewal packs created three months in advance. So we want to ensure that, keeping in mind the 80/20 rule, the vast majority of contracts with a one-one renewal date next year are created in this template. So number one: brokers, please get the MRC template into your systems and start creating contracts that way. Number two: adopt the GRLC standard and make sure your data is in this standard. And number three: start working out what your API strategy is going to be, because there are multiple parties here. Start thinking about where the majority of your business comes from. Are you going to build separate individual APIs into every broker that you deal with? Are you going to push everybody through your third-party trading platform of choice, for example White Space or PPL? How are you going to do this? Who are you going to connect with in order to make this work? Those are the three steps I tell firms to start on now. Looking forward to the rest of the year: we have started, as I said, with the CDR and the MRC for open market, and we have three other areas we need to look at. The first is open market claims: what does the claims data standard look like? It will be GRLC again, but what’s the subset?
What do you need to record a claim advice, and what do you need to record a claim settlement? So an advice on a reserve update from X to Y, a notification of a new claim, or a payment of a claim: what core data do we need in order to facilitate that accounting and settlement process for claims? We will start on that. We’re also going to start on treaty reinsurance, because the Rüschlikon common standard already exists, and we want to work with Acord on how we move that forward, what it looks like in the London Market and how we adopt it more broadly. So claims and treaty, including treaty claims, will be the next two things, and thereafter we will move on to delegated authority. Again, it will be: what does the contract look like, what does the CDR look like, what does the claims process look like, and what core data is needed to drive those forward? We have no shortage of work to do, but we have started with open market, and that’s where we want to get to first; let’s get that working. There’s a huge burden on brokers here; the adoption really does fall on brokers, which is one of the key reasons I have eight broking firms on the Data Council, including Aon, Marsh, Willis, AmWINS and Gallagher, and we have six carriers on there as well, along with our various partners like DXC, Acord and PPL. So we’re working hard with our partners, but it’s with the brokers that the responsibility to start this lies. We’re working very hard with those brokers: let’s get into this template and start moving things forward, because until the Gateway starts to receive structured data, and the third-party placing platforms receive it in a format from which they can reconstruct structured data, this is going to be an uphill battle.
So let’s get these things right step by step by step.
Juan de Castro: That is fantastic. Obviously there is a high level of ambition, and at the same time a very clear roadmap, which is one of the key drivers of a successful initiative. A lot of what you talk about is adoption: how do you ensure the whole community is making progress? From a lessons-learned perspective, what have you seen? What works to drive change in such a complex environment?
Sheila Cameron: First, I look at what PPL did pre-COVID, and I know you had Bronek on the podcast previously. One of the things Bronek did was set a mandate. He worked with Lloyd’s to set a mandate, and ratcheted it up over time, that said: here is the target, here’s what you have to achieve. And in my experience this is a very competitive marketplace, so there’s nothing like a league table to enhance that competition and to say who has met the mandate the quickest, who has had the most contracts through in the standard, and so on. We will get to that point. But where I want to start is supporting firms to do this in the right way, and to get to where they need to get to. Ultimately, we will have league tables of performance, and we will most likely have mandates, but not yet, because we want people to sign up to this and move forward because they want to digitise this marketplace. This is not trying to digitise everything; there will always be the 20% that just falls into the does-not-fit-here bucket, and that’s fine, but let’s aim for the 80 and get those risks to where they need to be. For us, adoption works because most people want to digitise this marketplace. It was surprisingly easy to get people to join the Data Council. I’d expected a lot of pushback: nobody wants to adopt, that means investment, we need to invest in our systems, we’ve got to invest in a project manager to do XYZ, and all the rest of it. There was next to no pushback. People want to digitise our marketplace. We don’t want to be pushing paper around forevermore; that’s not the right way to handle our risks. We need to move to a digitised, data-driven marketplace. So step one is: do people want to do it? And the answer has been an overwhelming yes. And then, as I said, let’s do it together, and let’s do it in a structured way.
Let’s agree what that structured way is, and let’s do it in a phased approach. Ultimately, as I said, we will set mandates. We’ll work with Lloyd’s around setting mandates, and we’ll work on league tables and publicising, in particular, who’s doing this really well. I’m not particularly out to shine a great big light on somebody who’s not doing particularly well. But there are firms that will adopt quickly, and I want to highlight those and put them up in lights and go: wow, you’ve done this really, really well. All of your contracts are in this standard, you are submitting data, you’re getting ready to move towards data first, you’ve built the APIs out of your systems straight to the Gateway, and you’ll be able to submit data from a broker or a carrier into the Gateway instantly, as soon as the risk is bound. That’s where I want to get to with this. I want to be singing the praises of those that go the extra mile and publicising how brilliantly they’re doing. That’s what I want to do with league tables in due course.
Juan de Castro: That’s a brilliant takeaway on how to drive change: people embrace change when they see the value, even if the process is painful. People join when they see clear value at the end of the initiative, and there’s nothing better to accelerate it than a bit of competitive tension. Sheila, it’s been fantastic. Thank you so much for joining me today and giving us an overview of the initiative you’re leading at the LMA. Keep it up; I cannot wait to see what it looks like in a couple of years’ time.
Sheila Cameron: Thank you. Thanks for the opportunity. Well, I’ve really enjoyed it. Thank you.
Juan de Castro: Making Risk Flow is brought to you by Cytora. If you enjoyed this podcast, consider subscribing to Making Risk Flow on Apple Podcasts, Spotify or wherever you get your podcasts, so you never miss an episode. To find out more about Cytora, visit cytora.com. Thanks for joining me. See you next time.