Date

Attendees

This is a weekly series for The Regulatory Reporting Data Model Working Group. The RRDMWG is a collaborative group of insurers, regulators and other insurance industry innovators dedicated to the development of data models that will support regulatory reporting through an openIDL node. The data models to be developed will reflect a greater synchronization of data for insurer statistical and financial data and a consistent methodology that insurers and regulators can leverage to modernize the data reporting environment. The models developed will be reported to the Regulatory Reporting Steering Committee for approval for publication as an open-source data model.

openIDL Community is inviting you to a scheduled Zoom meeting.

Join Zoom Meeting
https://zoom.us/j/98908804279?pwd=Q1FGcFhUQk5RMEpkaVlFTWtXb09jQT09

Meeting ID: 989 0880 4279
Passcode: 740215

One tap mobile
+16699006833,,98908804279# US (San Jose)
+12532158782,,98908804279# US (Tacoma)
Dial by your location
        +1 669 900 6833 US (San Jose)
        +1 253 215 8782 US (Tacoma)
        +1 346 248 7799 US (Houston)
        +1 929 205 6099 US (New York)
        +1 301 715 8592 US (Washington DC)
        +1 312 626 6799 US (Chicago)
        888 788 0099 US Toll-free
        877 853 5247 US Toll-free
Meeting ID: 989 0880 4279
Find your local number: https://zoom.us/u/aAqJFpt9B


Attendees:

peter antley

Mason Wagoner 

Nathan Southern 

Jeff Braswell 

Sean W. Bohan 

Ken Sayers 

Ash Naik 

Reggie Scarpa

Kevin Petruzielo

Kelly Pratt

Susan Young

Dale Harris 

Brian Hoffman

Lori Munn 

Matt Hinds-Aldrich

Hicham Bourjali

James Madison 

Jenny Tornquist

Susan Chudwick 


Minutes

I. Introductory Comments - Peter 

II. Peter - recap of recent progress - noted that the auto project is the highest priority, but also working on mobile home, homeowners & property.

A. Wrt residential property - they have test data for dwelling properties and homeowners, and are moving ahead with this.

B. They are also aware of the AWG work, with its multiple workstreams. We're seeking end-to-end production w/auto (personal auto).

  1. On this we set up a carrier node and a test net, defined the data schema, loaded some test data into the carrier, and validated a data call. 
  2. Working on full demo - wrapping all this up

C. Secondary project - led by PA - is general regulatory reporting overall. Moving into residential property - seeking a wider base to process all stat plans.

D. Auto

  1. Peter producing a demo video proving/demoing auto extraction pattern
  2. Peter also working with Jeff and talking to carriers to develop and get auto test data set up on test network. PA working on script to take existing stat data, modify it and sanitize it - this is forthcoming next week.

E. Also doing work on catastrophe research - a key point of discussion today.

  1. We need a primary key to reference event when looking at claims
  2. Now: one paid service is being used to identify claims
  3. PA: thoughts from group on how to track catastrophes effectively?
    1. PA: currently ISO has a service where they give primary key to a catastrophe. Various insurance companies buy access to be able to use reporting keys - to request all claim records tied to given catastrophic event
    2. PA: Question - should we use the same ISO service here that we're currently using?
    3. JT: Consistency is critical, industry wide.
    4. PA: further clarification on this service per an attendee question. Property Claims Service from ISO/Verisk. When a disaster happens, ISO alerts all of their subscribers to a certain code to associate with the claims, so that these claims can be singled out when reporting is done. Basically, an identifier - a standard naming convention through ISO. 
    5. JB: can one get this service without submitting claim data?
    6. PA: Instinct is that we could harness this service - no need to reinvent the wheel.
    7. BH: Will ISO contractually allow members to pass the CAT code to another service to be utilized in statistical work? Critical question. 
    8. JB: because of the coverage PCS is worth considering, but we need to determine if the code can be re-used. 
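The role of the PCS-style identifier discussed above can be sketched in code. This is a minimal illustration, assuming hypothetical field names (`cat_code`, `paid_loss`) and a made-up code value - not the actual stat-plan layout or a real PCS serial number. The point is that a CAT data call is the same claim query as a standard call, just filtered on the catastrophe identifier:

```python
# Hypothetical claim records; cat_code holds the PCS-style identifier
# assigned to a catastrophic event (None = not catastrophe-related).
claims = [
    {"claim_id": "C1", "cat_code": "2231", "paid_loss": 12000},
    {"claim_id": "C2", "cat_code": None,   "paid_loss": 3000},
    {"claim_id": "C3", "cat_code": "2231", "paid_loss": 7500},
]

def cat_call(claims, cat_code):
    """Same records, same logic as a standard call - the only
    difference is the filter on the catastrophe identifier."""
    return [c for c in claims if c["cat_code"] == cat_code]

selected = cat_call(claims, "2231")
print(len(selected))                           # 2
print(sum(c["paid_loss"] for c in selected))   # 19500
```

Because the identifier is just another column, the same tables and extraction pattern used for standard calls carry over unchanged, which is the point PA raises below about CAT calls.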

III. Additional Thoughts/Concerns/Questions

A. JB: As important as the other efforts are, the primary goals remain getting the stat report and POC working and making progress on them.

B.  PA: the big difference between CAT calls and standard calls is just the identifier to separate it from the other calls at that time. Same tables, same logic. Whole stat plan is not needed.

C. JB: Is there not much less data involved in the catastrophe reporting than in the other areas? Isn't auto significantly bigger? 

D. PA: Much of the reporting he's seen comes down to policies, claims, etc., which all break into the same patterns. Catastrophe is focused mostly on claims in a restricted area and time frame, but claims from the whole state are calculated in much the same way, using most of the same tables.

E. PA to JM: asked if The Hartford has any questions on test data. Pointed out that we've allowed the HDS to be a large umbrella. The Company ID doesn't have to match the big umbrella Company ID - so they can definitely change reporting #s; these don't need to be the same. (JM: comfortable with where we are going.)

F. JT: Questions wrt the type of year in the reporting we've done - re: the NAIC guide - it appears that most lines can be reported on either a calendar year or calendar-accident year basis. However, states such as NJ require calendar-accident year. Other lines: policy year. 

  1. Clarification: policy year = premium and losses assigned to the year in which the policy became effective. This is the basis most actuaries use in ratemaking. 
  2. SC: this is essentially the same thing as giving all activity that occurs within the year? JT: sliced differently. Sliced on a policy year basis. (Unsure if this ever came up in the past).
  3. SC: This has never arisen in the past. When a policy becomes effective in year x, the written premium on that policy is all recorded in that year. Any losses that occur against it in year x are also recorded in that year. (All transactions are recorded.) 
  4. JT: "Policy year" spans two years - so it starts lower but has a higher development factor. This refers to the fact that losses and earnings can potentially come in on a premium for 12 months following the initial calendar year.
  5. JT: with this in mind, the gap is that the stat plan lacks a policy effective date on the loss record.
  6. PA: There are still gaps - even w/auto there is still not great (sufficient?) granularity around dates. As we move forward we will be improving on this.
  7. SC: Is concerned about some things missing from the stat plan, as discussed around the middle of last year. Companies need time to make changes - too late for 2022. It is impractical to think of every possible change we need. Some have already been submitted for auto stat data. We do need to get some of the missing elements, such as 'policy effective date', out there if they are to be captured for 2023 (PA & JT agreed).
  8. JT: Significant impact on systems (depending on what the missing fields are). Her team is making a significant effort in this area, determining what the options are; the process will absolutely include feedback to external carriers.
  9. SC: Another possibility is to add optional dates - so, for instance, certain fields could be implemented more quickly than others. 
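The calendar year vs. policy year distinction discussed above can be illustrated with a small sketch. The record layout (`policy_effective`, `booked`, `amount`) and the figures are hypothetical, chosen only to show how the same transactions slice differently under the two bases - and why a policy effective date on the loss record matters:

```python
from collections import defaultdict
from datetime import date

# Hypothetical transactions: each carries the policy's effective date
# and the date the premium/loss transaction was booked.
transactions = [
    {"policy_effective": date(2021, 7, 1), "booked": date(2021, 7, 1),  "amount": 1000},
    {"policy_effective": date(2021, 7, 1), "booked": date(2022, 3, 15), "amount": 250},
    {"policy_effective": date(2022, 1, 1), "booked": date(2022, 1, 1),  "amount": 800},
]

def aggregate(txns, key):
    """Total amounts grouped by the year returned by `key`."""
    totals = defaultdict(int)
    for t in txns:
        totals[key(t)] += t["amount"]
    return dict(totals)

# Calendar year: slice by the year the transaction was booked.
calendar = aggregate(transactions, lambda t: t["booked"].year)

# Policy year: slice by the year the policy became effective, so
# later development rolls back to the policy's inception year.
policy = aggregate(transactions, lambda t: t["policy_effective"].year)

print(calendar)  # {2021: 1000, 2022: 1050}
print(policy)    # {2021: 1250, 2022: 800}
```

Note that the policy-year grouping is only possible because each record keeps `policy_effective` - which is exactly the field JT identifies as missing from the loss record in the current stat plan.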

IV. Adjournment - Meeting adjourned at 1:28pm


Discussion items

Time  | Item        | Who  | Notes
5 min | Agenda item | Name | • Notes for this agenda item




Action items