Date

Attendees

Peter Antley

Sean W. Bohan

Brian Hoffman

Josh Hershman

Jeff Braswell 

Mike Nurse

Greg Williams 

Dale Harris 

Susan Young

Reggie Scarpa

Susan Chudwick 

Brian Mills

Ash Naik 

Kelly Kemp

Michael Payne

This is a weekly series for The Regulatory Reporting Data Model Working Group. The RRDMWG is a collaborative group of insurers, regulators and other insurance industry innovators dedicated to the development of data models that will support regulatory reporting through an openIDL node. The data models to be developed will reflect a greater synchronization of data for insurer statistical and financial data and a consistent methodology that insurers and regulators can leverage to modernize the data reporting environment. The models developed will be reported to the Regulatory Reporting Steering Committee for approval for publication as an open-source data model.

openIDL Community is inviting you to a scheduled Zoom meeting.

You have been invited to a recurring meeting for openIDL

Ways to join meeting:

1. Join from PC, Mac, iPad, or Android

https://zoom-lfx.platform.linuxfoundation.org/meeting/93089763014?password=f0e7ee48-7c14-4906-b1c1-3013f0aae7a1

2. Join via audio

One tap mobile:
US: +12532158782,,93089763014# or +13462487799,,93089763014#

Or dial:
US: +1 253 215 8782 or +1 346 248 7799 or +1 669 900 6833 or +1 301 715 8592 or +1 312 626 6799 or +1 646 374 8656 or 877 369 0926 (Toll Free) or 855 880 1246 (Toll Free)
Canada: +1 647 374 4685 or +1 647 558 0588 or +1 778 907 2071 or +1 204 272 7920 or +1 438 809 7799 or +1 587 328 1099 or 855 703 8985 (Toll Free)

Meeting ID: 93089763014

Meeting Passcode: 850916


International numbers: https://zoom.us/u/alwnPIaVT

Recording:

GMT20230726-150241_Recording_1920x1080.mp4

Minutes

I. Introductory comments - LF Antitrust and Welcome - Peter Antley

II. Agenda

A. Update - OLGA - capturing statistical records of data submitters

  1. The team has been working on the first interface for reviewing validation errors - taken from SDMA but modernized.
    1. Peter presented examples of errors that were found: a zip code error, a program code error, and a coverage code error.
    2. The interface also gives the number of times each error is present in the data set.
    3. Errors can be examined individually and traced to their origin. It is possible to make updates and save them, and the record will be updated.
    4. Individual error correction is possible; Peter's team is working on the capacity to do bulk error updates, which should be ready by the next RRDMWG meeting.
    5. An error report will be available as a downloadable Excel spreadsheet.
    6. It was asked whether there is a mechanism in place to validate the individual corrections, and there is - albeit with some delay, because validation happens on the server side. Any record that is updated is rerun through the Drools rules engine as a check (see the sketch after this list).
    7. Joseph is working to connect the back end with the website - more will be coming on validations, rules, and report generation in the near future.
    8. In tandem, Peter will refine the data with respect to the count of errors and the corresponding descriptions.
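The correct-then-revalidate flow described in items 3 and 6 can be sketched roughly as follows. This is a minimal Python illustration; the function and field names here are hypothetical, not OLGA's actual API, and the real server-side validation runs through Drools rather than the toy rule shown.

```python
# Hypothetical sketch of the correct-then-revalidate flow.
# `update` and `run_validation_rules` are illustrative names, not OLGA APIs.

from dataclasses import dataclass, field

@dataclass
class StatRecord:
    record_id: str
    fields: dict                      # e.g. {"zip_code": "1234"}
    errors: list = field(default_factory=list)

def run_validation_rules(record: StatRecord) -> list:
    """Stand-in for the server-side rules engine (Drools in OLGA)."""
    errors = []
    if len(record.fields.get("zip_code", "")) != 5:
        errors.append("invalid zip code")
    return errors

def correct_and_revalidate(record: StatRecord, updates: dict) -> StatRecord:
    # Apply the user's correction, then rerun the full rule set so the
    # fix is verified the same way the original submission was.
    record.fields.update(updates)
    record.errors = run_validation_rules(record)
    return record

record = StatRecord("r-001", {"zip_code": "1234"})
record.errors = run_validation_rules(record)        # ["invalid zip code"]
record = correct_and_revalidate(record, {"zip_code": "12345"})
print(record.errors)                                # []
```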

B. Outside of OLGA - Process that AAIS Uses to Generate Earned Premiums and Incurred Losses - presented by Peter

  1. These are calculated internally at AAIS using specific functions.
  2. Peter is in the process of recreating this set-up so that, once complete, all of the existing AAIS report/extraction patterns in use today can be reused.
  3. This should be done by mid-fall 2023.
  4. Once this is done, they will be able to produce a large number of reports, such that almost any stat data in the system will be compatible with report production.
  5. With stat reporting, many bespoke methods are used to compute losses and earned premiums. One example: if reporting is being done for a given year and an accident falls in that year but a claim is filed sometime later, the claim is still counted in that year's stat report even after the calendar year closes. Instead of reevaluating every single report that is done, the first approach Peter and Dale developed is to reproduce what is happening within AAIS in order to deliver a larger number of reports. Peter is working on this; much broader updates are forthcoming in the coming months.
  6. This does not take data from the data lake; instead, inside the HDS, a schema exists that takes stat records and keeps them in the same format/schema as AAIS's internal data lake. The same reports can then be used, so there is no need to rewrite the 2,000+ extraction patterns. (A simple earned-premium sketch follows this list.)
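For context on what "generating earned premiums" involves, below is a minimal pro-rata earned-premium sketch in Python, assuming a common industry convention of earning written premium uniformly over the policy term. This is NOT AAIS's internal calculation, which was only described at a high level in the meeting; the function and its parameters are illustrative.

```python
# Minimal pro-rata earned-premium sketch (assumed convention, not AAIS's
# internal method): premium is earned uniformly over the policy term,
# up to the evaluation date.

from datetime import date

def earned_premium(written: float, effective: date, expiration: date,
                   as_of: date) -> float:
    term_days = (expiration - effective).days
    earned_days = min(max((as_of - effective).days, 0), term_days)
    return written * earned_days / term_days

# A $1,200 annual policy effective 2023-01-01, evaluated at mid-year:
print(round(earned_premium(1200.0, date(2023, 1, 1),
                           date(2024, 1, 1), date(2023, 7, 1)), 2))
# -> 595.07 (181 of 365 days earned)
```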

C. Catastrophe Reporting Discussion - Peter and Jeff (review of the document they shared yesterday)

  1. A high-level overview was provided of the NE Cat Proof of Concept/Data Call - focused on data from Hurricane Isaias in 2020 and encompassing the entire Northeast Zone per NAIC classifications (Maryland to PA to ME).
  2. Review of a Cat Event - Peter opened an example of the cat reporting form used in LA for Hurricane Zeta. Someone fills out which companies are being reported for, plus contact information for the person who prepared it. The report is issued multiple times at different cadences.
  3. It provides cumulative claims data as of a specific date.
  4. The report will say, for a given zip code and broken out by county: how many claims have been reported, how many were closed with payment, how many were closed without payment, what the paid loss is, and what the incurred loss is.
  5. To make this as painless as possible for everyone - everyone doing (homeowners) stat reporting with AAIS already has the ability to get data into this format.
  6. They will use a minified version of the homeowners claim record to hold this cat data and pass it back and forth, to fill the extraction pattern and satisfy the report.
  7. The data they need is: line of insurance, accounting date, state code, county code, transaction code, loss amount, claim count, cause of loss, accident date, zip code, claim number, claim identifier, and catastrophe indicator.
  8. With these attributes, they can accurately fill out this catastrophe report.
  9. The definitions are taken from the stat handbook and are therefore fairly familiar.
    1. Line of insurance - focusing on homeowners
    2. Accounting date - the same one used at AAIS - one date per month, in the middle of the month, so only the month and year need to be provided. We probably want to see all of the different payments that occur within a given accounting period.
    3. State Codes
    4. Transaction Codes - #1 is a premium record; a negative amount on a premium record can imply a cancellation, but not necessarily (it was noted that this could be a change or many other things). It is difficult to know whether it is a cancellation or simply a change, so this needs to be corrected (PA).
    5. Loss amount - the amount that is lost - can be turned in in ASCII format or, optionally, as a negative number.
    6. Claim count - could come from a payment or an outstanding record, whichever comes in first. This column can be summed to know how many total claims there are.
    7. Cause of loss - done as loss codes
    8. Accident date - specific date in question of incident
    9. Zip code
    10. Claim #
    11. Claim identifier
    12. Catastrophe indicator - this is a big discussion point. They do not want to use the ISO cat codes - there is no access to them and no interest in pursuing them further, nor in a 1:1 match with the ISO codes. Discussions have been happening internally with various stakeholders. AAIS wants to define cat indicators by taking region + line of business + date range + cause of loss; the indicator will be assigned based on that. All records should have the cat code on them. A distinction was also made between the catastrophe indicator and the cat code (this discussion concerns the first, not the second).
  10. For the PoC, all claims will be pulled in, and then the extraction pattern will sift through and pull the ones that match the cause of loss, region, zip code, etc. The result will essentially be a subset of the larger data set (see the sketch below).
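The minified record from item 7 and the indicator-based filtering from items 9.12 and 10 could be sketched as follows. This is a rough Python illustration under stated assumptions: the field names mirror the attribute list above, but the types and the matching logic are hypothetical, not the finalized openIDL schema or extraction pattern.

```python
# Hedged sketch of the minified homeowners claim record and the
# extraction-pattern filtering. Field names follow the attribute list in
# item 7; the indicator-matching logic is illustrative only.

from dataclasses import dataclass
from datetime import date

@dataclass
class MinifiedClaimRecord:
    line_of_insurance: str     # homeowners for this PoC
    accounting_date: date      # mid-month by AAIS convention (month/year)
    state_code: str
    county_code: str
    transaction_code: int
    loss_amount: float
    claim_count: int
    cause_of_loss: str
    accident_date: date
    zip_code: str
    claim_number: str
    claim_identifier: str
    catastrophe_indicator: str

@dataclass
class CatIndicator:
    # Region + line of business + date range + cause(s) of loss,
    # per the definition discussed in item 9.12.
    region_states: set
    line_of_business: str
    start: date
    end: date
    causes_of_loss: set

def matches(record: MinifiedClaimRecord, cat: CatIndicator) -> bool:
    """Extraction-pattern style filter: keep only the claims that fall
    inside the catastrophe's region, line, date range, and causes."""
    return (record.state_code in cat.region_states
            and record.line_of_insurance == cat.line_of_business
            and cat.start <= record.accident_date <= cat.end
            and record.cause_of_loss in cat.causes_of_loss)

# The PoC result set is then simply the matching subset:
# cat_claims = [r for r in all_claims if matches(r, isaias)]
```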



Discussion items

Time | Item | Who | Notes




Action items

  •