Date

Attendees

Hicham Bourjali
Jeff Braswell
Allen Thompson
Jenny Tornquist
Dale Harris
Susan Chudwick
Joseph Nibert
Ken Sayers
Peter Antley
Nathan Southern
Reggie Scarpa
Kelly Pratt
Susan Young
Lori Munn
Mason Wagoner
Mike Nurse
Greg Williams
Patti Norton-Gatto
Sean W. Bohan

This is a weekly series for The Regulatory Reporting Data Model Working Group. The RRDMWG is a collaborative group of insurers, regulators and other insurance industry innovators dedicated to the development of data models that will support regulatory reporting through an openIDL node. The data models to be developed will reflect a greater synchronization of data for insurer statistical and financial data and a consistent methodology that insurers and regulators can leverage to modernize the data reporting environment. The models developed will be reported to the Regulatory Reporting Steering Committee for approval for publication as an open-source data model.

openIDL Community is inviting you to a scheduled Zoom meeting.

Join Zoom Meeting
https://zoom.us/j/98908804279?pwd=Q1FGcFhUQk5RMEpkaVlFTWtXb09jQT09

Meeting ID: 989 0880 4279
Passcode: 740215

One tap mobile
+16699006833,,98908804279# US (San Jose)
+12532158782,,98908804279# US (Tacoma)
Dial by your location
        +1 669 900 6833 US (San Jose)
        +1 253 215 8782 US (Tacoma)
        +1 346 248 7799 US (Houston)
        +1 929 205 6099 US (New York)
        +1 301 715 8592 US (Washington DC)
        +1 312 626 6799 US (Chicago)
        888 788 0099 US Toll-free
        877 853 5247 US Toll-free
Meeting ID: 989 0880 4279
Find your local number: https://zoom.us/u/aAqJFpt9B


Artifacts:

example-rules.xlsx

AAIS Rules Engine Documentation.docx



Minutes

I. Introductory Matters

A. PA - welcome to attendees

B. PA - acknowledgement of LF antitrust statement.

II. Agenda

A. Recap

  1. Excellent discussion last Friday: stat plan and related documentation.
  2. A copy of the revised stat plan will be available a week from today.
  3. A couple of minor internal checks remain before it is fully open sourced, but last week's session was helpful: many of the commercial auto references were removed from the personal auto stat plan during the meeting.

B. Today's business/points of discussion

  1. Validations and how the rules engine/rules are working
    1. Peter began looking into the rules as they currently exist, with a view to open sourcing them and sharing them with the group.
    2. The company-specific rules are currently co-mingled with the generic rules. His working assumption is that companies with specific rules will not want those shared with other companies. (The others in the meeting strongly agreed.)
    3. For today, Peter pulled out a small subset of rules to go through, with accompanying documentation.
    4. Over the next week, the plan is to go through auto, remove all the company-specific rules, and then look at all the rules.
    5. Two documents to review today:
      1. AAIS Rules Engine Documentation (the name is tentative and subject to change; it will be rebranded for openIDL).
        1. There is a set of validation rules for each line, plus validations for dates, validations for geography, and common validations.
        2. Rules are saved into an Excel template that tests for four errors; the approach is positive identification of errors rather than positive identification of correct values.
        3. There is one column for each position in the stat plan - e.g., LOB. 
          1. For example, rule 9.1 affects LOB 56 and specifies for Program Code '<>blank, 0,3,5,C,F', meaning the field cannot be blank and must be 0, 3, 5, C, or F. This is Drools syntax. (A rough illustrative sketch of this rule style appears after item 10 below.)
        4. A unique error code is associated with each rule (e.g., error codes 9.1, 9.2, 9.3); there is also a corresponding human-readable error message, along with sequential ID numbers (1, 2, 3, 4) to make them trackable.
        5. It is possible to have multiple rules about any column.
        6. Different errors will emerge if the field is blank vs. not blank but containing a restricted value.
        7. Errors may come up as 'missing,' 'invalid,' or in some cases, 'state-specific.'
    6. When the SDMA runs, it pulls from these documents as a configuration, to enable running the rules against the stat records.
    7. For reference: 'Rule Sets Setup' describes how the rules are written. 
    8. A question arose (M. Nurse) regarding reference tables with corresponding codes and whether they will be introduced. Peter confirmed that this is forthcoming and that some decoding-specific tables are already in the database, which he wishes to connect internally.
    9. High-level functionality, touched on by Peter in the meeting: the engine can handle lists, ranges, integer ranges, numeric ranges, alphanumeric ranges, and sets. All are detailed in a corresponding explanatory table, along with the sources of the various kinds of errors and the individual functions.
    10. Between now and next Friday, Peter will take the ruleset actively in use, remove all company-specific rules, and bring it back to a generic ruleset.
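
Below is a rough, non-authoritative sketch of how a validation rule in the style of the 9.1 example above could be applied to a positional stat record. The actual implementation uses Drools rules configured from the Excel template; the StatRule structure, the field position (40), the record layout, and the sample record here are assumptions made purely for illustration.

from dataclasses import dataclass

@dataclass
class StatRule:
    error_code: str     # e.g., "9.1"
    lob: str            # line of business the rule applies to, e.g., "56"
    field_name: str     # e.g., "Program Code"
    start: int          # assumed 0-based start position in the fixed-width record
    length: int         # assumed field width
    allowed: set        # permitted values; blank is never permitted for this rule
    message: str        # human-readable error message

# Hypothetical rendering of rule 9.1: Program Code '<>blank, 0,3,5,C,F' for LOB 56.
RULE_9_1 = StatRule(
    error_code="9.1",
    lob="56",
    field_name="Program Code",
    start=40,           # assumed position; the real stat plan defines this
    length=1,
    allowed={"0", "3", "5", "C", "F"},
    message="Program Code must not be blank and must be 0, 3, 5, C, or F",
)

def apply_rule(record: str, record_lob: str, rule: StatRule):
    """Return (error_code, kind, message) if the record fails the rule, else None."""
    if record_lob != rule.lob:
        return None  # the rule only applies to its line of business
    value = record[rule.start:rule.start + rule.length].strip()
    if value == "":
        return (rule.error_code, "missing", rule.message)
    if value not in rule.allowed:
        return (rule.error_code, "invalid", rule.message)
    return None

# Example: a fixed-width record with "Z" in the assumed Program Code position.
sample_record = " " * 40 + "Z" + " " * 59
print(apply_rule(sample_record, "56", RULE_9_1))
# ('9.1', 'invalid', 'Program Code must not be blank and must be 0, 3, 5, C, or F')

The 'missing' vs. 'invalid' distinction mirrors items 6-7 above: a blank field produces a different error than a disallowed value.
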
  2. Error report - shared by Mason
    1. Last week, Reggie spoke highly of the downloadable error report associated with SDMA
    2. Peter and Mason retrieved report
    3. Breakdown of sample report - a combination of randomized/sanitized claim data
      1. Each row populated with an ID
      2. Each column is populated according to whether it pertains to that specific line of data.
      3. Anything highlighted in yellow is an error that needs to be addressed. 
      4. A couple of small nuances: the report only applies to personal auto, not commercial auto, so some of these fields might be blank.
      5. If the record is a policy rather than a claim, the exposure field will be populated, given the nature of the specific data point.
    4. Questions were solicited from the group about anything people want to see done differently in the next report iteration. The group suggested that state IDs be included. PA asked whether any of the decoded values should be shown to make the report easier to read.
      1. Group said having additional coding on the record would be more confusing - since this is the basic stat record that will be used.
      2. Having both would be distracting and would demand multiple pages. 
      3. The main customers will not change, and they will be familiar with this basic stat plan view.
    5. Everyone will be given access to the reference codes
    6. It was suggested that we add dropdowns to the column headings showing the expected values (popout charts with data on all 50 states), but others disagreed, feeling this would be an overload of information and that it would not be possible to fit all of the corresponding information in a popout. It also becomes complicated when working with different lines of business. Perhaps, it was suggested, this could be done just for enumerated fields.
    7. It was pointed out that auto is straightforward, but the nuances, variations, and interrelationships become very complicated for the various personal lines.
    8. Peter noted that he is doing the reverse: he has the ability to take an auto record that comes in as a stat plan and use a SQL query that produces a view and decodes it. (A rough Python stand-in for this decode step appears after item 13 below.)
    9. This is the detail report; another report is used to show what needs to be done to get 'under tolerance' by state.
    10. Detail itself can look long and laborious when only a couple of errors need to be corrected.
    11. It was noted that in the report the errors are typically singled out at the top, but one needs the state summary, with the information detailed by state, which is a separate document.
    12. The group also suggested that a 'C' in front of select error codes identify which errors are critical and need to be corrected, versus which are informational.
    13. Rules are very positionally based, so if fields aren't in the same place on two different stat plans, the same rule can't be used across lines. (Some fields are in the same positions across lines.)
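
Below is a rough Python stand-in for the decode step referenced in item 8 above. The mechanism described in the meeting is a SQL query over reference tables already in the database that produces a decoded view; the code tables, field layout, and sample record here are placeholder assumptions, not the real reference data.

# Hypothetical reference/code tables; the real ones live in the database.
LOB_CODES = {"56": "Personal Auto"}          # placeholder decode label
PROGRAM_CODES = {"0": "Program type 0", "3": "Program type 3",
                 "5": "Program type 5", "C": "Program type C",
                 "F": "Program type F"}      # placeholder decode labels

# Assumed fixed-width layout: (field name, start position, length, code table or None).
LAYOUT = [
    ("LOB", 0, 2, LOB_CODES),
    ("State", 2, 2, None),
    ("Program Code", 40, 1, PROGRAM_CODES),
]

def decode_record(record: str) -> dict:
    """Turn a positional stat record into a readable dict, decoding coded fields."""
    decoded = {}
    for name, start, length, codes in LAYOUT:
        raw = record[start:start + length].strip()
        decoded[name] = codes.get(raw, raw) if codes else raw
    return decoded

sample = "5637" + " " * 36 + "3" + " " * 59
print(decode_record(sample))
# {'LOB': 'Personal Auto', 'State': '37', 'Program Code': 'Program type 3'}

A SQL view would express the same idea as joins from the positional fields to the reference tables; the Python version is only meant to show the shape of the transformation.
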
  3. Looking ahead
    1. Peter will get the summary report and work through the workflow next week. (A rough sketch of such a state-level summary appears after this list.)
    2. He will also get the auto rules cleaned out of company-specific rules; the group will take a deeper look at this.
    3. The common rules will be done the week after.
    4. Regarding the extraction pattern for producing the report: it is in serious QA at this point. Peter has some time booked next week to begin validating it with internal actuaries against Dale's spreadsheet.
    5. Peter will go through and explain every line of SQL now that the first extraction pattern is out of the raw development stage, working with what should be the final v1 data model. This was a key success from last week.
    6. Next week we will have the stat plan necessary to open source personal auto.
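
Below is a hedged sketch of the kind of state-level summary discussed above (items 9-12 of the error-report section) and planned for next week: errors rolled up by state, an 'under tolerance' indicator, and the proposed 'C' prefix marking critical error codes. The 5% tolerance, the critical-code set, and the sample rows are illustrative assumptions, not values agreed by the group.

from collections import defaultdict

TOLERANCE = 0.05                    # assumed: maximum share of records in error per state
CRITICAL_CODES = {"9.1", "9.3"}     # assumed: error codes that must be corrected

# Each detail row: (state, record_id, error_code or None if the record is clean).
detail_rows = [
    ("NC", "r1", "9.1"), ("NC", "r2", None), ("NC", "r3", "9.2"),
    ("OH", "r4", None), ("OH", "r5", None), ("OH", "r6", "9.3"),
]

def state_summary(rows):
    """Roll detail errors up by state, flagging critical codes with a 'C' prefix."""
    totals = defaultdict(int)
    errors = defaultdict(list)
    for state, _record_id, code in rows:
        totals[state] += 1
        if code:
            errors[state].append(code)
    summary = {}
    for state, total in totals.items():
        codes = errors[state]
        labeled = ["C" + c if c in CRITICAL_CODES else c for c in codes]
        rate = len(codes) / total
        summary[state] = {
            "error_rate": round(rate, 3),
            "under_tolerance": rate <= TOLERANCE,
            "errors": labeled,
        }
    return summary

for state, info in state_summary(detail_rows).items():
    print(state, info)
# NC {'error_rate': 0.667, 'under_tolerance': False, 'errors': ['C9.1', '9.2']}
# OH {'error_rate': 0.333, 'under_tolerance': False, 'errors': ['C9.3']}
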

C. Brief discussion about possibly moving the RRDMWG from Fridays to another weekday slot to accommodate AAIS's shorter summer hours. A 1-2pm EDT Wednesday slot was proposed; it works for some but not others. The move would happen at some point in May. The group tentatively looked at 11am EDT on Wednesdays as an alternative, beginning the first Wednesday of May, 5/3/23. To be decided during a following week with an official vote.

Recording: GMT20230414-170324_Recording_1920x1080.mp4

Discussion items

Time | Item | Who | Notes




Action items

  •