...

  1. Validations and how the rules engine/rules are working
    1. Peter has begun reviewing the rules as they stand, with a view to open-sourcing them and sharing them with the group.
    2. Company-specific rules are currently commingled with the generic AAIS rules. His working assumption is that companies with specific rules will not want those shared with other companies. (The others in the meeting strongly agreed.)
    3. For today, Peter pulled out a small subset of rules to walk through, with accompanying documentation.
    4. Over the next week, the plan is to go through auto, remove all the company-specific rules, and then review the full ruleset.
    5. Two documents to review today:
      1. AAIS rules engine documentation (name is tentative and subject to change) - will be rebranded for openIDL.
        1. There is a set of validation rules for each line, and some validations for dates and for geography and common validations.
        2. Rules get saved into an Excel template, which tests for four basic errors. The approach positively identifies errors rather than positively identifying correct values.
        3. There is one column for each position in the stat plan - e.g., LOB. 
          1. Example: rule 9.1 affects LOB 56 and specifies, for Program Code, '<>blank, 0,3,5,C,F' - meaning the field can't be blank but can be 0, 3, 5, C, or F. Drools syntax.
        4. A unique error code is associated with each rule (e.g., error codes 9.1, 9.2, 9.3). There is also a corresponding human-readable error message, along with 1, 2, 3, 4 ID numbers to make them trackable.
        5. It is possible to have multiple rules about any column.
        6. Different errors will emerge if the field is blank versus not blank but containing a restricted value.
        7. Errors may come up as 'missing,' 'invalid,' or in some cases 'state-specific.'
    6. When the SDMA runs, it pulls from these documents as configuration, enabling the rules to be run against the stat records.
    7. For reference: 'Rule Sets Setup' describes how the rules are written. 
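As a rough illustration of the "positively identify errors" approach described above, a rule like 9.1 might be evaluated along these lines. This is a hedged sketch, not the actual SDMA implementation: the field position, the split of error codes between the 'missing' and 'invalid' flavors, and the messages are all assumptions made for the example.

```python
# Illustrative sketch of evaluating a template rule such as rule 9.1 for LOB 56:
# Program Code '<>blank, 0,3,5,C,F' - i.e., the field may not be blank and is
# expected to be one of 0, 3, 5, C, F. Positions, codes, and messages below are
# assumptions for this sketch, not the actual SDMA configuration.

PROGRAM_CODE_POS = slice(10, 11)  # hypothetical position of Program Code in the stat record

def check_program_code(record: str):
    """Return (error_code, message) if the field is in error, else None."""
    value = record[PROGRAM_CODE_POS].strip()
    if value == "":
        return ("9.1", "Program Code missing")   # 'missing' flavor of the error
    if value not in {"0", "3", "5", "C", "F"}:
        return ("9.2", "Program Code invalid")   # 'invalid' flavor of the error
    return None                                  # no error positively identified
```

Because the check identifies errors rather than validating whole records, a record with no findings simply produces no output row, which matches how the error report only flags problem fields.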


    1. A question arose (M. Nurse) regarding reference tables with corresponding codes and whether they will be introduced. Peter confirmed that this is forthcoming, and that some decoding-specific tables are already in the database, which he wants to connect internally.
    2. High-level functionality touched on by Peter in the meeting: the engine can handle lists, ranges, integer ranges, numeric ranges, alphanumeric ranges, and sets. All are detailed in a corresponding explanatory table, along with the sources of the various kinds of errors and the individual functions.
    3. Between now and next Friday, Peter will take the actively used ruleset, remove all company-specific rules, and bring it back to a generic ruleset.
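A minimal sketch of the kinds of checks listed above (enumerated lists, integer ranges, per-position character sets). The helper names are invented for illustration; they are not the rules engine's actual functions.

```python
# Hypothetical helpers illustrating the check types mentioned in the notes.
# These names and signatures are assumptions for the sketch, not engine APIs.

def in_list(value, allowed):
    """Enumerated-list check, e.g. Program Code in {0, 3, 5, C, F}."""
    return value in allowed

def in_int_range(value, lo, hi):
    """Integer-range check: the field must be digits within [lo, hi]."""
    return value.isdigit() and lo <= int(value) <= hi

def in_alnum_set(value, position_sets):
    """Alphanumeric check: each character must come from the set allowed at its position."""
    return len(value) == len(position_sets) and all(
        ch in allowed for ch, allowed in zip(value, position_sets)
    )
```

Composing small predicates like these per column would mirror the one-column-per-stat-plan-position layout of the Excel template.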
  1. Error report - shared by Mason
    1. Last week, Reggie spoke highly of the downloadable error report associated with SDMA
    2. Peter and Mason retrieved report
    3. Breakdown of sample report - a combination of randomized/sanitized claim data
      1. Each row populated with an ID
      2. Each column indicates whether it pertains to that specific line of data or not
      3. Anything highlighted in yellow is an error that needs to be addressed. 
      4. A couple of small nuances: some fields apply only to personal auto, not commercial auto, so those fields might be blank
      5. Whether the record is a policy versus a claim determines whether the exposure field populates, given the nature of that specific data point
    4. Questions were solicited from the group about anything people want to see changed in the next report iteration. The group suggested that state IDs be included. Do we show any of the decoded values to make the report easier to read? (PA)
      1. Group said having additional coding on the record would be more confusing - since this is the basic stat record that will be used.
      2. Having both would be distracting and would demand multiple pages. 
      3. Main customers will not change and customers will be familiar with this basic stat plan view.
    5. Everyone will be given access to the reference codes
    6. It was suggested that we add dropdowns to the column headings showing the expected values (popout charts with data for all 50 states), but others disagreed, feeling this would amount to an overload of information and that it would not be possible to fit all of the corresponding info into a popout. It also gets complicated when working with different lines of business. Perhaps (it was suggested) just for enumerated fields?
    7. It was pointed out that auto is straightforward, but nuances, variations, and interrelationships become very complicated for the various personal lines.
    8. Peter noted that he's doing the reverse: he can take an auto record that comes in as a stat plan and, with a SQL query that produces a view, decode it.
    9. This is the detail report, but another report is used to show what needs to be done to get 'under tolerance,' by state.
    10. Detail itself can look long and laborious when only a couple of errors need to be corrected.
    11. It was noted that in the report the errors are typically singled out at the top, but one also needs the state summary, with the information detailed by state, which is a separate document.
    12. The group also suggested that a 'C' in front of select error codes would identify which are critical and need to be corrected, versus which are informational.
    13. Rules are very positionally-based, so if they aren't in the same place on two different stat plans, you can't use the same rule across lines. (There are some in the same positions across lines)
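The per-row detail plus by-state summary discussed above could be post-processed roughly as follows. The row layout, the error codes, and which codes count as critical are assumptions made for illustration, not the actual SDMA report format.

```python
# Sketch of turning per-row error detail into a by-state summary, with a 'C'
# prefix marking critical error codes. Data shapes and codes are hypothetical.

CRITICAL = {"9.1"}  # assumed: codes that must be corrected (vs. informational)

def summarize_by_state(rows):
    """rows: list of dicts with 'id', 'state', and 'errors' (list of error codes).
    Returns {state: {labeled_code: count}} for the tolerance summary."""
    summary = {}
    for row in rows:
        for code in row["errors"]:
            label = ("C" + code) if code in CRITICAL else code
            state_counts = summary.setdefault(row["state"], {})
            state_counts[label] = state_counts.get(label, 0) + 1
    return summary
```

A rollup like this would let a reviewer see at a glance which states carry critical errors, rather than scanning the long detail report when only a couple of corrections are needed.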
  2. Looking ahead
    1. Peter will get summary report and work through workflow next week.
    2. He will also get the auto rules cleaned out of company-specific rules; the group will take a deeper look at this.
    3. The common ones will be done the week after
    4. With respect to the extraction pattern for producing the report: it is in serious QA at this point. Peter has some time booked next week to begin validating with internal actuaries against Dale's spreadsheet.
    5. Peter will go through and explain every line of SQL - out of the raw development stage of the first extraction pattern, with what should be the final v1 data model. This was a key success from last week.
    6. Next week we will have the stat plan necessary to open source personal auto

C. Brief discussion about possibly moving RRDMWG from Fridays to another weekday slot to accommodate AAIS's shorter summer hours. He proposed 1-2pm EDT on Wednesdays, which works for some but not others. The move would happen at some point in May. The group tentatively looked at 11am EDT Wednesdays as an alternative, beginning the first Wednesday of May, 5/3/23. To be decided during a following week with an official vote.

Recording: GMT20230414-170324_Recording_1920x1080.mp4

Discussion items

Time | Item | Who | Notes




...