This is a weekly series for The Regulatory Reporting Data Model Working Group. The RRDMWG is a collaborative group of insurers, regulators and other insurance industry innovators dedicated to the development of data models that will support regulatory reporting through an openIDL node. The data models to be developed will reflect a greater synchronization of data for insurer statistical and financial data and a consistent methodology that insurers and regulators can leverage to modernize the data reporting environment. The models developed will be reported to the Regulatory Reporting Steering Committee for approval for publication as an open-source data model.
openIDL Community is inviting you to a scheduled Zoom meeting.
Discuss adopting the current AAIS stat plans and strategies for doing so (using the table and model suggested by Peter for prototyping)
Discuss the homeowners plan and compare it to auto to get a cross-section (Day 1 auto issues, etc.)
Peter opened the meeting at 1:11pm with the LF Antitrust Policy.
Ken began the discussion about the AAIS stat plan.
The auto stat plan we have has been approved in 49 states, which covers everywhere we report.
We are asking: what is the most agile way for us to test the relationship between the blockchain and the nodes?
Clarification of definitions (shared the data-model slide accordingly) and timing.
Relationship to harmonized data server.
Clarification of the record format and schema Ken created, and how data is laid out line by line. VINs are displayed at the end of the lines. The format follows the auto stat plan.
Slide presented to illustrate layout with corresponding codes.
Stat report extraction pattern
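To make the record-layout discussion concrete, here is a minimal sketch of parsing one fixed-width stat-plan line. The field names and positions below are illustrative assumptions only, not the actual AAIS auto stat plan layout, which defines its own coded, line-by-line format:

```python
# Hypothetical fixed-width layout: (start, end) character positions per field.
# These positions are illustrative, NOT the real AAIS stat plan specification.
FIELDS = {
    "state_code":       (0, 2),    # coded value per the stat plan
    "line_of_business": (2, 4),    # coded value per the stat plan
    "premium":          (4, 12),   # zero-padded amount
    "vin":              (12, 29),  # per the notes, VINs appear at line end
}

def parse_record(line: str) -> dict:
    """Split one fixed-width record into named fields."""
    return {name: line[start:end].strip() for name, (start, end) in FIELDS.items()}

# Example record: two 2-char codes, an 8-char premium, then a 17-char VIN.
record = parse_record("2419000012341HGCM82633A004352")
```

A real implementation would drive `FIELDS` from the published stat plan handbook for each line of business rather than hard-coding positions.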
Suggestion: the ideal way is to come up with the attributes we really want, broken up into what we can do today versus tomorrow. Two critical points:
How do we unpack some of the fields represented in the code?
Clarification of objectives re: openIDL: reduce the cost associated with stat reporting and increase efficiency. Do data calls in a less bureaucratic, swifter, more streamlined way.
An internal question about the necessity of the complex tooling itself: the process should be governed by someone else, otherwise there is no value to him. "Way too much tooling for what we want to do."
It takes considerable time to get data through. How can we prove that this technology works?
Discussion emphasized the criticality of statistical reporting; less concern about data calls than about making statistical reporting functional through openIDL.
Q: But is it being overcomplicated? A: That can probably be ironed out in the architecture group.
Some of the concerns raised (benefits, etc.) are valid: cost reduction, efficiencies. The data models will evolve to support them. A staged situation, where some advance at different rates than others.
We need to shift left features that can give us Day 1 Value. We need to simplify the back end.
We also need to prove that multiple parties (stakeholders) can work together. This is critical. Focus on future potential of this. We need to show key values including partnerships to keep moving forward.
Architectural concerns are valid. The blockchain may need to be proven again; per Nick, doing so "in a production type setting will be ideal," putting it in a more official capacity. (Broad consensus on this point.)
The first step is to have a data model we can prototype around. How are we going to get that data and produce the report?
We all agree there needs to be a standard data model; the technical path that follows is somewhat cloudy.
Outline of the functions that need to happen within the carrier.
Two kinds of data extraction: (1) the stat plan, which can be done from relational tables; (2) data calls, which make use of larger abilities to query. The stat plan can be produced from a simple data model.
Day 1: statistical reporting. Day 2: data calls. The plumbing needs to be built before additional fields are defined; augmentation can happen over time. The normalization process is critical for his organization given some of the data fields they have.
We can ingest data, normalize it, and put it into a table.
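The ingest-normalize-table flow described above can be sketched as follows. The column names and normalization rules are illustrative assumptions, not the agreed openIDL data model:

```python
import sqlite3

# Raw carrier extracts often differ in casing and formatting; normalization
# brings them to one consistent shape before loading. (Illustrative fields only.)
raw_records = [
    {"st": "NC", "prem": "1,250.00"},
    {"st": "nc", "prem": "980.50"},
]

def normalize(rec: dict) -> tuple:
    """Map one raw record to a consistent (state, premium) row."""
    state = rec["st"].upper()
    premium = float(rec["prem"].replace(",", ""))
    return (state, premium)

# Load the normalized rows into a relational table, from which a stat
# report (or a basic data call) can be produced by ordinary queries.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stat_records (state TEXT, premium REAL)")
conn.executemany(
    "INSERT INTO stat_records VALUES (?, ?)",
    [normalize(r) for r in raw_records],
)
total = conn.execute("SELECT SUM(premium) FROM stat_records").fetchone()[0]
```

The same relational table could then serve both Day 1 stat-plan extraction and simple Day 2 data calls, which is the "minimize moving parts" idea discussed later in the meeting.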
Baby steps are key; necessary to get something set in stone.
Outlined the concept of versioning as an answer.
As soon as Dale can produce stat reporting for auto and prove that it works we can move to additional levels.
Information must be current - this would be a huge win.
The challenge is an inevitable delay in the amount of time required to get the information. However, there is a value add: in the stat plan there is a lot of data we can get sooner. Maybe we should push for quicker data despite resistance from carriers. The transition from Day 1 to Day 2 should be trivial. Start by minimizing the number of moving parts needed to do basic data calls against the stat plan.
We need to codify the role of AAIS as a statistical agent.
Day 2 doesn't mean "two years from now." Start with statistical reporting as it is today; then we have the plumbing built in preparation for statistical reporting through openIDL. Following auto, we can work on additional lines of insurance.
We need to get enough auto carriers on board by the end of the year that we can produce data through openIDL.
Privacy concerns briefly discussed: the distinction between raw data and the stat plan.
Question: will the POC (as proof of value) precede the production stage itself? In other words, we show value first, then move into production.
We know we can move forward with the Day 1 model now; Day 2 in the near future. Auto needs to be proven first, per the request of several stakeholders. How do we find a critical path to get a POC for auto completed? Production can come later. (David agreed with this point.)
Dummy data vs. actual data: the latter could take up to 24 months. Clarification of the definition of "production."
Discussion of timing to finalize architecture and execution of POC.
Discussion of the continued value of the RRDMWG working group (David, Dale, etc.): how to continue to derive value from this group. David outlined suggestions for how to do this.
If the POC doesn't work, we are not moving past it.
If we get some lift from auto we can get more value sooner.
We need to get the initial stat plan in place and see how it generates value; Homeowners et al. and additional fields come later. Pointed to the TSC as the next step.
Discussion of meeting with the TSC: problems result from two siloed groups; we need to bring everyone together (Truman). Proposal of a joint meeting, with broad conversations involving all concerned, which will make alignment easier. Include regulatory participants as well as other participants.
Note about forthcoming Architecture Working Group calls, an additional TSC meeting, etc. There is some room for in-person meetings before the next TSC; in person may be necessary for the scoping discussion.
Question: can we merge some of the meetings? One suggestion: use next week's meeting exclusively for a scope conversation, with a broader invite. Consensus was taken, with no objections at all. This clarified the agenda for next Friday.
Tech guys can be invited to meeting next Friday.
Suggestion that we must discuss both critical path and scope at meeting on 4/1. Broad consensus.
The purpose of POCs was defined in line with the Linux Foundation's precedents.
Suggestion to draft a basic POC in preparation for the scope discussion next Friday, 4/1. Others agreed. Peter will get the requisite documents to do this.
Multiple perspectives within the blockchain framework were outlined; they need to be considered and weighed. Differences of opinion, perspective, etc. need to be resolved. This is critical.
Discussion clarifying the complexity of the extraction to be done for the initial POC, per the concerns discussed earlier.