This is a weekly series for The Regulatory Reporting Data Model Working Group. The RRDMWG is a collaborative group of insurers, regulators and other insurance industry innovators dedicated to the development of data models that will support regulatory reporting through an openIDL node. The data models to be developed will reflect a greater synchronization of data for insurer statistical and financial data and a consistent methodology that insurers and regulators can leverage to modernize the data reporting environment. The models developed will be reported to the Regulatory Reporting Steering Committee for approval for publication as an open-source data model.

openIDL Community is inviting you to a scheduled Zoom meeting.

Join Zoom Meeting

Meeting ID: 989 0880 4279
Passcode: 740215

One tap mobile
+16699006833,,98908804279# US (San Jose)
+12532158782,,98908804279# US (Tacoma)
Dial by your location
        +1 669 900 6833 US (San Jose)
        +1 253 215 8782 US (Tacoma)
        +1 346 248 7799 US (Houston)
        +1 929 205 6099 US (New York)
        +1 301 715 8592 US (Washington DC)
        +1 312 626 6799 US (Chicago)
        888 788 0099 US Toll-free
        877 853 5247 US Toll-free


Discussion items



Meeting Minutes

I. Introduction - Peter Antley commenced the meeting at 1:03pm EST and read the standard Linux Foundation Antitrust Agreement.

II. Business at Hand

A. Overall Recap.

1. Mr. Antley noted that they had spoken with Mr. Bradner, Mr. Lowe, and Ms. Crews regarding their expectations for openIDL.

2. Mr. Antley also noted that the RRDMWG had taken some time to solidify its work.

3. Mr. Antley also pointed out that he has been developing an extraction pattern based on some of the pre-established data models.

B. Today's Agenda - The meeting began with a review of what Mr. Antley has been working on, then segued into related questions.

1. Review of data models for initial MVP Reporting.

a. Mr. Antley pulled up two different records on the screen that were anonymized.

b. He called the group's attention to the data model on the left - the Alabama Auto Coverage Report - and initiated an extended discussion of it. (The report contains liability, comprehensive, and collision coverage types, broken out by subcoverages, limits, deductibles, etc.)

c. The model is built on the data dictionary the group came up with originally (claims data excluded). Mr. Antley ran through the various categorizations, represented as columns - earned premium, incurred loss within limit and in excess, etc. - which are common to almost every report. He explained the 'car years' category as a "specific aggregation for the auto line."

d. Mr. Antley explained the basic template for these reports: each one has a state, a line of business, and coverages into which it is broken out.

e. Mr. Antley noted that they had spoken with Mr. Bradner and Mr. Lowe, who expressed a desire to run queries that approximate this same pattern.

2. Data dictionary and related conversions from stat data

a. Brief recap of the data dictionary from two meetings ago, which the group sought to populate from its stat records. It comprises several main objects: a policy object, coverage object, driver object, vehicle object, and claim object.

b. Mr. Antley discussed the claim object in detail and broke it down, noting slight differences between premium and loss transactions.

c. Mr. Antley: Mr. Sayers developed a processor that takes the raw stat data from the coded files, matches it against the documentation, and converts it into a very simple, straightforward version of the data dictionary.
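A minimal sketch of how such a fixed-width-to-dictionary processor might work. The field names, positions, and sample record below are hypothetical illustrations, not the actual AAIS stat plan layout:

```python
import json

# Hypothetical fixed-width layout (name, start, end); real stat plan positions differ.
FIELD_LAYOUT = [
    ("state_code", 0, 2),
    ("line_of_business", 2, 4),
    ("coverage_code", 4, 6),
    ("accounting_date", 6, 12),   # YYYYMM
    ("written_premium", 12, 21),  # whole dollars, right-justified
]

def decode_record(line: str) -> dict:
    """Slice one fixed-width stat record into a simple dictionary."""
    record = {name: line[start:end].strip() for name, start, end in FIELD_LAYOUT}
    record["written_premium"] = int(record["written_premium"] or 0)
    return record

# One made-up coded record, rendered as JSON for the data dictionary.
raw = "01190120220300001250"
print(json.dumps(decode_record(raw)))
```

Once records are in this shape, the next processing level can consume plain key-value data instead of positional fields.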

d. Mr. Antley broke policy object down into calculable objects based on existing stat data.

e. Key gap: the sample 2012 coverage record lacked an expiration date and effective date, and this gap needs to be addressed. Noted: the record still has a coverage code, which works well with the next level of processing, and it has most of the prerequisites for coverage.

f. Mr. Antley noted that "N/A" signifies a field that is admitted in the stat record and available, versus a blank, which means the field is not in the stat record at all.

g. The code as it stands now is adequate to calculate historical earned premiums, present full reports, and handle standard time-sensitive queries (e.g., four months, six years, etc.).

h. Proposal: the original thinking was "let's just use the stat plan as it is." But as data is currently ingested, the processor takes the accounting date and assumes it is close enough to the coverage effective date, and that is how the math is done. Mr. Antley pointed out that this needs to be rethought: we would be in a better position if we had the coverage effective date when ingesting the data to begin with. We could also take the data in a JSON format in lieu of the current record format.

At this point Mr. Antley opened a discussion of these proposed modifications within the group.

3. Discussion within Group - Thoughts, Insights, Etc.

a. Mr. Lowe pointed out that this is earned premium, which requires dates to allocate it to the correct calendar year. He expressed confusion at the absence of dates: is that simply this particular record, or is this not something we are collecting here?

b. Mr. Antley responded that with stat plan collections as presented, we are not collecting the expiration and coverage effective dates. Mr. Lowe asked in response: how, then, does the stat plan define the accounting date?

c. Mr. Antley: as of now, it is treated as the date the policy took effect. Mr. Petruzielo responded that at The Hartford, for a premium record, the 'accounting date' is the quarter in which the transaction hit their books - for instance, a first-quarter 2022 record might reflect an endorsement on an in-force policy from the previous year. Mr. Antley noted he is not the SME on this; the processors were built before his time and were audited, and his focus has been decoding the stat plan daily.

d. Mr. Lowe: three months of calendar exposure, so we are effectively looking at a quarter of data. Mr. Petruzielo agreed: "At The Hartford, 'first quarter 2022' is any premium transaction that came onto our books during that quarter; this is the statistical aspect of it, balanced back to the financials that we report to everybody." Mr. Harris asked for clarification - for that quarter, correct? - and Mr. Petruzielo agreed. Mr. Harris: not a quarter's worth of exposure, but a year's worth, or whatever time is indicated. Mr. Petruzielo: this is written, not earned, premium.

e. Mr. Antley: we are grabbing it as the written premium and then calculating earned premium based on the written premium.
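As a rough illustration of the written-to-earned calculation being discussed, assuming pro-rata earning over the policy term (the dates and amounts are hypothetical, and this is exactly the step that requires the effective and expiration dates the group noted are missing):

```python
from datetime import date

def earned_premium(written: float, effective: date,
                   expiration: date, as_of: date) -> float:
    """Earn written premium pro rata over the policy term up to an evaluation date."""
    term_days = (expiration - effective).days
    if term_days <= 0:
        return 0.0
    elapsed = (min(as_of, expiration) - effective).days
    elapsed = max(0, min(elapsed, term_days))  # clamp to the policy term
    return written * elapsed / term_days

# A hypothetical 12-month, $1,200 policy evaluated mid-term:
ep = earned_premium(1200.0, date(2022, 1, 1), date(2023, 1, 1), date(2022, 7, 2))
```

Without real effective/expiration dates, the current approach can only substitute the accounting date as a proxy, which is the approximation Mr. Antley suggested rethinking.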

4. Return to Discussion about Future Usage of Records (as opposed to how we are using them now) - reiteration of the suggestion to incorporate coverage effective and expiration dates into records

a. Mr. Harris pointed out that because statistical files lag reporting by two years, there is effectively no difference between 'written' and 'earned.' When we want to start doing something more current and calculating loss ratios, that is where we need effective and expiry dates for policies. Having more information is especially pertinent as we get into day-2 and day-3 fields. But we have to work with what we have for now, to make sure the plumbing works.

b. Mr. Braswell: using a more flexible format is preferable. Mr. Antley: the stat plan is a fixed starting point. He agreed, per Mr. Harris's request, to map the statistical data into JSON. But does it make sense to add three additional elements - ASL, coverage effective date, and expiration date? Making the processor work in a more sophisticated way might mean throwing the initial version out, but he does see the value of 'testing the plumbing.' It will mean more work on the part of Travelers.

c. Mr. Petruzielo: at the Hartford, they are fine adding 3 new fields to a fixed format; less enthusiastic about repiping everything into JSON.

d. Mr. Braswell asked if it would be possible to make the utility that converts the fixed format to JSON available as a tool carriers can use. Mr. Antley agreed, and pointed out that such a utility is already available on GitHub.

e. Mr. Harris objected to this idea, as it would create complications across different carriers. "This is an openIDL thing."

f. At this point Mr. Antley consented to merely adding the three fields in question to avoid rocking the boat.

5. Challenge of Using the Existing Architecture in a Single Run to Combine Various Policy Records to Calculate Earned Premium for a Policy.

6. Questions - Mr. Antley has been working on a processor to take all of the individual records and roll them up into a single policy-based schema. He presented what he has developed, then raised some key business questions about it for the group.

a. Question one: If I have an auto policy and a homeowners policy, can both be grouped under the same policy identifier, or are these two different things? Mr. Petruzielo and Mr. Harris: generally separate, but the base policy number might be the same, with different prefixes for auto vs. homeowners. (The group pointed out that adding the annual statement line might help rectify this.)

b. Question two: We have policy, line of business, coverages, and coverage records. Is this a decent way to group what we're doing, or will it create additional issues down the road? Mr. Naik sought and Mr. Antley offered clarification on the breakdown, and Mr. Hoffman helped clarify further.
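One way to picture the proposed policy / line of business / coverage / coverage-record nesting. Every field name and value below is illustrative only, not a settled openIDL schema:

```python
import json

# Illustrative nesting only: policy -> lines of business -> coverages -> records.
policy = {
    "policy_id": "POL-000123",               # hypothetical identifier
    "lines_of_business": [
        {
            "annual_statement_line": "19.2",  # e.g. private passenger auto (illustrative)
            "coverages": [
                {
                    "coverage_code": "BI",
                    "coverage_records": [
                        {"accounting_date": "2022-03", "written_premium": 250},
                        {"accounting_date": "2022-06", "written_premium": 250},
                    ],
                }
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Keeping the annual statement line at its own level is one way to address the auto-vs-homeowners grouping question raised above.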

7. At this point, Mr. Antley pivoted and asked Mr. Williams to lead the discussion with an exploration of various reports; Mr. Williams said yes, he had a few to share.

a. AAIS 2010 Calendar Year Private Passenger Auto Accident NAIC Coverage Report

b. 2016 Washington Homeowners' Report

c. AAIS 2010 Calendar Year Private Passenger Auto Accident NAIC Territory Report

8. Mr. Antley suggested that the group obtain 18 months of solid auto policy data - premium and loss records, either dummy data or de-identified data - to use and share, and opened the request to the group to see who could help. The data needs to be in an ingestible format. He laid out the rationale, and discussion ensued within the group about ways to anonymize the data. No one immediately responded, so Mr. Antley left the question open.

a. Mr. Hoffman clarified: 18 months of statistical data? Mr. Antley agreed.

b. He is seeking coverage dates and ASLs, and wants to make it work the way we have it now. "18 months of stat data would be excellent."

c. Mr. Hoffman: would prefer a spec on what that data looks like, so Travelers can walk it through all of its standard protocols regarding allowing third parties to use data. He requested this spec from Mr. Antley, who agreed to get it to Travelers by Tuesday 6/21.

d. Mr. Harris's only concern: if the data comes only from Travelers, it will look like Travelers data, and at that point it is no longer anonymized. (Mr. Hoffman agreed.)

e. Mr. Antley agreed to try to pull a smaller subset, run a filter, etc.

Meeting adjourned at 1:43pm EST.