...

Reconciliation (manual on day 1?)

Extraction error detection & handling

  • SC - Reconciling statistical data (in HDS) to the financials, financials of company would be NAIC reported financials, final check that stats matched financials and if they didn't: why
  • KS - time to get it right
  • SC - a company of TRV's size takes a while due to volume, try to do it on the quarter, timing depends on when it is due. Did come up with Regs: today a lot of stat agents don't ask for reconciliation until the end of the following year. Question: how soon could you get an auto report? One said "June" and she asked "does that include reconciliation?" The stat agent said "we have 'edits' that give them comfort the data is reasonable, but ultimately how do you know if you have everything if you can't reconcile back to something?"
  • KS - reconcile data to x numbers, ask openIDL to recon against these #s or give the data
  • SC - comes down to the role of openIDL and who is taking responsibility; today stat agents take responsibility that the data is valid, the role of the stat agent is to add a level of review/authenticity
  • KS - stat agents resp for reconciliation
  • SC - NAIC info is public, openIDL or stat agent could get independently
  • KS - when, and can it enter HDS un-reconciled? it has to be; attestation is some report that requires it to reconcile before you run a report on that data - if we do stat reporting, the stat report won't do the reconciliation, it will have to be reconciled before; 50 stat reports won't all reconcile, so either "assume it has been reconciled" or it couldn't be run
  • PA - trying to have data in a correct form available, trailing by a quarter; to the best of the carrier's ability it's available, trailing by a quarter
  • KS - less than 5% errors and reconciled by financials
  • PA - do we have to do reconciliation? Verisk said they could do some reporting faster than a year (didn't need the yellow book)
  • DH - you need to do reconciliation as part of the stat handbook
  • SC - brought up reconciliation, Verisk said "have other ways to show data is accurate"; could report $10MM of auto but you have $15MM, under 5% and reasonable - not sure what ways Verisk has
  • JB - differences between the financial report vs data from stat reports; reconciliation tries to explain that difference - an explanation of what the differences are - reconciliation is an explanation of (small) differences - info in the financial report is high level
  • PA - stuff coded from one value to a diff value, sometimes can be very large, not always small reconciliations
  • SC - multiple reasons for errors (missing data, bad input, fat fingers); then in reconciliation there are reasons you would differ. NY has a "free trade zone credit" that isn't reported statistically; for one entity it is half of auto premium in NY - big, but there is a reason for it
  • PA - how do we see reconciliation happening in a mature state, and what is AAIS' role in it? right now very involved with carriers, helping to bring it together; set it up in a mature state so it is "self-service", AAIS not too involved, highlighting where there is a problem - or do we need the stat agent heavily involved?
  • KS - don't make that decision yet; a reconciliation process supported by openIDL, get data from NAIC, a comms process for carriers to explain why, a process that lets carriers attest to data being reconciled
  • PA - if the carrier had a table with yellowbook info by quarter and line, could have an extract pattern comparing HDS to stat records to yellow book (see the reconciliation sketch after this list) - where do we want to keep the yellowbook info?
  • KS - public data - maybe access thru API, may need a spike to see what's the best approach
  • PA - as of now, aside from yellowbook, haven't needed to go out of network to answer question, can load yellowbook data in, keep inside network, some kind of external source
  • KS - now? where?
  • PA - NAIC
  • KS - phys? API, file?
  • PA - 10 page CSV, loaded quarterly, all 2k carriers on it, all lines, metrics for said lines
  • KS - worth digging into, find best approach (align to reconcile data, when avail)
  • PA - will run down this week
  • SC - financial data reported quarterly to NAIC
  • PA - reached out to NAIC, this year have quarterly data
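
To make the yellowbook comparison above concrete, here is a minimal reconciliation sketch of what a quarterly check could look like. It assumes a hypothetical column layout for the NAIC quarterly CSV and hypothetical HDS record fields; nothing here is a defined openIDL interface, and the 5% figure is just the tolerance discussed above.

```python
# Hypothetical sketch: compare HDS stat-plan aggregates against NAIC quarterly
# financial figures ("yellowbook") and flag state/line combinations that fall
# outside tolerance. File name, column names, and threshold are illustrative.
import csv
from collections import defaultdict

TOLERANCE = 0.05  # "in and under 5%" per the discussion above

def load_yellowbook(path, carrier_naic_id):
    """Read the quarterly NAIC CSV and return {(state, line): written_premium}."""
    financials = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["naic_id"] != carrier_naic_id:
                continue
            key = (row["state"], row["line_of_business"])
            financials[key] = float(row["written_premium"])
    return financials

def aggregate_hds(stat_records):
    """Roll up premium from HDS stat records by state and line."""
    totals = defaultdict(float)
    for rec in stat_records:  # e.g. rows pulled by an extraction pattern
        totals[(rec["state"], rec["line_of_business"])] += rec["premium"]
    return totals

def reconcile(hds_totals, financials):
    """Return per-(state, line) differences, flagging anything over tolerance."""
    exceptions = []
    for key, reported in financials.items():
        stat = hds_totals.get(key, 0.0)
        diff = abs(stat - reported) / reported if reported else 1.0
        if diff > TOLERANCE:
            # e.g. NY "free trade zone credit" premium that is not reported
            # statistically would show up here and need a documented explanation
            exceptions.append({"state_line": key, "stat": stat,
                               "financial": reported, "pct_diff": diff})
    return exceptions
```

As noted in the discussion, anything this flags is not automatically an error; a known difference such as the NY free trade zone credit just needs a documented explanation.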

Data Quality

Extraction error detection & handling

  • Everything needs to be edit-able
  • Fixes don't happen in current month (monthly correcting and then moving on)
  • Latency of error correction could be a year
  • need to make sure we have a facility to capture corrections made while NOT bastardizing HDS
  • internal or architectural? DR is aware
  • SC - Errors:
    • missing information (on record provided)
      • current environment vs future
      • today - flat file from upstream, flat file submitted with missing limit, info passed to AAIS, flagged by AAIS, returned to carrier (can see instantly by state), these 2 states need fix made, go into SDMA to make fix then submit, AAIS approves, loaded by AAIS
        • it had already gone thru the edit
    • DH - load into SDMA, not approved yet, Susan makes corrections, goes thru edit again once Susan made corrections (see right away if fix worked), if in tolerance it is "approved" by AAIS
    • PA. - doing upload to SDMA, staging area, AAIS not running load until it is approved (edit package engaged)
    • SC - loading it to AAIS system, told to fix errors, fixes, then "officially submitting" and AAIS "approves"
    • PA - can't go to HDS until "approved"
    • DH - where within the process is the edit package? where is the facility to correct the errors? if HDS is supposed to be matching the source systems, then we shouldn't be making changes to HDS for purposes beyond stat reporting - decision in ArchWG - how to handle error corrections and the fidelity of HDS
    • PA - direction: making an update, how do we go about making corrections to data already inside HDS? the first example is data before HDS - a different error type
    • JB - case Dale mentioned, HDS is out of sync with source system, SS has error, needs time to fix, copies of DB with errors to be corrected - would suggest errors corrected get corrected in HDS but a log to inform source system of corrections as made - instead of lots of copies of collected data
    • JM - crossing boundary - doesn't care what carriers do - where do we stop caring - only thing, HDS has to be right, up to the carrier how they get it right
    • JB - yes, but instead of making a fix and a copy of the DB it seems it should be fixed in HDS
    • SC - internal issue, AAIS needs to edit data, that's their job; if they say "2 errors" and they get fixed she says "done" and pushes to HDS - a conflict with the source system is something SHE deals with
    • JB - transferred to AAIS for edit checks
    • PA - held before the data lake until after it is corrected
    • JM - can't occur until content is in it
    • PA - edit pre ETL
    • JB - do it 2x, if you correct HDS need to run edit in that environ
    • PA - how do we handle the chicken-and-egg issue
    • JM - policy vs implementation? - HDS is a great cutoff point, everything inside, up to the carrier to get it right in HDS - BUT edits tell you what's right - carrier accountable up to HDS; if accountable on the carrier side and can verify before HDS, do the edits, send to HDS - what if I say "right, but the edit stuff can't run until the other side" - already loaded to HDS - now what do I do? - accountability? where to run edits is the key question
    • PA - edit package run today, run in the ETL on load, no knowledge on load - 2nd part: AAIS does reconciliation after, sometimes errors arise
      • error type 1 - pre-HDS, edit package fails on load - but what if it's loaded in HDS, what is the recon process and what is the process for that
    • JB - financial types of reconciliation
    • PA - yellowbook #s, compare #s submitted vs financial #s and due to granularity things come out wrong, financial reconciliation before stat reporting
    • JB - 1x year vs monthly
    • JM - reconciling financials? where?
    • SC - public info
    • PA - reach out to the team with a gap analysis, grey areas in coding vs what they have, validate where/why numbers are off
    • SC - those aren't errors; do reconcile, out of process it doesn't become errors; differences and reasons why page 14 doesn't match - but NOT errors
    • PA - the validity AAIS gets from turning in reports for carriers - not only passed the edit package but the biz data matches the fin data, and a reason if it doesn't - why states listen to AAIS; how are we ensuring we are doing stuff correctly
    • JB - diff record exception 
    • JM - annual value add - edits? does HDS need two stages?
      • think it's right but flag, then run edits and get "ok/not ok" - question - who runs the edits? in principle edits run on any one centralized db
    • JB - copy of edits made avail to all
    • DH - one body resp for edits, not every single carrier 
    • JM - you put data in HDS, centralized code runs on all dbs, put into HDS in some manner "this is not fully approved/edited" and decision: edit in place or is it a 2-stage thing?
    • SC - even if every carrier ran the edit package themselves, ultimately AAIS HAS TO RUN THE EDIT PACKAGE - responsibility lies with the statistical reporting partner
    • PA - extract patterns to return T/F that a package was run - do we test on clean or dirty data
    • JM - edits form of extractPattern, is it sufficient if it checks all the data
    • PA - regulator! 
    • JM - need feedback - run edits, if answer wrong, accountability to get it right
      • phys load or set flags
    • PA - should be running edits before load,
    • JM - WHERE? edits have to be consistent lang, thing needs to be well-defined structure
    • PA - rules engine, java, repackage rules engine as step in process going thru load (pass/no pass) 
    • JM - the engine has to run against a well-defined struct - b/c our data runs against a well-defined struct, now are you in HDS? put it into a well-defined struct to run the rule; that is the post-edit version of that structure
    • PA - messaging format of HDS - stat plan, objects, run edit package against that 
    • JM - stat loading and knowledge; if you run edits against that, once it passes - put it somewhere else or flag it - 2 concepts, pre and post - saying to all carriers it needs to be PRE data but it has to have a shape - HDS? JM perceives that when you demand "struct in a diff way" he sees it as HDS
    • PA - a diff pipeline, but sees why it is outside of HDS
    • JB - a data standard for saying how data will be considered; keep in mind the distributed arch, AAIS can't run anything on a db at the carrier - raw, won't be sent to AAIS
    • PA - collections of stat records, running rules against them; if HDS is the stat plan JSONified, run EPs, passed validation and a legit extract
    • JM - HDS is JSONified stat stuff, edits, things all can see are ALL HDS in his mind - if prescribing shape b/c edits won't work, first place carriers have to do that
    • PA - pipeline A before HDS, where prescribed the data hits first
    • JM - widget shape here, then EP - prescribing the shape, a set of edits, then HDS - pipeline A is a prescribed shape, do whatever it takes to get it right, once the edit is passed drop it into HDS
    • DH - wants to have DavidR weigh in
    • PA - Pipeline A (infra before HDS), need to pull rules engine, before how much do we want to control creation? JM talking about HDS being a larger thing, where does the balloon around openIDL begin? PipelineA is infra, carrier does all before? will still design load up to plugin
    • JB - think of pipeline A as data format
    • PA - wont process and give feedback
    • JB - need data format to be standard to run rules against, gives flexibility to reconstruct design with same format (transit from flat file to whatever). 
    • PA - docker image with initial process? where is the official inbound point of openIDL community vs carrier
    • JM - one step at a time - HDS in the dark (far right), run extract patterns on it - before HDS it has to pass edits - edits need to be centrally maintained, if DRules is expecting something - pipeline A - already in that shape - saying to carriers: prescribe the format of HDS, to be right prescribe the edits, it has to hit a prescribed shape here - the carrier can do whatever to get into that form; that form is prescribed, java thing, json, all prescriptive, no flexibility
    • PA - HDS, can write queries against it; layering other things is not HDS
    • JM - centralized group do edits, carriers get it into that shape, must be part of standard of stuff to be prescribed
    • PA - meat of Drules, lot of it is testing stat plan, start ingesting as json, checking positionality
    • JM - thou shalt not load HDS until edits passed, edits managed, approved format, carrier must get data into shape - reload until passed and THEN move to HDS
    • PA - can we have a bucket, fire lambdas against it, won't move to secondary bucket until passes
    • DH - suppose we use HDS for other things, communicating with reinsurers, something outside of stat reporting; now the HDS doesn't necessarily reflect the source systems
    • JB - source consistent, take time to get corrected, logically - more correct versus HDS
    • PA - HDS more right than source system
    • JB - fixed at HDS but not at source
    • JM - policy, carrier accountability, edit finds something wrong, iterates on changes, if it takes 6 months to get back to source, for next 6 months other reports don't reconcile - accountability in governance statement "if you find an error you are accountable to reconcile"
    • JB - consolidated data in HDS for other purposes, if corrections were in HDS the right place to do it
    • JM - better that doesn't line up is wrong
    • JB - log for where / when changes done
    • JM - carrier accountability - more right data - where is the accountability to the carrier? whatever it takes upstream - a "tell us the changes you made" requirement - a log that says "to get this loaded here are the 7 edits" - accountability to make it transparent
    • PA - meta on each row with last update date and what changed
    • BH - if systems don't reconcile - BAD - what else are we doing with it? problem to be solved, may be a log, sounds painful
    • SC - reality - keeps a log today (she does, of every change made) - in most cases it's data SC didn't get on her file (stat file) - is it really diff from the source system? she didn't get it on her file due to mapping upstream - know the zip code is wrong or the VIN is wrong, don't change things in her file or tell the source system there's too many (agents inputting) - ok if under 5%
    • JM - practical question - do edits - syntactically and semantically: find an alpha, don't know if someone mistyped a VIN, but no idea if it's T/F in the real world - HOW RIGOROUS DO EDITS NEED TO BE? - even if edits flag an error, can we accept it?
    • SC - happens all the time, might get edit "limit on policy is $1MM and you got something else - not an error"
    • JM - 2 levels of edits? showstopper (dead) and one we accept
    • SC - won't ignore the fact an error was received, will go and look "did I have the right limit" - edits help understand if there is a problem - is it internal edits?
    • JM - what is the purpose of an edit? don't edit more than you have to - what is the purpose in this context - all sorts of mechanisms for internal correction - don't edit more than you need to without purpose - some things you have to fix; principle: only put in edits b/c there is a hardcore reason to do it (not just to clean data)
    • JB - work to be done - application and analysis and insight, not policy-level corrections
    • JM - do edits have levels? severity of error (which means will it be addressed)
    • JB - sanity check errors vs record format errors - can and will catch but WHERE in process
    • DH - gut check for AAIS as stat agent on how rigorous they need to be
    • JM - levels - showstopping and scary and "oughta check"
    • JB - accuracy in general (THRESHOLD)
    • JM - confidence scores from address cleansers - 
      • showstoppers (break system)
      • competency score (".7 good enough? yaaay")
    • JB - data quality scores, pick battles
    • SC - basic: does every field get a value - current and future, if not ABCD - is that field filled? if so what's in there, nebulous - stat agents bear the responsibility that "data is reasonable", know it is not garbage; how much has to be "good" - what does "good" mean (every field filled w/ a reasonable value)
    • JM - mTable that does this - argument - for every field "type, table, range, = score"
    • SC - come across something, didn't meet the threshold, kick it back?
    • JB - levels determine response
    • JM - governance? - value, string, etc. - don't measure if you aren't gonna govern it - if you are gonna put a rule in there, must have a governance policy - arch has to provide for an edit layer and a series of thresholds to get a score, and governance policies by score (see the edit-scoring sketch after this list)
    • JM - pass/fail and scoring
    • PA - extra metadata for user queries
    • KS - "close out the quarter", might go back and add to it - close out means can't change later, cant put in records that apply in that 1/4 later if you "close out" - do we need some way of sensing we are opening up a 1/4 again and need to re-assert it is ok?
    • SC - have had situations where we discovered issue and "need to fix year" for a line or situation, b/c today timeline is so stretched out - takes too long - go back and adjust the year b/c the reports hadn't been issued - recent sit in MASS where they wanted to change format of something, had to refile and had to insure when refiled the dollar hadn't changed at all b/c they closed out the quarter already - nice to say "over/done", this case money wasnt part of the problem, but if discovered issue with $, must be some threshold, why would you go ahead with an annual report KNOWING there is missing $ - can update #s quarterly for up to 2 years, as necessary, REGS want quick/soon data, how long keep something open - "close it out" - can't just say "ill make sure under 5% at end of year" at the very least 1/4 has to be finished as best of your knowledge
    • DH - does AAIS have to close out quarters
    • PA - getting better for the future, like what TRV doing smaller slices, update by quarters for all
    • DH - do we really need to close a quarter?
    • SC - maybe not "close" but maintain data integrity 
    • DH - is there metadata that needs to be established that says "ok, data thru Jan is within tolerance", accumulating over the course of time
    • KS - across all data, or individual state?
    • SC - 5% has to be BY STATE BY LINE
    • KS - if you don't load anything over 5%, don't allow it, can't be over - closing out, interesting, discussing "attestation" - attest as of a date: the date range is good, the edit package says "quality data"; attestation of a range of data ("as of X date, data for Qx is good, use for reporting now")
    • DH - some time in the future, find something wrong with Qx, "as of today I can say last month's data is correct"
    • JB - change month or period, re-run that check
    • KS - update data to be in sync with source systems (HDS is not the source of truth) - any time there is a problem with upstream or ETL; requirement: closing out a quarter by attesting "as of X date, quarter 3 has been loaded and any change to that must be re-attested" - as simple as "up to this date"
    • DH - other than the ETL issues DR described before, something funky happened between source and HDS, different than what Susan is describing with "true errors" - when fixing those errors there will be a new set of transactions, a new load with corrected info; as that's done it will be run thru the edit package and maintain the 5% tolerance
    • KS - if a transaction is changing data, then a report from that period would have gotten different data - need to reattest
    • PA - Regs want us to strive to make the data better, not a req to reproduce a report as of when it was generated
    • KS - this req: I changed data and re-attesting it is ok; just saying the data changed, not saying reproduce - CLOSE OUT: loaded data, ready for reports to run, now changed data, needs to be auditable; data that was there and attested to, changed after it had been closed out
    • DH - to go back as a req - do we need to close anything out? don't see the purpose to having it "close", a policy this year will have claims for the next 10 years. I can't close 2021, can close data for 2021, not sure what "closing" means
    • JB - not the same as closing a financial report - this is a data qual check to make sure threshold still valid for a time period, re-attesting - can still add data
    • DH - making a glitch correction vs fixing data; SC's example: not changing data but adding new records to fix what's out there, transactions thru edit, within tolerance
    • KS - data that's read by a report in that time period will get different data - does it matter - close or re-attest
    • JB - get rid of idea of "closing"
    • SC - be careful, "closing" is semantics, at some point to produce timely reports needs to have deadline, today report monthly to AAIS, report monthly to other stat agents that req monthly, needs to be there 45 days afte rmonth ends OR 45 days after end of 1/4 (AAIS), regardless of when sent needs to be in and under 5% by May 15, to produce reports, have to be timelines, dont wait until end of year
    • DH - small carriers who only load 1x a year
    • PA - due to old contracts, moving them to openIDL on a diff cadence
    • SC - good example: if you only report annually and now report Feb 15 for the prior year, it runs thru the edit package and finds errors - is it in and under by Feb 15? clean by when? the longer you go the longer you push out when you do reporting
    • PA - diff in the future, spring 2021 lots were turning in stuff late, no repeat of that
    • KS - assumption: nothing in HDS above 5%
    • DH - needs to be architected, is there a precursor to HDS where info loaded, read into edit package, correction then HDS or is HDS a landing point and a secondary DB for stat reporting that has the correct info - how do we put that plumbing together
    • PA DIAGRAM
    • DH - many errors we're dealing with are omissions, coming from plumbing, ultimately from the source into the data files used to create stat files, where info has not been provided that should be; while the stat file may not rep "truth" the corrections should rep TRUTH
    • KS - attesting data loaded in HDS is TRV's ability to tell the truth; won't match the source systems for reasons, but attesting it is the data you can put into HDS, for stat reporting
    • DH - attesting to "good for stat reporting"
    • Everything in HDS is usable for stat reporting
    • DH - outside of HDS do we need metadata that says "as of Aug 20, info in the HDS, the last load was in tolerance and the sequence of loads into HDS are within tolerance" - do we need to include control mechanisms (policies, premiums and losses)
    • KS - opinion regarding claims vs policies, can't use for loss data up to this date, certain years old before used for loss reports
    • DH - "accident year" won't close for several years; have info, "incurred losses" is what they THINK it will be, may change over time
    • KS - attesting that data in HDS is good up to this date
  • 08/22/2022
    • DR - Can't start making changes to HDS directly, it gets out of sync with the source system; can end up not matching source systems and then it's a state-management problem, hairy: load new data, what edits were already made? (not better than what we used to do) - doesn't think you can edit directly in place, HDS in his mind is still design tenet one: a faithful rep of back end systems... Dale made clear the need to have a facility to make changes, can't do it on the fly, it takes time and needs to be done - solution: something with a foreign key to a CORRECTED table or a federated other store or view, updated or changed as needed, and as processes improve the goal would be that the thing is short-lived, alive for corrections and the next extract - HDS can't be anything but a rep of the systems of record
    • KS - Edit package not based on completed report
    • JB - if there were errors that came from source systems, had exceptions (fatal nature) and couldn't accept data and had to edit source system, takes time to correct something in source systems, easier to extract
    • DH - clarify - errors they had running thru SDMA: in most instances (not a lot) they had 486 instances to correct / 169 of those were "liability limits missing" - the feed was not providing the appropriate liability limit; doesn't mean the source didn't have it / wasn't correct, it was just NOT being fed to them
    • KS - the ETL is wrong or the source system is wrong; have to keep what was fed from the source system, and when making a change it has to be a separate place where it's understood that this was changed, so you can go back and find the changes made and fix them
    • DR - situation where the limit wasn't there in the source system BUT is in HDS, for whatever reason; a new record comes in and is fixed (the ETL is fixed) and something is there - how to handle the mismatch? which to trust? the one in HDS is prob right? Forces making decisions as to what to do when you reload HDS, code a lot of judgement in or precode decisions in how to update.
    • KS - keep it simple, see if there are patterns, automate where we can, track what it changed to (don't lose the previous) and deal with it when refreshed
    • DR - the obvious problem is bloat, shadow versions of everything
    • KS - 480/10MM not bloat
    • DR - 2 assumptions: A. not a lot of changes, B. architected to take advantage of the fact there are not a lot of changes; make it in a way that doesn't hurt you and in a way to automate processes, so the bloat becomes ephemeral
    • JB - do something like that, HDS has correct info so queries are correct and there is an audited record; keep track of what did change
    • DR - HDS can never be edited in place; can't be something that keeps track of something that diverges from downstream systems - the only SoR is the downstream SoR; can't maintain business logic of having to decide what to update; preferable: the edits will be referenced
    • JB - complicated Extract Pattern, looking for exceptions
    • KS - do views to accomplish that; want to make the exception hard, not the easy path; make it whatever it is, keep both in mind - when you have few, then a satellite table rather than the core table and then deal with the view idea; as long as you keep a consistent pattern it's not bad
    • JB - run reports against HDS, 
    • KS - extraction has to see corrected data otherwise why make corrections at all, 
    • DR - too challenging to write "HDS is a faithful rep of the core system" but an edit needs to be made; pull that data - easier if someone does an EP that does nothing but produce that table with edits applied (convenience function: first thing, build the corrected table, build the EP, run the extraction) - see the corrections-overlay sketch after this list
    • DR - Ephemeral Bloat
    • PA - 2 weeks ago on with jamesM, ETL pipelines, edit packages - scoring errors - talking about two 
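
A minimal sketch of the corrections-overlay idea DR describes above (satellite CORRECTED table plus a view, rather than editing HDS in place). The table, column, and view names are hypothetical and sqlite is used only to keep the example self-contained; this is not the openIDL HDS schema.

```python
# Hypothetical sketch: the HDS table stays a faithful copy of what the ETL
# loaded, edits live in a separate corrections table keyed back to the original
# row, and extraction reads a view that applies corrections on top.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hds_auto_premium (            -- as loaded from the source system
    record_id       TEXT PRIMARY KEY,
    state           TEXT,
    liability_limit TEXT,                  -- may arrive blank (the 486/169 case)
    premium         REAL
);
CREATE TABLE hds_corrections (             -- satellite table, not an in-place edit
    record_id       TEXT REFERENCES hds_auto_premium(record_id),
    field_name      TEXT,
    corrected_value TEXT,
    corrected_on    TEXT,
    reason          TEXT                   -- audit trail: what changed and why
);
CREATE VIEW hds_auto_premium_corrected AS  -- what extraction patterns query
SELECT p.record_id,
       p.state,
       COALESCE(c.corrected_value, p.liability_limit) AS liability_limit,
       p.premium
FROM hds_auto_premium p
LEFT JOIN hds_corrections c
       ON c.record_id = p.record_id AND c.field_name = 'liability_limit';
""")

# Example: the feed omitted a liability limit; the fix is recorded alongside,
# never written over the loaded row, and is visible to the next extraction.
conn.execute("INSERT INTO hds_auto_premium VALUES ('r1', 'OH', '', 1200.0)")
conn.execute("INSERT INTO hds_corrections VALUES "
             "('r1', 'liability_limit', '100/300', '2022-08-22', "
             "'limit missing from upstream feed')")
print(conn.execute("SELECT * FROM hds_auto_premium_corrected").fetchall())
```

The design choice this illustrates: HDS rows stay a faithful representation of the systems of record, every correction is logged with a reason (the audit trail asked for above), and extraction patterns read the corrected view, so the "ephemeral bloat" lives entirely in the small satellite table.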
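
And a minimal edit-scoring sketch of the two-level edits and scoring idea discussed above (showstoppers vs. warnings, promote toward HDS only when under tolerance). The rule names, fields, and threshold wiring are illustrative assumptions, not the actual AAIS edit package.

```python
# Hypothetical sketch: each edit rule is either a showstopper (record cannot
# load) or a warning that only counts against the data-quality score; a staged
# batch is promoted only when there are no showstoppers and the error rate
# stays under the 5% tolerance.
from dataclasses import dataclass
from typing import Callable

TOLERANCE = 0.05

@dataclass
class EditRule:
    name: str
    check: Callable[[dict], bool]   # True = record passes this edit
    severity: str                   # "showstopper" or "warning"

RULES = [
    EditRule("state code present", lambda r: bool(r.get("state")), "showstopper"),
    EditRule("premium is numeric",
             lambda r: isinstance(r.get("premium"), (int, float)), "showstopper"),
    EditRule("liability limit filled",
             lambda r: bool(r.get("liability_limit")), "warning"),
]

def run_edit_package(records):
    """Score a staged batch; decide whether it can be promoted toward the HDS."""
    showstoppers, warnings = [], []
    for i, rec in enumerate(records):
        for rule in RULES:
            if not rule.check(rec):
                (showstoppers if rule.severity == "showstopper" else warnings
                 ).append((i, rule.name))
    flagged = {i for i, _ in showstoppers} | {i for i, _ in warnings}
    error_rate = len(flagged) / len(records) if records else 0.0
    return {
        "promote": not showstoppers and error_rate <= TOLERANCE,
        "error_rate": error_rate,       # the "score" a stat agent could review
        "showstoppers": showstoppers,   # must be corrected and re-run before load
        "warnings": warnings,           # logged; carrier accountable to review
    }
```

A staged batch that comes back with promote=False would stay in the pre-HDS "pipeline A" area (the staging bucket PA mentions) until the carrier corrects the showstoppers and re-runs.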

Reconciliation (make sure report is correct based on request - reasonability check on the report - NOT financial reconciliation)

...

ID  Tenet
1   Data will be loaded in a timely manner as it becomes available.
2   HDS will track the most recent date that is available to query for pre- and post-edit-package data.
3   Data owners will correct any mistakes as soon as they are made aware of the issue.
4   Data owners will follow current practices for logging policy and claim records as they do today. A new record will be created for each event. All records will be loaded in a timely manner after the creation event.
5   There will be a distinction between edited and unedited records (i.e., records that have successfully gone through the edit package).
6   HDS data is attested to; some way to attest to metadata, the date range up to which it's good; capture some info about HDS: "data in HDS is good up to now for this purpose" (see the attestation sketch below).
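
A minimal sketch of the attestation metadata implied by tenets 2, 5, and 6; the field names and the purpose string are illustrative assumptions, not a defined openIDL structure.

```python
# Hypothetical sketch: a small record, kept alongside (not inside) the HDS,
# saying the data is good up to a date for a stated purpose, and re-issued
# whenever attested data changes (re-attestation, not "closing" the quarter).
from dataclasses import dataclass
from datetime import date

@dataclass
class HdsAttestation:
    carrier: str
    line_of_business: str
    state: str
    good_through: date         # tenet 2: most recent date safe to query
    edit_package_passed: bool  # tenet 5: distinguishes edited vs unedited records
    purpose: str               # tenet 6: e.g. "stat reporting"
    attested_on: date
    attested_by: str

# Example: attest Q3 data; if any record in that range later changes,
# a new attestation must be issued before reports use that range again.
q3 = HdsAttestation("Carrier A", "personal auto", "OH",
                    good_through=date(2022, 9, 30),
                    edit_package_passed=True,
                    purpose="stat reporting",
                    attested_on=date(2022, 11, 15),
                    attested_by="data steward")
```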

Non-Functional Requirements (to be moved to requirements doc)

...