...

  • Have it or don't by time period
  • Assumption - run report, everyone is always up to date with data, loading thru stat plan, data has been fixed in edit process; ask for 2021 data and it's there
  • Automated query can't tell if data is there; may have transactions that haven't processed; don't know it's complete until someone says it's complete
  • Never in a position to say "complete" due to late transactions
  • If someone queries data on Dec 31 midday, it's not complete - transactions occur that day but get loaded Jan 3 - there is never a time when it is "COMPLETE"
  • Time complete = when requested - 2 ways: (1) whenever Trav writes data, "data is good as of X date" metadata attached, Trav writes business rules for that date, OR (2) business logic on extract ("as long as date is one day earlier") = data valid as of transactions written
  • Manual insertion - might not put more data in there; assume complete as of this date
  • Making a request on Dec 31, may not have Dec data in there (might be through Nov as of Dec 31)
  • Request itself - "I have to have data up to this date" - every query will have different parameters and data it wants; can't say "I have data for all purposes as of this date"
  • 2 dates: 12/31 load date and the effective date of information (thru Nov 30)
  • Point - could use metadata about insertion OR the actual data, could use one, both or either
  • Data bi-temporal, need both dates, could do both or either, could say if Trv wrote data on Jan 3, assumption all thru 12/31 is good
  • May not be valid - mistake in a load, errors come back and get fixed - need to assert MANUALLY that the data is complete as of a certain time
  • 3-4 days to load a month's data; at the end of the job, some assertion as to when the data is complete
  • Most likely as this gets implemented it will be a job that does the loading, not someone attesting to data as of this date - manual attestation becomes less valuable over time
  • As loads are written (biz rule, etc.): if we load on X date it is valid - X weeks, business rule, not manual attestation - maybe using last transaction date is just as good - if Dec 31 is the last transaction date, not valid yet; valid as of Jan 1
  • Data for last year - build into system you cant have that for a month 
  • Start with MANUAL attestation and move towards automated
  • Data thru edit and used for SR, data trailing by 2 years
  • doesn't need to be trailing 
  • Submission deadline to get data in within 2 years, then reconciliation; these reports are trailing - uncomfortable with this constraint
  • Our question: is the data good, are we running up to this end date - not so much about initial transactions as about the claims process
  • May have a report that wants 2021 data in 2023, but 2021 data was updated in 2022
  • Attestation is rolling, constantly changing; edit package and SDMA is not reconciliation, it is business logic - doesn't have to be trailing
  • As loading data, whats the last date loaded, attestation date
  • sticky - go back x years a report might want, not sure you can attest to 
  • decoupling attestation from a given report (data current as of x date), 
  • everything up to the date my attestation is up to date in the system
  • "Data is good through x date" not attesting to period
  • Monkey Wrench: Policy data - our data is good as of Mar 2022, all 2021 data is up to date, BUT Loss (incurred and paid) could go 10 years into the future
  • Some should be biz logic built into the extract pattern - saying in HDS, good to what we know as of this date; not saying complete but "good to what we know" - if we want to do something with the EP, "I will only use data greater than X months old" as the policy evolves
  • Loss exposure - all losses resolved, 10 years ahead of date of assertion, as of this date go back 10 years
  • decouple this from any specific data call or stat report - on the report writer 
  • 2 assertion dates - one for policy vs one for claim
  • not saying good complete data, saying accurate to best of knowledge at date x
  • only thing changing is loss side
  • Saying data is accurate to this point in time - as of this date we don't have any claim transactions on this policy
  • Adding "comfort level" to extraction? - when you request data you will not request policies in the last 5 years - but if I am Eric and want to understand the market, I care about the attestation I can give in March
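The bi-temporal idea above (a load date vs. an effective date, with separate attestations for policy vs. loss data) can be sketched roughly as below. This is a minimal illustration, not an openIDL implementation; the function and field names are assumptions.

```python
from datetime import date

def data_usable(requested_through: date,
                policy_attested_through: date,
                loss_attested_through: date) -> dict:
    """Check whether HDS data can satisfy a request needing data complete
    through `requested_through`, given two attestation dates - one for the
    policy side and one for the loss side, since losses can trail the
    policy period by years."""
    return {
        "policy_ok": policy_attested_through >= requested_through,
        "loss_ok": loss_attested_through >= requested_through,
    }

# Example: an attestation made in March 2022 covers all 2021 policy data,
# but losses are only attested through mid-2021 (hypothetical dates).
status = data_usable(date(2021, 12, 31),
                     policy_attested_through=date(2021, 12, 31),
                     loss_attested_through=date(2021, 6, 30))
print(status)  # {'policy_ok': True, 'loss_ok': False}
```

The point of returning two flags rather than one is the "2 assertion dates" bullet: a report writer can decouple its decision from any specific data call and decide per-query whether a lagging loss attestation matters.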

Exception Handling in LOADING

  • Account for exception processing
    • What is an exception? 
    • PA - loss & premium records, putting stat plan in JSON, older data didn't ask for VIN, some data fields optional
    • KS - exceptions can be expected; capturing & managing situations to be dealt with, not the "happy path"; need error codes and remediation steps, documentation for what they all mean and what to do about them (SDMA has this internal to the edit package) - things like "can't get it in the edit package b/c the file is not correct", etc. - standard way of notifying exceptions throughout the system, consistent: exception received and what to do about it
    • PA - ETL stuff, exceptions based on SNS topics - what's the generalized way to handle them? or specific exception cases?
    • KS - arch needs way to report and document and address/remediate exceptions (consistent, notifying, dealing)
    • PA - options: 
      • messaging format, 
      • db keeping log of all messages
      • hybrid approach of both
    • KS - immediate feedback and non-sequential (messaging or notification feedback)
    • JB - data loading transfer of data or into HDS? 
    • KS - data loading starts with intake file in current statplan format, ends when data in HDS
    • JB - lots of exceptions local to this data-loading process - are they reported to anyone or resolved locally? depends on the level of implementation of whoever is reporting data
    • KS - some user interface, allows you to load a file and provide feedback, but a lot is asynchronous, no feedback from UI
    • JB - gen approach to be shared across 
    • KS - consistent way to handle across system (sync/asynch, UI vs notification)
    • PA - 2 lambda functions loaded in, 2 SNS topics (1 topic per lambda) - seems like nice granular feedback, but as we get more lambdas throughout the node it would be unwieldy; master topic to subscribe to resources
    • KS - too deep for now
    • PA - one general exception thread or thing to subscribe to; get a large number of exceptions as opposed to making the QA team individually subscribe to each resource (some kind of groupings?) - lots of components throwing exceptions and we don't want to subscribe to each component
    • KS - do we want to audit exceptions? Likes/Unlikes, Consents, etc. - are there exceptions we want that to be captured on ledger or somewhere to be audited later?
    • PA - consent to data call and dont have data required that should be recorded/captured/to chain, etc. (consented to participate and no data)
    • KS - functionality in exception handling is getting close to NFRs (disaster recovery, continuity, reacting to scalability, etc.) - need to get there at some point; digging deeper, specific exceptions will have different decisions
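The "consistent shape with error codes and remediation steps, published to one general topic" idea above can be sketched as follows. The record fields and the `EDIT-001` code are illustrative assumptions, not a settled openIDL schema, and the in-memory subscriber list stands in for a real messaging topic (e.g. an SNS topic).

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LoadException:
    """One consistent exception shape used by every loading component."""
    code: str         # documented error code, e.g. "EDIT-001" (assumed)
    component: str    # which loader/lambda raised it
    message: str      # human-readable description
    remediation: str  # documented steps to resolve
    raised_at: str    # ISO timestamp

# One general topic everyone subscribes to, instead of one topic per
# component - here just a list of handler callables.
SUBSCRIBERS = []

def publish(exc: LoadException) -> None:
    """Serialize the exception and fan it out to all subscribers."""
    payload = json.dumps(asdict(exc))
    for handler in SUBSCRIBERS:
        handler(payload)

received = []
SUBSCRIBERS.append(lambda msg: received.append(json.loads(msg)))

publish(LoadException(
    code="EDIT-001",
    component="stat-plan-intake",
    message="File not in expected stat plan format; edit package cannot run",
    remediation="Correct the file layout and resubmit",
    raised_at=datetime.now(timezone.utc).isoformat(),
))
print(received[0]["code"])  # EDIT-001
```

Because every component emits the same shape to the same topic, a QA team subscribes once and can still group or filter by `code` or `component`, which addresses the "don't want to subscribe to each component" concern.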

...

  • SB - Receipt that the Regulator has received/downloaded report - given to Reg, Carrier and Stat Agent
  • DH - maybe PULL w/in UI
  • KS - automatic? follow link, you know you received it
  • DH - notification report is avail, you go in and pull the report, as pulling report UI could then update to reflect report was taken
  • KS - assumption you are not following link from notification
  • DH - is the report so big it can't be emailed or delivered
  • PA - question/requirement: we don't want to make a file b/c we don't want it to be shared; KS said "not be able to download it"
  • KS - struggle w/ how we can not have a file
  • JB - have a file, encrypt it
  • DH - want file protected so not editable
  • KS - once downloaded impossible to prevent sharing
  • DH - some form of report
  • KS - know who downloaded
  • DH - if info is anonymized, are you telling Regs who participated?
  • KS - option the data call could have, who participated, when does it make sense for Reg to know who participated
  • DH - ok if carriers are listed on a report IF boundary/threshold for data (%) met - how much of market are REGs going to have included in any data request
  • KS - how would they know?
  • DH - asking provider "how much of the market is this representing?" - might ask Stat Agent (AAIS in example) as intermediary
  • KS - NAIC? is that the NAIC #, the total market
  • DH - NAIC page 14
  • SB - what are the elements of the receipt?
  • DH - Data Call Name (whatever its called), who received it, when received (time/Date), a receipt for every individual accessing (downloading?) a report - indicate WHERE they are from (VA DOI ELowe = who and where) - include organization
  • PA - if responsible for reports, want to know who provided data for the report and who downloaded it, when it was filed - can stand up to rigorous audit; would like to know who turned it down (Dale doesn't want that) - doing an auto coverage report for TRV in State X: when they say "we don't want to do this report (whatever reason)" it needs to say that Travelers was not participating
  • KS - scenario for data calls: not required to do reporting for TRV (might not use openIDL) for stat reporting; required to do the reports if contracted to do stat reporting for a carrier - can tell if they weren't on the list, know they COULD have been, and can tell if they are on the list
  • DH - Stat Report SHOULD HAVE, Data Calls COULD HAVE
  • SB - doesn't want receipt with Hartford, Hanover, and NOT_Travelers
  • DH - carrier's business how they respond
  • KS - how public is the list of contributors to a data call or stat report? just a list? other contributors see other contributors - if EL asks for a data call he can see participants, but can other participants see who else participated? Can Carrier1 see who else participated?
  • DH - public info once it hits the Insurance Dept; once it gets to the point of aggregated and anonymized it could be public, but a lot of data calls are under confidentiality agreements
  • PA - turn in report, list of #s responsible for, turn in list of participants and non-participants 
  • DH - maybe not participate over tolerance? why sign up with AAIS for stat reporting and not use for a given state
  • PA - some companies work with AAIS to do Stat Reporting; have legacy systems, may not write all the lines (don't write the coverage, or have a team and do the analysis themselves)
  • KS - targeting carriers? how does a regulator say "I want TRV to respond TODAY"
  • DH - maybe not do it over openIDL, can make data call to TRV, obligated to respond w/in their level of authority
  • KS - in the system?
  • DH - envision, similar to sign up for AAIS or stat agent, say "these lines are open to openIDL doing our data calls"
  • KS - that notification, new data call comes in 
  • DH - could envision a REG request a data call to only one Carrier, wouldn't want to entertain going thru openIDL if JUST TRV, data already there, can pull it themselves, who creates EP, formats report, all that stuff, maybe own
  • KS - over time used to doing things thru openIDL, skillset to build report and continue
  • DH - just their data; may want more eyes looking at it, verifying what's being reported - responsible for the info going out; if it's just them, there are nuances in the results you make sure management is aware of; once aggregated and anonymized, that concern doesn't exist - build this more for where it is a widespread call, not 1:1 carrier:regulator - in the future build 1:1 into the system, but for now they have well-defined processes
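The receipt elements DH lists above (data call name, who received it, when, and the organization so you get "who and where") can be sketched as a simple record. Field names and example values are assumptions for illustration, not a finalized schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ReportReceipt:
    """One receipt per individual access of a report, shared with the
    Regulator, the Carrier, and the Stat Agent."""
    data_call_name: str    # whatever the data call is called
    recipient: str         # who received/downloaded the report
    organization: str      # who AND where they are from, e.g. "VA DOI"
    received_at: datetime  # time/date of the download or access

# Hypothetical example: ELowe at the VA DOI downloads a report.
receipt = ReportReceipt(
    data_call_name="Auto Coverage Report - State X",
    recipient="ELowe",
    organization="VA DOI",
    received_at=datetime(2022, 3, 15, 14, 30, tzinfo=timezone.utc),
)
print(receipt.organization)  # VA DOI
```

Making the record frozen (immutable) fits the audit use: a receipt is an assertion about a past event and should not be editable after the fact.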

Auditability/Traceability

...

Exception Handling in Reporting

  • PA - what kinds of exceptions
    • report generation fail
      • Combiner logic failed
        • Way of combining stuff: got logic, EP, put it all together - combining logic for each report will be unique; EP is implemented and attached to a data call, but we haven't talked about combiner logic and attaching it to the EP
        • Potential where each report has slightly different combiner logic
        • define EP, not all the code in a report, then there is formatting report after combined, diff from report to report
        • potential there is bespoke report logic for a particular data call or stat report - could vary and could be same
        • how much do you put into the data call - will have learnings
        • take a look at combiner logic
      • report generation failed
        • Different from combiner logic
        • retry report gen
    • didn't meet % threshold (company won't participate)
    • data in PDC does not match expected format (something went wrong with EP)
    • data in PDC does not pass edits 
      • possible
      • tolerances - specific record may not pass edit but w/in tolerance, how to handle?
      • ex - NC state w/ SC zipcode, under 5%, include/not include?
      • "for a record they do not have a limit", on a couple of records, 
      • if doing report, missing stuff, then omit record (acceptable solution?)
      • Visible in the EP - when you do the EP, aggregating and ignoring records would be visible in that code; hard to see, not in text
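The tolerance discussion above (e.g. an NC report containing SC zipcodes, "under 5%, include/not include?") can be sketched as below. The 5% threshold and the zipcode edit are taken from the example; the function name and the "omit failing records if within tolerance" behavior are an assumed design choice, since the group had not settled it.

```python
def apply_edits_with_tolerance(records, edit, tolerance=0.05):
    """Run an edit over records. If the failure rate stays within the
    tolerance, omit the failing records and proceed; otherwise raise,
    because the data should not feed the report at all."""
    passed = [r for r in records if edit(r)]
    failure_rate = 1 - len(passed) / len(records)
    if failure_rate > tolerance:
        raise ValueError(f"edit failures {failure_rate:.1%} exceed tolerance")
    return passed  # omitted records are visible only here, in code

# Hypothetical NC data: 2% of records carry an SC-style zipcode.
nc_records = [{"state": "NC", "zip": "27601"}] * 98 + \
             [{"state": "NC", "zip": "29201"}] * 2

clean = apply_edits_with_tolerance(
    nc_records, lambda r: r["zip"].startswith("27"))
print(len(clean))  # 98
```

This also illustrates the concern in the last bullet: the decision to drop records lives inside the extraction code, so it is auditable only by reading that code, not any accompanying text.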

Auditability/Traceability

  • DH - entire comms module needs to be auditable 
  • JB - requests and transactions are on-chain; how do we include email notifications?
  • KS - audit a report was received, what do we do with the system that isn't auditable
  • JB - benefit of common channel, audit trail for interactions there
  • KS - utilize it and put things like receipts next to the data call (JSON object in ledger) - receipts, contributors, etc.
  • PA - can't include raw data (no raw data)
  • DH - timestamps of when pulled w/in each carriers node
  • KS - consent timestamp
  • JB - when delivering the data itself, the private data collection is hashed and that's a record (on chain) - incorporate into a consistent scheme; some application that uses it, info on chain
  • KS - all updates to the data call itself are auditable; on blockchain that's a given - who has access to what's in the audit trail/traceability? could unearth some things we're not sharing - Hanover could see who consented even if they didn't
  • PA - hash the stuff on chain and only give keys to consented
  • KS - if a company needs some audit report, managed by admin of the network, should not make audit info avail by default
  • KS - attributing things orgs touch to them is part of the audit trail; consent: who, when, destination; history of the data call, who updated the data call, when the EP was run per consent
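The "hash the private data, keep raw data off-chain, attribute actions to orgs" points above can be sketched as a minimal audit-entry builder. Field names and the `action` vocabulary are assumptions, not an openIDL data model; a real implementation would write these entries to the ledger rather than return dicts.

```python
import hashlib
import json

def audit_entry(org: str, action: str, payload: dict) -> dict:
    """Build an audit record containing attribution plus a hash of the
    payload - the hash proves what data was involved without putting any
    raw data on the chain."""
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()  # canonical form
    ).hexdigest()
    return {
        "org": org,              # who - attributing the action to an org
        "action": action,        # e.g. "consent", "ep_run", "update_data_call"
        "payload_hash": digest,  # verifiable, but reveals no raw data
    }

# Hypothetical consent event for a data call.
entry = audit_entry("Carrier1", "consent",
                    {"data_call": "Auto Coverage 2021", "consented": True})
print(len(entry["payload_hash"]))  # 64
```

Sorting the JSON keys makes the hash deterministic, so the same payload always produces the same digest - anyone holding the original data (e.g. a consented party given the keys, per PA's suggestion) can re-derive and verify it.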

Notification

  • KS - notification report completed
  • DH - notification of data call (new)
  • PA - external to UI or outside of UI
  • JB - within network, preferred way, channel comms request
  • KS - inbox expectation?
  • DH - push or pull
  • JB - default channel, comm of request, responses, likes - same mechanism
  • KS - what it does now - find data call, def not robust UI, push-pull?
  • DH - push preferred
  • KS - subscription model, subscribe to notifications
  • JB - application looks for those events and pushes notification
  • KS - pull worth considering - an inbox of items you should consider; if Dale gets a push "you have a report", he has to find the emails he needs to respond to; otherwise he logs into the system and the notifications are in the Inbox
  • DH - what has been sent and the disposition of those items in the UI; perhaps a delete option, acknowledgement (instead of an inbox filled forever)
  • KS - notification management (delete /archive)
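The push/pull and subscription points above can be sketched as a small in-memory model: the application pushes events to subscribers (KS's subscription model), while the same events land in a pull-style inbox that supports archiving so it isn't "filled forever". Class and method names are illustrative assumptions.

```python
class NotificationInbox:
    """Combined push + pull notification sketch (not an openIDL API)."""

    def __init__(self):
        self.items = []        # pull side: user logs in and reviews
        self.subscribers = []  # push side: e.g. email or UI hooks

    def subscribe(self, handler):
        """Register a push handler (the subscription model)."""
        self.subscribers.append(handler)

    def notify(self, event: str):
        """Record the event in the inbox AND push it to subscribers."""
        self.items.append({"event": event, "archived": False})
        for handler in self.subscribers:
            handler(event)  # push: "you have a report"

    def pending(self):
        """Pull side: items not yet acknowledged/archived."""
        return [i for i in self.items if not i["archived"]]

    def archive(self, index: int):
        """Disposition handling: acknowledge instead of delete."""
        self.items[index]["archived"] = True

inbox = NotificationInbox()
pushed = []
inbox.subscribe(pushed.append)
inbox.notify("New data call available")
inbox.notify("Report completed")
inbox.archive(0)
print(len(pushed), len(inbox.pending()))  # 2 1
```

Keeping archived items (rather than deleting) preserves the disposition history DH asks for, while `pending()` gives the "log in and see what needs attention" pull view.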

...