
Deliverables:

  • KS
    • Architecture we can implement
      • Simple deployment plan on top of Arch (devops)
      • JB & PA process of standing up node, starting on deployment pattern
      • Not simple yet; the end goal should be easy to stand up and take a look at
    • What are the artifacts we can work on  
    • Arch Diagram, Technical Architecture; lots of opinions on what is useful (interaction diagrams, etc.)
    • Thoughts?
    • Scenario Review
    • How to capture - notes as we go will suffice; organize as we go into something that works for all
    • Satish-Hartford, David/Dale - Travelers, LF (network)
  • DR - don't go too far w/o buy-in/check from folks at Hanover, the only other entity besides Travelers and Hartford; need engagement / buy-in
  • SK - process: we already had some baseline architecture defined/started, then went into requirements; thinking we go back to what we had plus the reqs and define the gaps?
  • KS - wouldn't consider what we had a thing to migrate from (then vs. now); it's not in production, so don't start from the old arch, start from what we can accomplish
  • DR - utilize what has been done as pieces are proven out, but don't reference the old at the beginning
  • SK - need to lay it out and define: bottom up or top down? Top down = intentional arch (the direction we want to head towards); bottom up = look at each component and build up component by component
  • KS - don't have components to build up with right now; have to start from a high level of what we're trying to accomplish, put rough components in place, and address them sequentially; too many arches is really hard, lots of directions; get through the problem statement and what to accomplish, then draw the diagram and dig in from the top
  • DR - interaction diagram, stem from that, how it works; that necessitates top down; don't want any mandates from the component level that translate up
  • SK - how we approach it, making sure: if we go top down, we normally create an intentional diagram
  • KS - no assumptions - state them
  • SK - intentional architecture - these are the major pieces of the puzzle and how they're laid out, broad brush; ex: Carrier Node - what does it contain; HDS - what does it contain and do; Analytics Node components
  • KS - application arch here, not enterprise arch; a little different than a functional breakdown - extract the common pieces out, figure out the functional components from the scenarios (load data, access data, create report)
  • DR - that's more for where you have cross-disciplinary or cross-enterprise needs to standardize or unify - a little overkill for this; don't get too bogged down in nomenclature and methodology, let's go as low-overhead as possible - decide what we want it to do and do it; don't be too methodology-driven or dogmatic
  • DH - there are subscriptions to data calls - agreeing to do it annually, repeated each year (some quarterly, some monthly); we have many similar data calls across states, with each state dictating what and how they want it
  • KS - diff channel? direct? stat reporters?
  • DH - we use a data call team, separate from the Stat Reporters
  • PA - broken out because they're asking questions not in the stat plan?
  • DH - stat plans have so much info, hard to navigate through (sources)
  • PA - easier to answer the question than go through the Stat Report team
  • DR - functionally, subscription is automating a piece of the arch - consent could add a cadence feature (annual, quarterly, etc.), captured by some consent and how it's given - automation on both sides: the party issuing could automate theirs, the recipient automates theirs - doesn't change the underlying arch, same pipes and worker stuff (see the subscription sketch after this list)
  • DH - does order matter?
  • DR - a menu of data, so to speak; a lot of data calls are designed to leverage what's there - the data might be there before the subscription
  • KS - data call model: a tenet of openIDL is the data is there, so why build it again
  • DR - could see Subscribe to Report occur after data load; then again, we will load data before subscribing
  • PA - loading something about ABS breaks on every record
  • KS - wouldn't load any data if you didn't subscribe to any reports; it's not a sequential set
  • PA - ETL and network ops are two disconnected processes - the process of loading and making data available is separate from the scenarios of "how to utilize openIDL to fulfill data requests"
  • KS - disjoint; you don't have to do one or the other, but you CAN'T do reporting without loading data
  • DR - you could open an account you don't intend to use; doesn't make sense, but you could - start w/ the scenario and state assumptions, simplify - is "HDS is created and has some data in it" a good base assumption?
  • KS - workflow? pre- and post-conditions?
  • DH - scenario determines what's in HDS
  • DR - a check: is the data there, "Y/N"; the scenario both is and is not that - assume these are operational scenarios, assume some pipes are built and operating as intended; on Day 1 some of this doesn't exist, but it's operating on Day 2 - work backwards from there
  • KS - scenarios - assume all is working
  • will be a check, a scenario to work through: "is the data there? no? load data"
  • SK - Day 2? when does the check happen?
  • DR - to be decided; we said early on that every time we write data to HDS we assume the data is there; could make the arch such that there's data in HDS, maintained to some level of accuracy, with a "Y/N" check when extracted; there will be trivial cases for stat reporting where that check is most likely trivial (we're all planning for it), but there will be unknown scenarios
  • JB - specification of a data call - stat reports repeat, known to be there; when it comes to servicing data calls, one of your upfront cases is how do you define it?
  • KS - separation between data calls (asserted to being there) and stat reports (based on known data)
  • DR - simplifies to the trivial; the process is the same, the expected success is just higher; extend to a check to show the data is there - it doesn't necessarily lead to a data call; if someone makes a data call, makes a request, checks the metadata or functionality we need, we need a way to allow that
  • JB - inquiry
  • DR - a ping to see if it's there - does it have the data needed? might be functionality helpful to regulators, and to the benefit of carriers, to tailor data calls to what's there - "these elements satisfy the answers the Reg wants, every carrier has it" - can do that w/o carriers exposing data early, just telling them it is possible
  • DH - say the data is there and the fields are populated, but only "as of Jan 1"
  • DR - if the data call says "I need data Q3 Q4 2019", the data call provides the success criteria - the functionality would have to live in the carrier environment, and don't expose data until proper auth; what we don't want is to burden Regs with getting buy-in from all carriers, if that's even possible, demanding "keep putting more data in" before they can see "I can answer my question" - are there enough carriers? if asking the real question is the only way they can tell the data is there, it will make them frontload work and then ask; but if they can ask the question in a lightweight way, against metadata, not actual data -
  • JB - would support the notion of defining metadata for what is asked for
  • DR - where does that live? in the query? in the adapter? part of it
  • JB - some distributed metadata concept for the Reg to fashion a query against, if he knows it's defined in the system
  • KS - what the "Like" does right now, but not tied to the current arch
  • DR - kind of like the Like but programmatic, and deeper than a Like: a structured query that answers "it's possible" (see the inquiry sketch after this list)
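A minimal sketch of the subscription/consent automation discussed above - a standing consent record with a cadence, checked by the issuing side; the underlying pipes don't change. All names and fields here are illustrative assumptions, not an agreed openIDL design:

```typescript
// Hypothetical standing subscription/consent record for a recurring
// data call or stat report (illustrative only, not an agreed schema).
type Cadence = "monthly" | "quarterly" | "annual";

interface ReportSubscription {
  reportId: string;        // the recurring report or data call
  carrierId: string;
  cadence: Cadence;        // how often the call repeats
  consentGivenOn: string;  // ISO date the carrier consented
  autoConsent: boolean;    // carrier automates its side of consent
  expiresOn?: string;      // optional sunset for the standing consent
}

// Issuer side: decide whether a new cycle of the call is due. Only the
// initiation/consent step is automated; extraction works as it does today.
function cycleIsDue(sub: ReportSubscription, today: Date): boolean {
  const monthsPerCycle = { monthly: 1, quarterly: 3, annual: 12 }[sub.cadence];
  const start = new Date(sub.consentGivenOn);
  const monthsElapsed =
    (today.getFullYear() - start.getFullYear()) * 12 +
    (today.getMonth() - start.getMonth());
  return monthsElapsed > 0 && monthsElapsed % monthsPerCycle === 0;
}
```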
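And a sketch of the "deeper than a Like" inquiry: a structured, metadata-only query living in the carrier environment that answers "it's possible" without exposing any row-level data. Every name below is an assumption for illustration:

```typescript
// Hypothetical availability inquiry: the regulator asks, the carrier-side
// adapter answers from metadata only -- no data leaves without proper auth.
interface AvailabilityInquiry {
  fields: string[];    // data elements the would-be data call needs
  periodEnd: string;   // e.g. "2019-12-31" for "I need Q3/Q4 2019"
}

interface AvailabilityAnswer {
  fieldsPresent: Record<string, boolean>; // per field: populated in HDS?
  periodCovered: boolean;                 // data good through periodEnd?
}

function answerInquiry(
  inquiry: AvailabilityInquiry,
  hdsFields: Set<string>,   // fields the carrier's HDS actually populates
  goodThrough: string,      // carrier's attested "data good through" date
): AvailabilityAnswer {
  const fieldsPresent: Record<string, boolean> = {};
  for (const f of inquiry.fields) fieldsPresent[f] = hdsFields.has(f);
  // ISO-format date strings compare correctly as plain strings
  return { fieldsPresent, periodCovered: goodThrough >= inquiry.periodEnd };
}
```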

Scenarios

Stat Report 

Subscribe to Report (automate initiation & consent)

Load Data / Assert Ready for Report

  • What is the data? Glossary or definition? What is being loaded (the stat report is well-defined)
  • Facilitate semi-auto inquiries, metadata management scheme
  • Assumption - stat plan transactional data, metadata is handled by spec docs as yet to be written
  • Day 1 - PDF uploaded somewhere
  • Have it or don't by time period
  • Assumption - when we run a report, everyone is always up to date with data: loading through the stat plan, data has been fixed in the edit process; ask for 2021 data and it's there
  • Data existing in HDS is what the schema says is there to fulfill the stat report; this is just data that's there - the period and quantity/quality of data are designed to do the stat report; for this purpose it's just a database
  • Minimal data catalog - what's the latest, define what's there (not the stat report per se); what's in there is determined through the functionality described (time period, #, etc.) - the difference between the schema for a db and querying it; a format for what could be in there
  • Minimal form of data catalog - info about what's in the data (see the catalog sketch after this list)
  • An automated query can't tell if the data is there; may have transactions that haven't processed; don't know it's complete until someone says it's complete
  • Never in a position to say "complete" due to late transactions
  • If someone queries data on Dec 31, midday, it's not complete - transactions occur that day but get loaded Jan 3 - there is never a time where it is "COMPLETE"
  • Time complete = when requested - 2 ways: (1) whenever Trav writes data, "data is good as of X date" metadata is attached, and Trav writes the business rules for that date, OR (2) business logic on extract - "as long as the date is one day earlier" = data valid as of transactions written
  • Manual insertion - might not put more data in there; assume complete as of this date
  • Making a request on Dec 31, may not have Dec data in there (might be through Nov as of Dec 31)
  • The request itself - "I have to have data up to this date" - every query will have different params and data it wants; can't say "I have data for all purposes as of this date"
  • Schema is set but might evolve - "type of data loaded" - could say "we're not making assertions this data is good for a specific data call, but to the best of our ability it is good to X date"
  • Data for last year - build into the system that you can't have that for a month
  • 2 dates: the 12/31 load date and the effective date of the information (through Nov 30)
  • Point - could use metadata about the insertion OR the actual data - could use one, both, or either
  • Data is bi-temporal, need both dates; could do both or either - could say if Trav wrote data on Jan 3, the assumption is everything through 12/31 is good
  • May not be valid - a mistake in a load, errors come back and get fixed - need to assert MANUALLY that the data is complete as of a certain time
  • 3-4 days to load a month's data; at the end of the job, some assertion as to when the data is complete
  • Most likely, as this gets implemented, it will be a job that does the loading, not someone attesting to data as of this date - manual attestation becomes less valuable over time
  • As loads are written (biz rule, etc.): if we load on X date it is valid as of X minus some weeks - a business rule, not manual attestation - maybe using the last transaction date is just as good: if Dec 31 is the last transaction date, it's not valid yet that day, but valid as of Jan 1
  • Account for exception processing
  • in this scenario - it wouldn't have loaded
  • when we discuss loading data, is it already edited and run through the SDMA rulebase and good to go, or raw untested data?
  • ASSUMING it's been through the edit
  • Can tell how good the data is and through when
  • Start with MANUAL attestation and move towards automated
  • Data is through edit and used for SR; data trailing by 2 years
  • doesn't need to be trailing
  • submission deadline to get data in within 2 years, then reconciliation - these reports are trailing; uncomfortable with this constraint
  • our question: is the data good, are we running up to this end date - not so much about the initial transaction as the claims process
  • May have a report that wants 2021 data in 2023, but 2021 data was updated in 2022
  • Attestation is rolling, constantly changing; the edit package and SDMA are not reconciliation, they're business logic - doesn't have to be trailing
  • As we're loading data: what's the last date loaded, the attestation date
  • sticky - a report might want to go back X years; not sure you can attest to that
  • decoupling attestation from a given report (data current as of X date)
  • everything up to the date of my attestation is up to date in the system
  • rollout period - not keeping 20 years of data
  • "Data is good through x date" not attensting to period
  • Monkey Wrench: Policy data, our data is good as of Mar 2022 all 2021 data is up to date BUT Loss (incurred and paid) could go 10 years into future
  • some should be Biz Logic built into extrat pattern - saying in HDS< good to what we know as of this date, not saying complete but "good to what we know" - if we want to dome somethign with EP, "I will only use data greater than X months old as policy evolves
  • Loss exposure - all losses resolved, 10 years ahead of date of assertion, as of this date go back 10 years
  • decouple this from any specific data call or stat report - on the report writer 
  • 2 assertion dates - one for policy vs one for claim
  • not saying good complete data, saying accurate to best of knowl at date x
  • only thing changing is loss side
  • saying the data is accurate to this point in time - as of this date we don't have any claim transactions on this policy
  • adding a "comfort level" to extraction? - when you request data you will not request policies from the last 5 years - but if I'm Eric and want to understand the market, I care about the attestation I can give in March

Create Report Configuration

Create Report

Deliver Report

Data Call

Load Data

Create Data Call

Like Data Call

Issue Data Call

Subscription to Data Call

Consent to Data Call

Mature Data Call

Abandon Data Call

Clone Data Call

Deliver Report


Application Components

Data Sources, Sinks and Flows

Decisions

Tenets

Notes:


Time | Item | Who | Notes
