
Deliverables:

Scenarios

Stat Report 


Subscribe to Report (automate initiation & consent)

Identify Report

  • What is it (metadata)
    • Naming it 
    • identifier
    • Requestor
    • type of input
    • generation source
    • line of business
    • what output should look like
    • explicit math for aggregation
    • Purpose of data (what being used for)
    • similar to what is captured on a data call
    • DR - take a stab at making a version of this, with an idea of what it should be (ref the reqs), see how it looks, what's missing, etc. - find gaps as opposed to trying to be complete here. For today's purpose: some metadata along the lines of the reqs - would we do a first req/draft of what it would look like? Anything missing? (feels like reqs lite)
    • KS - info req section in the reqs table; the first iteration/solution will highlight gaps
    • SK - any existing samples of data calls/reqs? Metadata associated with a request - match it up; is it covered in the list?
    • PA and KS to discuss what will be shared: integrating with other depts, a large list of data calls from other systems, working with ops teams to bring it together; at a high level, looking to make big improvements on metadata and reqs
    • SK - dates: thinking a couple (date of request, deadline date, expiration date)
    • KS - for a report, these are the fields we fill in (a la data dictionary definitions): what the data call was intended to capture, but including all of the details Dale pointed out; there is bridging vs. pointing back to reqs, plus layout for the report - THIS IS WHAT WE ARE TRYING TO DO/WHAT THIS REPORT IS
  • Identify Stat Reporter
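The metadata fields listed above could be sketched as a single record type. A minimal sketch in Python with illustrative field names and values — nothing here is a committed schema:

```python
from dataclasses import dataclass

@dataclass
class ReportMetadata:
    """Hypothetical report-identification record; fields mirror the bullet list above."""
    identifier: str          # unique identifier
    name: str                # naming it
    requestor: str           # e.g. a DOI
    line_of_business: str
    input_type: str          # type of input (e.g. stat plan records)
    generation_source: str
    output_description: str  # what the output should look like
    aggregation_math: str    # explicit math for aggregation
    purpose: str             # what the data is being used for
    request_date: str = ""   # the dates SK raised
    deadline_date: str = ""
    expiration_date: str = ""

# Illustrative instance only; all values are made up.
meta = ReportMetadata(
    identifier="RPT-001",
    name="Annual Auto Stat Report",
    requestor="DOI",
    line_of_business="Personal Auto",
    input_type="stat plan records",
    generation_source="HDS",
    output_description="premium/loss aggregates by state and coverage",
    aggregation_math="sum(premium), sum(loss) grouped by state, coverage",
    purpose="regulatory stat reporting",
)
```

A first draft like this makes the gaps visible (per DR's point) rather than trying to be complete up front.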

Identify who is subscribing

  • Defining participants and role
    • Data Providers (Carriers)
    • Report Requestors (DOI)
    • Implementors (AAIS, etc.)
    • Stat Reporter (not necessarily the same as the implementor; a generally approved or certified stat reporter)
  • producer of the data and the receiver of the data (source and sink/target)
  • Carriers providing data, DOI creates request
  • DH - Who are the participants? Carrier, Requestor, Intermediary (AAIS? other stat agents? those building extraction patterns and formatting report), implementor of report

Connecting Subscriber and Report

  • Carriers and DOIs, want to capture that Carrier is data provider for a specific report and DOI is specific receiver for a report
  • not data itself, more metadata about report, who getting specifically
  • who get from / give to
  • Notion of give-take between implementors and carriers and DOI about the intent
  • Section about the ability to communicate and improve, to come to consensus that this is the report we want
  • Communicate about = user interface; the carrier gets a chance to say "this one" and the ability to comment on the report before it is implemented, then implementation, then feedback to agree
  • Stat Reporting or data calls too? Applies to both, but focused on Stat Reporting; can bridge to data calls at a later date
  • Reqs for stat reporting in handbook

Parameters of Subscription

  • Specific to each report (loss dates, premium dates - other variables?)
  • Some general to all reports 
  • Line of Business, Dates, Jurisdictions, 
  • Differences in report by state? Something Stat Reporting folks can answer
  • Territory, Coverage? Diff reports same time period, grouping not a filter
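One way to picture the split discussed above between general parameters, report-specific filters, and groupings; the parameter names and values are assumptions, not a defined schema:

```python
# Hypothetical subscription parameters: some general to all reports,
# some specific to each report. Note that territory/coverage act as
# groupings, not filters, per the notes above.
subscription_params = {
    # general to all reports
    "line_of_business": "Personal Auto",
    "jurisdictions": ["CT", "MA"],
    # specific to this report
    "loss_date_range": ("2021-01-01", "2021-12-31"),
    "premium_date_range": ("2021-01-01", "2021-12-31"),
    # grouping, not a filter
    "group_by": ["territory", "coverage"],
}
```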

Editing Subscription

  • create/read/update/delete subscriptions
  • self-service or a governed thing?
  • right now, sign up thru stat reporter for reports a Carrier wants run
  • AAIS does it for them or on their own - something to be done
  • Part of governance of openIDL (members, credentialing, etc.)
  • Audit log - auditability of subscriptions; managing subscriptions as part of openIDL - an AAIS thing, a function of openIDL
  • openIDL is not a stat reporter - is there a specific designation? AAIS is a stat reporter working through openIDL; if others join, they could be doing stat reporting on openIDL; there will be a "Stat Reporter" as intermediary
  • defines a seat in the openIDL network (how to say "AAIS is doing X")
  • DH - Trv joins openIDL, selects which stat agent they would do stat reporting through - could be report by report, but guess all-or-nothing
  • PA - not all-or-nothing, as AAIS doesn't do all lines (work with AAIS, then Verisk/ISO - can't be complete)
  • DH - don't do MassCar and Texas w/ AAIS
  • KS - identifying report, identifying stat reporter, per-report detail (each report stat reported via AAIS); stat reporter per report or by line of business - a per-report connection covers all cases
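The create/read/update/delete plus audit-log ideas above could be combined roughly as follows. This is a sketch under assumptions — the class name, the per-report subscription shape, and the audit entries are all illustrative, not a committed API:

```python
from datetime import datetime, timezone

class SubscriptionRegistry:
    """Hypothetical CRUD store for subscriptions with an audit log,
    reflecting the governance/auditability discussion above."""

    def __init__(self):
        self._subs = {}
        self.audit_log = []  # every change is recorded for auditability

    def _audit(self, action, sub_id):
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), action, sub_id))

    def create(self, sub_id, carrier, stat_reporter, report_id):
        # per-report connection covers all cases (per KS above)
        self._subs[sub_id] = {"carrier": carrier,
                              "stat_reporter": stat_reporter,
                              "report_id": report_id}
        self._audit("create", sub_id)

    def read(self, sub_id):
        return self._subs.get(sub_id)

    def update(self, sub_id, **changes):
        self._subs[sub_id].update(changes)
        self._audit("update", sub_id)

    def delete(self, sub_id):
        del self._subs[sub_id]
        self._audit("delete", sub_id)
```

Whether carriers call this self-service or an intermediary (AAIS) operates it on their behalf is exactly the open governance question above.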

Ending Subscription

  • Delete
  • Give the subscription an end date (effective expiration on the subscription itself)
  • lead time where AAIS or Carriers want to know if they are continuing or moving to new stat agent in openIDL
  • Autorenewal
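The end-date, lead-time, and auto-renewal ideas above could combine roughly like this; the function name, the 60-day default, and the status strings are all assumptions:

```python
from datetime import date, timedelta

def subscription_status(end_date, today, lead_time_days=60, autorenew=False):
    """Sketch of expiration handling: an effective end date on the
    subscription itself, a lead-time window where the carrier decides
    whether to continue or move to a new stat agent, and optional
    auto-renewal."""
    if today > end_date:
        return "renewed" if autorenew else "expired"
    if today >= end_date - timedelta(days=lead_time_days):
        return "expiring-soon"  # lead time to decide: continue or switch
    return "active"
```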

Load Data / Assert Ready for Report

080122

  • ?? Facilitate semi-auto inquiries, metadata management scheme
  • ?? Day 1 - PDF uploaded somewhere

080222

  • KS - Homework: turn the above into arch statements or drawings/tenets, not into the requirements - it still feels a little like requirements. How do we add progress outside meetings?
  • PA - what I like about the reqs: a key of what genre a req comes from and a unique ID - can we get a unique ID for these elements and a table of what refs what reqs? Do homework
  • KS - components or arch elements as opposed to reqs - talking solutioning, trying to take reqs and apply them to scenarios, break out into a set of arch statements for each component (LD1 assert up to a date on the data, LD2, ...), then consolidate - AAIS team to org this doc into that format (due next Mon 8/8)
  • SK - is the reqs based on discussions, done, next step to jump into solution design and arch? 
  • PA - jumping in makes sense, int in 2 things: interactions of network and HDS, hard to think of how data load happens w/o knowing target
  • SK - we deliberated the reqs and organized them; the next step is not to re-deliberate reqs but to solidify the arch, or at least start on it - NOT reclassifying this into another set of reqs
  • KS - avoid that, these are functional areas sys needs to support, not get to details of tech for a while, all the ideas that need to hold true, made progress in open ended way
  • JB - top down/bottom up - some sense going back to phases of the sys we started with, keep in mind arch we are dealing with network, not centralized data center, keep in mind org funct around aspects of that network, reflect some of the initial thinking arch needs to be supported, what are the elements for producers, processors, receivers of data
  • KS - need to be tolerant of chaos, in between meetings remove chaos and refine, brainstormer, raw material
  • PA - outlined our big boxes? 
  • KS - Data formats? Stat plan

Define Format

  • What is the data? Glossary or definition? What is being loaded (the stat report is well-defined)
  • Assumption - stat plan transactional data; metadata is handled by spec docs as yet to be written
  • Data existing in HDS is what the schema says; it is there to fulfill the stat report - this is just data that's there; the period and quantity/quality of the data are designed to do the stat report; for this purpose it is just a database
  • Minimal data catalog - what's the latest, define what's there (not the stat report per se); what's in there is determined through the functions described (time period, #, etc.) - the difference between the schema for a db and querying it; a format for what could be in there
  • Minimal form of data catalog - info about what's in the data
  • Schema is set but might evolve - "type of data loaded" - could say "not making assertions this data is good for a specific data call, but to the best of our ability it is good to X date"
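A minimal data-catalog entry along the lines described above would carry only metadata about what is in the HDS, not the data itself; every key and value here is illustrative:

```python
# Hypothetical minimal data-catalog entry: info about what's in the data,
# hedged per the note above - "good to X date" to the best of our ability,
# not an assertion of completeness for any specific data call.
catalog_entry = {
    "schema_version": "stat-plan-v1",   # schema is set but might evolve
    "record_count": 1_250_000,
    "earliest_transaction": "2019-01-01",
    "latest_transaction": "2022-11-30",
    "good_through": "2022-11-30",       # best-knowledge date, not "complete"
}
```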

Load Function

  • Deeper in the process of your data getting into openIDL; details of managing it
  • Process: raw data in the carrier DB is turned into a "load candidate", proposed to be loaded into the system; needs to go through the edit package
  • DH - before HDS?
  • KS - from your raw data to accepted HDS data (load function), and it will include other pieces like the edit package
  • DH - internal loading to the carrier
  • KS - carrier is responsible for turning data into the intake format (stat plan)
  • DR - req for "here's what data should look like to be ingested"
  • data model - stat plan day 1, day 2... data model
  • KS - process of taking it in, do work to make it more workable in the middle; don't commit to saying "what you put in the front end is exactly what ends up in HDS" - right now we are not putting it in exactly; we are turning it into at least a different syntax and it never will be 1:1, but semantically close
  • DH - more sense for decoding
  • KS - the load function is part of openIDL, the carrier entry point; what the carrier puts into the load function is stat plan, THEN it is run through the edit package, review/edit (a la SDMA), "go", and then pushed through to HDS - the carrier is not doing the transform; the carrier loads through a UI (SDMA), it may even be SDMA (repurposed) to load HDS at the end of the day
  • DH - HDS w/in carrier node?
  • KS - adapter package - need to support (1) keeping data in the carrier world, and we don't want everyone to write their own edit package and load process; agree on something that runs in your world that is a lightweight edit package
  • DR - simplify: essentially a data model - how does it lie in HDS? There may or may not be a different input data model that is what's loaded; once in HDS and "loaded" it should conform and have any edit packages already run on it, all running on the carrier side - we don't want it going out and back. Caveat: edit packages are shallow tests, not looking at rollups or reconciliations - "is it in the format intended?"
  • KS - row-by-row edits, not across rows; had to have X without errors, etc. - syntactical and internal, e.g. "if you pick this loss record it can't have a premium"
  • DR - sanity checks and housekeeping
  • after edit, push to HDS (tbd format, close to stat plan day 1)
  • PA - extensibility, adding more to end of stat plan in the future
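The load flow described above (raw carrier data, to a stat-plan load candidate, through the edit package, into HDS) can be sketched as one function. Every name here is an assumption, and the edit package is a stand-in for the real rule base:

```python
def load(raw_records, to_stat_plan, edit_package, hds):
    """Hypothetical load function: carrier raw data -> stat-plan-format
    load candidates -> row-by-row edit package -> HDS. Rejected records
    go back for review/fix (a la SDMA) rather than into HDS."""
    candidates = [to_stat_plan(r) for r in raw_records]  # carrier-side intake format
    accepted, rejected = [], []
    for rec in candidates:
        errors = edit_package(rec)                       # shallow, per-record checks only
        (accepted if not errors else rejected).append(rec)
    hds.extend(accepted)                                 # push edited records to HDS
    return rejected                                      # back to the carrier for fixing
```

For example, with an identity transform and a single cross-field rule ("a loss record can't carry a premium"), a premium record is accepted into HDS and an offending loss record is returned for review.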

Transform

  • whatever we need - might do some small decoding; definitely turn it from flat text into TBD (the database model in HDS)
  • normalization? some light transformation in the beginning
  • assumes we are not collapsing records; like the stat plan, the same level of granularity - every record input is a record in HDS (for the time being)? 1:1
  • decoding has reference data to look up
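A light, 1:1 transform with reference-data decoding, as described above; the state codes and field names are invented for illustration:

```python
# Illustrative reference data for decoding; real lookup tables are TBD.
STATE_CODES = {"09": "CT", "25": "MA"}

def transform(flat_record):
    """Sketch of the light transform: same granularity (record in =
    record out, 1:1), with small decoding via reference-data lookup."""
    decoded = dict(flat_record)  # no collapsing of records
    decoded["state"] = STATE_CODES.get(flat_record["state_code"], "UNKNOWN")
    return decoded
```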

Edit Package

  • Big (all of SDMA)
  • when we discuss loading data, is it already edited and run through the SDMA rule base and good to go, or raw untested data?
  • ASSUMING it has been through the edit
  • Can tell how good the data is and through when
  • pointer to SDMA functionality:
  • PA - SDMA - business level rules, large manual process for reconciliation BEFORE turning in reports (today), business and schema testing (does data match rules and schema? cross field edits)
  • KS - cross-field edits - loss records, different coverages; we do have a publishable set of 1000s of rules - if you use SDMA it will just work, just plug SDMA in. It can and has been pulled out; we proved it could be done - the rules could be run as an ETL process. We haven't done that; the back-and-forth and fixing of records is not part of it - just run the rules as an ETL process
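A sketch of the row-by-row edit idea above: syntactic and cross-field checks within a single record, no cross-row rollups or reconciliation. The two rules shown are illustrative stand-ins for the SDMA rule base, not real rules:

```python
def run_edits(record):
    """Hypothetical per-record edit package. Returns a list of error
    strings; an empty list means the record passes. Checks are shallow:
    schema-level and cross-field within one record only."""
    errors = []
    if not record.get("state"):
        errors.append("missing state")                       # schema-level check
    if record.get("record_type") == "loss" and record.get("premium_amount"):
        errors.append("loss record cannot carry a premium")  # cross-field edit
    return errors
```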

Data Attestation (boil down to a tighter discussion - Ken Sayers)

  • Have it or don't by time period
  • Assumption - run report, everyone is always up to date with data, loading through the stat plan, data has been fixed in the edit process; ask for 2021 data and it's there
  • An automated query can't tell if data is there; there may be transactions that haven't processed - we don't know it is complete until someone says it is complete
  • Never in a position to say "complete" due to late transactions
  • If someone queries data on Dec 31, midday, it is not complete - transactions occur that day but get loaded Jan 3 - there is never a time where it is "COMPLETE"
  • Time complete = when requested - 2 ways: (1) whenever Trav writes data, "data is good as of X date" metadata is attached, and Trav writes business rules for that date; OR (2) business logic on extract - "as long as the date is one day earlier" = data valid as of transactions written
  • Manual insertion - might not put more data in there, assume complete as of this date
  • Making req on Dec 31, may not have Dec data in there (might be Nov as of Dec 31)
  • Request itself - I have to have data up to this date - every query will have different params and the data it wants; can't say "I have data for all purposes as of this date"
  • 2 dates: 12/31 load date and the effective date of information (thru Nov 30)
  • Point - could use metadata about insertion OR the actual data, could use one, both or either
  • Data bi-temporal, need both dates, could do both or either, could say if Trv wrote data on Jan 3, assumption all thru 12/31 is good
  • May not be valid - a mistake in a load, errors come back and need fixing - need to assert MANUALLY that the data is complete as of a certain time
  • 3-4 days to load a month's data; at the end of the job, some assertion as to when the data is complete
  • most likely, as this gets implemented, it will be a job that does the loading, not someone attesting to the data as of a date - manual attestation becomes less valuable over time
  • as loads are written (biz rule, etc.): if we load on X date it is valid - X weeks, a business rule, not manual attestation - maybe using the last transaction date is just as good - if Dec 31 is the last transaction date, the data is not valid yet; it becomes valid Jan 1
  • Data for last year - build into the system that you can't have that for a month
  • Start with MANUAL attestation and move towards automated
  • Data through edit and used for SR; data trailing by 2 years
  • doesn't need to be trailing
  • submission deadline to get data in within 2 years, then reconciliation - these reports are trailing; uncomfortable with this constraint
  • our question is: is the data good, are we running up to this end date - not so much about the initial transactions as the claims process
  • May have a report that wants 2021 data in 2023, but 2021 data was updated in 2022
  • Attestation is rolling, constantly changing; the edit package and SDMA are not reconciliation, they are business logic - doesn't have to be trailing
  • As loading data, whats the last date loaded, attestation date
  • sticky - a report might want to go back X years; not sure you can attest to that
  • decoupling attestation from a given report (data current as of X date)
  • everything up to the date of my attestation is up to date in the system
  • "Data is good through X date" - not attesting to a period
  • Monkey Wrench: Policy data - our data is good as of Mar 2022, all 2021 data is up to date, BUT Loss (incurred and paid) could go 10 years into the future
  • some should be Biz Logic built into the extract pattern - saying in HDS, good to what we know as of this date; not saying complete but "good to what we know" - if we want to do something with the EP: "I will only use data greater than X months old as the policy evolves"
  • Loss exposure - all losses resolved, 10 years ahead of date of assertion, as of this date go back 10 years
  • decouple this from any specific data call or stat report - on the report writer 
  • 2 assertion dates - one for policy vs one for claim
  • not saying good, complete data - saying accurate to the best of knowledge at date X
  • only thing changing is the loss side
  • saying the data is accurate to this point in time - as of this date we don't have any claim transactions on this policy
  • adding "comfort level" to extraction? - when you request data you will not request policies from the last 5 years - but if I am Eric and want to understand the market, I care about the attestation I can give in March
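The two-assertion-date idea above (one for the policy side, one for the claim side, inferred from the last transaction date unless manually attested) could look like this; the one-day lag and all names are assumptions:

```python
from datetime import date, timedelta

def attestation_dates(last_transaction_date, policy_attested=None, claim_attested=None):
    """Sketch of the attestation discussion: data is 'accurate to the
    best of our knowledge' through a date, never 'complete'. The default
    is a business rule - good through the day before the last transaction
    written - which a manual attestation can override per side."""
    inferred = last_transaction_date - timedelta(days=1)
    return {
        "policy_good_through": policy_attested or inferred,
        "claim_good_through": claim_attested or inferred,
    }
```

This is the "start with MANUAL attestation and move towards automated" path: the manual override carries the load early on, and the inferred business-rule date takes over as loading becomes a job.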

Exception Handling

  • Account for exception processing

Metrics/Reporting (process)

Data Catalog (meta data about whats in the db - some notion of whats available currently)

History Requirements 

  • rollout period - not keeping 20 years of data

Schema Migration/Evolution


Create Report Request (Configuration)

Define jurisdictional context/req (single or multi versions of same report)

How often it runs

Data Accessed

Outputs

Roles and Permissions

UI/Interface

Extraction Pattern

Aggregation Rules

Messaging

Participation Criteria

Two Phase Consent

Data Path (from TRV to X to Y - where is the data going and for what purpose)

Development Process (extraction/code)

Testing

Auditability of data

Generate Report

Rule Base for each report

Extract Data  (will involve aggregation)

Transmit Data (from HDS to analytics node)

Combine Data (various sources)

Consolidate Data (at the report level)

Traceability

Format the output

Validate against participation criteria (vs report config)

Exception Processing

Messaging

Generate Report

Auditability/Traceability

Reconciliation (Manual day1?)

Extraction error detection & handling

Reconciliation (make sure report is correct based on request - reasonability check on the report - NOT financial reconciliation)

Financial Reconciliation (Oracle? Source of truth to tie against those #s?)

Statistical Reconciliation

Auditability/Traceability

Deliver Report

Make report available (S3 bucket public/private? start private)

Permissioned Access

Deliver to participants (carriers)

Deliver to subscribers (requestors)

Receipt/Notifications

Auditability/Traceability

Exception handling

Data Call

Communications for resolving conflicts, etc.

Load Data

Create Data Call

Like Data Call

Issue Data Call

Subscription to Data Call

Consent to Data Call

Mature Data Call

Abandon Data Call

Clone Data Call

Deliver Report


Application Components

Data Sources, Sinks and Flows

Decisions

Tenets

Data

  • Tenet 1: Data will be loaded in a timely manner as it becomes available.
  • Tenet 2: HDS will track the most recent date that is available to query for pre- and post-edit-package data.
  • Tenet 3: Data owners will correct any mistakes as soon as they are made aware of the issue.
  • Tenet 4: Data owners will follow current practices for logging policy and claim records as they do today. A new record will be created for each event. All records will be loaded in a timely manner after the creation event.
  • Tenet 5: There will be a distinction between edited and unedited records (successfully gone through the edit package).

Notes:


Time

Item

Who

Notes








