Contributors

Initials | Contributor
DH | Dale Harris - Travelers
DR | David Reale - Travelers
SC | Susan Chudwick - Travelers
JM | James Madison - The Hartford
SK | Satish Kasala - The Hartford
KS | Ken Sayers - AAIS
PA | Peter Antley - AAIS
SB | Sean Bohan - openIDL / Linux Foundation
JB | Jeff Braswell - openIDL / Linux Foundation

Process

The Architecture Definition Workspace is where we, as a community, come together to work through the architecture for openIDL going forward. We take our experiences, combine them with inputs from the community, and apply them against the usage scenarios we have for openIDL. Below is a table of the phases and the expected outcome of each.

Phase | Description | Outcome
Requirements | Define the requirements for one or more possible scenarios for openIDL. In this case, we are focused on the stat reporting use case. | A set of requirements: openIDL - System Requirements Table (DaleH @ Travelers)
Define Scenarios | Define the scenarios sufficiently to gather ideas about the different steps. The scenarios will change over time as we dig into the details. | A few scenarios broken down into steps.
Brainstorming | Gather ideas from all participants for all the different steps in the scenarios. | Detailed notes for each of the steps in the scenario(s).
Architecture Elaboration and Illustration | Consolidate notes and start defining architecture details: Network Architecture (different kinds of nodes and how they participate), Application Architecture (structure of the functional components and their responsibilities), Data Architecture (data flows and formats), Technical Architecture (use of technologies to support the application). | Diagrams for the different architectures (block diagrams, interaction diagrams); tenets (strongly held beliefs / constraints on the implementation).
Identify Spikes | From the elaboration phase will come questions that require answers. Sometimes answers come through research; often they must come from spikes. Spikes are short, focused, deep-dive implementation activities that help identify the right solution for aspects of the system. The TSC must approve the spikes. | Spikes defined; spikes approved.
Execute Spikes | Execute the approved work to answer the question that required the spike. | Spike results documented.
Plan Implementation | With spikes completed, the team can finalize the design of the architecture and plan the implementation. | Implementation Plan.
Implement | Implement the architecture per the plan. | Running network in the approved architecture.

Deliverables:

Scenarios

Stat Report 


Subscribe to Report (automate initiation & consent - assumes stat report; see the sketch at the end of this list)

Define jurisdictional context/req (single or multi versions of same report)

How often it runs (report generation frequency)

Extraction Details / Metadata

Outputs / Aggregation Rules

Analytics Node Function (what will be done with the data after combination?)

Roles and Permissions

UI/Interface

Extraction Pattern

Aggregation Rules

Messaging

Participation Criteria

Two Phase Consent

Data Path (from TRV to X to Y - where is the data going and for what purpose)

Development Process (extraction/code)

Testing

Auditability of data

Identify Report

Identify who is subscribing

Connecting Subscriber and Report

Parameters of Subscription

Editing Subscription

Ending Subscription
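As a rough illustration of the subscription items above, the sketch below models a subscription record and its edit/end operations in TypeScript. Every name here (ReportSubscription, editSubscription, and so on) is a hypothetical placeholder for discussion, not an existing openIDL interface.

```typescript
// Hypothetical shape of a stat report subscription, covering the parameters
// listed above. Field names and allowed values are illustrative only.
interface ReportSubscription {
  subscriptionId: string;
  reportId: string;                  // which standard stat report is subscribed to
  subscriberId: string;              // regulator or carrier doing the subscribing
  jurisdictions: string[];           // single or multiple versions of the same report
  frequency: "monthly" | "quarterly" | "annual"; // how often the report runs
  extractionPatternId: string;       // reference to the Extraction Pattern (EP)
  aggregationRules: string[];        // rules applied at the analytics node
  participationCriteria: string;     // e.g. minimum share of consenting carriers
  status: "active" | "ended";
}

// Editing a subscription: apply changes to everything except its identity and status.
function editSubscription(
  sub: ReportSubscription,
  changes: Partial<Omit<ReportSubscription, "subscriptionId" | "status">>
): ReportSubscription {
  return { ...sub, ...changes };
}

// Ending a subscription keeps the record around for auditability.
function endSubscription(sub: ReportSubscription): ReportSubscription {
  return { ...sub, status: "ended" };
}
```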

Load Data / Assert Ready for Report (see the sketch at the end of this list)

08/01/22

08/02/22

Define Format

Load Function

Transform

Edit Package

Data Attestation

Raw Notes

Exception Handling in LOADING

Metrics/Reporting (process)

Data Catalog (metadata about what's in the database - some notion of what's currently available)

History Requirements 

Schema Migration/Evolution
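The loading steps above (load function, transform, edit package, exception handling) suggest a pipeline in which a record is only marked as edited once it has passed the edit package, in line with Tenet 5 in the Data table below. The following is a minimal sketch under that assumption; HdsRecord, EditCheck, and runEditPackage are illustrative names, not existing openIDL code.

```typescript
// Illustrative only: a loaded record carries its edit-package outcome so the
// HDS can distinguish pre- and post-edit-package data.
interface HdsRecord {
  recordId: string;
  lineOfBusiness: string;
  transactionDate: string; // ISO date of the policy or claim event
  payload: unknown;        // raw stat record content
  edited: boolean;         // true only after the edit package passes
  errors: string[];        // populated when the edit package rejects the record
}

// A hypothetical edit package: a list of checks over the payload,
// each returning null on pass or an error message on failure.
type EditCheck = (payload: unknown) => string | null;

function runEditPackage(record: HdsRecord, checks: EditCheck[]): HdsRecord {
  const errors = checks
    .map((check) => check(record.payload))
    .filter((e): e is string => e !== null);
  // Records that fail remain unedited and would be routed to exception handling.
  return { ...record, edited: errors.length === 0, errors };
}
```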


Create Report Request (Configuration; see the sketch at the end of this list)

Looks a lot like "Identify Report"

Define jurisdictional context/req (single or multi versions of same report)

How often it runs

Data Accessed

Outputs

Roles and Permissions

UI/Interface

Extraction Pattern

Aggregation Rules

Messaging

Participation Criteria

Two Phase Consent

Data Path (from TRV to X to Y - where is the data going and for what purpose)

Development Process (extraction/code)

Testing

Auditability of data
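Since creating a report request looks a lot like identifying a report, a single configuration object could plausibly carry the items listed above. The shape below is an assumption for discussion; none of these field names are defined by openIDL today.

```typescript
// Hypothetical report request configuration, mirroring the items above.
interface ReportRequestConfig {
  reportId: string;
  jurisdictions: string[];       // single or multi versions of the same report
  schedule: string;              // how often it runs, e.g. a cron expression
  dataAccessed: string[];        // data the extraction is allowed to touch
  outputs: string[];             // outputs to produce
  extractionPatternId: string;
  aggregationRules: string[];
  participationCriteria: { minimumCarriers: number };
  dataPath: string[];            // e.g. ["carrier HDS", "analytics node", "regulator"]
  consent: { phaseOneApproved: boolean; phaseTwoApproved: boolean }; // two-phase consent
}
```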

Generate Report (see the sketch at the end of this list)

Rule Base for each report

Extract Data  (will involve aggregation)

Transmit Data (from HDS to analytics node)

Combine Data (various sources)

Consolidate Data (at the report level)

Traceability

Format the output

Validate against participation criteria (vs report config)

Exception Processing

Messaging

Generate Report

Auditability/Traceability

Reconciliation (manual on day 1?)

Data Quality

Extraction error detection & handling

Reconciliation (to do Mon 9/12 w/ SusanC)

Reconciliation (make sure report is correct based on request - reasonability check on the report - NOT financial reconciliation)

<SEAN CHECK RECORDING>

9/13/2022

Financial Reconciliation (Oracle? Source of truth to tie against those #s?)


Statistical Reconciliation 


Auditability/Traceability
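The generation steps above (extract, transmit, combine, consolidate, validate against participation criteria, reasonability check) could be sketched roughly as below. This is a simplified illustration: the row shape, the 10x loss-to-premium threshold, and all function names are assumptions, not the actual extraction or aggregation logic.

```typescript
// Illustrative combine/consolidate step at the analytics node.
interface CarrierExtract {
  carrierId: string;
  rows: { cell: string; earnedPremium: number; incurredLoss: number }[];
}

// Validate against participation criteria before any data is combined.
function validateParticipation(extracts: CarrierExtract[], minimumCarriers: number): void {
  if (extracts.length < minimumCarriers) {
    throw new Error(`Only ${extracts.length} carriers consented; ${minimumCarriers} required`);
  }
}

// Combine data from the various sources, then consolidate at the report level.
function consolidate(extracts: CarrierExtract[]) {
  const totals = new Map<string, { earnedPremium: number; incurredLoss: number }>();
  for (const extract of extracts) {
    for (const row of extract.rows) {
      const t = totals.get(row.cell) ?? { earnedPremium: 0, incurredLoss: 0 };
      t.earnedPremium += row.earnedPremium;
      t.incurredLoss += row.incurredLoss;
      totals.set(row.cell, t);
    }
  }
  return totals;
}

// Statistical reconciliation: a reasonability check on the report,
// not a financial reconciliation against a source of truth.
function reasonabilityCheck(totals: ReturnType<typeof consolidate>): string[] {
  const findings: string[] = [];
  for (const [cell, t] of totals) {
    if (t.earnedPremium <= 0) findings.push(`Cell ${cell}: non-positive earned premium`);
    if (t.incurredLoss > t.earnedPremium * 10) {
      findings.push(`Cell ${cell}: incurred loss more than 10x premium, needs review`);
    }
  }
  return findings;
}
```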

Deliver Report (see the sketch at the end of this list)

  1. Stat agent keeps index of reports by line by state
  2. Regulator - chooses which reports they want to receive - standard stat report ( otherwise a data call)
  3. Carriers - will LIKE by line item (stat agent-state-line of business)
  4. Stat Agent will attach an Extraction Pattern (EP)
  5. Carrier will consent 
  6. Generate Report
  7. Submit Report to Regulators and Participating Carriers

Make report available (S3 bucket public/private? start private)

Deliver to participants (carriers)

Deliver to Regulators (requestors)

Receipt

Exception Handling in Reporting

Auditability/Traceability

Notification
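A sketch of the delivery step under the assumptions above: the report starts in private storage, is delivered to participating carriers and requesting regulators, and a receipt is recorded per recipient. The ReportStore interface stands in for whatever object store is chosen (for example a private S3 bucket); all names here are illustrative.

```typescript
// Illustrative delivery step; not tied to a specific storage SDK.
interface ReportStore {
  // Returns a location (e.g. a key or URL) for the stored report.
  put(key: string, body: string, access: "private" | "public"): Promise<string>;
}

interface Receipt {
  recipientId: string;
  reportLocation: string;
  deliveredAt: string;
}

async function deliverReport(
  store: ReportStore,
  reportKey: string,
  reportBody: string,
  participants: string[], // consenting carriers
  requestors: string[]    // regulators that requested the report
): Promise<Receipt[]> {
  // Start private, as noted above; visibility can be widened later if needed.
  const location = await store.put(reportKey, reportBody, "private");
  const receipts: Receipt[] = [];
  for (const recipientId of [...participants, ...requestors]) {
    // A real implementation would send a notification with the location here
    // and handle delivery exceptions.
    receipts.push({ recipientId, reportLocation: location, deliveredAt: new Date().toISOString() });
  }
  return receipts;
}
```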

Data Call (see the lifecycle sketch at the end of this list)

Communications for resolving conflicts, etc.

Load Data

Create Data Call

Like Data Call

Issue Data Call

Subscription to Data Call

Consent to Data Call

Mature Data Call

Abandon Data Call

Clone Data Call

Deliver Report
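The data call steps above read like a lifecycle, so one way to pin them down is a small state machine. The states and allowed transitions below are an assumption drawn from the list, not an agreed design; in particular, where liking and cloning fit is still open.

```typescript
// Hypothetical data call lifecycle. Cloning would start a new call in "created"
// rather than reusing an abandoned one.
type DataCallState =
  | "created"
  | "liked"
  | "issued"
  | "consented"
  | "matured"
  | "abandoned"
  | "delivered";

const allowedTransitions: Record<DataCallState, DataCallState[]> = {
  created: ["liked", "issued", "abandoned"],
  liked: ["issued", "abandoned"],
  issued: ["consented", "abandoned"],
  consented: ["matured", "abandoned"],
  matured: ["delivered"],
  abandoned: [],
  delivered: [],
};

function transition(current: DataCallState, next: DataCallState): DataCallState {
  if (!allowedTransitions[current].includes(next)) {
    throw new Error(`Cannot move a data call from ${current} to ${next}`);
  }
  return next;
}
```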

Access and Roles (see the sketch below)

Permissioned Access

Roles
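A minimal sketch of permissioned access, assuming the roles that appear throughout this document (carrier, regulator, stat agent, analytics node). The permission strings are placeholders for discussion, not an existing openIDL role model.

```typescript
// Illustrative role-to-permission mapping.
type Role = "carrier" | "regulator" | "statAgent" | "analyticsNode";

const permissions: Record<Role, string[]> = {
  carrier: ["load-data", "consent", "view-own-reports"],
  regulator: ["subscribe-report", "view-delivered-reports"],
  statAgent: ["create-report-request", "attach-extraction-pattern", "deliver-report"],
  analyticsNode: ["combine-data", "consolidate-data", "generate-report"],
};

function can(role: Role, action: string): boolean {
  return permissions[role].includes(action);
}
```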

Application Components

Data Sources, Sinks and Flows

Decisions

Tenets

Data

ID | Tenet
1 | Data will be loaded in a timely manner as it becomes available.
2 | The HDS will track the most recent date that is available to query for pre- and post-edit-package data.
3 | Data owners will correct any mistakes as soon as they are made aware of the issue.
4 | Data owners will follow current practices for logging policy and claim records as they do today. A new record will be created for each event. All records will be loaded in a timely manner after the creation event.
5 | There will be a distinction between edited and unedited records (records that have successfully gone through the edit package).
6 | HDS data is attested to, and there is some way to attest to metadata, including the date range up to which the data is good: "data in the HDS is good up to now for this purpose."
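Tenets 2, 5, and 6 imply some record of how far, and for what purpose, HDS data can be trusted. A minimal sketch of such an attestation record follows; all field names are assumed for illustration.

```typescript
// Hypothetical HDS attestation record: "data in the HDS is good up to this
// date for this purpose."
interface HdsAttestation {
  hdsId: string;
  purpose: string;                  // e.g. a specific stat report
  goodThrough: string;              // most recent date that is safe to query
  includesUneditedRecords: boolean; // pre- vs post-edit-package data
  attestedBy: string;
  attestedAt: string;
}
```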

Non-Functional Requirements (to be moved to the requirements doc)

Notes:


Time | Item | Who | Notes