...

The Architecture Definition Workspace is where we, as a community, come together to work through the architecture for openIDL going forward.  We take our experiences, combine them with inputs from the community, and apply them to the usage scenarios we have for openIDL.  Below is a table of the phases and the expected outcome of each.

Phase: Requirements
Description: Define the requirements for one or more possible scenarios for openIDL.  In this case, we are focused on the stat reporting use case.
Outcome: A set of requirements.  openIDL - System Requirements Table (DaleH @ Travelers)

Phase: Define Scenarios
Description: Define the scenarios sufficiently to gather ideas about the different steps.  The scenarios will change over time as we dig into the details.
Outcome: A few scenarios broken down into steps.

Phase: Brainstorming
Description: Gather ideas from all participants for all the different steps in the scenarios.
Outcome: Detailed notes for each of the steps in the scenario(s).

Phase: Architecture Elaboration and Illustration
Description: Consolidate notes and start defining architecture details.
  • Network Architecture - the different kinds of nodes and how they participate
  • Application Architecture - the structure of the functional components and their responsibilities
  • Data Architecture - data flows and formats
  • Technical Architecture - the use of technologies to support the application
Outcome: Diagrams for the different architectures (block diagrams, interaction diagrams) and tenets (strongly held beliefs / constraints on the implementation).

Phase: Identify Spikes
Description: From the elaboration phase will come questions that require answers.  Sometimes answers come through research; often they must come from spikes.  Spikes are short, focused, deep-dive implementation activities that help identify the right solution for aspects of the system.  The TSC must approve the spikes.
Outcome:
  • spikes defined
  • spikes approved

Phase: Execute Spikes
Description: Execute approved work to answer the question that required the spike.
Outcome: Spike results documented.

Phase: Plan Implementation
Description: With spikes completed, the team can finalize the design of the architecture and plan the implementation.
Outcome: Implementation Plan.

Phase: Implement
Description: Implement the architecture per the plan.
Outcome: Running network in the approved architecture.

Deliverables:

Scenarios

Stat Report 

...

  • Big (all of SDMA)
  • When we discuss loading data, is it already edited and run through the SDMA rulebase and good to go, or is it raw, untested data?
  • Assumption: the data has already been through the edit process.
  • Can tell how good the data is and through what date.
  • Pointer to SDMA functionality:
  • PA - SDMA applies business-level rules.  Today there is a large manual reconciliation process BEFORE turning in reports, plus business and schema testing (does the data match the rules and the schema? cross-field edits).
  • KS - Cross-field edits cover loss records and different coverages.  There is a publishable set of thousands of rules; if SDMA is used, it will just work, so SDMA could simply be plugged in.  It can be (and has been) pulled out, proving it could be done.  The rules could be run as an ETL process; this has not been done yet, and the back-and-forth fixing of records would not be part of it.  A sketch of running the rules as an ETL pass follows below.
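To make the ETL idea concrete, here is a minimal sketch of running cross-field edit rules as a batch validation pass.  The record fields, rule names, and the rules themselves are illustrative assumptions, not the actual SDMA rulebase.

```python
# Hypothetical sketch: SDMA-style cross-field edit rules run as an ETL
# validation pass. Fields and rules are illustrative, not the real rulebase.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class StatRecord:
    line_of_business: str
    coverage_code: str
    premium_amount: float
    loss_amount: float
    accident_date: Optional[str] = None  # ISO date; required on loss records
    errors: list = field(default_factory=list)

# Each rule returns an error message, or None if the record passes.
Rule = Callable[[StatRecord], Optional[str]]

def loss_requires_accident_date(r: StatRecord) -> Optional[str]:
    # Cross-field edit: a loss record must carry an accident date.
    if r.loss_amount > 0 and not r.accident_date:
        return "loss record missing accident date"
    return None

def coverage_matches_line(r: StatRecord) -> Optional[str]:
    # Cross-field edit: coverage code must be valid for the line of business.
    valid = {"PPA": {"BI", "PD", "COMP", "COLL"}}  # assumed mapping
    if r.coverage_code not in valid.get(r.line_of_business, set()):
        return f"coverage {r.coverage_code} invalid for line {r.line_of_business}"
    return None

RULES: list[Rule] = [loss_requires_accident_date, coverage_matches_line]

def run_edit_pass(records: list[StatRecord]) -> tuple[list[StatRecord], list[StatRecord]]:
    """Apply every rule to every record; split into passed and failed sets."""
    passed, failed = [], []
    for r in records:
        r.errors = [msg for rule in RULES if (msg := rule(r)) is not None]
        (failed if r.errors else passed).append(r)
    return passed, failed
```

Run this way, the pass only splits records into passed and failed sets and reports the errors; the interactive back-and-forth correction of records stays outside the ETL step, as noted above.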

Data Attestation

...

  • Do we have an automated way to attest to data?
  • Cannot attest to completeness.
  • Provide a data attestation function: a carrier attests to its data as of a particular date.  Attestation parameters: the data attested, the time frame (last date of complete transactional data), and the level of the data (levels must be defined for attestation, e.g. stat reporting day 0, 1, 2).
  • Different attestations for claims and premium data.
  • Data formats / levels must be defined for attestation.
  • On extraction, check the last attested date and whether it meets the requirement of the data call.
  • Attest to the quality of the data (e.g. meets the 5% error constraint for data from dates x to y).  A sketch of a possible attestation record and the extraction-time check follows below.
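As a way to pin down these parameters, here is a hedged sketch of what an attestation record and the extraction-time check might look like.  The field names, the levels, and the way the 5% constraint is applied are assumptions for illustration; these notes do not define an openIDL interface.

```python
# Hypothetical sketch of a data attestation record and the extraction-time
# check described above. All names and thresholds are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Attestation:
    carrier_id: str
    data_kind: str          # attested separately, e.g. "premium" or "claims"
    level: int              # e.g. stat reporting day 0, 1, 2
    attested_through: date  # last date of complete transactional data
    error_rate: float       # observed edit-failure rate for the attested span

MAX_ERROR_RATE = 0.05  # the 5% error constraint from the notes

def meets_data_call(att: Attestation, required_kind: str,
                    required_level: int, required_through: date) -> bool:
    """On extraction: does the carrier's last attestation satisfy the call?"""
    return (att.data_kind == required_kind
            and att.level >= required_level
            and att.attested_through >= required_through
            and att.error_rate <= MAX_ERROR_RATE)

# Example: a premium data call needing level-1 data complete through 2021-06-30.
att = Attestation("carrier-123", "premium", 1, date(2021, 7, 31), 0.02)
assert meets_data_call(att, "premium", 1, date(2021, 6, 30))
```

Keeping claims and premium attestations as separate records, as the notes suggest, means an extraction can proceed for one kind of data even when the other has not yet been attested.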

...