
This page covers the known issues with openIDL.



ChunkId and BatchId are antiquated and unnecessary

The chunkId and batchId were notions introduced by IBM and are likely no longer needed; they should be removed.

Reference Implementation: Bootstrap Script

A bootstrap script is needed for adding users, data calls, extraction patterns, and data to the HDS. Provide a script that can create enough elements to get started with the system immediately (see the sketch after this list):

  • Add users (if necessary)
  • Add data into the HDS
  • Add data calls 
  • Add extraction patterns
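
A minimal sketch of such a bootstrap script, assuming hypothetical REST endpoints and payload shapes (none of the paths or fields below are confirmed openIDL APIs):

    // Hypothetical bootstrap sketch: every endpoint path and payload shape
    // below is an assumption for illustration, not a documented openIDL API.
    async function bootstrap(baseUrl: string, token: string): Promise<void> {
      const post = (path: string, body: unknown) =>
        fetch(`${baseUrl}${path}`, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            Authorization: `Bearer ${token}`,
          },
          body: JSON.stringify(body),
        });

      await post('/users', { name: 'demo-user', role: 'carrier' });
      await post('/hds/records', { recordType: 'policy', state: 'OH', premium: 1000 });
      await post('/data-calls', { name: 'Demo Data Call', deadline: '2030-01-01' });
      await post('/extraction-patterns', { name: 'demo-pattern' });
    }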



A configuration file should drive the pipelines



Automate initial account setup



Responsive UI

The user interface is not responsive.


SSO / Identity Management

Consider using a universal identity management solution like that discussed by Chainyard.


Monitoring is missing

The IBM system did not implement monitoring, and the current scope does not include it.

Here is an article on how to provide monitoring in Kubernetes using Prometheus: https://phoenixnap.com/kb/prometheus-kubernetes-monitoring

There is monitoring implemented in the reference implementation using AWS native services.
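
If the Prometheus route is taken, instrumenting the Node.js services is straightforward. A minimal sketch using the prom-client library (the metric name and port are illustrative):

    // Minimal sketch, assuming a Node.js service instrumented with the
    // prom-client library; the metric name below is illustrative only.
    import express from 'express';
    import client from 'prom-client';

    const app = express();
    client.collectDefaultMetrics(); // process CPU, memory, event-loop lag, etc.

    const dataCallsProcessed = new client.Counter({
      name: 'openidl_data_calls_processed_total', // assumed metric name
      help: 'Total number of data calls processed by this node',
    });
    // Call dataCallsProcessed.inc() wherever a data call is handled.

    app.get('/metrics', async (_req, res) => {
      res.set('Content-Type', client.register.contentType);
      res.end(await client.register.metrics());
    });

    app.listen(3000); // Prometheus scrapes http://<pod>:3000/metrics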




Maintenance Strategy

The system is a distributed network whose nodes reside in third-party clouds that AAIS does not own. To keep the nodes up to date, they must be actively managed. GitOps is a practice that enables this, and AAIS will establish GitOps practices for ongoing maintenance of the distributed nodes.




Data Architecture is not fully defined

The data architecture is not yet fully informed by the problem space. Having distributed nodes that include quality assurance of the data and extraction for analytics means some of the current architecture must be reimagined.




Messaging Standard

A standard format for sharing data with the node is required. This may be best served by a messaging model that is lightweight, well documented, and performant. This is where bulk data processing occurs, whether in real time or through events.
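
As a strawman for discussion, a lightweight envelope might look like the sketch below; every field name is an assumption, not a ratified standard:

    // Hypothetical message envelope for submitting records to a node.
    // All field names are assumptions for discussion, not a ratified standard.
    interface RecordMessage {
      messageId: string;                // unique id, enables idempotent processing
      carrierId: string;                // submitting organization
      recordType: 'policy' | 'claim';   // kind of record carried in the payload
      schemaVersion: string;            // version of the payload schema
      submittedAt: string;              // ISO-8601 timestamp
      payload: Record<string, unknown>; // the record itself
    }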




HDS Format Standard

Once data has been ingested via the messaging format, it will be validated and cleansed to a high standard, valid for use in extraction and reporting. This is not a “transaction” format but a logical format that fits the needs of the extraction itself. For data calls, a policy-oriented model is more appropriate.
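
A sketch of what a policy-oriented HDS document could look like; the field names are illustrative, not the actual HDS schema:

    // Illustrative policy-oriented HDS document; field names are assumptions,
    // not the actual HDS schema.
    interface HdsPolicy {
      policyId: string;
      carrierId: string;
      state: string;          // two-letter state code
      lineOfBusiness: string;
      effectiveDate: string;  // ISO-8601
      expirationDate: string;
      coverages: {
        code: string;         // coverage identifier
        limit: number;
        premium: number;
      }[];
    }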




Data Lifecycle / Time to Live / Auditing

Data has become a major part of the cost of cloud computing.  Keeping data around forever, especially when it is derived from other data, is likely not the best choice.  The lifetime of the data must be considered and optimized for the use case.
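
If the derived data lives in MongoDB (as the extraction item below suggests), one concrete, low-effort option is a TTL index, which deletes documents automatically once they reach a configured age; the database, collection, and 90-day lifetime below are arbitrary examples:

    // Sketch: expire derived documents automatically with a MongoDB TTL index.
    // The database/collection names and the 90-day lifetime are examples only.
    import { MongoClient } from 'mongodb';

    async function addTtl(uri: string): Promise<void> {
      const client = new MongoClient(uri);
      await client.connect();
      await client
        .db('hds')                        // assumed database name
        .collection('extractionResults')  // assumed collection of derived data
        .createIndex({ createdAt: 1 }, { expireAfterSeconds: 90 * 24 * 3600 });
      await client.close();
    }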




Extraction Processing Implementation

The extraction pattern model currently uses a mapReduce function in MongoDB. This locks us into MongoDB and runs in a closed environment without access to the outside world, which we will need for correlating other data such as census data. The extraction capability must be reimagined.
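
For context, this approach amounts to running stored map/reduce function strings as a MongoDB mapReduce command, roughly like the sketch below (the collection and field names are assumptions):

    // Rough sketch of the current mapReduce-style extraction; collection and
    // field names are assumptions, and mapReduce is deprecated in MongoDB.
    import { MongoClient } from 'mongodb';

    async function runExtraction(uri: string) {
      const client = new MongoClient(uri);
      await client.connect();
      const result = await client.db('hds').command({
        mapReduce: 'policies',                                  // assumed collection
        map: 'function () { emit(this.state, this.premium); }', // stored as a string
        reduce: 'function (key, values) { return Array.sum(values); }',
        out: { inline: 1 },                                     // return results inline
      });
      await client.close();
      return result.results;
    }

One alternative worth evaluating is the aggregation pipeline, or an extraction step that runs outside the database entirely; either would ease the MongoDB lock-in and allow joining external sources such as census data.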




Data Loading UI

The user interface for loading data and the ETL provided by IBM work only with stat plan data. This functionality is shifting to the members for implementation; AAIS can provide a reference implementation, or none at all.




Data Loading Scalability

The current API inherited from IBM does not scale for large data sets.  This component must be reimagined in a way that can handle very large volumes of data.
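
One common approach for very large submissions is to stream records one at a time, for example as newline-delimited JSON, instead of buffering the whole payload in memory. A minimal sketch, assuming NDJSON input and an insert callback:

    // Minimal sketch: stream a large NDJSON file record by record instead of
    // buffering the entire payload in memory. The insert callback is assumed.
    import { createReadStream } from 'fs';
    import { createInterface } from 'readline';

    async function loadLargeFile(
      path: string,
      insert: (record: unknown) => Promise<void>,
    ): Promise<void> {
      const lines = createInterface({ input: createReadStream(path) });
      for await (const line of lines) {
        if (line.trim().length > 0) {
          await insert(JSON.parse(line)); // one record per line
        }
      }
    }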




Data Loading Hash

Since the data in the pipeline is derived from other data, it is likely to be transient. We need to track that data has been provided, but if the data architecture changes, then we must rethink where we take snapshots and record them in the ledger.
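
Wherever the snapshot point ends up, the mechanism itself can stay simple: hash the batch and record only the digest on the ledger. A minimal sketch using Node's crypto module:

    // Minimal sketch: compute a deterministic digest of a batch of records so
    // that only the hash, not the data, needs to be recorded on the ledger.
    import { createHash } from 'crypto';

    function batchHash(records: object[]): string {
      const canonical = records
        .map((r) => JSON.stringify(r)) // assumes stable key order per record
        .sort()                        // order-independent across the batch
        .join('\n');
      return createHash('sha256').update(canonical).digest('hex');
    }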




Data Load Quality Assurance (Rules)

The rules that validate submitted data are currently not part of the architecture. The rules IBM provided were applied to the stat records, not the “HDS” format. Most data submissions will not follow the stat plans; the HDS is where we know the data will be uniform and where the rules should be applied. This must be built into the node.
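
A sketch of how HDS-level rules could be expressed as data and applied uniformly (the rule ids and record fields are hypothetical):

    // Hypothetical HDS-level validation rules; rule ids and record fields are
    // assumptions for illustration.
    interface HdsRecord { state?: string; premium?: number; }
    interface Rule {
      id: string;
      check: (rec: HdsRecord) => boolean;
      message: string;
    }

    const rules: Rule[] = [
      {
        id: 'premium-nonnegative',
        check: (r) => typeof r.premium === 'number' && r.premium >= 0,
        message: 'premium must be a non-negative number',
      },
      {
        id: 'state-code',
        check: (r) => typeof r.state === 'string' && r.state.length === 2,
        message: 'state must be a two-letter code',
      },
    ];

    // Returns the list of rule violations for one record (empty = valid).
    function validate(rec: HdsRecord): string[] {
      return rules.filter((rule) => !rule.check(rec)).map((rule) => rule.message);
    }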




Consent Processing

The consent process does not currently work correctly. It picks up the data from the harmonized data store upon consent instead of when the data call expires. This must be fixed in alignment with any other data or application architecture changes resulting from the items above.
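
In essence, the fix is to record consent immediately but defer the extraction until the data call's deadline. A simplified sketch of that sequencing (a real implementation would use a durable scheduler rather than an in-memory timer; all names are assumptions):

    // Simplified sketch of the intended sequencing: record consent now, run
    // the extraction only when the data call expires. Names are assumptions.
    interface DataCall { id: string; deadline: Date; }

    function onConsent(call: DataCall, runExtraction: (id: string) => void): void {
      const msUntilExpiry = call.deadline.getTime() - Date.now();
      // Extract at expiry, not at consent time.
      setTimeout(() => runExtraction(call.id), Math.max(msUntilExpiry, 0));
    }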




Stat Plan Mapping

The stat plan is the current form of data submission, but not all data will come in this form, and hopefully it can be retired at some point. Until then, any reporting via stat plan through openIDL requires the stat plan mapping to exist. The IBM implementation is inadequate and must be replaced.
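
Stat plan records are positional, fixed-width text, so the mapping is essentially positional field extraction. A toy sketch with made-up positions (not an actual AAIS stat plan layout):

    // Toy sketch of mapping a fixed-width stat plan line to an HDS-style
    // document. Positions, widths, and fields are made up for illustration.
    function mapStatLine(line: string) {
      return {
        lineOfBusiness: line.slice(0, 2).trim(),
        state: line.slice(2, 4).trim(),
        transactionCode: line.slice(4, 6).trim(),
        premium: Number(line.slice(6, 16)) / 100, // amount assumed in cents
      };
    }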




Incomplete Analytics Node

The analytics node, as inherited from IBM, is not fully functional even for the data call.  The remaining functionality must be developed.




Inadequate Unit Tests

IBM left us with minimal automated testing: very few unit tests and no CI/CD that automatically executes them.




Scalability/Performance Tests

There are no performance or scalability tests.




Automated UI Tests

There are no automated UI tests.




Authentication

We are not following the best practice of handing authentication off to an identity provider. Should we do this, or keep authentication under our own control to make the system more plug-and-play across different providers?




Bugs

There are several bugs in the current code that must be fixed.

  • UI count of data calls
  • Adequate filtering of data calls / workflow / in-box concept
  • UI icons require a code change
  • Validation of expired tokens in the insurance data manager: the token was validated, but not its expiration
  • The UI is not responsive; it does not scale to any resolution under 1080 nor to other devices
  • Use library charts to remove duplication across Helm chart templates
  • Put utilities (e.g., for creating users) into Kubernetes
  • Move icons out of code
  • stat-agent should not be able to like
  • Block explorer
  • Viewing organisations or carriers is not working (from IBM)
  • Remove the dependency on chunkId from data loading
