
The purpose of this document is to align on the initial design of Reply-O-Meter. Everything here is very early and likely to change.

Overview

The following is a product-oriented view of all the stages that need to happen in the final V1 product:

Data Ingestion. Input: raw photos. Output: a list of Artifacts that link each raw photo with a textual representation.

  1. Ingest all of the raw data files: photos of letters, postcards, and photographs.
  2. Digitization: conversion of the data to a textual representation.
  3. Normalization: translation of all material into one internal language (e.g. English).
  4. Artifacts: create Artifacts by joining related raw material. Example: a photo of a person together with the name/time from the photo's back side. (A rough Artifact sketch follows this list.)
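To make the Ingestion output concrete, here is a minimal sketch of what an Artifact might look like in Python. The field names (source_files, text, language, metadata) and the example values are illustrative assumptions, not a settled schema.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """Links one or more raw files to a single textual representation.

    Illustrative shape only; the real schema is not settled.
    """
    source_files: list[str]                       # paths to the joined raw photos
    text: str                                     # digitized text (OCR / transcription)
    language: str = "en"                          # internal language after normalization
    metadata: dict = field(default_factory=dict)  # e.g. name/time noted on a photo's back side

# Example: a portrait photo joined with the caption from its back side.
portrait = Artifact(
    source_files=["scans/box1/photo_012_front.jpg", "scans/box1/photo_012_back.jpg"],
    text="Miriam, summer 1938",
    metadata={"person_hint": "Miriam", "time_hint": "summer 1938"},
)
```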

Data Processing. Input: a list of Artifacts. Output: a Graph of Entities (Person, Location, and Event).

  1. Metadata (move to Ingestion?): for each Artifact, extract metadata on the Entities it refers to, such as Person, Event, and Location.
  2. Reconciliation: create new Entities and/or update existing ones based on the information from the new Artifacts. (A sketch of a possible Entity shape and reconciliation step follows this list.)
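One possible shape for the Entity graph, plus a naive reconciliation step, is sketched below. The Entity kinds (Person, Location, Event) come from this document; the exact-name matching is only a placeholder for real reconciliation logic.

```python
from dataclasses import dataclass, field
from enum import Enum

class EntityKind(Enum):
    PERSON = "person"
    LOCATION = "location"
    EVENT = "event"

@dataclass
class Entity:
    kind: EntityKind
    name: str
    artifact_ids: set[str] = field(default_factory=set)  # Artifacts that mention this Entity

def reconcile(entities: dict[str, Entity], kind: EntityKind,
              name: str, artifact_id: str) -> Entity:
    """Create a new Entity or update an existing one for a mention found in an Artifact.

    Placeholder: exact name match only. Real reconciliation would need fuzzy
    matching, aliases, and the user feedback coming from the Browser stage.
    """
    key = f"{kind.value}:{name.lower()}"
    entity = entities.setdefault(key, Entity(kind=kind, name=name))
    entity.artifact_ids.add(artifact_id)
    return entity
```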

Browser. Input: Graph of Entities. Output: updates to Artifacts and/or Entities.

  1. Feedback: the user may correct any of the Ingestion or Processing steps, which re-triggers the rest of the flow (see the re-run sketch after this list).
  2. Tuning: triggers model training/tuning if relevant.
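One way to frame the feedback loop is: a correction at any step re-runs everything downstream of it. The stage names below follow this document's ordering, but the function itself is just a sketch.

```python
# Stage ordering taken from this document; everything downstream of a correction re-runs.
PIPELINE = ["ingestion", "digitization", "normalization", "artifacts",
            "metadata", "reconciliation"]

def stages_to_rerun(corrected_stage: str) -> list[str]:
    """Return the corrected stage and every stage downstream of it."""
    return PIPELINE[PIPELINE.index(corrected_stage):]

# Example: fixing an OCR error re-triggers normalization onward.
print(stages_to_rerun("digitization"))
# ['digitization', 'normalization', 'artifacts', 'metadata', 'reconciliation']
```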

Story Creator. Input: Graph of Entities. Output: Story.

  1. Chat Agent: the user can chat with an Author agent to create stories based on the known Entities (a minimal Author interface is sketched below).
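A minimal interface for the Author agent could look like the sketch below (it reuses the illustrative Entity type from the Processing sketch). The class, method names, and the stubbed replies are assumptions; the real agent would presumably call a language model.

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    body: str
    entity_keys: list[str]   # keys of the Entities the story draws on

class Author:
    """Placeholder chat agent; the story generation itself is stubbed out."""

    def __init__(self, entities: dict[str, "Entity"]):
        self.entities = entities
        self.history: list[str] = []

    def chat(self, user_message: str) -> str:
        """Record the user's message and return the agent's next question (stubbed)."""
        self.history.append(user_message)
        return "Which people or events should this story focus on?"

    def finalize(self) -> Story:
        """Turn the conversation so far into a Story draft (stubbed)."""
        return Story(title="Untitled", body="\n".join(self.history),
                     entity_keys=list(self.entities))
```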

Graphic Artist. Input: Story and Graph of Entities. Output: Graphic Novel.

  1. People: given a style, create a visual representation of each Person across their life.
  2. Supporting Material: gather maps of the Locations and the relevant Events.
  3. Create a graphic novel based on the Story (a rough planning sketch follows this list).
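As a rough illustration, the Graphic Artist stage could plan the novel panel by panel from the Story text. The Panel type, the one-panel-per-paragraph rule, and the style prompt format are all assumptions made up for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Panel:
    caption: str
    image_prompt: str   # what to ask the image model to draw

def plan_graphic_novel(story_body: str, style: str) -> list[Panel]:
    """Very rough planning step: one panel per story paragraph.

    A real implementation would also fold in the per-Person visual
    representations and the maps gathered as supporting material.
    """
    paragraphs = (p.strip() for p in story_body.split("\n\n"))
    return [Panel(caption=p, image_prompt=f"{style}: {p}") for p in paragraphs if p]
```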