The purpose of this document is to align on the initial design of Reply-O-Meter. Everything here is very early and likely to change.
Overview
The following is a product-oriented view of all the stages that need to happen in the final V1 product:
- Data Ingestion
  Input: raw photos
  Output: list of Artifacts that link each raw photo with a textual representation
  a. Ingest all of the raw data files: photos of letters, postcards, and photographs.
  b. Digitization: conversion of the data to a textual representation.
  c. Normalization: translation of all material into one internal language (e.g. English).
  d. Artifacts: create Artifacts by joining related raw material. Example: a photo of a person together with the name/time from the photo's backside. (See the Artifact sketch after this list.)
- Data Processing
  Input: list of Artifacts
  Output: graph of Entities (Person, Location, and Event)
  a. Metadata (move to Ingestion?): for each Artifact, extract metadata on the Entities it refers to, such as Person, Event, and Location.
  b. Reconciliation: create new Entities and/or update existing ones based on the information from the new Artifacts. (See the reconciliation sketch after this list.)
- Browser
  Input: graph of Entities
  Output: updates to Artifacts and/or Entities
  a. Feedback: the user may correct any of the Ingestion or Processing steps, which retriggers the rest of the flow. (See the retrigger sketch after this list.)
     i. Will also trigger model training/tuning if relevant.
- Story Creator
  Input: graph of Entities
  Output: Story
  a. Chat Agent: the user can chat with an Author to create stories based on the known Entities. (See the Author interface sketch after this list.)
- Graphic Artist
  Input: Story, graph of Entities
  Output: graphic novel
  a. People: given a style, create a visual representation of each Person across their life. (See the character-sheet sketch after this list.)
  b. Supporting Material: gather maps of the Locations and material on the relevant Events.
  c. Create a graphic novel based on the Story.
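
The sketches below are illustrative only; every class, function, and field name (Artifact, ingest, digitize, normalize, etc.) is an assumption made for the sketch, not a committed schema or API. First, a minimal sketch of the Data Ingestion output, assuming an Artifact simply pairs the raw photo files with their digitized, normalized text:

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Artifact:
    """Links raw photos to their textual representation (names/fields are illustrative)."""
    raw_files: list[Path]    # source photos, e.g. the front and back of a postcard
    text: str                # digitized textual representation
    language: str = "en"     # internal language after normalization

def digitize(photo: Path) -> str:
    # Placeholder for OCR / handwriting recognition.
    return f"[text extracted from {photo.name}]"

def normalize(text: str) -> str:
    # Placeholder for translation into the internal language (e.g. English).
    return text

def ingest(photo_dir: Path) -> list[Artifact]:
    """Produce one Artifact per photo; joining related photos (step d) is left out."""
    return [
        Artifact(raw_files=[photo], text=normalize(digitize(photo)))
        for photo in sorted(photo_dir.glob("*.jpg"))
    ]
```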
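
A similarly minimal sketch of Reconciliation, assuming a new Entity is created only when no existing Entity matches on kind and normalized name; real matching would need to be fuzzier (nicknames, dates, places), and edges between Entities are omitted:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    kind: str                                            # "Person", "Location", or "Event"
    name: str
    artifact_ids: set[str] = field(default_factory=set)  # provenance: which Artifacts mention it

class EntityGraph:
    def __init__(self) -> None:
        self._by_key: dict[tuple[str, str], Entity] = {}

    def reconcile(self, kind: str, name: str, artifact_id: str) -> Entity:
        """Update a matching Entity or create a new one for this mention."""
        key = (kind, name.strip().lower())
        entity = self._by_key.setdefault(key, Entity(kind=kind, name=name))
        entity.artifact_ids.add(artifact_id)
        return entity
```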
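
For the Browser's feedback loop, a sketch of the retrigger behavior, assuming the stages run in a fixed order; the stage names and the runner callback are hypothetical:

```python
from typing import Callable

# Hypothetical stage order; a correction re-runs its stage and everything downstream.
PIPELINE = ["ingestion", "processing", "story_creator", "graphic_artist"]

def retrigger(corrected_stage: str, run_stage: Callable[[str], None]) -> None:
    start = PIPELINE.index(corrected_stage)
    for stage in PIPELINE[start:]:
        run_stage(stage)

# Example: a correction to extracted Metadata re-runs Processing and all later stages.
retrigger("processing", run_stage=lambda stage: print(f"re-running {stage}"))
```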
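
For the Story Creator, only the shape of the Author chat interface, assuming it will eventually be backed by a language model; this stub just records the conversation so the contract is visible:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    text: str

class Author:
    """Chat-style agent that drafts a Story from known Entities (interface shape only)."""

    def __init__(self, entities: list) -> None:
        self.entities = entities         # Entities produced by Data Processing
        self.turns: list[str] = []

    def chat(self, message: str) -> str:
        self.turns.append(message)
        # A real Author would ground its reply in the Entities; this stub only acknowledges.
        return f"Noted ({len(self.entities)} entities available): {message}"

    def finish(self, title: str) -> Story:
        return Story(title=title, text="\n".join(self.turns))
```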
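
Finally, for the Graphic Artist, a sketch only of how a consistent visual identity per Person might be tracked across the novel; image generation itself is out of scope here and every name is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CharacterSheet:
    person_name: str
    style: str                                                        # e.g. "watercolor"
    renderings_by_age: dict[str, str] = field(default_factory=dict)   # age range -> image path

def build_character_sheets(people: list[str], style: str) -> list[CharacterSheet]:
    # One sheet per Person; renderings would be filled in by the chosen image model.
    return [CharacterSheet(person_name=name, style=style) for name in people]
```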