Hello Michael,
Thank you for your question.
There are three main ways to use our SDK to perform recognition, depending on what you want to achieve and how much software development effort you are ready to invest:
The recognizers aim at performing real-time, incremental recognition on transient ink.
They are intended for use cases like input-method writing solutions based on iink SDK, where your application performs the stroke capture and rendering. They simplify iink SDK integration when the ink is simply an input medium for digital content. For more details, please refer to https://developer.myscript.com/docs/interactive-ink/3.2/overview/recognizers/
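To make that split of responsibilities concrete, here is a minimal sketch of the application side when using a recognizer. The StrokeSink interface and its method names are placeholders for a thin adapter you would write on top of the actual Recognizer object; please check the Recognizer API reference for the exact pointer-event calls on your platform.

```java
import java.util.ArrayList;
import java.util.List;

final class TransientInkCapture {

    // Hypothetical adapter you would implement on top of the actual
    // Recognizer object; see the Recognizer API reference for the real
    // pointer-event methods on your platform.
    interface StrokeSink {
        void strokeStarted(float x, float y, long timestampMs);
        void strokePoint(float x, float y, long timestampMs);
        void strokeEnded(float x, float y, long timestampMs);
    }

    private final StrokeSink recognizerAdapter;
    private final List<float[]> currentStroke = new ArrayList<>();

    TransientInkCapture(StrokeSink recognizerAdapter) {
        this.recognizerAdapter = recognizerAdapter;
    }

    // Called from your own input handling (touch or stylus events):
    // your application renders the ink, the SDK only receives the points.
    void onPenDown(float x, float y, long t) {
        currentStroke.clear();
        currentStroke.add(new float[] { x, y });
        recognizerAdapter.strokeStarted(x, y, t);
    }

    void onPenMove(float x, float y, long t) {
        currentStroke.add(new float[] { x, y });
        recognizerAdapter.strokePoint(x, y, t);
    }

    void onPenUp(float x, float y, long t) {
        recognizerAdapter.strokeEnded(x, y, t);
        // At this point the recognition result can be fetched or observed
        // and the transient ink replaced by your own rendering of it.
    }
}
```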
The Editor object allows you to perform rendering-driven interactivity: the principle is to delegate display management to MyScript iink SDK, so that it handles the entire rendering canvas and performs all the interactivity functions for you. For more details, please refer to https://developer.myscript.com/docs/interactive-ink/3.2/concepts/ink-capture/
Today, the Editor object only supports manual classification of math within Raw Content, via the SetSelectionType method.
The OffscreenEditor object allows you to maintain an interactive context where your application performs the stroke capture and rendering, thanks to APIs that let you edit the content and get the updated result dynamically. This programmatic interactivity, also known as off-screen interactivity, is typically useful when you want to integrate editable content into an existing application. All these features are provided through APIs, and your application controls all the implemented behaviors. For more details, please refer to https://developer.myscript.com/docs/interactive-ink/3.2/concepts/interactive-ink/#programmatic-interactivity
The supported content types depend on the object that you use.
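For instance, one possible way to let your application decide what gets recognized, which is close to the palette idea you describe, is to create one content part per type and attach the matching part to the editor when a button is selected. This is only a minimal sketch assuming the Android/Java binding, with the certificate, engine configuration, package creation and editor binding left out as placeholders; please adapt it to your platform and check the reference documentation for the exact signatures.

```java
import com.myscript.iink.ContentPackage;
import com.myscript.iink.ContentPart;
import com.myscript.iink.Editor;

// Minimal sketch, assuming the Android/Java binding. "pkg" is a
// ContentPackage you created from the Engine, and "editor" the Editor
// attached to your view; both are placeholders here.
final class ContentTypePalette {

    // One part type per palette button: the part type determines which
    // recognizer handles the strokes, so selecting a button before writing
    // is effectively doing the classification yourself.
    static ContentPart partForButton(ContentPackage pkg, String button) {
        switch (button) {
            case "math":  return pkg.createPart("Math");
            case "text":  return pkg.createPart("Text");
            case "shape": return pkg.createPart("Diagram");
            default: throw new IllegalArgumentException("Unknown button: " + button);
        }
    }

    static void onPaletteButtonClicked(Editor editor, ContentPackage pkg, String button) {
        // Attach the part that matches the selected button; the strokes
        // written afterwards are interpreted as that content type only.
        editor.setPart(partForButton(pkg, button));
    }
}
```

Keep in mind that the Editor displays one part at a time, so switching parts also switches what is shown on the canvas; depending on what you need, you may prefer to create the parts once and reuse them rather than creating a new part on every button click.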
So, I would say the first question you need to ask yourself is: do you want to rely on our SDK to do the stroke capture and rendering for your application, or do you have your own stroke capture and rendering?
The second question is: do you need interactive features such as gestures or typeset conversion?
Best regards,
Gwenaëlle
Michael
I'm trying to use MyScript to help my math students read what I'm writing on a display board. I teach middle school math, so I'm drawing triangles and circles and then labelling them with either text or math symbols as needed. A triangle, for example, might have a square root (√3) on one leg, a letter on another, and some digits on the third leg. The circle's perimeter or diameter might be labelled 2π, where π is the Greek letter pi.
To achieve that goal, I'm trying to figure out MyScript's architecture.
The impression I get from reading the documentation is that MyScript gathers strokes, passes them to a classifier that attempts to guess what the strokes might be (i.e., math, text, or diagram), and then, based on the classifier's output, passes the strokes to a recognizer.
As that is rather circular, I'm thinking I'm not getting the architecture right. Moreover, a classifier might only support text and math, but the raw content recognizer can handle text, math, diagrams, and decorations (whatever the heck those are). A "convert" classifier seems to handle a different set of classes, which adds to the confusion.
Is there a way for me to control which recognizer gets the pen strokes fed to it? I'm thinking that I would have a palette with three buttons: math, text, and shape. Before I start moving the pen, I'd click on the appropriate button, draw whatever, and then the math, text, or shape editor would interpret whatever I had just drawn. In essence, I'd be the classifier, and MyScript would interpret what I'd just told it was a specific kind of content?