Technical FAQs
PrizmDoc™ Viewer is a solution that integrates into your current application to render and display a multitude of file types with high fidelity and speed. Because of its extreme flexibility, PrizmDoc Viewer can be used from virtually any platform and in any programming language that supports REST API calls. Designed to run on every type of computing device, our zero-footprint viewer makes it easy for your users to work where and how they wish.
Demo: Barcode Xpress for .NET Core. Barcode Xpress is a multi-language library that runs on multiple platforms, including Windows and Linux-based systems. This barcode library can read and write more than 30 different barcode types with high speed and accuracy. Barcode Xpress also comes with a free license to ImagXpress, one of our image processing SDKs, which supports loading and saving numerous image file formats, including BMP, JPG, and multi-page TIFF, among many others.
The days of manually transcribing scanned documents into an editable, digital document are thankfully long behind most organizations. Error-prone manual processes have largely given way to automated document and forms processing technology that can turn scanned documents into a more manageable form with a much higher degree of accuracy.
Much of this transition was made possible by the proliferation of optical character recognition (OCR) and intelligent character recognition (ICR). While they perform very similar tasks, there are some key differences between them that developers need to keep in mind as they build their document and forms processing applications.
How Does Character Recognition Technology Work?
Character recognition technology allows computer software to read and recognize text contained in an image and then convert it into a document that can be searched or edited. Since the process involves something that humans can do quite easily (namely, reading text), it’s easy to assume that this would be a rather trivial task for a computer to accomplish.
In reality, getting a computer program to correctly identify text and convert it into editable format is an incredibly complex challenge complicated by a wide range of variables. The problem is that when a computer examines an image, it doesn’t see people, backgrounds, or text as distinct images, but rather as a pattern of pixels. Character recognition technology helps computers distinguish text by telling them what patterns to look for.
Unfortunately, even this isn’t as straightforward as it sounds, because there are so many different text fonts that depict the same characters in different ways. A computer must be able to recognize, for example, that an “a” rendered in any of dozens of typefaces, from monospace to serif to decorative scripts, is still an “a”.
When humans read text, they have a mental concept of what the letter “a” looks like, but that concept is incredibly flexible and can easily accommodate a broad range of variations. Computers, however, require precision. Programmers must provide them with clear parameters that help them to navigate unexpected variations and identify characters accurately.
Pattern Recognition
The earliest versions of character recognition, developed in the 1960s, relied on pattern recognition techniques, which scanned images and searched for pixel patterns that matched a library of font characters stored in memory. Once those patterns were located, the software could translate the characters into searchable, editable text in a document format. Unfortunately, the patterns had to be an exact pixel match, which severely limited how broadly the technology could be applied.
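To make that limitation concrete, here is a minimal sketch of exact-match pattern recognition, assuming characters have already been isolated and binarized into small bitmaps; the tiny two-glyph “font library” is purely hypothetical:

```python
import numpy as np

# Hypothetical 5x5 binary templates standing in for a stored font library.
TEMPLATES = {
    "I": np.array([[0, 1, 1, 1, 0],
                   [0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0],
                   [0, 1, 1, 1, 0]]),
    "L": np.array([[0, 1, 0, 0, 0],
                   [0, 1, 0, 0, 0],
                   [0, 1, 0, 0, 0],
                   [0, 1, 1, 1, 0],
                   [0, 0, 0, 0, 0]]),
}

def recognize(glyph: np.ndarray) -> str:
    """Return the character whose stored template exactly matches the glyph."""
    for char, template in TEMPLATES.items():
        if np.array_equal(glyph, template):  # every pixel must match exactly
            return char
    return "?"  # any deviation (noise, smudge, new font) defeats the match
```

A single flipped pixel, whether from a scanner artifact or an unfamiliar typeface, is enough to make `recognize` give up, which is exactly the brittleness described above.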
One of the first specialized fonts developed to facilitate pattern recognition was OCR-A. A simple monospace font (meaning that each character has the same width), OCR-A was used on bank checks to help banks quickly scan them electronically. Although pattern recognition libraries expanded over the years to incorporate common print fonts like Times New Roman and Arial, this still presented serious limitations, especially as the variety of fonts continued to grow. With one popular font-finding website indexing more than 775,000 available fonts in 2021, pattern recognition needed to be supplemented by another approach to character recognition.
Feature Detection
Also known as feature extraction, feature detection focuses on the component elements of printed characters rather than looking at the character as a whole. Where pattern recognition tries to match characters to known libraries, this approach looks for very specific features that distinguish one character from another. A character that features two angular lines that come to a point and are crossed by a horizontal line in the middle, for instance, is almost always an “A,” regardless of the font used. Feature detection focuses on these qualities, which allows it to identify a character even if the program has never encountered a particular font before. This approach must, however, take the many different ways of rendering the character “A” into consideration when setting parameters.
Most character recognition software tools utilize feature detection because it offers far more flexibility than pattern recognition. This is especially valuable for reading document images with faded ink or some degradation that could prevent an exact pattern match. Feature detection provides enough flexibility for a program to be able to identify characters under less than ideal circumstances, which is important for any application that has to deal with scanned images.
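One classic, simple form of feature extraction is zoning: divide each glyph into a grid, measure the ink density in every zone, and compare the resulting feature vectors instead of raw pixels. The sketch below, with hypothetical names and data, classifies a glyph by the nearest feature vector, so small pixel-level deviations no longer break recognition:

```python
import numpy as np

def zone_features(glyph: np.ndarray, grid: int = 4) -> np.ndarray:
    """Split a binary glyph into grid x grid zones; return ink density per zone."""
    h, w = glyph.shape
    densities = [
        glyph[r * h // grid:(r + 1) * h // grid,
              c * w // grid:(c + 1) * w // grid].mean()
        for r in range(grid)
        for c in range(grid)
    ]
    return np.array(densities)

def classify(glyph: np.ndarray, labeled_examples: list) -> str:
    """Return the label of the example whose feature vector is closest."""
    target = zone_features(glyph)
    label, _ = min(
        labeled_examples,  # list of (label, example_glyph) pairs
        key=lambda pair: np.linalg.norm(zone_features(pair[1]) - target),
    )
    return label
```

Because the comparison is nearest-match rather than exact-match, faded ink or a novel font shifts the feature vector slightly without changing which character it is closest to.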
OCR vs ICR: What’s the Difference?
Optical character recognition (OCR) is typically understood to apply to any recognition technology that reads machine printed text. A classic OCR use case would involve reading the image of a printed document, such as a book page, newspaper clipping, or a legal contract, and then translating the characters into a separate file that could be searched and edited with a document viewer or word processor. It’s also incredibly useful for automating forms processing. By zonally applying the OCR engine to form fields, information can be quickly extracted and entered elsewhere, such as a spreadsheet or database.
When it comes to form fields, however, information is frequently entered by hand rather than typed. Reading hand-printed text adds another layer of complexity to character recognition. The range of more than 700,000 printed font types is insignificant compared to the near-infinite variations in hand-printed characters. The recognition software must account not only for stylistic variations, but also for the type of writing implement used, the quality of the paper, mistakes, steadiness of hand, and smudges or running ink.
Intelligent character recognition (ICR) utilizes constantly updating algorithms to gather more data about variations in hand-printed characters to identify them more accurately. Developed in the early 1990s to help automate forms processing, ICR makes it possible to translate manually entered information into text that can be easily read, searched, and edited. It is most effective when used to read characters that are clearly separated into individual areas or zones, such as fixed fields used on many structured forms.
Both OCR and ICR can be set up to read multiple languages, although limiting the range of expected characters to fewer languages will produce better recognition results. Critically, ICR does not read cursive handwriting because it must still be able to evaluate each individual character. With cursive handwriting, it’s not always clear where one character ends and another begins, and the individual variations from one sample to another are even greater than with hand-printed text. Intelligent word recognition (IWR) is a newer technology that focuses on reading an entire word in context rather than identifying individual characters.
To learn more about OCR and ICR technology and how each can transform your application’s document management and automated forms processing, download our whitepaper on the topic today.
Independent Software Vendors can help their customers reach their full potential and stop being held back by outdated document management practices. As data volumes continue to skyrocket, last century’s manual filing and sorting methods just don’t cut it anymore. Organizations are seeking new and efficient solutions to bring order to their document chaos.
PrizmDoc’s AI-powered Auto Tagging and Classification is helping solve these challenges. This breakthrough technology automatically organizes document collections, leading to faster information retrieval for ECM users. As an independent software vendor, integrating this tool into ECM platforms is an easy way to deliver next-level Auto Tagging and Classification capabilities.
Streamline Document Management with AI-Powered Auto Tagging and Classification
In the realm of Enterprise Content Management (ECM), efficient management of digital documents is essential. PrizmDoc’s Auto Tagging and Classification, leveraging IBM’s watsonx.ai technology, revolutionizes this process by automatically organizing documents and making them easily searchable.
This feature enhances document organization by using advanced AI-powered algorithms to analyze, categorize, and tag documents based on their content. This AI-driven tool improves document search and retrieval as a result of accurate tagging and classification. With contextually relevant results, your users will benefit significantly as search times are reduced, boosting productivity. This solution not only improves operational efficiency but also enhances user experience, making document management seamless and effective.
Benefits of AI-Powered Auto Tagging and Classification
Document management presents significant challenges for many organizations. According to one study, nearly half of employees struggle to locate documents quickly when they need them. For businesses seeking to solve this problem through new applications, another hurdle is that 80% struggle with seamless data and system integrations. PrizmDoc helps independent software vendors overcome these issues. When integrated into ECM systems, PrizmDoc allows ISVs to deliver solutions that streamline document organization, improve search functionality, and enhance efficiency, addressing the most common documentation pains experienced by businesses today.
Enhanced Document Organization
Independent Software Vendors integrating third-party document management solutions like PrizmDoc significantly reduce the time to market for new, innovative features like the AI-powered Auto Tagging and Classification tool. This tool offers ISVs an advanced way to organize documents for users, streamlining organization, ensuring information is consistently labeled, and easily retrievable by categorizing and tagging documents based on content.
AI-Powered Search and Retrieval
PrizmDoc enhances search functionality by generating relevant tags through IBM’s watsonx.ai technology. This leads to more precise, contextually relevant search results, reducing the time users spend searching for documents and boosting productivity within your application.
Time and Cost Savings Through Automation
Manual tagging is labor-intensive and prone to errors. Automating this process with PrizmDoc reduces the need for manual intervention, leading to significant time and cost savings. This allows resources to be allocated to more strategic tasks, enhancing overall operational efficiency.
Consistency and Accuracy
Uniformity in tagging and classification is crucial for maintaining data integrity. PrizmDoc ensures consistency across all documents by applying the same criteria uniformly, minimizing errors, and ensuring reliable document management practices.
Scalability
As an ISV’s customers grow, so does the volume of documents they handle. PrizmDoc’s Auto Tagging and Classification scales effortlessly, handling increased document loads without additional resources. This scalability is vital for businesses looking to expand without compromising on efficiency.
Integrating PrizmDoc’s features within your tools can revolutionize document management, providing your clients with a competitive edge through enhanced efficiency and user experience.
Seamless Integration and Customization
PrizmDoc provides seamless integration into ECM solutions for ISVs, allowing users to process documents without leaving the ECM environment. This ensures data security while enabling single-platform document management, viewing, annotation, and processing. PrizmDoc’s robust API allows customization to fit individual systems, maintaining security while boosting efficiency. This empowers ISVs to deliver enhanced solutions and lets their clients focus on the strategic tasks that drive business success.
Overcome AI Challenges for ISVs with PrizmDoc’s Built-In Auto Tagging and Classification
ISVs often face challenges in developing and maintaining their own AI solutions, including high costs, resource allocation, and the need for specialized expertise. PrizmDoc addresses these issues by providing built-in AI capabilities. By leveraging PrizmDoc’s advanced tools, such as Auto Tagging and Classification, ISVs can enhance their offerings without the burden of developing AI from scratch, enabling them to stay competitive and meet client demands efficiently.
Use Cases for Auto Tagging and Classification
Streamlining Legal Document Processing During eDiscovery
Law firms that need to manage large volumes of documents during litigation can quickly become overwhelmed. PrizmDoc’s Auto Tagging and Classification automatically organizes and tags legal documents based on document type and content, reducing manual effort and ensuring quick retrieval. This efficiency allows law firms to focus on case strategy rather than administrative tasks, ultimately improving case outcomes.
Unleashing ECM Platform Potential with Auto Tagging and Classification
PrizmDoc enhances ECM platforms with advanced Auto Tagging and Classification alongside in-browser document viewing, annotation, and processing. Streamlining these tasks within the ECM boosts productivity compared to juggling external tools. For ISVs, PrizmDoc offers an easily integrated AI solution, removing the challenges of building AI in-house and allowing them to focus on their products and business growth rather than on developing AI.
Schedule a demo now to see how Auto Tagging and Classification can supercharge your customers’ document management!
In the digital era, managing and sharing documents is central to business operations. As organizations handle increasing volumes of data, effective Enterprise Content Management (ECM) and Document Management Software (DMS) solutions become crucial. These systems streamline the organization, storage, and retrieval of information. Integral to these systems are document viewing and processing integrations, enhancing the security, accessibility, and usability of stored data.
Defining Document Viewing Integrations
Document viewing and processing integrations are software components that enable users to access, view, and mark up various document formats within ECM and DMS systems. These tools are embedded into software platforms, offering a consistent and high-quality viewing experience across different file types and devices. They allow users to view and interact with documents without needing native applications or external tools, which is essential for efficient and secure document handling.
Challenges in Document Management
ECM and DMS software solutions face distinct challenges in offering users a way to manage, share, and collaborate on documents, including:
- Security Concerns: Ensuring sensitive information remains secure while being accessible to authorized personnel.
- Compatibility Issues: Managing a variety of document formats not natively supported by all systems or devices.
- Collaboration Barriers: Facilitating effective document collaboration, especially with geographically dispersed teams.
- Efficiency Hurdles: Streamlining document access and offering markup features to enhance processing productivity.
- Version Control: Maintaining document integrity by preventing unauthorized edits and keeping track of changes.
The Solution: Enhancing Document Management With an Integration for Viewing & Processing
Document viewing and processing integrations address these challenges comprehensively by providing:
- Enhanced Security: Secure viewing options prevent unauthorized editing, alteration of metadata, and leaks of sensitive information.
- Consistent User Experience: Maintain a cohesive and efficient user experience across your branded applications.
- Streamlined Collaboration: Features like annotation and redaction enable effective collaboration within the ECM/DMS environment.
- Increased Efficiency: Quick and reliable access to documents and tools for efficient file conversion improves workflow and productivity.
- Control and Compliance: Maintaining document integrity and compliance with data protection regulations becomes more manageable.
Implementing Effective Document Viewing & Processing Integrations
Document viewing and processing integrations should prioritize security, compatibility, collaboration, efficiency, and compliance to address common document management challenges for users.
Key Features to Look For
When choosing a document viewing integration, consider features that enhance functionality and user experience, such as:
- Secure Document Viewing: Access and view a diverse variety of document formats within a secure environment to protect sensitive information.
- Annotation Tools: Facilitate collaboration with tools for adding comments, highlighting sections, and marking up documents.
- Redaction Capabilities: Obscure sensitive parts of a document to ensure compliance with privacy laws.
- File Conversion: Simplify converting documents into various formats to ensure accessibility across different platforms and devices.
- Advanced Search: Quickly locate specific information within documents.
Integration with ECM and DMS Solutions
The true power of document viewing and processing integrations lies in their seamless integration with ECM and DMS solutions, making these platforms versatile tools for different business environments. The integration process should be straightforward, enhancing existing systems without extensive modifications.
Integration Enhancements
Effective integration offers:
- Universal Accessibility: Ensuring documents can be viewed and interacted with on any device, breaking down barriers in document accessibility.
- Streamlined Workflows: Integrating annotation, redaction, and file conversion features directly into ECM and DMS platforms reduces the need for multiple tools.
- Enhanced Collaboration: Secure and efficient document collaboration fosters a more dynamic work environment.
Benefits of Effective Document Viewing Integrations
Integrating robust document viewing solutions into ECM and DMS systems provides several benefits:
- Enhanced Document Security: Prevent unauthorized access and provide a secure viewing environment to protect sensitive information.
- Improved Document Collaboration: Facilitate secure collaboration, allowing real-time interaction and feedback within documents.
- Universal Document Viewing: Cross-platform viewing eliminates compatibility issues, ensuring all team members can access necessary documents.
- Increased Productivity: Streamlined review and approval processes, combined with efficient workflow management, boost productivity.
Addressing Integration Challenges
To successfully integrate document viewing solutions, organizations should:
- Assess and Plan: Understand specific integration requirements and plan accordingly.
- Set Up the Server: Ensure the server is configured correctly to handle the document load.
- API Integration: Utilize APIs to integrate features like document viewing, annotation, redaction, and file conversion.
- Customize and Test: Tailor the integration to match workflows and conduct thorough testing.
- Deploy and Maintain: Monitor the integration closely during the initial launch and perform regular maintenance checks.
Embracing Document Viewing Integrations
For product managers, effective document viewing and processing integrations represent comprehensive solutions to address various needs for document management within their ECM/DMS applications. These integrations offer flexibility, security, and efficiency, ensuring that organizations can manage documents effectively and stay ahead in the fast-evolving digital landscape.
Product managers are encouraged to explore the capabilities of robust document viewing and processing integrations, leveraging their features to meet current document management needs and scale for future growth. By implementing these solutions, businesses can enhance their ECM and DMS systems, increasing efficiency and success.
PrizmDoc: Leading the Way in Document Viewing Integrations
PrizmDoc is a leading solution in the realm of document viewing and processing integrations, offering robust features that enhance ECM and DMS applications. With capabilities like secure document viewing, annotation tools, redaction, file conversion, and advanced search, PrizmDoc addresses the critical needs of modern businesses. Its seamless integration, comprehensive security measures, and user-friendly tools make it an invaluable asset for efficient and secure document management.
Learn More: Discover how PrizmDoc can revolutionize your document management processes. Contact us for a demonstration and further information.
Gerry Hernandez, Accusoft Senior Software Engineer
This is a continuation of our series of blog posts that shares our experience with functional test automation in a real-world microservice product base. In part three, we will share our implementation approach to SURGE: Simulate User Requirements Good-Enough. Be sure to read part one and part two before getting started.
Our Implementation of SURGE
This final blog post will be kept brief, as we’ll be covering the nitty gritty details of our automated test framework in a future blog post. But we do want to immediately share the most important bits of how we implemented our methodology and how we chose to separate our concerns. While this is applicable to most platforms, we did write our implementation using Node.
Generalized Test Suite Management
At the time of writing this, we have over 800 tests for our product. That’s more than I can count with my fingers, so it’s imperative to have a smart way to deal with this. Obviously, if I’m only interested in a particular slice of functionality, running the whole suite would be asinine. Through a somewhat scientific approach, we have determined that shaving a yak is not a productive use of development time.
Necessity, the mother of invention, gave birth to our SQL-driven test filters. Our SURGE implementation builds a relational model in-memory and allows the invoker (build server, human, or otherwise) to specify a query that determines which tests should run.
Sound like overkill? It’s not. The thing is, we really drive the point home that SURGE is designed to deal with less than ideal situations. For example, it’s fairly safe to assume that not every test is properly tagged/annotated. We can do things like “run all tests that contain the word ‘API’ in the title,” for example, which will give us a fairly good representation of API-only functionality.
If you’re still not convinced, realize that we don’t have a crystal ball and are unable to determine every single possibility of how the test suite should run. What we do know is that SQL is a tried-and-true grammar for generalized manipulation of a relational model. In plain English: it works in many, many situations and everyone knows how to use it.
Suppose a user gets annoyed with using SQL for casual development. No problem; we have a “starts with” filter built into SURGE. It will run all tests whose feature file starts with a string that’s passed in as a variable. The secret is that under the covers, what it’s actually doing is a SQL query.
What if you can’t express what you want as a SQL query? What if, say, you wanted extremely granular control over which tests run? We have a lower level interface called SURGE Run-Lists; an array of features, scenarios, and test cases to run. Generate it however you’d like and the framework will act accordingly. For example, you could write a script that uses a Git diff to determine which tests have changed, then run only those. Actually, this is exactly how our SQL-driven test filters work: the query is used to generate a run-list, which is then fed into SURGE.
For those of you paying attention, our “starts-with” filter generates a SQL query, which generates a run-list. This layered, generalized approach gives us extreme flexibility without compromise. Most importantly, though, is that all of this can be driven by automation.
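To make the layering concrete, here’s a toy sketch of the idea: build an in-memory relational model of the test metadata, let a SQL query filter it, and treat the resulting rows as the run-list. (It’s written in Python with SQLite purely for brevity; our actual implementation is in Node, and every name below is illustrative.)

```python
import sqlite3

# A tiny in-memory relational model of the suite's test metadata.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tests (feature TEXT, scenario TEXT, test_case TEXT)")
db.executemany("INSERT INTO tests VALUES (?, ?, ?)", [
    ("api_upload.feature", "Upload a document", "small PDF"),
    ("api_upload.feature", "Upload a document", "large TIFF"),
    ("viewer_zoom.feature", "Zoom a page", "200 percent"),
])

def run_list_from_sql(query: str) -> list:
    """A SQL filter is just a query whose result rows become the run-list."""
    return db.execute(query).fetchall()

def run_list_from_starts_with(prefix: str) -> list:
    """The friendlier 'starts with' filter is SQL underneath."""
    return db.execute(
        "SELECT feature, scenario, test_case FROM tests WHERE feature LIKE ?",
        (prefix + "%",),
    ).fetchall()

# Both paths yield the same kind of run-list for the framework to execute.
print(run_list_from_starts_with("api_"))
print(run_list_from_sql("SELECT * FROM tests WHERE scenario LIKE '%Zoom%'"))
```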
Gherkish is Not Gherkin
Gherkin implies global scoping and several layers of leaky abstractions. We impose a dialect of Gherkin that we like to call Gherkish. It differs from Gherkin in the following ways:
- All steps must start with Given, When, or Then keywords. The keyword And is intentionally not supported to avoid ambiguities. “You ain’t gonna need it.”
- All step definition functions must be mapped using the entire Gherkish statement; the preceding keyword cannot be omitted, like in many Cucumber implementations. This ensures a one-to-one mapping between the Gherkish and the test step definition, keeping things stupid simple.
- All feature files are scoped to their own step files; steps are not shared globally. We do this by a file naming convention. It doesn’t get any more obvious than that.
- All scenarios begin with the “Given test case: [testCase]” step, and therefore, all rows in the data table begin with a testCase column. This is to provide a meaningful label that describes the intent of the row, which later gets reported by the framework. This keeps things practical.
With these limitations, we’ve constricted the role of the Gherkish to simply providing a specification of behavior, along with example data used to drive the tests. It should never do anything else and there should absolutely not be any more abstraction than this.
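Putting those rules together, a hypothetical Gherkish feature (all names and data invented for illustration) looks like this:

```gherkin
Feature: Document conversion

  Scenario: Convert an uploaded file to PDF
    Given test case: [testCase]
    When the user uploads [inputFile]
    Then the output is a searchable PDF

    Examples:
      | testCase        | inputFile |
      | small Word file | tiny.docx |
      | large TIFF scan | big.tiff  |
```

Every step starts with Given, When, or Then; the first step is the “Given test case:” step; and the data table leads with the testCase column that the framework later reports.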
In our framework, we use Yadda’s Gherkin parser, along with some of our custom mixins. For the most part, that wheel did not need reinventing.
Synchronous WebDriver in Node
The entire point of Node’s concurrency model is to avoid long-running, synchronous IO. Well, we broke that rule quite heavily regarding our usage of WebDriver. Using WebDriver with Promises, callbacks, or Node Fibers is ugly, confusing, and impractical. So we use synchronous bindings via WebDriver-Sync. It makes the code exponentially more understandable.
Those with Node experience may point out that long-running operations would totally break the Node concurrency model, as the execution context of our code is single-threaded. This leads to the “then how do you run tests concurrently” question, which is answered in the next section.
The Different Layers of Test Runners
Under the covers, we use Mocha to run the tests. Mocha BDD-style tests are programmatically generated at run-time from the Gherkish and example data. Mocha takes care of error/exception handling and other niceties. We have imposed one opinion on our framework when using Mocha: all step functions must be asynchronous, either by returning a Promise or by accepting a callback. While Mocha allows both synchronous and asynchronous tests, we only permit the latter. It keeps the code very consistent and more resilient to future changes.
To run any number of tests concurrently, we wrote a quick-and-dirty app that simply spawns multiple processes of our test suite, orchestrating specific features to run within each process. So yes, while each Node process is single threaded and potentially blocked by our synchronous WebDriver bindings, we can run any number of processes in parallel. See? Stupid simple and good enough.
The Small Role of Step Definitions
If the rest of the SURGE methodology is followed correctly, then the step definition files end up becoming very small with almost no functional responsibility other than to keep state. Within a given feature, each step function can read and write state to a context object. Depending on the test’s context and state, it can decide what to do, which will likely either make a call to one of the shared libraries, or make an assertion. That’s it.
In a nutshell, the only thing a step definition should do is map a Gherkish statement to an appropriate action that exists in a well-designed shared library.
The Small Role of Page Object Models
We have folders full of code that just deal with finding elements on various web pages. This is where we have XPath and CSS selector mappings for buttons, text boxes, images, and all sorts of points of interest when testing our software. That is the only thing they do; they find and return elements from a page.
One page corresponds to one file. These so-called “Page Object Models” are automagically injected at runtime when they’re used, so there is no need to litter the code with countless require statements and various initializers. The framework is smart enough to initialize the models and “bind” them to the WebDriver instance being used by the test. Truly zero configuration; write code that reflects your intent and the framework will fill in the details.
API Testing is Ridiculously Easy
I personally consider Node to be a RAD tool for RESTful web services. It’s just so easy to write a service or a client in Node. There’s really no trick to it. We write API clients, which are trivially easy in Node, then use those clients from the step definition functions. If you can write a Hello World in Node, you can write an API test in our test framework.
Wrapping Up
Our sprint team has changed the definition of “done” for our stories to released. There are no special qualifiers for this; released means it’s in production. This means unit tests, code review, automated functional tests, and deployments. If it hasn’t been delivered to our customers with quality, it’s not done. Period. There are many cogs in the machine that make this happen, but SURGE and our test suite plays a major role.
We usually do production releases between one and three times a day. Combined with other deployment tooling (which I will blog about shortly, I promise), our team is extremely confident in what we release. At the time of writing this, we have only rolled back one production deployment over the last three months. Our completion rate for sprints and stories has been quite predictable, minus an outlier or two.
But best of all, we created something that truly works for us.
The only issue is that SURGE is a victim of its own success. It began life as a prototype, but now it’s spreading like a contagious smile. That means we need to clean it up and get it ready for general consumption! Before you ask: no official comment on that, but stay tuned.
We’re always looking for talented engineers and QA analysts to help us kick it up a notch. We’re even okay with you telling us about how completely wrong we are. Whatever the case, if you have something to bring to the table, we’d love to hear from you.
Happy coding! 🙂
Gerry Hernandez began his career as a researcher in various fields of digital image processing and computer vision, working on projects with NIST, NASA JPL, NSF, and Moffitt Cancer Center. He grew to love enterprise software engineering at JP Morgan, leading to his current technical interests in continuous integration and deployment, software quality automation, large scale refactoring, and tooling. He has oddball hobbies, such as his fully autonomous home theater system, and even Christmas lights powered by microservices.
Post-secondary schools look very different this year as colleges and universities embrace both blended learning and online-only approaches to content delivery and engagement. But this isn’t a one-off operation. Even as pandemic pressures ease, the shift to distance learning as the de facto solution for many students won’t disappear. As a result, it’s critical for schools to develop and deploy learning management systems (LMSs) that both meet current needs and ensure they’re capable of keeping up with educational evolution. But what does this look like in practice? How do developers and team leaders build fully-functional LMS solutions that empower student success without breaking the bank?
Learning Management Systems (LMS) Challenges
When schools first made the shift to distance learning, speed was of the essence. While students were barred from campus for safety reasons, they’d paid for a full semester of instruction, and schools needed to deliver. As a result, patchwork programs became commonplace. Colleges and universities combined existing education software with video conferencing and collaboration tools to create “good enough” learning models that got them through to summer break. Despite best educational efforts, however, some students still went after schools with lawsuits, alleging that the quality of instruction didn’t align with tuition totals.
So it’s no surprise that as fall semesters kick off, students aren’t willing to put up with learning management systems that barely make the grade. They want full-featured distance learning that helps them engage with instructors and connect with new content no matter how, where, or when they access campus networks.
As a result, development teams can’t simply correct for current COVID conditions. Instead, they need to create systems that deliver both blended and purely online interactions, and ensure that students who choose to continue with digital-first learning can stay connected even after returns to campus become commonplace.
How to Create a Functional LMS Framework
So what does a fully-functional LMS framework look like in practice? Seven features are critical for ongoing success. Let’s explore how these features can enhance your learning management system and set your end-users up for success in the classroom and at home:
Diverse Document Viewing
As schools make the shift to distance learning, the ability to view multiple document types is critical for long-term LMS success. From standard Word documents, Excel spreadsheets, and PowerPoint presentations to more diverse image types — such as those used in medical educational programming or manufacturing courses — students and instructors need the ability to both send and view diverse document types on-demand.
While both free and paid solutions for viewing exist outside LMS ecosystems, choosing this route creates two potential problems. Students with diverse technological and economic backgrounds may face challenges in finding and using these tools, and data security may be compromised. This is especially critical as schools handle greater volumes of students’ personal and financial information. If document viewing happens outside internal systems, privacy concerns become paramount.
In-Depth Annotations
With students now submitting assignments and exams via educational software, viewing isn’t enough. Staff also need the ability to annotate assets as they arrive. Here, professors and teaching assistants are best served by built-in tools that allow them to quickly redline papers or projects, add comments, highlight key passages, and mark up documents with specific instructions or corrections.
Without this ability, staff have two equally unappealing choices. They can either print out, manually correct, and then re-scan documents, or send all comments as separate email attachments. Both are problematic, since they limit the ability of students and teachers to easily interact with the same document.
Comprehensive Conversion
File conversion is critical for effective learning management systems. Specifically, schools need ways to quickly convert multiple document types into single, searchable PDFs. Not only do PDFs offer the ability to control who can edit, view, or comment on papers or exams, but they also make it easy for teachers to quickly find specific content. The permissions-based nature of PDFs makes them ideal for post-secondary applications and a must-have for any education software solution.
Cutting-Edge OCR and ICR
Optical character recognition and intelligent character recognition also form a key part of distance learning initiatives. With some students still more comfortable with handwritten hard copies, and some classes requiring students to show specific work, OCR can help bridge the gap between form and function. By integrating tools with the ability to recognize and convert multiple character types and sets, schools are better equipped to deal with any document type. Search is also bolstered by cutting-edge OCR; instead of forcing staff to manually examine documents for key data, OCR empowers digital discovery.
Complete Data Capture
Forms are a fundamental part of university and college life, but the myriad of digital documents can quickly overwhelm legacy education software. Integrating tools with robust form-field detection allows schools and staff to streamline the process of complete data capture, both increasing the speed of information processing and reducing the potential for human error.
Barcode Benefits
As campuses shift to hybrid learning models, students occupy two worlds, both physical and digital. But this duality introduces complexity when it comes to tracking who’s on campus, when, and why. These are currently key metrics for schools looking to keep students safe in the era of social distancing.
By deploying full-featured barcode scanning solutions as part of LMS frameworks, colleges and universities can get ahead of this complexity curve. From scanning ID cards to take attendance and track resource use to using barcodes as no-contact purchase points or metric measurements for ongoing analytics, barcode solutions are an integral part of LMS solutions.
Automation Advantages
The sheer volume of digital documents now generated and handled by post-secondary schools poses the problem of practicality. Teachers and administrators simply don’t have time to evaluate and enter data at scale and speed while also ensuring accuracy. By automating key processes including document conversion, capture, and character recognition, schools can reduce the time required to process documents, leaving more room for student engagement.
Building an LMS Product for Teachers & Students
The bottom line for LMS solutions? If they don’t work for end-users, they won’t work for the broader school system as a whole. Gone are the days of invisible IT infrastructure. Now, students and staff alike are school stakeholders with evolving expectations around technology.
By deploying distance learning solutions that prioritize end-user outcomes with enhanced document viewing, editing, data capture, and automation, developers can create LMS tools capable of both solving immediate issues and offering sustained student success over time. Learn more about these functionality integrations for your learning management system at accusoft.com/products.
Enterprises leverage an abundance of documents in their operations, record-keeping, and analysis activities. From customer forms to agreements, purchasing data, and internal reports, companies generate a great deal of documentation, so much that it can be difficult for humans to derive information from the masses. Companies therefore have to structure these documents and their contents into formats that provide the desired online viewing functionality and data capture.
For this reason, enterprises leverage document processing and imaging software to allow document contents to be searched, edited, and annotated in web applications. These solutions provide intuitive viewing and collaboration functionality in content management systems, while also structuring data from documents into a format that can be used in analyses.
Processing Documents
Imaging solutions produce digital copies of hardcopy documents. Digitizing these documents allows them to be further viewed and edited. Optical character recognition then generates computer-recognizable records of their content. This allows the contents to be searched and for instances of text to be tagged and categorized as variables in quantitative analysis. Document imaging also allows businesses to produce forms with interactive fields that can be completed online. Data entered into these fields will already exist in digital form as strings of text, where it can be organized into records for datasets.
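As a rough illustration of this flow, the sketch below uses the open-source pytesseract OCR library purely as a stand-in (Accusoft’s own SDKs expose their own APIs for this), reading a scanned form and parsing labeled lines into a structured record; the form layout and field names are hypothetical:

```python
import re

import pytesseract           # open-source OCR engine, used here for illustration
from PIL import Image

def extract_record(scan_path: str) -> dict:
    """OCR a scanned form and parse 'Label: value' lines into a record."""
    text = pytesseract.image_to_string(Image.open(scan_path))
    record = {}
    # Hypothetical layout: the form prints each field as a "Label: value" line.
    for field in ("Name", "Birthdate", "Insurance"):
        match = re.search(rf"{field}:\s*(.+)", text)
        record[field.lower()] = match.group(1).strip() if match else None
    return record

# e.g. {'name': 'Jane Doe', 'birthdate': '01/02/1985', 'insurance': 'Acme Health'}
print(extract_record("intake_form.png"))
```

Once field values exist as strings like these, they can be appended as rows to the tabular datasets described below.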
Accessing Content
Intuitive content management systems with collaborative access provide internal teams a means to reference data sources (the documents) and how information is organized in them. It is important for businesses to leverage these intuitive viewing and collaborative functionalities so individuals can easily locate needed information.
The viewing functionalities enabled by software like PrizmDoc Viewer allow firms to determine what information and fields should be included in the datasets they create and subsequently feed into an ERP system for reporting and analysis. These systems provide modules to report, track, and analyze data structured from documents and other sources in one central tool. The ease of viewing, annotating, and comparing documents through PrizmDoc Viewer makes it easier to communicate reporting needs and plan database construction for the ERP system.
Keeping Records and Generating Insights
Businesses organize information from fields or text-based instances into tabular databases for easier record-keeping. For example, a company in the medical field may need to pull a patient’s name, birthdate, insurance, and visit details from documents completed by staff for digital record keeping or presentation to the patient. Storing document data in this format also enables it to be queried for statistical analysis. A financial services firm may want to record data from past transaction-related documents so it can run tests to determine the probability of closing certain types of deals, as well as forecast expected earnings.
To complete these data captures, Accusoft suites and SDKs can digitize hard copy documents into their underlying structured data. This software can also detect fields in digital forms and automatically extract the text entered into them. Using these tools, businesses organize documents into content management systems and structure data for analysis and reporting.
Guest Blogger: Michael Johns, Content Specialist, Leading Computer Technology Corporation
Michael Johns is a marketing and content specialist working in the technology industry. With an interest in data, he has an appreciation for software solutions that help structure information and facilitate valuable analysis in creating better products and services.
TAMPA, FLA. (Nov. 2, 2021) – Docubee (formerly OnTask), a workflow automation and eSignature tool, has launched a new Health Tracking platform providing unmatched flexibility for companies to track employees’ vaccination records, exemption requests, COVID test results, health screenings, and wellness status.
The affordable and secure cloud-based system can be used with any device and from any location, making it simple for both employees and the human resources department to use.
Employees can access OnTask Health Tracking to submit important information, like proof of vaccination documents and COVID test results, and use an eSignature to certify authenticity of the information. It’s as easy as clicking a link or scanning a QR code from any device – employees don’t need to create an account, set a password, or install an app.
Once an employee submits information, OnTask Health Tracking routes the information to the appropriate people or departments, depending on the automation rules for notification and approval that the company sets. Each company has the ability to set the specific configurations that work best for the company’s workflow.
“We developed the OnTask Health Tracking platform to be simple for the user and a powerful tool for the company,” said Steve Wilson, president of OnTask. “It saves the HR department time by making health tracking simple and efficient.”
The platform’s pre-built workflow templates are quickly configured to fit within any company’s operations, allowing a company to start using OnTask Health Tracking within hours. Once configured, it’s simple for users to make adjustments to the workflow as mandates or business needs change, or OnTask’s support team can quickly make changes for an employer.
OnTask Health Tracking is a secure, timely, and efficient way for employers to comply with existing federal mandates and recently released Occupational Safety and Health Administration (OSHA) guidance around COVID-19. Its flexible platform allows employers to quickly adjust to changing legal requirements over time, including automatically timing and tracking when proof of vaccine boosters may be required.
In addition, repetitive document-centric HR tasks like onboarding new employees, managing PTO requests, and submitting expenses can move into the OnTask platform, making it useful long after the pandemic.
The platform is already gaining attention for its innovation. OnTask Health Tracking was recognized as a Top Vaccine Management Software Vendor by Select Software Reviews and featured in SaaSHub’s weekly trending products. It’s also a finalist for Tampa Bay Tech’s 2021 Tech Project of the Year.
View demo videos and additional information about the benefits of using OnTask Health Tracking at https://www.docubee.com/solutions/healthcare/.
About OnTask
OnTask is a workflow automation tool that makes it easy for small to mid-sized businesses to digitally send and fill forms, get signatures on documents and automate overall business processes, saving time and resources. OnTask is a flagship product of Tampa-based software company Accusoft, which holds more than 40 patents for its software technologies that are designed to solve complex workflow challenges, improve productivity, provide actionable data, and deliver results that matter. For more information on OnTask, visit www.docubee.com.
Jira REST APIs are used to interact with the Jira server for several purposes; they provide access to and interaction with features like issues and workflows. In this blog, we share how to query epics and stories and how to access logged work time through the Jira REST APIs, providing a way to estimate the time spent on release tasks.
Using Python Wrapper for the Jira API
The Jira REST APIs can be accessed in several ways. For example, they can be invoked directly using a POST request with the appropriate parameters, and there are wrappers for specific languages such as R (used in statistical analysis) and Python. For the purposes of this article and its examples, we will use Python. When using Python, a script should start like this:
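The original snippet is not reproduced here; as a representative sketch, an opening built on the community jira Python package would look something like this, with a placeholder server URL and credentials:

```python
from jira import JIRA  # community wrapper for the Jira REST APIs (pip install jira)

# Placeholder server and credentials; substitute your own Jira instance.
jira = JIRA(
    server="https://yourcompany.atlassian.net",
    basic_auth=("user@yourcompany.com", "API_TOKEN"),
)
```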
Release Tasks in SDK
For analysis of release tasks in the SDK group, the Jira stories, bug fixes, incidents, and in general any components or features addressed by a given release are handled primarily through a release epic, which should be in place before the release tasks begin. This is a common best practice for keeping things well organized.
A specific query written in Jira Query Language (JQL), incorporated in a Python script, is required to retrieve a release epic’s stories and sub-tasks. The syntax looks like this (notice that the variable ‘epickey’ contains the epic number for the specific release of interest, and that the query selects only stories or bugs with status DONE or RESOLVED):
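The original query is not shown here; a sketch of what it could look like with the jira package, under the assumptions stated above, follows:

```python
epickey = "REL-123"  # placeholder epic key for the release of interest

# Select only stories or bugs under the release epic with status Done or Resolved.
jql = (
    f'"Epic Link" = {epickey} '
    "AND issuetype IN (Story, Bug) "
    "AND status IN (Done, Resolved)"
)
issues = jira.search_issues(jql, maxResults=False)  # fetch all matching issues
```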
Querying Jira with the Python Wrapper to Retrieve Reported Times
Once we have the stories with status Done or Resolved for a given release epic, it is then possible to get the reported times with the following lines of Python code. Note that the code prints the report in easy-to-read values for convenience:
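Again, as a representative sketch rather than the original code: each issue’s worklogs can be fetched, summed, and printed in hours:

```python
total_seconds = 0
for issue in issues:
    # Sum the time logged across all worklogs attached to this issue.
    logged = sum(w.timeSpentSeconds for w in jira.worklogs(issue.key))
    total_seconds += logged
    print(f"{issue.key:<12} {issue.fields.summary[:40]:<40} {logged / 3600:6.1f} h")

print(f"Total reported time: {total_seconds / 3600:.1f} hours")
```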
Using the Jira API is a convenient method to retrieve information that would be otherwise complicated to do with pure Jira queries. A language such as Python allows for data formatting and other operations that permit efficient and clear data analysis to keep track of projects.
FinTech applications have become indispensable to the financial services sector, enabling users to easily engage with financial offerings in a manner that suits them, while also boosting operational efficiency. The industry’s ongoing digital transformation continues to redefine FinTech functions, with developers tirelessly crafting new apps capable of handling tasks formerly dispersed across numerous systems and software.
Among the most crucial features of FinTech applications is the ability to view and share documents. Developers have a range of document lifecycle solutions at their disposal to circumvent the challenging process of building these features from the ground up. However, the financial sector presents distinct security and compatibility prerequisites when it comes to choosing partners for integration. To truly grasp these technical hurdles, it’s important to understand the significance of Java in the development of FinTech applications.
A (Brief) History of Java in the Financial Sector
Financial institutions pioneered the adoption of automated workflows. The first electronic communication network facilitating the trading of financial products off the trading floor appeared as early as the 1960s. During the 1970s, computerized order flows saw greater acceptance, with most financial companies crafting their own proprietary systems. The digital revolution truly ignited in the 1980s and early 1990s with the launch of the Bloomberg terminal and the Financial Information eXchange (FIX) protocol. By the late 1990s, the Nasdaq enabled securities trades to be executed autonomously, without manual interference, through the incorporation of Island ECN.
Java shook up the programming language world when it debuted in 1995, and its timing couldn’t have been better. The financial industry witnessed an extensive wave of mergers and acquisitions in the late 1990s and early 2000s, which resulted in several companies grappling with the integration of a multitude of applications and data. Java’s ability to support diverse platforms was an appealing solution to this challenge, and numerous financial applications were translated into Java. Sun Microsystems, which first introduced Java to the market, even adopted the slogan “Write once, run anywhere” to promote its flexibility. Java’s simplicity of use and significantly enhanced speed compared to legacy code on outdated platforms quickly made it the language of choice for developers.
In a few short years, Java ascended to become the leading programming language within the financial services industry. Its popularity surged again following the launch of OpenJDK, a free and open-source version of the language, in 2007. An Oracle report in 2011 estimated that over 80% of electronic trading applications and virtually all FIX engines were written in Java. Even close to three decades after its debut, Java continues to be the primary programming language employed by financial services, surpassing other open-source alternatives by a considerable margin.
Java’s Enduring Appeal for the Financial Industry
The enduring preference for Java among financial sector developers isn’t simply due to tradition or resistance to change. Java’s unique attributes are an exceptional fit for financial applications, spanning both long-established enterprise-level banking systems and pioneering FinTech solutions.
Security
In the realm of financial services, security is the highest priority for developers. Applications related to banking and trading must have robust security provisions to guard financial data and personally identifiable information against unauthorized access. Java simplifies data access restriction and provides an array of memory safety features to diminish potential vulnerabilities, particularly those stemming from prevalent programming mistakes. Oracle consistently rolls out regular updates to fix recognized vulnerabilities and tackle the most recent cybersecurity threats.
Portability
Java, being a platform-independent language, allows applications to operate on virtually any device. This has always been a substantial benefit in the financial sector, but it has proven even more crucial in the era of cloud computing and mobile applications. Developers can employ the same code to roll out software in a virtual environment and render it accessible to end-users via their smartphones, computers, or other devices. The ability of Java virtual machines to support additional programming languages only adds to the language’s versatility.
Reliability
Given the nearly three-decade-long consistent use and the backing of a robust development community, Java has established itself as one of the most dependable programming languages globally. Potential instabilities have long been addressed, and there is a wealth of developer tools and documentation at hand to ensure software is built on a solid foundation. This reliability is critically significant for banking and financial applications, which demand high performance levels coupled with fault tolerance.
The Value of Java-Based Document Viewing and Sharing
As FinTech developers continue to build novel applications aimed at simplifying life for clients and employees in the financial industry, they’re facing a growing expectation from users for superior document viewing and sharing capabilities. Users want to bypass the time-consuming and resource-heavy task of manually processing paper documents, and most organizations strive to eliminate the security hazards associated with using external applications for managing digital documents.
However, developers face significant challenges when attempting to build these complex document viewing capabilities from scratch. Although there are numerous integrations that can introduce document lifecycle features, most aren’t based in Java and need extra development work to embed them into existing FinTech solutions. Without the option to natively view, share, and edit documents within the Java application, users frequently resort to external programs, a practice that presents potential security issues and version discrepancy risks.
Facilitating Java-based Document Functionalities through PrizmDoc® for Java
Accusoft’s PrizmDoc® for Java, formerly VirtualViewer®, is a robust, Java-based HTML5 document viewing tool designed to ensure optimal compatibility with FinTech applications without compromising functionality or security. By supporting an array of document types, such as PDF, TIFF, JPEG, AFP, PCL, and Microsoft Office, PrizmDoc® for Java creates a streamlined viewing experience that eliminates the need for external viewing solutions.
As an integration built on Java, PrizmDoc® for Java can operate on nearly any operating system and is simple to deploy. There’s no need to install software on the user’s desktop, enabling FinTech developers to deploy a scalable solution that fulfills their crucial security and business continuity needs within a single, high-velocity application. PrizmDoc® for Java’s server component swiftly renders and dispatches individual document pages for local viewing as required, allowing users to access, view, annotate, redact, and manipulate financial documents instantaneously. Since documents are rendered within the web-based viewer, users never have to download or transfer files, which could put sensitive data at risk.
Experience PrizmDoc® for Java’s features for yourself by signing up for a free trial!