Hyun Min (Eddie) Kim

Development and Utilization of Imaging Analysis Software for Medical Diagnosis and Treatment

Updated: Nov 12, 2020

Abstract

The purpose of this project was to create an easy-to-use dimensional analysis program to help medical professionals quickly diagnose patients based on medical imaging (X-ray, MRI, CT scans, and even camera images). The program's intuitive controls allow the user to quickly determine the length, area, and growth/decay of a patient's condition.


Introduction

In the current medical field, there is a strong need for computational analysis of medical images to determine patients' conditions and whether they are improving or declining. With MRI and X-ray imaging, a doctor is able to identify major issues, but the human eye can't always detect marginal improvements or deterioration. With computational analysis and area modeling, medical professionals can not only get exact measurements on a patient's MRI or X-ray, they can also determine the rate of growth or decline of the patient's condition. The same technique can be applied to camera images of skin ailments or physical trauma as well.


There are various programs created by medical and computer science professionals that provide ways to analyze these medical images; however, many of them share common downsides: paid subscriptions, a multitude of rarely used functions, time lost opening applications and importing files, and general difficulty of use. Additionally, medical professionals often have to go through further training to operate this kind of software, costing them time they could spend treating patients. To solve these problems and improve the efficiency of analysis, there was a need for a web app with simple functions that broadens access to dimensional analysis tools. The program aims to give doctors and others an efficient way to find the dimensions of an image, providing a simple method to determine the severity of areas of abnormality, whether that means the area of an enclosed region or the distance between two points on an image.


When coding this program, a variety of functions had to be created, one of which incorporated the Shoelace formula. Since the Canvas HTML element was used as the selection area, the area of enclosed regions was calculated from the individual coordinates of the plotted points. The Shoelace formula takes a list of coordinates, each adjacent to the next, (x1, y1), (x2, y2), (x3, y3), ..., (xn, yn), and calculates the area as

Area = (1/2) |x1y2 − x2y1 + x2y3 − x3y2 + … + xny1 − x1yn|


For a proof of this formula, see the references in the Works Cited section below.
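To make the calculation concrete, here is a minimal JavaScript sketch of the Shoelace formula applied to an array of {x, y} point objects; the names shoelaceArea and points are illustrative, not the app's actual identifiers.

```javascript
// Shoelace formula: area of a simple polygon given its vertices in order.
// "points" is an array of {x, y} objects; the polygon is implicitly closed.
function shoelaceArea(points) {
  let sum = 0;
  for (let i = 0; i < points.length; i++) {
    const current = points[i];
    const next = points[(i + 1) % points.length]; // wrap around to the first point
    sum += current.x * next.y - next.x * current.y;
  }
  return Math.abs(sum) / 2;
}

// Example: a right triangle with legs 3 and 4 has area 6.
console.log(shoelaceArea([{ x: 0, y: 0 }, { x: 3, y: 0 }, { x: 3, y: 4 }])); // 6
```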


Methods and Materials


Materials:


Code Procedure and User Instructions:


The code itself is archived and available upon request.


1. The user selects an image file, which shows up on the left side of the screen as a preview.

  • The “previewFile” function is called when an image is selected, and the file type is determined. “.tif” and “.tiff” files are read as buffers and wrapped in a Tiff object, which produces a data URL, while other formats are used directly as data URLs. This data URL is used to add an image to the <img> element in the top left part of the HTML document (a sketch of this step follows).
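A minimal sketch of what such a previewFile handler can look like, assuming the tiff.js library supplies the Tiff constructor and its toDataURL method; the preview element selector is an illustrative assumption, not the app's actual markup.

```javascript
// Illustrative sketch of a previewFile-style handler (not the app's exact code).
// Assumes tiff.js is loaded, providing the global Tiff constructor.
function previewFile(file) {
  const preview = document.querySelector('img#preview'); // hypothetical <img> element
  const reader = new FileReader();

  if (/\.tiff?$/i.test(file.name)) {
    // TIFF files are read as ArrayBuffers, wrapped in a Tiff object,
    // and converted to a data URL the browser can display.
    reader.onload = () => {
      const tiff = new Tiff({ buffer: reader.result });
      preview.src = tiff.toDataURL();
    };
    reader.readAsArrayBuffer(file);
  } else {
    // Other image formats can be used directly as data URLs.
    reader.onload = () => {
      preview.src = reader.result;
    };
    reader.readAsDataURL(file);
  }
}
```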


2. After “next” is clicked, the image is projected onto a <canvas> element in the document body.

  • The “next” button calls the drawAreaStart function, which sets the size of the <canvas> element based on the dimensions of the image and then calls the drawCurrentImage function, which draws the current image onto the <canvas> element (see the sketch below).
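A minimal sketch of this resize-and-draw step; the canvas selector and function signatures are assumptions rather than the app's exact code.

```javascript
// Illustrative sketch: size the canvas to the image and draw the image onto it.
function drawAreaStart(img) {
  const canvas = document.querySelector('canvas#drawArea'); // hypothetical canvas element
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  drawCurrentImage(canvas, img);
}

function drawCurrentImage(canvas, img) {
  const ctx = canvas.getContext('2d');
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
}
```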


3. When the user clicks the Input Dimensions button, the user can select two points on the image on the <canvas> element and define the length between them, usually using a scale bar as the basis of the dimension. The Set button defines the scale length relative to the dimensions of the <canvas> element/image.

  • The Input Dimensions button changes the Draw Type to Points and sets the boolean variable currentInputing to true, indicating that the user is currently inputting the dimensions of the image. The two points that serve as the reference distance are plotted by the addPoint function, and the distance between them is calculated with the distance formula, since each point has specific coordinate values. This distance is stored as a variable and referenced in all later length calculations: the lengths of dashed lines and, consequently, the perimeters and areas of the objects stored in the allAreas array (these are objects holding a collection of point objects, dashedLine objects, and variables for the perimeter and area). A sketch of this calibration step follows.
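A sketch of how the reference distance and the pixel-to-unit scale could be computed; the names pixelDistance, setScale, and toRealLength are illustrative and not taken from the app's source.

```javascript
// Euclidean distance between two plotted points, in canvas pixels.
function pixelDistance(p1, p2) {
  return Math.hypot(p2.x - p1.x, p2.y - p1.y);
}

// Calibration: the user tells the app the real-world length between the two
// reference points (e.g. read off a scale bar). Every later measurement is
// converted from pixels to real units with this ratio.
let pixelsPerUnit = 1;
function setScale(refPoint1, refPoint2, realLength) {
  pixelsPerUnit = pixelDistance(refPoint1, refPoint2) / realLength;
}

// Convert any later pixel measurement into real units.
function toRealLength(pixels) {
  return pixels / pixelsPerUnit;
}
```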


4. Draw Type: Points lets the user plot individual points on the image, while Draw Type: Lasso lets the user plot points continuously to draw an area (it functions like other lasso tools).

  • The event-listener functions corresponding to the two Draw Types are stored in the mouseHandlers object for ease of removal, and a boolean “drawTypePoints” is toggled when the Draw Type changes. The point tool treats each “click” event as an instance and plots a point at the cursor's location. The lasso tool starts on the “mousedown” event and plots a point at the cursor's location every time the window detects a “mousemove” event; the lasso ends, completing the area by linking the last point to the first, on the “mouseleave” event. A sketch of this handler setup follows.
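A sketch of handlers kept in a mouseHandlers object so they can be removed when the Draw Type changes; the handler bodies are simplified, addPoint is the function named above, and closeArea is a hypothetical helper assumed to exist elsewhere.

```javascript
// Handlers live in one object so the same function references can be passed
// to both addEventListener and removeEventListener.
let lassoActive = false;

const mouseHandlers = {
  click: (e) => addPoint(e.offsetX, e.offsetY),                          // Points mode
  mousedown: () => { lassoActive = true; },                               // Lasso mode
  mousemove: (e) => { if (lassoActive) addPoint(e.offsetX, e.offsetY); }, // Lasso mode
  mouseleave: () => { lassoActive = false; closeArea(); },                // Lasso mode
};

function setDrawType(drawTypePoints, canvas) {
  // Detach every handler, then attach only those the selected mode needs.
  Object.entries(mouseHandlers).forEach(([type, fn]) =>
    canvas.removeEventListener(type, fn));
  if (drawTypePoints) {
    canvas.addEventListener('click', mouseHandlers.click);
  } else {
    ['mousedown', 'mousemove', 'mouseleave'].forEach((type) =>
      canvas.addEventListener(type, mouseHandlers[type]));
  }
}
```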


5. Every time a point is plotted on the image, the program connects the previous point to the new point with a dotted line (if it is not the first point) and defines the length between the two points.

  • Each point plotted in the program is stored in an array of point objects, “allPoints”. When the next point is plotted, a “newDashedLine” object is created, linking the two points with a dashed line, and this “dashedLine” object is then stored in the array “dashedLines”. Each “dashedLine” object contains the two points it connects, the color, the length, and two functions to draw the line and remeasure its length (a sketch follows).
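A sketch of what a dashedLine object could contain, based on the description above; the constructor shape is an assumption, and it reuses the toRealLength helper from the calibration sketch.

```javascript
// Illustrative dashedLine object linking two points.
function newDashedLine(p1, p2, color = 'red') {
  const line = {
    p1,
    p2,
    color,
    length: 0,
    // Draw the segment on the canvas using the built-in dashed-line pattern.
    draw(ctx) {
      ctx.save();
      ctx.strokeStyle = line.color;
      ctx.setLineDash([5, 5]);
      ctx.beginPath();
      ctx.moveTo(line.p1.x, line.p1.y);
      ctx.lineTo(line.p2.x, line.p2.y);
      ctx.stroke();
      ctx.restore();
    },
    // Recompute the real-world length from the current scale.
    remeasure() {
      line.length = toRealLength(Math.hypot(line.p2.x - line.p1.x, line.p2.y - line.p1.y));
    },
  };
  line.remeasure();
  return line;
}
```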


6. When an enclosed area is created, the program defines the area of the enclosed region and its perimeter.

  • When the first object in allPoints is relatively close (within a distance of 8 pixels) to the last point plotted while the Draw Type is Points, or when the lasso ends by linking the final point to the first while the Draw Type is Lasso, an object is pushed to the “allAreas” array containing the current “allPoints” array and the length of the perimeter of the enclosed area (a sketch of this check follows).
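A sketch of the closing check and of how an enclosed region's perimeter and area might be stored, reusing pixelDistance, shoelaceArea, and pixelsPerUnit from the earlier sketches; the function name maybeCloseArea is hypothetical.

```javascript
// When a new point lands within 8 pixels of the first point, treat the
// region as closed: compute its perimeter and area and store them.
const CLOSE_THRESHOLD = 8;

function maybeCloseArea(allPoints, dashedLines, allAreas) {
  const first = allPoints[0];
  const last = allPoints[allPoints.length - 1];
  if (allPoints.length < 3 || pixelDistance(first, last) > CLOSE_THRESHOLD) return false;

  const perimeter = dashedLines.reduce((sum, line) => sum + line.length, 0);
  const area = shoelaceArea(allPoints) / (pixelsPerUnit * pixelsPerUnit); // pixels² → real units²
  allAreas.push({ points: allPoints.slice(), dashedLines: dashedLines.slice(), perimeter, area });
  return true;
}
```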


7. The Clear button clears all points, dotted lines, and areas. Changing the Draw Type also clears these components.

  • The Clear button and the Draw Type buttons set all the arrays that hold canvas objects to empty arrays and redraw the original image (a minimal sketch follows).
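A minimal sketch of the clearing step, assuming allPoints, dashedLines, and allAreas are module-level variables; clearCanvas is an illustrative name.

```javascript
// Clearing (or switching Draw Type) empties every canvas-object array and
// redraws the untouched source image.
function clearCanvas(canvas, img) {
  allPoints = [];
  dashedLines = [];
  allAreas = [];
  drawCurrentImage(canvas, img); // redraw the original image only
}
```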


8. The Undo button reverts the image to the state before the last user action.

  • An object, previousMove, stores all information about the canvas objects after the user makes a change but before other variables are changed and calculations are made. When the Undo button is clicked, all variables from the previousMove object are copied by stringifying the object and parsing it to create a new reference (a sketch follows).
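A sketch of a JSON deep-copy undo, assuming the drawing state lives in module-level variables; note that a JSON round-trip drops the draw/remeasure functions on dashedLine objects, so the actual app presumably rebuilds them, or stores only plain data, after parsing.

```javascript
// Snapshot the drawing state so undo can restore it later.
let previousMove = null;

function saveState() {
  // The JSON round-trip produces a deep copy with new object references,
  // so later edits to the live arrays do not alter the snapshot.
  previousMove = JSON.parse(JSON.stringify({ allPoints, dashedLines, allAreas }));
}

function undo(canvas, img) {
  if (!previousMove) return;
  ({ allPoints, dashedLines, allAreas } = JSON.parse(JSON.stringify(previousMove)));
  drawCurrentImage(canvas, img);
  // The restored points and dashed lines would be redrawn on the canvas here.
}
```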




Data


<Figure 1> Overall view of the app after an image has been selected. Measurements between Indianapolis, Nashville, and Saint Louis.


<Figure 2> Close-up of the actual image analyzed, from the previous view of the app.


<Figure 3> Measurement of an area in a scan of an eye.



Conclusion

The Eye Area App makes it easy to find the distance between points on an image and the area of an enclosed region.

Figure 1 depicts the actual app, with the layout of the buttons and the main area for the image and drawing capabilities. The simplicity of the design allows users to intuitively know which buttons to click and how to load images.


Figure 2 shows the app’s ability to measure the distance between points on a map and to find the area of a region of the map using various points and straight lines. Users may run into accuracy problems when analyzing such images because it is hard to plot points precisely for the base pixel measurement on each image’s scale. It is also important to remember that when plotting boundaries on maps, territories are usually drawn along curved rivers or roads, making straight-line approximations a little more tedious to draw.


Figure 3 is an image of an eye scan. The app can successfully find the area and perimeter of a region using the lasso tool. This is particularly useful because users are not limited to plotting individual points around an irregular shape. However, those using trackpads may find drawing areas difficult, because the lasso tool requires holding down the button and dragging at the same time.


While this app works well and allows medical professionals to quickly obtain results about their patients’ diagnoses, there is room for improvement. One interesting improvement would be the use of machine learning to analyze images. If the app were trained to automatically flag potential disease areas in medical images, its ease of use and automation would improve further. The app could be trained on medical images of various diseases and infections using existing JavaScript machine learning libraries such as TensorFlow.js. Another potential improvement is porting the app to other platforms, which would let more users with different devices analyze images easily.



Works Cited

AoPS Online. “Shoelace Theorem.” The Art of Problem Solving, 2019, artofproblemsolving.com/wiki/index.php/Shoelace_Theorem.

Wikipedia. “Shoelace Formula.” Wikipedia, Wikimedia Foundation, 30 Oct. 2019, en.wikipedia.org/wiki/Shoelace_formula.
