An Interaction Model for 3D Cutting in Maxillofacial Surgery Planning

Patrick Neumann, Dirk Siebert, Armin Schulz, Gabriele Faulkner, Manfred Krauss, and Thomas Tolxdorff
Department of Medical Informatics, University Hospital Benjamin Franklin, Free University Berlin, Hindenburgdamm 30, 12200 Berlin, Germany

ABSTRACT

Our main research work is the realization of a completely computer-based maxillofacial surgery planning system [1]. An important step toward this goal is the availability of virtual tools for the surgeon to interactively define bone segments from skull and jaw bones. The easy-to-handle user interface employs visual and force-feedback devices to define subvolumes of a patient's volume dataset. The defined subvolumes, together with their spatial arrangements, lead to an operation plan. We have evaluated modern low-cost force-feedback devices with regard to their ability to emulate the surgeon's working procedure.

Keywords: surgery planning, maxillofacial surgery, volume segmentation, virtual tools, volume growing, force feedback, real-time visualization, input devices

1. INTRODUCTION

One major objective of craniofacial surgery procedures is to alter the shape and position of skull bones in order to correct congenital malformations or treat traumatic injuries. During the operative procedure the surgeon resects several skull fragments and rearranges them to achieve good dental occlusion and facial esthetics. The surgery must therefore be thoroughly planned in order to accurately predict the postoperative shape of the skull and soft tissues.

1.1. Operation planning

Conventional maxillofacial surgery planning involves the production of plaster casts from the patient's anatomy. The plaster casts are mounted on an articulator (Figure 1a), which allows the dental segments to be cut and repositioned while the bases maintain their interrelationship.

Figure 1: (a) Articulator for plaster casts; (b) calibration frame; (c) frame reconstructed from CT.
The paradigm of virtual reality provides a variety of techniques to support or replace plaster-cast model surgery. The 3D visualization and stereoscopic presentation of the volume data, as well as the possibility of modifying the object with virtual cutting and replacement tools, accelerate the planning procedure and enhance it with new facilities, for example a comparative study of different planning variants. These computer-based methods necessitate an adequate set of 3D input and output devices.

1.2. Intra-operative planning control

The planning results obtained by a computer-aided planning tool can be easily transferred to the operating room, since they can be directly used for intra-operative navigation. This opens a further range of applications for virtual reality. In our approach the planning data and the actual patient are correlated via a frame that is rigidly connected to the skull of the patient by a splint (Figure 1b). Markers on the frame that are visible in a photograph and in CT (Figure 1c) serve as calibration points. During the operation, an electromagnetic tracking system with 6 DOF determines the 3D position of the patient.

2. PURPOSE

In the past, several basic approaches have been investigated for 3D segmentation to cut skull fragments in volume data using VR techniques. For example, Delingette et al. [2] used the "virtual hand" user interface, by which the cutting tool follows the motion of the user's hand, tracked by an electromagnetic sensor. However, most of these approaches lack force and haptic feedback to enhance the realism of their simulations. As the technology improves and the cost of force-feedback devices in the gaming sector decreases, it is worthwhile to consider the application of such devices in the medical domain. With these advances, it is possible to construct input devices that can be controlled via a high-level interface and give the surgeon haptic feedback during the planning process.

In this paper we describe a new approach for performing a manual 3D segmentation (Figure 2a) that enables the surgeon to quickly and exactly define the desired bone segments. This new interactive segmentation technique uses low-cost force-feedback input devices offered by various companies such as Microsoft® and Logitech® (Figure 2b/c) for less than $150. The new features of such input devices can be used to provide additional depth information, encoded by force, for the visualization on a 2D output screen. A fast and powerful visualization kernel supports the segmentation process and continuously displays its progress.

Figure 2: (a) Surgeon performing the 3D segmentation; (b) Microsoft® and (c) Logitech® force-feedback joysticks.

3. METHODS

3.1. Visualization

The planning process is based on rendered views of the patient's 3D volume data acquired by CT. In the first stage of our rendering pipeline (Figure 3), the volume is automatically segmented with a bone-threshold window to extract the patient's skull bone.
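The paper gives no code for this stage; a minimal sketch of such a threshold pass, written here in C++ with the 64-voxels-per-word bit layout described in Section 3.1 below and purely illustrative Hounsfield bounds, might look as follows:

```cpp
#include <cstdint>
#include <vector>

// One bit per voxel: bit i of word i/64 marks whether CT sample i lies
// inside the bone-threshold window. The HU bounds are assumptions for
// illustration, not values taken from the paper.
std::vector<uint64_t> thresholdBone(const std::vector<int16_t>& hu,
                                    int16_t minHU = 300, int16_t maxHU = 3000) {
    std::vector<uint64_t> bits((hu.size() + 63) / 64, 0);
    for (size_t i = 0; i < hu.size(); ++i)
        if (hu[i] >= minHU && hu[i] <= maxHU)
            bits[i >> 6] |= 1ull << (i & 63);   // set bit i
    return bits;
}
```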
For fast computation, each volume object is represented as a small bit-cube [3] in which each bit indicates whether the corresponding voxel belongs to bone tissue or not. On a modern computer architecture, each bit volume is stored in RAM as an array of 64-bit long words, where each long word represents 64 voxels. The additional memory requirement of this representation is small, while inspection of the data becomes considerably faster because 64 voxels are always treated at a time (see the sketch following Figure 4 below).

Figure 3: Rendering pipeline and image composition (preprocessing stage: automatic bone-threshold segmentation of the original data cube into a bone bit-cube; object segmentation stage: bone segment definition into segment bit-cubes; object reconstruction stage: voxel projection to a z-buffer image, then illumination to an object image; image composition stage: composition of the object images into the result image).

After the object segmentation stage, which is described in the next section, a depth map of the image is computed in the reconstruction stage by voxel projection from the object bit-cube. With this z-buffer image and the original data, an object image can be reconstructed and illuminated using the illumination model by Phong [4].

Progressive refinement and partial recomputation are implemented for interactive 3D visualization. By using a successive image refinement method, the image quality improves over time [5]. Partial recomputation is used for fast reconstruction of small, changed areas in the object volume. For very small areas, which affect only a few pixels in the output image, special low-level output functions can be used for speed optimization. In our application, an image with depth information is generated from each object. Because of the z-buffer-based overlap test, the visual results of different objects can be composited into one output image in the composition stage, differentiated for example by color or transparency.

3.2. 3D segmentation

During the segmentation process, a hierarchical object tree is generated with the patient's skull bone as the root object. To resect a bone segment, two new sub-objects are derived from their parent (Figure 4): one for the new bone segment and one that contains all remaining voxels of the original bone. Initially, the first object is empty, whereas the second object is a copy of the original. Within this object hierarchy it is always possible to undo planning steps or to try alternative planning variations.

Figure 4: Object segmentation scheme in the cutting process (user interaction via seed-point definition, bone-volume cutting, and force-feedback processing drives the volume growing, which splits the volume into a new segment and a remaining volume; the visualization is updated by partial rendering).
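To illustrate why the long-word layout of Section 3.1 pays off, here is a minimal sketch (our own, assuming a flat x-fastest voxel ordering, which the paper does not specify) of word-parallel inspection of such a bit-cube; an all-zero word lets 64 voxels be skipped in a single comparison:

```cpp
#include <bit>        // std::popcount, std::countr_zero (C++20)
#include <cstddef>
#include <cstdint>
#include <vector>

// Count bone voxels 64 at a time: empty words cost one comparison each,
// which is where the reported speedup comes from.
size_t countBoneVoxels(const std::vector<uint64_t>& bits) {
    size_t n = 0;
    for (uint64_t w : bits)
        if (w) n += std::popcount(w);
    return n;
}

// Find the first bone voxel at or after linear index 'start' (SIZE_MAX if none).
size_t findNextBoneVoxel(const std::vector<uint64_t>& bits, size_t start) {
    for (size_t wi = start >> 6; wi < bits.size(); ++wi) {
        uint64_t w = bits[wi];
        if (wi == start >> 6)
            w &= ~0ull << (start & 63);   // ignore bits before 'start'
        if (w) return (wi << 6) + std::countr_zero(w);
    }
    return SIZE_MAX;
}
```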
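The object tree of Section 3.2 could be represented along the following lines. The node layout is our assumption; the paper only states that each resection derives an initially empty segment plus a copy of the parent, which is what makes undo and alternative planning variants cheap:

```cpp
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

// Hypothetical node of the hierarchical object tree (Section 3.2); the
// patient's skull bone is the root, and every resection adds two children.
struct BoneObject {
    std::vector<uint64_t> bits;                        // bit-cube of this object
    std::vector<std::unique_ptr<BoneObject>> children;
    BoneObject* parent = nullptr;
};

// Derive the two sub-objects for a resection: an initially empty new
// segment and a copy of the parent holding all remaining voxels. Undoing
// a planning step is just discarding the two children again.
std::pair<BoneObject*, BoneObject*> beginResection(BoneObject& parent) {
    auto segment = std::make_unique<BoneObject>();
    segment->bits.assign(parent.bits.size(), 0);       // empty at first
    segment->parent = &parent;

    auto remainder = std::make_unique<BoneObject>();
    remainder->bits = parent.bits;                     // copy of the original
    remainder->parent = &parent;

    parent.children.push_back(std::move(segment));
    parent.children.push_back(std::move(remainder));
    size_t n = parent.children.size();
    return { parent.children[n - 2].get(), parent.children[n - 1].get() };
}
```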
Figure 5: (a)-(d) Volume-growing process, started with a seedpoint in the lower jaw.

The user starts the definition of a bone segment by placing a seedpoint in a chosen view of the object (Figure 5a). Like a mouse, our input device always has a visual pointer on the screen. A seedpoint can be set by positioning this pointer and pressing a button at a pixel location in screen space where the user sees the object. For any pixel, the depth of the corresponding surface voxel in binary object space is given by the image z-buffer.

After choosing an initial voxel, utilized as seed voxel, segmentation is carried out by connected-component analysis [6] based on a 26-voxel neighborhood (Figure 5a-d). If there is a path of neighboring voxels in the original bit-volume, the corresponding voxel bits are moved from the object copy to the new segment. The growth can be visualized in real time by partial reconstruction, since in every volume-growing step only one voxel changes and only a few surface voxels affect the object image. For fast visual results, the connected-component analysis is directed by visible bone-surface pixels, which means that the voxel neighbors closest to the observer are examined first when searching for object voxels (see the sketch following Figure 6 below).

To control the volume-growing process, the bone-segment borders can be interactively defined by placing cuts. A cut is defined by drawing a line in free-hand mode with the visual pointer of the input device (Figure 6a/b). To keep the user interface simple, every cut is projected onto the bone surface orthogonally to the viewing plane; the cutting direction can be changed arbitrarily by simply choosing a different viewing direction. The depth of the cut and the cutting speed can be controlled by the force-feedback input device (a second sketch below shows how a cut pixel becomes a voxel barrier).

Figure 6: Volume growing controlled by a cut (a), (b) between the jaw bones; (c) inside a region of interest.
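A minimal sketch of the growing loop, assuming the flat bit layout from above and a plain FIFO front (the paper's observer-directed visiting order and the per-voxel force processing of Figure 4 are omitted for brevity):

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

// Bit helpers over the flat 64-voxels-per-word layout (Section 3.1).
inline bool testBit(const std::vector<uint64_t>& b, size_t i) { return b[i >> 6] >> (i & 63) & 1; }
inline void setBit(std::vector<uint64_t>& b, size_t i)        { b[i >> 6] |=  1ull << (i & 63); }
inline void clearBit(std::vector<uint64_t>& b, size_t i)      { b[i >> 6] &= ~(1ull << (i & 63)); }

// Grow a segment from the seed voxel by 26-neighborhood connected-component
// analysis, moving voxel bits from 'remainder' (the copy of the parent
// object) into 'segment'. Voxels marked in 'cutBarrier' are never crossed.
void growSegment(std::vector<uint64_t>& remainder, std::vector<uint64_t>& segment,
                 const std::vector<uint64_t>& cutBarrier,
                 int nx, int ny, int nz, size_t seed) {
    std::deque<size_t> front{seed};
    clearBit(remainder, seed);
    setBit(segment, seed);
    while (!front.empty()) {
        size_t v = front.front();
        front.pop_front();
        int x = (int)(v % nx), y = (int)(v / nx % ny), z = (int)(v / ((size_t)nx * ny));
        for (int dz = -1; dz <= 1; ++dz)              // visit all 26 neighbors
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (!dx && !dy && !dz) continue;
                    int X = x + dx, Y = y + dy, Z = z + dz;
                    if (X < 0 || Y < 0 || Z < 0 || X >= nx || Y >= ny || Z >= nz) continue;
                    size_t n = ((size_t)Z * ny + Y) * nx + X;
                    if (!testBit(remainder, n) || testBit(cutBarrier, n)) continue;
                    clearBit(remainder, n);           // move the voxel bit ...
                    setBit(segment, n);               // ... into the new segment
                    front.push_back(n);
                }
    }
}
```

The seed index itself comes from the screen, as described above: the pixel under the visual pointer plus its z-buffer depth identify the surface voxel.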
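The cut itself has to become a voxel barrier for this loop. Assuming an axis-aligned view along +z and one voxel column per screen pixel (both simplifications of the paper's arbitrary viewing direction), a hypothetical projection of a single cut pixel might be:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Project one pixel of a free-hand cut line into the volume: block every
// voxel from the visible bone surface down to surface + cutDepth along the
// viewing axis. Called for each pixel of the drawn line.
void stampCutPixel(std::vector<uint64_t>& cutBarrier,
                   int nx, int ny, int nz,
                   int px, int py,      // screen pixel == voxel column (assumed)
                   int surfaceZ,        // surface depth taken from the image z-buffer
                   int cutDepth) {      // depth driven by the force-feedback device
    for (int z = surfaceZ; z < nz && z < surfaceZ + cutDepth; ++z) {
        size_t i = ((size_t)z * ny + py) * nx + px;
        cutBarrier[i >> 6] |= 1ull << (i & 63);   // voxel becomes uncrossable
    }
}
```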
There are two additional ways to limit objects whose borders are hidden by bone from other objects. The first method allows the user to adjust the visualization properties of surrounding objects so that they are transparent or totally invisible during the volume-growing process. The second possibility is to limit the volume-growing process to 2D slices of the original data volume. The user is able to navigate through the 2D data slices inside a region of interest, shown directly as an overlay on the 3D view (Figure 6c). The segmented object voxels in these slices are distinguished by color from non-segmented voxels, and the volume-growing process is visualized. As with conventional segmentation techniques in 2D slices, the segment is defined by drawing a line at the object borders where the growing process is leaking.

The user may place additional seedpoints to achieve faster visual results in areas remote from the last seedpoint. Additional seedpoints are also necessary for unconnected bone structures that should be handled as one object. The modification of a cut, or the insertion of a new one, restarts the entire volume-growing process.

3.3. Force feedback

We have installed a testbed to evaluate both the usability of force-feedback devices and the parameters of the effects we used. Our virtual planning station comprises the visualization engine and a driver for I/O devices such as force-feedback joysticks. The driver has a high-level interface protocol that operates independently of the device's particular hardware implementation (Figure 7). The high-level interface provides commands for force-feedback effects used in the surgical planning procedure, such as 'saw' or 'drill', together with their corresponding sets of parameters. A device that supports force-feedback capabilities is free to decode these commands and generate the appropriate effects; a device without such capabilities may ignore them and function as a standard input device.

Figure 7: Integration scheme of input devices (the surgery planning station's visualization engine talks through a bidirectional I/O interface to a device driver, connected via serial link, USB, etc., which drives the force-feedback device through DirectX, iforce, etc.).

The high-level interface requires an intelligent device driver for the input device. In our testbed we connected a low-performance host computer to the planning station via a serial link. This host computer drives the Microsoft® Sidewinder™ force-feedback joystick using the DirectX™ library. However, the interface supports a wide range of input devices with or without force-feedback capabilities. For example, the "iforce" library supports a set of force-feedback devices with different capabilities. Additionally, professional force-feedback systems can be connected with an adapted device driver.

To emulate the surgeon's working procedure we have implemented force-feedback sensations for sawing and drilling. During user interaction the force parameters are continuously streamed to the input device [7]. The input device driver has to translate the force parameters into a force effect of the joystick. To shield the user from heavy jolts or jerks of the stick, our driver is equipped with force ramps that increase the force smoothly around the joystick center (Figure 8a). In our testbed we have developed a force-adjustment panel to configure the force ramps and to enable the surgeon to individually adjust the force-feedback sensation of the joystick (Figure 8b).

Figure 8: (a) Example of a force ramp (force in %, plotted over the joystick x and y excursion from -1.0 to 1.0); (b) prototype of the force-adjustment panel.

The streamed force parameters coming from the planning tool always represent bone thickness. In drilling mode, the force is proportional to the bone density directly in front of the drill. While drilling, a small sine wave is superimposed onto the force to give the user the impression of a continuous, fast rotation of the drill. In sawing mode, the force parameter is proportional to the sum of all voxel densities the user is cutting; here, a small sawtooth is added to enhance the realism of the effect.
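Neither the ramp shape nor the modulation parameters are specified in the paper; the sketch below assumes a linear radial ramp and fixed modulation depths and frequencies, all of which would in practice be tuned through the force-adjustment panel:

```cpp
#include <algorithm>
#include <cmath>

// Shape the streamed bone parameter into the actual joystick force: the
// ramp around the stick center (Figure 8a) plus the superimposed sine
// (drill) or sawtooth (saw) described above. The ramp shape, the 0.3 ramp
// end, and the modulation depths/frequencies are illustrative assumptions.
enum class Mode { Drill, Saw };

double effectForce(Mode mode, double boneForce,   // streamed parameter in [0,1]
                   double x, double y,            // stick excursion in [-1,1]
                   double t) {                    // time in seconds
    const double kPi = 3.141592653589793;
    double r = std::sqrt(x * x + y * y);                  // radial excursion
    double ramp = std::clamp(r / 0.3, 0.0, 1.0);          // smooth rise from the center
    double mod = (mode == Mode::Drill)
        ? 0.1 * std::sin(2.0 * kPi * 50.0 * t)            // fast drill rotation
        : 0.1 * (2.0 * std::fmod(10.0 * t, 1.0) - 1.0);   // 10 Hz sawtooth
    return std::clamp(ramp * boneForce + mod, 0.0, 1.0);  // force magnitude
}
```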
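The interfacing protocol itself is not published; the following paragraph only notes that a command fits within a few bytes. A purely hypothetical frame in that spirit, with invented opcodes and a one-byte checksum:

```cpp
#include <array>
#include <cstdint>

// Purely hypothetical few-byte command frame for the high-level force
// interface; the opcodes, layout, and checksum are our invention.
enum class Effect : uint8_t { Stop = 0x00, Drill = 0x01, Saw = 0x02 };

// [opcode][force 0..255][checksum]: three bytes per command, easily
// streamed over a serial link at well above the required update rate.
std::array<uint8_t, 3> encodeCommand(Effect e, uint8_t force) {
    auto op = static_cast<uint8_t>(e);
    return { op, force, static_cast<uint8_t>(op ^ force) };
}
```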
As a principal requirement for the transmission speed, the overall bandwidth of the force-feedback system must exceed that of the human perception system to achieve sufficiently realistic results; Kalawsky [8] considers a bandwidth of 30 Hz sufficient. The structure of our interfacing protocol is therefore compact and allows a command to be sent within a few bytes.

4. RESULTS

Usually an operation planning session starts with the segmentation of the frame with the splint used for correlating the planning data with the patient. During the planning process this segment is not needed and would only hide underlying bone structures (Figure 9a). It is not possible to extract the frame in the preprocessing stage using the threshold segmentation because our frame material has the same Hounsfield units as bone. The frame can be segmented with the techniques explained earlier simply by setting one initial seedpoint on the frame and placing a small cut directly in front of the teeth in a profile view of the patient (Figure 9b/c). This procedure can be automated in the future.

Figure 9: (a) Frontal view of a segmented frame with splint; (b) profile view with cut; (c) resected frame.

The segmentation of the lower jaw likewise needs only one initial seedpoint and a few small cuts at the joints and at the contact points to the upper jaw (Figure 10a/b). This bone can be extracted by the surgeon in less than 2 minutes, which is significantly faster than the production of a standard plaster cast. The extraction of the upper jaw for a Le Fort I osteotomy presents no greater difficulty (Figure 10c/d); usually we need only one initial seedpoint and one straight cut. As expected, the surgeon feels a sudden thrust with the joystick after drilling or sawing through the jaw bone.

Figure 10: (a), (b) Segmented lower jaw; (c), (d) segmented upper jaw.
For the sagittal correction of the mandible, a resection line has to be defined between the ramus and the rest of the lower jaw. In real surgery, one difficulty of this operation is to mobilize the mandible without cutting the main nerve to the teeth. In our virtual surgery planning process, complex cuts can be modeled by plane cuts; nevertheless, the surgeon can simulate the real cut exactly if desired (Figure 11a-d).

Figure 11: (a)-(d) Different views of the resected ramus from the lower jaw.

Our project focuses on the maxillofacial surgical correction of jaw malformations (Figure 12a-d). Up to now, ten operations on patients with dysgnathia have been supported with the techniques described above. We have found that our virtual cutting instruments greatly simplify the resection of bone segments for this area of surgery planning.

Figure 12: Results from a dysgnathia correction planning; (a), (b) before correction; (c), (d) after correction.

An important medical demand is a high accuracy of ±0.5 mm for the whole procedure. This requires a highly accurate 3D sensor system in combination with precise patient data acquisition and processing. The accuracy of the presented segmentation is limited only by the voxel resolution of the underlying data set. The data set came from a modern SIEMENS spiral CT scanner and has a voxel resolution of 0.7 × 0.7 × 1.4 mm.

5. CONCLUSION AND FUTURE WORK

Our system satisfies the requirements for an image-guided virtual surgery planning system [9]. The fast rotation of the volume, the simple cutting interaction with a force-feedback device, and the visualization of the volume-growing process ensure the usability and acceptance of our planning interface in the medical domain.

In our approach we use low-cost force-feedback devices from the gaming sector, which are quite different from real surgical tools. Our cutting interaction model takes only bone density into account for cutting and force-feedback processing; no physical models are integrated for describing the behavior of refracting bone, a technique often used in real operations. We therefore view our system as a maxillofacial surgery planning tool rather than a true simulation system. Nevertheless, the surgeon is able to perform and feel the resection of bone segments nearly as in a real operation.
Further efforts will be directed toward achieving a sufficient stereoscopic 3D representation through a "see-through" 3D display. The intra-operative augmented display of the operation planning results can also be used for teaching purposes; a 3D output device should give students a full representation of the planning and of the real operation. Toward the completion of the project, we will evaluate the possibility of representing the soft-tissue changes corresponding to bone shifts.

6. ACKNOWLEDGMENTS

Our project, "Intra-operative Navigation Support," is funded by the Deutsche Forschungsgemeinschaft (DFG) and the University Hospital Benjamin Franklin (UKBF). We would like to thank the Department of Oral, Maxillofacial and Facial Plastic Surgery of the UKBF for its cooperation. The authors are grateful to Jean Pietrowicz for proofreading the manuscript.

7. REFERENCES

1. P. Neumann, G. Faulkner, M. Krauss, K. Haarbeck, T. Tolxdorff, "MeVisTo-Jaw: A Visualization-based Maxillofacial Surgical Planning Tool", Proceedings of SPIE Medical Imaging, vol. 3335, pp. 110-118, 1998.
2. H. Delingette, G. Subsol, S. Cotin, J. Pignon, "A Craniofacial Surgery Simulation Testbed", Third Int. Conf. on Visualization in Biomedical Computing, SPIE, vol. 2359, pp. 607-618, 1994.
3. C. Wood, C. Ling, C.Y. Lee, "Real Time 3D Rendering of Volumes on a 64bit Architecture", SPIE Mathematical Methods in Medical Imaging, vol. 2707, pp. 152-158, 1996.
4. B.T. Phong, "Illumination for computer generated pictures", Communications of the ACM, vol. 18(6), pp. 311-317, 1975.
5. K.R. Sloan Jr., S.L. Tanimoto, "Progressive Refinement of Raster Images", IEEE Transactions on Computers, vol. C-28(11), pp. 871-875, 1979.
6. K.D. Toennies, C. Derz, "Volume rendering for interactive 3-D segmentation", Proceedings of SPIE Medical Imaging, vol. 3031, pp. 602-609, 1997.
7. L.B. Rosenberg, A Force Feedback Programming Primer, Immersion Corporation, San Jose, California, 1997.
8. R.S. Kalawsky, The Science of Virtual Reality and Virtual Environments, Addison-Wesley, 1993.
9. J.V. Cleynenbreugel, K. Verstreken, G. Marchal, P. Suetens, "A Flexible Environment for Image Guided Virtual Surgery Planning", Visualization in Biomedical Computing, pp. 501-510, 1996.