An Interaction Model for 3D Cutting in Maxillofacial Surgery Planning
Patrick Neumann, Dirk Siebert, Armin Schulz, Gabriele Faulkner, Manfred Krauss, and Thomas Tolxdorff
Department of Medical Informatics, University Hospital Benjamin Franklin, Free University Berlin, Hindenburgdamm 30, 12200 Berlin, Germany

ABSTRACT

Our main research work is the realization of a completely computer-based maxillofacial surgery planning system [1]. An important step toward this goal is the availability of virtual tools that let the surgeon interactively define bone segments from the skull and jaw bones. The easy-to-handle user interface employs visual and force-feedback devices to define subvolumes of a patient's volume dataset. The defined subvolumes, together with their spatial arrangement, constitute an operation plan. We have evaluated modern low-cost force-feedback devices with regard to their ability to emulate the surgeon's working procedure.

Keywords: surgery planning, maxillofacial surgery, volume segmentation, virtual tools, volume growing, force feedback, real-time visualization, input devices

1. INTRODUCTION

One major objective of craniofacial surgery is to alter the shape and position of skull bones in order to correct congenital malformations or treat traumatic injuries. During the operative procedure the surgeon resects several skull fragments and rearranges them to achieve good dental occlusion and facial esthetics. The surgery must therefore be thoroughly planned in order to accurately predict the postoperative shape of the skull and soft tissues.

1.1. Operation planning

Conventional maxillofacial surgery planning involves the production of plaster casts from the patient's anatomy. The plaster casts are mounted on an articulator (Figure 1a), which allows the dental segments to be cut and repositioned while the bases maintain their interrelationship.

Figure 1: (a) Articulator for plaster casts; (b) calibration frame; (c) frame reconstructed from CT.
The paradigm of virtual reality provides a variety of techniques to support or replace plaster-cast model surgery. The 3D visualization and stereoscopic presentation of the volume data, as well as the possibility of modifying the object with virtual cutting and replacement tools, accelerate the planning procedure and enhance it with new facilities, for example a comparative study of different planning variants. These computer-based methods require an adequate set of 3D input and output devices.

1.2. Intra-operative planning control

The planning results obtained with a computer-aided planning tool can easily be transferred to the operating room, since they can be used directly for intra-operative navigation. This opens a further range of applications for virtual reality. In our approach, the planning data and the actual patient are correlated via a frame that is rigidly connected to the patient's skull by a splint (Figure 1b). Markers on the frame that are visible both in a photograph and in CT (Figure 1c) serve as calibration points. During the operation, an electromagnetic tracking system with 6 degrees of freedom (DOF) determines the 3D position of the patient.

2. PURPOSE

In the past, several basic approaches have been investigated for 3D segmentation to cut skull fragments in volume data using VR techniques. For example, Delingette et al. [2] used a "virtual hand" user interface, in which the cutting tool follows the motion of the user's hand, tracked by an electromagnetic sensor. However, most of these approaches lack force and haptic feedback to enhance the realism of their simulations. As the technology improves and the cost of force-feedback devices in the gaming sector decreases, it is worthwhile to consider the application of such devices in the medical domain. With these advances, it is possible to construct input devices that can be controlled via a high-level interface and give the surgeon haptic feedback during the planning process.

In this paper we describe a new approach to manual 3D segmentation (Figure 2a) that enables the surgeon to define the desired bone segments quickly and exactly. This new interactive segmentation technique uses low-cost force-feedback input devices offered by companies such as Microsoft® and Logitech® (Figure 2b/c) for less than $150. The new features of such input devices can be used to provide additional depth information, encoded as force, for the visualization on a 2D output screen. A fast and powerful visualization kernel supports the segmentation process and continuously displays its progress.

Figure 2: (a) Surgeon performing the 3D segmentation; (b) Microsoft® and (c) Logitech® force-feedback joysticks.

3. METHODS

3.1. Visualization

The planning process is based on rendered views of the patient's 3D volume data acquired by CT. In the first stage of our rendering pipeline (Figure 3), the volume is automatically segmented with a bone-threshold window to extract the patient's skull bone.
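A minimal sketch of such a bone-threshold stage is shown below, assuming the CT volume is available as a NumPy array of Hounsfield units. The window limits BONE_HU_MIN/BONE_HU_MAX and the function name threshold_bone are illustrative placeholders, not values or names taken from the paper.

```python
import numpy as np

# Illustrative bone window in Hounsfield units (placeholder values, not from the paper).
BONE_HU_MIN = 300
BONE_HU_MAX = 3000

def threshold_bone(volume_hu: np.ndarray) -> np.ndarray:
    """Return a boolean bit-volume: True where the voxel is classified as bone."""
    return (volume_hu >= BONE_HU_MIN) & (volume_hu <= BONE_HU_MAX)

# Example with a synthetic CT-like volume.
volume = np.random.randint(-1000, 2000, size=(64, 64, 64), dtype=np.int16)
bone_mask = threshold_bone(volume)
print("bone voxels:", int(bone_mask.sum()))
```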
For fast computation, each volume object is represented as a small bit-cube [3] in which each bit indicates whether the corresponding voxel belongs to bone tissue or not. On a modern computer architecture, each bit-volume is stored in RAM as an array of 64-bit long words, where each long word represents 64 voxels. The additional memory requirements of this data representation are small, while the speed at which the data can be inspected increases because 64 voxels are always treated at a time.

Figure 3: Rendering pipeline and image composition (preprocessing, object segmentation, object reconstruction, and image composition stages).

After the object segmentation stage, which is described in the next section, a depth map of the image is computed in the reconstruction stage by voxel projection from the object bit-cube. With this z-buffer image and the original data, an object image can be reconstructed and illuminated using the Phong illumination model [4]. Progressive refinement and partial recomputation are implemented for interactive 3D visualization. By using a successive image-refinement method, the image quality improves over time [5]. Partial recomputation is used for fast reconstruction of small, changed areas in the object volume. In the case of very small areas, which affect only a few pixels in the output image, special low-level output functions can be used for speed optimization. In our application, an image with depth information is generated for each object. Because of the z-buffer-based overlap handling, the visual results of different objects can be composited into one output image in the composition stage, differentiated for example by color or transparency.

3.2. 3D Segmentation

During the segmentation process, a hierarchical object tree is generated with the patient's skull bone as the root object. To resect a bone segment, two new sub-objects are derived from their parent (Figure 4): one for the new bone segment and one that contains all remaining voxels from the original bone. Initially, the first object is empty, whereas the second object is a copy of the original. Within this object hierarchy it is always possible to undo planning steps or to try alternative planning variants.

Figure 4: Object segmentation scheme in the cutting process.
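The bit-cube representation can be sketched as follows. The BitCube class, its memory layout, and its method names are our own illustration of packing 64 voxels into each 64-bit word, not the authors' implementation.

```python
import numpy as np

class BitCube:
    """Illustrative bit-cube: one bit per voxel, packed 64 voxels per 64-bit word."""

    def __init__(self, dims):
        self.dims = dims                              # (nx, ny, nz)
        n_voxels = dims[0] * dims[1] * dims[2]
        n_words = (n_voxels + 63) // 64
        self.words = np.zeros(n_words, dtype=np.uint64)

    def _index(self, x, y, z):
        # Linear voxel index split into a word index and a bit position.
        nx, ny, _ = self.dims
        i = x + nx * (y + ny * z)
        return i >> 6, np.uint64(i & 63)

    def set(self, x, y, z):
        w, b = self._index(x, y, z)
        self.words[w] |= np.uint64(1) << b

    def test(self, x, y, z):
        w, b = self._index(x, y, z)
        return bool((self.words[w] >> b) & np.uint64(1))

# A word equal to zero means that 64 voxels contain no bone, so whole empty
# regions can be skipped with a single comparison.
cube = BitCube((64, 64, 64))
cube.set(10, 20, 30)
print(cube.test(10, 20, 30), cube.test(0, 0, 0))
```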
Figure 5: (a)-(d) Volume-growing process, started with a seedpoint in the lower jaw.

The user starts the definition of a bone segment by placing a seedpoint in a chosen view of the object (Figure 5a). Like a mouse, our input device is always associated with a visual pointer on the screen. A seedpoint is set by positioning this pointer at a pixel location in screen space where the user sees the object and pressing a button. For any pixel, the depth of the corresponding surface voxel in the binary object space is given by the image z-buffer. After choosing an initial voxel as the seed voxel, segmentation is carried out by connected-component analysis [6] based on a 26-voxel neighborhood (Figure 5a-d). If there is a path of neighboring voxels in the original bit-volume, the corresponding voxel bits are moved from the object copy to the new segment. The growth can be visualized in real time by partial reconstruction, since in every volume-growing step only one voxel changes and only a few surface voxels affect the resulting object image. For fast visual results, the connected-component analysis is directed toward visible bone-surface pixels, which means that the voxel neighbors closest to the observer are examined first when searching for object voxels.

To control the volume-growing process, the bone-segment borders can be defined interactively by placing cuts. A cut is defined by drawing a line in free-hand mode with the visual pointer of the input device (Figure 6a/b). To keep the user interface simple, every cut is projected onto the bone surface orthogonally to the viewing plane. The cutting direction can be changed arbitrarily by simply choosing a different viewing direction. The depth of the cut and the cutting speed are controlled by the force-feedback input device.

Figure 6: Volume-growing controlled by a cut (a),(b) between the jaw bones; (c) inside a region of interest.

There are two additional ways to limit objects whose borders are hidden by bone from other objects. The first method allows the user to adjust the visualization properties of surrounding objects so that they are transparent or totally invisible during the volume-growing process. The second possibility is to limit the volume-growing process to 2D slices of the original data volume. The user can navigate through the 2D data slices inside a region of interest, shown directly as an overlay on the 3D view (Figure 6c). In these slices, the segmented voxels are distinguished by color from non-segmented voxels, and the volume-growing process is visualized.
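The core of the volume-growing step can be sketched as a 26-connected flood fill that never enters voxels covered by a cut. This is our own simplified sketch of the technique using boolean NumPy volumes; grow_segment and the 'blocked' cut mask are hypothetical names, and the real system additionally prioritizes the neighbors closest to the observer.

```python
import numpy as np
from collections import deque

def grow_segment(bone, seeds, blocked=None):
    """Grow a new segment from seed voxels inside a boolean bone bit-volume.
    Voxels marked in 'blocked' (e.g. rasterized cuts) are never entered."""
    segment = np.zeros_like(bone, dtype=bool)
    if blocked is None:
        blocked = np.zeros_like(bone, dtype=bool)
    # All 26 neighbor offsets around a voxel.
    offsets = [(dx, dy, dz)
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
               if (dx, dy, dz) != (0, 0, 0)]
    queue = deque(seeds)
    while queue:
        x, y, z = queue.popleft()
        if not (0 <= x < bone.shape[0] and 0 <= y < bone.shape[1] and 0 <= z < bone.shape[2]):
            continue
        if segment[x, y, z] or blocked[x, y, z] or not bone[x, y, z]:
            continue
        segment[x, y, z] = True          # this voxel now belongs to the new segment
        for dx, dy, dz in offsets:
            queue.append((x + dx, y + dy, z + dz))
    return segment

# Example: a planar "cut" splits a synthetic bone block, and growing from one
# seed collects only the voxels on the seed's side of the cut.
bone = np.zeros((32, 32, 32), dtype=bool)
bone[8:24, 8:24, 8:24] = True
cut = np.zeros_like(bone)
cut[:, 16, :] = True
part = grow_segment(bone, [(10, 10, 10)], blocked=cut)
print(part.sum(), "voxels in the grown segment")
```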
As with conventional segmentation techniques in 2D slices, the segment border is defined by drawing a line at the object borders where the growing process leaks. The user may place additional seedpoints to achieve faster visual results in areas remote from the last seedpoint. Additional seedpoints are also necessary for unconnected bone structures that should be handled as one object. The modification of a cut or the insertion of a new one restarts the entire volume-growing process.

3.3. Force feedback

We have installed a testbed to evaluate both the usability of force-feedback devices and the parameters of the effects we use. Our virtual planning station comprises the visualization engine and a driver for I/O devices such as force-feedback joysticks. The driver has a high-level interface protocol that operates independently of the device's particular hardware implementation (Figure 7). The high-level interface provides commands for the force-feedback effects used in the surgical planning procedure, such as 'saw' or 'drill', together with their corresponding sets of parameters. A device that supports force-feedback capabilities is free to decode these commands and generate the appropriate effects. A device without such capabilities may ignore the commands and function as a standard input device.

Figure 7: Integration scheme of input devices (surgery planning station with visualization engine and bidirectional I/O interface, connected via serial link, USB, etc. to the device driver — DirectX, iforce, etc. — and the force-feedback device).

The high-level interface requires an intelligent device driver for the input device. In our testbed we connected a low-performance host computer to the planning station via a serial link. This host computer drives the Microsoft® Sidewinder™ force-feedback joystick using the DirectX™ library. However, the interface supports a wide range of input devices with or without force-feedback capabilities. For example, the "iforce" library supports a set of force-feedback devices with different capabilities. Additionally, professional force-feedback systems can be connected with an adapted device driver.

To emulate the surgeon's working procedure we have implemented force-feedback sensations for sawing and drilling. During user interaction the force parameters are continuously streamed to the input device [7]. The input device driver has to translate the force parameters into a force effect produced by the joystick. To shield the user from heavy jolts or jerks of the stick, our driver is equipped with force ramps that increase the force smoothly around the joystick center (Figure 8a). In our testbed we have developed a force-adjustment panel to configure the force ramps and to enable the surgeon to individually adjust the force-feedback sensation of the joystick (Figure 8b).

Figure 8: (a) Example of a force ramp (force in % over the joystick's x and y excursion); (b) prototype of the force-adjustment panel.
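A force ramp of the kind shown in Figure 8a could be realized as in the following sketch, where the commanded force is scaled by the stick's radial excursion so it rises smoothly from zero at the center. The function name ramped_force, the smoothstep shape, and the ramp_radius value are assumptions for this sketch, not parameters from the paper.

```python
import math

def ramped_force(commanded_force, x, y, ramp_radius=0.25):
    """Scale the commanded force by a ramp that is zero at the joystick center
    and reaches full strength at 'ramp_radius' excursion."""
    r = min(1.0, math.hypot(x, y))           # radial excursion in [0, 1]
    if r >= ramp_radius:
        scale = 1.0
    else:
        t = r / ramp_radius
        scale = t * t * (3.0 - 2.0 * t)       # smoothstep from 0 to 1
    return commanded_force * scale

# Near the center the user feels almost nothing; at larger excursions the full force.
for x in (0.0, 0.1, 0.25, 0.8):
    print(f"x={x:4.2f}  force={ramped_force(100.0, x, 0.0):6.1f} %")
```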
The streamed force parameters coming from the planning tool always represent bone thickness. In drilling mode, the force is proportional to the bone density directly in front of the drill. While drilling, a small sine wave is superimposed on the force to give the user the impression of the continuous, fast rotation of the drill. In sawing mode, the force parameter is proportional to the sum of all voxel densities the user is cutting through. In this mode, a small sawtooth is added to enhance the realism of the effect. As a principal requirement for the transmission speed, the overall bandwidth of the force-feedback system must exceed that of the human perception system to achieve sufficiently realistic results. Kalawsky [8] considers a bandwidth of 30 Hz sufficient. The structure of our interfacing protocol is therefore compact and allows a command to be sent within a few bytes.
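The drilling and sawing effects described above might be composed as in the following sketch: a base force derived from the voxel densities, modulated by a sine wave (drill) or a sawtooth (saw). The gains, vibration amplitudes, and frequencies are assumptions for illustration only.

```python
import math

def drill_force(density_ahead, t, gain=1.0, vib_amp=0.05, vib_hz=40.0):
    """Drilling effect: force proportional to the bone density directly in front
    of the drill, with a small sine vibration suggesting the drill's rotation."""
    base = gain * density_ahead
    return base * (1.0 + vib_amp * math.sin(2.0 * math.pi * vib_hz * t))

def saw_force(densities_in_cut, t, gain=1.0, tooth_amp=0.05, tooth_hz=15.0):
    """Sawing effect: force proportional to the sum of the voxel densities being
    cut, with a small sawtooth added for realism."""
    base = gain * sum(densities_in_cut)
    phase = (t * tooth_hz) % 1.0              # sawtooth phase in [0, 1)
    return base * (1.0 + tooth_amp * (2.0 * phase - 1.0))

# Example: stream one force value roughly every 1/30 s, matching the ~30 Hz
# bandwidth the paper cites as sufficient.
for step in range(3):
    t = step / 30.0
    print(f"t={t:.3f}s  drill={drill_force(0.8, t):.3f}  saw={saw_force([0.5, 0.7, 0.6], t):.3f}")
```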
4. RESULTS

An operation planning session usually starts with the segmentation of the frame with the splint that is used to correlate the planning data with the patient. During the planning process this segment is not needed and would only hide underlying bone structures (Figure 9a). It is not possible to extract the frame in the preprocessing stage using threshold segmentation, because our frame material has the same Hounsfield units as bone. The frame can be segmented with the techniques explained earlier simply by setting one initial seedpoint on the frame and placing a small cut directly in front of the teeth in a profile view of the patient (Figure 9b/c). In the future, this procedure can be performed automatically.

Figure 9: (a) Frontal view of a segmented frame with splint; (b) profile view with cut and (c) resected frame.

The segmentation of the lower jaw likewise needs only one initial seedpoint and a few small cuts at the joints and at the contact points to the upper jaw (Figure 10a/b). This bone can be extracted by the surgeon in less than 2 minutes, which is significantly faster than the production of a standard plaster cast. The extraction of the upper jaw for a Le Fort I osteotomy presents no greater difficulty (Figure 10c/d). Usually only one initial seedpoint and one straight cut are needed. As expected, the surgeon feels a sudden thrust in the joystick after drilling or sawing through the jaw bone.

Figure 10: (a), (b) Segmented lower jaw; (c), (d) segmented upper jaw.
For the sagittal correction of the mandible, a resection line has to be defined between the ramus and the rest of the lower jaw. In real surgery, one difficulty of this operation is to mobilize the mandible without cutting the main nerve to the teeth. In our virtual surgery planning process, complex cuts can be modeled by planar cuts. Nevertheless, the surgeon can simulate the real cut exactly if desired (Figure 11a-d).

Figure 11: (a)-(d) Different views of the resected ramus from the lower jaw.

Our project focuses on the maxillofacial surgical correction of jaw malformations (Figure 12a-d). Up to now, ten operations on patients with dysgnathia have been supported with the techniques described above. We have found that our virtual cutting instruments greatly simplify the resection of bone segments for this area of surgery planning.

Figure 12: Results from a dysgnathia correction planning; (a), (b) before correction; (c), (d) after correction.

An important medical demand is a high accuracy of ±0.5 mm for the whole procedure. This requires a highly accurate 3D sensor system in combination with precise patient data acquisition and processing. The accuracy of the presented segmentation is limited only by the voxel resolution of the underlying data set. The data set came from a modern SIEMENS spiral CT scanner and has a voxel resolution of 0.7 × 0.7 × 1.4 mm.

5. CONCLUSION AND FUTURE WORK

Our system satisfies the requirements for an image-guided virtual surgery planning system [9]. The fast rotation of the volume, the simple cutting interaction with a force-feedback device, and the visualization of the volume-growing process ensure the usability and acceptance of our planning interface in the medical domain. In our approach we use low-cost force-feedback devices from the gaming sector, which are quite different from real surgical tools. Our cutting interaction model takes into account only bone density for cutting and for computing force feedback. No physical models are integrated to describe the behavior of fracturing bone, a technique often used in real operations. We therefore view our system as a maxillofacial surgery planning tool rather than a true simulation system. Nevertheless, the surgeon is able to perform and feel the resection of bone segments almost as in a real operation.
Further efforts will be directed toward achieving a sufficient stereoscopic 3D representation through a "see-through" 3D display. The intra-operative augmented display of the operation planning results can also be used for teaching purposes. The use of a 3D output device should give students a full representation of the planning and the real operation. Toward the completion of the project, we will evaluate the possibility of representing the soft-tissue changes corresponding to bone shifts.

6. ACKNOWLEDGMENTS

Our project, "Intra-operative Navigation Support," is funded by the Deutsche Forschungsgemeinschaft (DFG) and the University Hospital Benjamin Franklin (UKBF). We would like to thank the Department of Oral, Maxillofacial and Facial Plastic Surgery of the UKBF for its cooperation. The authors are grateful to Jean Pietrowicz for proofreading the manuscript.

7. REFERENCES

1. P. Neumann, G. Faulkner, M. Krauss, K. Haarbeck, T. Tolxdorff, "MeVisTo-Jaw: A Visualization-based Maxillofacial Surgical Planning Tool", Proceedings of the SPIE Medical Imaging, vol. 3335, pp. 110-118, 1998.
2. H. Delingette, G. Subsol, S. Cotin, J. Pignon, "A Craniofacial Surgery Simulation Testbed", Third Int. Conf. on Visualization in Biomedical Computing, SPIE, vol. 2359, pp. 607-618, 1994.
3. C. Wood, C. Ling, C.Y. Lee, "Real Time 3D Rendering of Volumes on a 64bit Architecture", SPIE - Mathematical Methods in Medical Imaging, vol. 2707, pp. 152-158, 1996.
4. B.T. Phong, "Illumination for Computer Generated Pictures", Communications of the ACM, vol. 18(6), pp. 311-317, 1975.
5. K.R. Sloan Jr., S.L. Tanimoto, "Progressive Refinement of Raster Images", IEEE Transactions on Computers, vol. C-28(11), pp. 871-875, 1979.
6. K.D. Toennies, C. Derz, "Volume Rendering for Interactive 3-D Segmentation", Proceedings of the SPIE Medical Imaging, vol. 3031, pp. 602-609, 1997.
7. L.B. Rosenberg, A Force Feedback Programming Primer, Immersion Corporation, San Jose, California, 1997.
8. R.S. Kalawsky, The Science of Virtual Reality and Virtual Environments, Addison-Wesley, 1993.
9. J.V. Cleynenbreugel, K. Verstreken, G. Marchal, P. Suetens, "A Flexible Environment for Image Guided Virtual Surgery Planning", Visualization in Biomedical Computing, pp. 501-510, 1996.