Miltec Corporation is a Huntsville, Alabama-based company focused on a variety of technological areas, but dealing primarily with defense contracts, especially missiles. My work has been with the "targets" group, which does pre- and post-flight data analysis for the National Missile Defense Program. They provide systems engineering and technical support for the Strategic Targets Product Office.
I began working at Miltec in May of 2000, more than a year ago now. Fairly quickly after my arrival, my work began to focus almost solely on a project designing a missile flight data visualization tool. Last semester, while continuing the work on the visualization, I also got the opportunity to work on another project: developing a system to generate electronically browseable reports of missile flights in HTML.
I am writing this report as part of the completion of my time as a co-op with Miltec. I want to use it not only to describe the nature of my work but also to examine the co-op program at Miltec as one of its first participants. I have enjoyed my time here immensely and I consider it to have been the single most important event in my professional development. I have also introduced some new technologies while working here; things that are cutting edge but stable, and that I believe can be trusted as sound development paths for the future. As I leave, the maintenance of these projects will fall to people who are less familiar with these new concepts than I am. I hope to describe briefly my motivations for choosing the methods that I chose, detail where I would take development if I were continuing, and mention possible issues that might arise.
In my last report I predicted that this final semester would be the hardest to date because it was time to begin to finalize my projects and get them to a maintenance state where someone else could assume responsibility. I was far more right than I could have imagined at the time. I have discovered how much skill I have at integrating information from different sources and coming up with plans of development. At the same time I have discovered how much trouble I have with saying "this is enough; you have gathered enough information, it is time to put it all to use." Always there was the hope that I would come up with some new method that would drastically reduce the amount of work necessary to complete the task at hand. Very often I was correct, and I am pleased with the things that I have produced. Much of the time, though, I spent too much time reading and too little time coding.
The biggest project that I have worked on to date has been the missile visualization tool. The goal is to develop a program that can take a set of information about a missile flight and produce a three-dimensional visualization that shows the missile in flight. When I arrived, development of the program was primarily the responsibility of Ken Winfree, and he had been developing it for the previous year. His development pattern had been to keep a basic template of a program which he altered for each flight to reflect the appropriate data sets and 3D geometry. Timewise it was not a terribly inefficient system; the core of the program did not have to change each time, only the geometries and the data. It was not a long-term solution, though, for three reasons. One, Ken was the only person who could produce visualizations. Two, it did take an appreciable time to create a new visualization. And three, Ken, who went to school for a long time to pursue a career in aerospace engineering, was spending far more time than he wanted producing 3D visualizations.
A new project was begun to develop a more extensible tool that would allow the user to specify the data and missile geometry at run time rather than design time, so that creating a new visualization involved only changing some settings before the program was run rather than redesigning each time. I began work extending the template that Ken developed, but an issue began to show that became more important as the project developed. I am a computer programmer by trade; I have been going to college for several years specifically taking classes that taught me methods of designing computer programs. I know five languages well and have appreciable experience in a dozen. Something that I was beginning to see last semester, and that my work at Miltec and collaboration with others on the internet has solidified, is that my time spent programming has had a real effect on my competence, and I am pretty good at what I do. I am not speaking poorly of Ken's skills at all, just noting that his training has been different from mine.
I revised Ken's program, keeping the basic design pattern and optimizing it for extensibility and succinctness. It was not designed to be configured at run time, though, and eventually the best path for development was to start a new project with a different design philosophy. My previous 3D experience came from working in the Software Automation and Intelligence Lab (SAIL) at Tennessee Tech. While I was there I worked on simulations, first on a language parser for a 3D robotic arm and later on the arm simulation itself. That program was written in Java3D, so I was familiar with the API from there.
The missile visualization to date was being developed in 3DLinX, an ActiveX control developed by another company in Huntsville named Global Majic. The interface to 3DLinX was being written in Visual Basic (VB). VB is a Rapid Application Development (RAD) language; this means that it is easy to create simple applications quickly by using the mouse to drag and drop components onto a form and then link them together with small pieces of code. 3DLinX is aimed very much at the same audience; it has an extensive user interface that lets a user create the 3D scene at design time. The problem is that VB was not designed to support the development of complex projects, and neither VB nor 3DLinX is designed to be configured extensively at run time. Their primary audience is expected to interface through the graphical interface and not by writing code. C++ and Java are "object-oriented" languages while VB is "object-based;" I won't go into technical details about the meanings of those terms, but the end effect is that a sophisticated program written in VB ends up being at least 150% as complex and 200% as long.
I began work on a missile visualization written in Java3D. I modeled it heavily after the existing architecture and some work I had done in VB in an attempt to develop an application using 3DLinX. It was a complex project, and eventually time constraints forced me to abandon that development path in order to meet deadlines. The specific issues that I did not think could be dealt with quickly enough were these: Java3D could not record to a file, support for loading geometry from modeling packages was limited, and there was no existing way to create text overlays (text that is drawn on the screen rather than moving with anything in the scene). 3DLinX had all of those capabilities to varying extents, and I could not project a realistic time frame for completing those tasks, so I abandoned the Java3D version and went back to developing in VB.
Things went decently well for a long time. I developed a set of graphical interfaces to allow a user to configure many of the aspects of the visualization. The program was getting complex and needed some optimization, but it was working as planned. One serious issue was the inability to load new geometry at run time, so the user could change the flight path of the missile, but could not change the actual missile. This was a known issue going in and the ability to change the missile was not a design goal.
Eventually, though, as the program got more complex, 3DLinX became more difficult to work with. In particular, there were two separate bugs that interacted such that the fix for one bug caused the other to appear and vice versa. Global Majic was contacted and they confirmed that it was a bug and that it would not be fixed until the next release. The only apparent workaround for the one bug that could be worked around required a fundamental change in design philosophy for the program. Specifically, 3DLinX was originally released as a shareware version that could not have more than 10 entities being manipulated at a time. Miltec had a registered copy, but somewhere in 3DLinX there was a bug such that under certain conditions 3DLinX would crash if more than 10 entities were accessed within a control structure. The program was designed specifically so that the number of entities was never known; that way, if more missiles were added, the same structures would suffice because there was nothing binding the program to the current number of entities. To go through and add stateful information like that would have been difficult and would have made the project valueless, as it would again require editing for each new simulation, and this time it would be editing a much more complex structure.
It is also important to keep in mind that 3DLinX is a black box to me as a programmer. I had no idea what was going on inside of it, so to test for different bug conditions I would have to try to change only one condition at a time, rerun the program, and see if it crashed. When it crashed, most of the time VB would die along with the program, so I had no debugging information. As I mentioned before, 3DLinX was not designed to be dealt with programmatically and the documentation for the API is very limited, so the process was carried out nearly entirely in the dark. I wrote four separate versions of the program, each of which tried a different way to achieve the basic goal of flying the missile. One version in particular took perhaps a full-time week and had maybe 50 significant changes. The complete randomness with which I had to search for these bugs, combined with the difficulties I had already faced with developing a quality application in VB, brought me to the point that when the bug was finally found I would rather have had my head beaten against the wall than go back to working on it. I hadn't known previously that I had such a capacity for loathing a programming project. It turned out that the bug was related to setting the state of more than 10 entities, but it only happened if the data points being used were large enough to overflow a single-precision floating point number, and only if the variables were assigned directly and not through an intermediate. (One can imagine how difficult that was to hunt down working blind.)
How I dealt with this burnout is not what I should have done. I started avoiding Ken's questions about the current status of the project and spending time working on other projects. It would have been by far the better course of action to come to him, describe the difficulties that I was having, and ask for permission to work on another project for a while. Honestly, I wasn't used to this; it wasn't just that I didn't want to work on it. I would sit down and I just could not hold the program in my head and come up with what to do next. I had a writer's block of a sort, and my problem in general has been having too many possibilities for dealing with a problem, not having none. I did not handle it as I ought to have, and I freely admit that.
The main thing that I started doing was developing another Java3D application. Two things that I had come to value immensely while working in 3DLinX, largely through their absence, were extensive documentation and a developer community. When I would have an issue with Java3D I could send an e-mail to a listserv run by Sun and I would get responses from people who had dealt with similar issues, or sometimes from the API developers themselves, who are also on the list. Also, the Java3D API has no graphical interface; it is designed to be manipulated programmatically and so was much better suited to my needs. The Java language is also a much more robust language and one that I enjoy programming in. It mattered very much to me that I deliver something, because I had said that I could, and I wanted to live up to my word.
Conveniently enough, right about the time that I was looking into Java3D again, someone posted a library to the list that did overlays, one of the missing capabilities in Java3D. I went through and extended that API, and it is now a fairly robust set of code with a few known issues, but definitely workable for use in a project. I had also worked previously on a set of code that was the basis for a library to allow recording from a Java3D program, which was one of the other known deficiencies. All told, that left the ability to load models, which was not a definite deficiency; there are a number of model loaders for Java3D, including a standard loader for Wavefront .obj files, and the Web3D Consortium (who are producing the VRML and X3D standards) is building its browser in Java3D, so it has a good loader for VRML. The testing of these different loaders will take some time, both with the modeling package and programming.
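To give a sense of what loading geometry looks like on the Java3D side, here is a minimal sketch using the Wavefront .obj loader that ships with Java3D; the file name is hypothetical and the code that attaches the result to the rest of the scene graph is omitted.

    import java.io.FileNotFoundException;

    import javax.media.j3d.BranchGroup;

    import com.sun.j3d.loaders.Scene;
    import com.sun.j3d.loaders.objectfile.ObjectFile;

    public class MissileModelLoader {
        /** Load a Wavefront .obj model and return its branch of the scene graph. */
        public static BranchGroup loadModel(String filename) throws FileNotFoundException {
            // RESIZE scales the model to fit a unit sphere; STRIPIFY optimizes the triangles
            ObjectFile loader = new ObjectFile(ObjectFile.RESIZE | ObjectFile.STRIPIFY);
            Scene scene = loader.load(filename);
            return scene.getSceneGroup();
        }
    }

The loading itself is short; the testing effort will be in seeing how well each loader handles the geometry our modeling packages actually produce.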
That just about brings me up to date on that project. I think that the Java3D architecture is a much better way to go. I simplified the program a great deal, and rather than trying to give the user a complex set of graphical configuration options I decided to load the settings from a file. I have grown to appreciate the structure of eXtensible Markup Language (XML) documents while working on the documentation project, and I am using a custom XML-based document to set up the scene and the relationships among its different elements.
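The element names in my actual configuration format are specific to the project, but the general pattern looks something like the following sketch, which uses the standard Java XML parser; the file name and the entity attributes shown here are made up purely for illustration.

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class SceneConfigLoader {
        public static void main(String[] args) throws Exception {
            DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
            // Hypothetical configuration file describing the scene
            Document config = builder.parse("scene-config.xml");

            // Each entity element might name its geometry file and the data column that drives it
            NodeList entities = config.getElementsByTagName("entity");
            for (int i = 0; i < entities.getLength(); i++) {
                Element entity = (Element) entities.item(i);
                System.out.println(entity.getAttribute("name") + ": geometry="
                    + entity.getAttribute("geometry") + ", data="
                    + entity.getAttribute("data"));
            }
        }
    }

The advantage of this approach is that adding a missile or a data source means editing a text file rather than recompiling the program.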
I went to a meeting with different people from throughout the company this afternoon, and there is a project very similar to the one I am working on with Keith being developed by another group. Instead of Java3D and XML they are using OpenGL with plain ASCII; the basic architecture is the same, though. I think that the XML is a definite improvement as far as the configuration files go, but as for Java3D versus OpenGL there is certainly room for argument. Java3D has more of a structure already created for the programmer, which both makes things faster and limits the options a programmer has.
My recommendation regardless of the development path would be to have someone experienced with the concepts of 3D programming tasked specifically with creating this program. Have them produce a product specification and a development plan and then work in conjunction with the people that will be using the tool to design it to meet their needs.
Also this afternoon there was another project working toward both creating a new format for specifying 3D shapes and a viewer for those shapes. I highly recommend against this path for several reasons. There are existing standard formats that serve the purpose the project is working toward; specifically, Virtual Reality Modeling Language (VRML) and LightWave objects come to mind. The creation of a new standard will produce something that is not interoperable with any tools other than the ones created as a part of the project. Neither third-party modeling packages nor third-party viewers will be available. Anyone who has to take responsibility for maintenance of the project will have to become experienced with a new standard, whereas with existing standards someone familiar with 3D graphics is likely to have at least a passing familiarity and perhaps more. Finally, the argument is that this will be a program with limited applicability that will not undergo extensive development; having sat in my office demonstrating my work and hearing the almost obligatory "Could you make it do..." over and over, I am very hesitant to believe this will have limited applicability.
My other major project was the development of a system for the generation of browseable reports of missile flyouts. There was a HyperText Markup Language (HTML) document produced by another company as a report for a different flight, and the goal was to generate something similar. The authoritative document was being developed in Word and compiled from contributions by the different people working on the project. The desired turnaround from completion of the Word document to production of the HTML version was two days.
An obvious method of development is to generate the HTML using Word. Unfortunately this had two major drawbacks. One, there were no links within the document connecting the table of contents to the different sections or connecting references to tables and figures. Two, the HTML that Word generates is particularly bad. At the time I wrote my last report there had not been a live test of this system; just recently there has been. The final version of the Word document was around 18 MB; the HTML generated by Word, including images and some proprietary Microsoft extension files, came to about 13 MB. The final version of the HTML generated by my program, including images, came to about 3 MB. Honestly, I do not know what the purpose of those extra 10 megabytes is. I can locate what the files are, but I can come up with no explanation for them that justifies their size. Regardless, the HTML generated by Word will not render correctly in any browser, including Internet Explorer (IE) 5.5 or 6. The large size of the files being loaded does, however, make the mis-rendered pages load and scroll very slowly.
I have had experience attempting to post-process that HTML and I know that it is a fruitless task. Much can be done, but it is extremely difficult to automate the process reliably, and doing changes by hand is always error-prone, especially when operating under a deadline. For these reasons I decided to develop a documentation system using eXtensible Markup Language (XML) and eXtensible Stylesheet Language Transformations (XSLT).
XML is not a language per se. Rather, it defines a subset of Standard Generalized Markup Language (SGML) that can be used in defining languages. The best known SGML-based language is HTML. SGML does not describe a document in itself; rather, it describes the HTML language, which in turn describes a document. SGML is a broad language which allows a wide variety of descriptive constructs in describing a language. XML is a subset of SGML; it allows several of the descriptive properties of SGML but removes some for the sake of simplicity.
Therefore it is something of a misnomer to say that I wrote documents using XML. It would be more correct to say that I wrote documents using an XML-derived language; that is, a language that can be described using the syntax rules of XML. The actual language that I was using to create documents was originally one that I wrote myself. One of the primary advantages of XML, though, is that it allows a person to create a Document Type Definition (DTD) which can be shared with others so that they can write documents conforming to the document type without ever having seen any of the original documents. Likewise, people can write computer programs that operate on a specific document type, and users can write documents that they know will be accepted by the program without having any communication with the developer. HTML is a public DTD, and Netscape Navigator and Internet Explorer can both render the webpages of people from around the world because all parties involved agreed on the rules of how the document will be laid out.
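As a concrete illustration of a program relying on that shared agreement, the standard Java XML parser can be asked to validate a document against whatever DTD it declares. This is only a sketch; the report.xml file name is hypothetical, but the validation mechanism itself is standard.

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;

    import org.xml.sax.ErrorHandler;
    import org.xml.sax.SAXParseException;

    public class ReportValidator {
        public static void main(String[] args) throws Exception {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setValidating(true); // check the document against the DTD it declares

            DocumentBuilder builder = factory.newDocumentBuilder();
            builder.setErrorHandler(new ErrorHandler() {
                public void warning(SAXParseException e) { System.out.println("Warning: " + e.getMessage()); }
                public void error(SAXParseException e) { System.out.println("Error: " + e.getMessage()); }
                public void fatalError(SAXParseException e) throws SAXParseException { throw e; }
            });

            // Hypothetical report marked up against a shared DTD such as DocBook
            builder.parse("report.xml");
            System.out.println("Document parsed; any validity errors were reported above.");
        }
    }

Because the DTD is public, an author and a tool developer who have never spoken can still agree on exactly what a valid document looks like.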
Knowing this, and being a big proponent of standards, I did not like creating my own document type, since it simply added yet another standard to the existing plethora. Shortly before the first test of the system I examined a public DTD called DocBook. DocBook is a DTD that was originally designed for the description of software reference manuals; it is rapidly becoming the standard for many types of technical documentation, however. I had looked at it briefly when doing my initial survey and it seemed too narrowly targeted to suit our purposes, so I ignored it. Later, however, I was working on some security documentation for the Linux Documentation Project, who have recently moved to DocBook as their standard for new submissions. While working on that documentation I saw that many of the issues I had expected were only a lack of knowledge on my part, so I began migrating my work to a subset of DocBook, so that when I mark up a document it can be dealt with as DocBook.
The majority of my work had not been in the creation of the language; rather, it was in the creation of the XSLT stylesheets to translate a document marked up in the language I created into another format. When I switched to DocBook there were existing tools that could translate DocBook into HTML; however, they were aimed more at the generation of technical documentation and were not suitable for the type of documentation that we were creating. So my existing work was still valuable, and I simply changed the document type it expected.
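Mechanically, applying one of these stylesheets is only a few lines of code; the following sketch uses Java's standard transformation API, with hypothetical file names standing in for the real report and stylesheet.

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class ReportTransformer {
        public static void main(String[] args) throws Exception {
            TransformerFactory factory = TransformerFactory.newInstance();
            // The stylesheet holds all of the presentation rules
            Transformer transformer =
                factory.newTransformer(new StreamSource("docbook-to-html.xsl"));
            // Transform the marked-up report into the HTML version
            transformer.transform(new StreamSource("report.xml"),
                                  new StreamResult("report.html"));
        }
    }

All of the real work lives in the stylesheet, which is exactly the point: changing the output format means swapping stylesheets, not rewriting the document.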
It seems at first counterintuitive to go to the trouble of marking up a document in DocBook and then going through and translating the information into HTML. It seems as though the most intelligent plan would be to simply mark it up in HTML the first time and be done with it. The reason is that HTML is a language designed to describe the appearance of a document on the screen (where paragraph breaks occur, what is underlined, etc.) whereas DocBook is a language designed to describe the layout of a document semantically (what paragraphs comprise a section, what footnote references what term, etc.) Recording the semantic information about a document allows a computer program to operate on the document with greater precision. The immediate benefit of this is that the document can be reliably translated into other forms. Translated is perhaps too light of a term because it suggests a one-to-one correspondence between the documents which is not the case; transformed is a better term I think.
DocBook describes what the sections of the document are, what cross references there are, and things like footnotes as well. When the transformation takes place, the sections are numbered appropriately, and the cross references are replaced with the appropriate numbers of the sections they reference, along with a hypertext link to that section. Footnote references are placed appropriately on the page and linked to the expanded form at the bottom of the page. Classification markings are also placed appropriately at the beginning of sections and surrounding tables and figures. Nearly anything that follows a regular set of rules can be generated by the computer, taking a task that is tiresome and error-prone for a human author and placing the work on the computer, all because the semantic structure of the document has been recorded.
The changes can be more fundamental than that. There are different stylesheets that generate drastically different HTML documents; one form is a single long document, while another is frames-based and has an index on one side that allows the user to show each section separately in the other. Things are not even limited to HTML; another stylesheet translates the document into a form that eventually becomes a PDF.
I am very much a fan of XML, not only for the benefits in the short term but because in the long term these documents will be more easily accessible to computers thanks to the extra information encoded in them. This system is stable and working well for the HTML version; generation of PDFs has not yet been completely transitioned over from the previous document type.
Currently the production model is for the authoritative document to be generated in Word, with post-processing done to mark it up in DocBook so that it can be transformed. Eventually the most efficient model would be to write it in DocBook initially and have the version targeted at printing be the PDF. This has advantages beyond the obvious reduction of redundant work. One is that merging multiple documents can be done automatically, because XML documents are simply plain text files. Currently different sections are written and maintained by different people and then merged by a single person. The maintenance of those separate documents and the merging is a place where synchronization errors could easily occur; again, with DocBook the computer handles generating numbers for sections, figures, tables, and cross references, thus eliminating a common source of typographical errors. Another strong advantage is that the document could be productively entered into a multi-user version control system such as CVS. This would make it so that anyone could get the latest authoritative source of the document at any time, any changes could be seen relative to the previous revision, and the time and owner of any changes would be recorded.
The biggest issue is the lack of a solid and economical editor to make the introduction of the new technology into the department easier. It is the most technically sound path, but having tools that the authors can use easily is very important if it is going to be introduced smoothly. I have done some research and there are a variety of developing XML editors but none of them seemed both cheap enough and easy enough to use to suit our needs. There are several open source projects that are also approaching a usable product but none seem to be stable yet. The transition to DocBook created some new possibilities since there are editors written specifically for DocBook, but I have not had time to do further research.
One other major change that should take place is the introduction of a set of pages created using vector graphics. Many of the graphics in the reports are line graphs and they could be represented as vector graphics (graphics described as a set of mathematical functions) rather than raster graphics (graphics described as a set of colored pixels) as they currently are. This would give them higher resolution as well as allowing the user to be able to zoom interactively.
It was not possible to write HTML that would render exactly as desired in all browsers, so since the distribution is controlled the HTML is being generated to the standards and a browser that renders it correctly (Mozilla) is included on the CD. Distributing a browser like this could also allow the addition of plug-ins to support specific media types like vector graphics. The HTML will still be viewable in other browsers, but depending on how well they conform to the standards it may or may not look exactly as intended.
The other major responsibility that I have had during my time at Miltec has been system administration. Linux is a Unix-like open source operating system that has been around for a while but has only really been growing in popularity in recent years. I personally am attracted to it for the amount of control that it gives me over the computer, but the biggest attraction in a corporate environment is that it is free. There are three main systems that I have been doing some or all of the maintenance on: one is the box that I do development on and where I get my mail, another is an intranet webserver and database server, and the last is a public webserver and ftp server for the targets group.
The day-to-day work of system administration is in general more of a maintenance task than a creative one. Depending on the complexity it can take a decent amount of time, but at the end of the day, instead of having created a new resource, most of what you accomplished was preventing an existing one from disappearing or making it more easily accessible. Much of my work has been with security and setting up system and network monitoring, as well as making sure that the systems are not running anything that they don't need to and that the things they are running are configured correctly and as securely as possible.
Things changed significantly when we moved to a new building and the whole network went behind a firewall. I have still been keeping an eye on how things are configured (since I have a web/mail server on the outside and targets has an ftp server on the outside), but the concern with those boxes is maintaining their integrity, not network monitoring. To say that a system is impenetrable is just foolishness; given time and resources, any computer that is plugged in can be accessed. I think, though, that I have made it not worth the time and effort.
Something that should be done for the Linux boxes is to purchase Red Hat Network subscriptions for the different server systems. This allows the administrator to automatically update a system when new security releases come out. It is not strictly necessary, since you can download the software off of the internet, but this is simple economics: in the long run you get what you pay for, and if no one pays for Red Hat's software then eventually the company will fold and the option will no longer be there. The cost is $20 a month, with discounts for having several machines for an extended period of time. Compare that to $750 for a copy of Windows 2000 Server with 5 client licenses, meaning no more than 5 people can connect at a time; if you want more people you pay more money. There are no restrictions placed on what you can do with Red Hat, and you will never be audited by the company.
Also, if I were responsible for maintaining the network I would look seriously into setting up a Lightweight Directory Access Protocol (LDAP) server. LDAP has a variety of uses, but for Miltec one of the most useful would be storing information like names and e-mail addresses, which can then be accessed from most e-mail clients. Outlook, Netscape, Mozilla, pine, ... all of them let you set an LDAP server to query, and when you go to your address book they can pull addresses from the server. Currently the authoritative source for e-mail and phone information is a Word document; as such it is less accessible than it could be and it is easy for synchronization errors to occur. I have never configured an LDAP server before and there may be issues, but as Miltec grows, having easy access to contact information is going to be important, and it is going to outgrow the constraints of a Word document.
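Since I have not set one up yet, the following is only a sketch of how a program (or a mail client internally) queries such a server, using Java's standard directory interface (JNDI); the server name and the directory layout are invented for the example.

    import java.util.Hashtable;

    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class AddressLookup {
        public static void main(String[] args) throws Exception {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389"); // hypothetical server

            DirContext ctx = new InitialDirContext(env);
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

            // Find everyone whose common name contains "Smith" and print their e-mail addresses
            NamingEnumeration results =
                ctx.search("ou=people,dc=example,dc=com", "(cn=*Smith*)", controls);
            while (results.hasMore()) {
                SearchResult result = (SearchResult) results.next();
                Attributes attrs = result.getAttributes();
                System.out.println(attrs.get("cn") + "  " + attrs.get("mail"));
            }
            ctx.close();
        }
    }

The point is that the same directory answers queries from any client, so the contact list has one authoritative home instead of a Word document that has to be kept synchronized by hand.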
Another aspect of infrastructure that I would examine is calendaring. This is not an especially serious issue; it is something that I personally am considering because I have been doing research for a project that I am going to be working on once I return to school. The web-based calendar that is being used now is a perfectly acceptable solution, but only so long as there is no desire either to integrate with other calendaring applications or to collaborate with anyone outside of the company using another calendaring system. To the best of my knowledge those are not serious needs at this time, and so there is no real motivation to switch from a solution that is working.
That having been said, I will mention briefly the project that I am looking into doing in the fall, mostly because I have been reading up on it and am excited about the possibilities. Calendaring is a relatively simple task if you are only keeping track of the events for a single person. As you add in more people, collaboration between those people, and recurring events, things get more sophisticated. When you add in that people may be collaborating from anywhere in the world in a global market, things get more sophisticated still. Ideally you want the process to be fairly automatic, such that if I am a busy businessman who wants to meet with the head of another company, my calendar program can talk to her calendar program and then give us options for when we can meet. For this to work, however, our calendars have to be able to speak the same language. The protocol designed with this in mind, and as an attempt to be extensible and flexible enough to serve calendaring needs for a global marketplace, is iCalendar. The problem is that the protocol is very complex in order to deal with the variety of situations it will face; the Request For Comments (RFC) document describing it is over 150 pages. I hope to take over maintenance of a project implementing iCalendar in the fall. Whenever I have something that I am happy with I will get back with Miltec and ask if they would like to use it.
As I mentioned at the beginning, I have very much enjoyed the last 15 months I have been with Miltec. It has been an invaluable learning experience and it has helped me to mature immensely both personally and professionally. It is difficult even to compare the competence I had when I began this job to where I am now. Before coming here, the longest project that I had the opportunity to work on was a semester project, which is three months. I have been working on the visualization since I arrived and it is still not complete; learning to deal with that level of complexity and the issues of a developing project has been a challenge.
My co-op experience has been different from that of many of my friends in the complexity of the work that I was allowed to do and the level of autonomy I was given. I think that of all the benefits I enjoyed, the freedom was the best. Being allowed to exercise my creativity was not only beneficial to me; it was also beneficial to the company, I believe. The technologies that I introduced, particularly with the document generation, represent a development path that would not have been taken were I not given the freedom to innovate. It is a solid path, though, and one that I think will stand up well to the test of time. I honestly don't know what path the visualization will take, but the input that I have had into the process has helped things develop well, I think.
All in all there is very little I would change. Now that I am facing returning to school and looking at my savings, I wish that I had received more regular raises. Co-oping for four semesters has been a good time period for me: the first semester gave me time to learn the ropes, the second time to begin getting involved with developing some projects, the third to do some considerable work on those projects, and the fourth to work toward wrapping them up. The current compensation system only allows for raises after another semester of school, so I only got one raise my entire time here. Though I did not think of it at the time, I think that I deserved another somewhere along the line; perhaps something performance-based would have been appropriate.
Also, I think that some of my projects would have developed differently had I had the opportunity to communicate with more people. As I mentioned with the visualization, there were several groups around working on developing similar projects, and one of them in particular was composed of people with more training in 3D graphics and computer programming. This is a difficult subject to approach because I do not want to give the impression that I have anything but the utmost respect for Ken. As I mentioned earlier, though, his training is different from mine. Attempting to discuss texture buffers, transformation matrices, object-oriented design, or scene graphs with him was difficult in much the way his discussions of astrophysics were difficult for me to grasp. It is not that either one of us lacked intelligence; we both just possessed information in different domains that requires experience to understand well.
Collaboration, I think, will be important to that project in general. I believe it is possible to have the bulk of the program be shared between different projects and still meet the needs of different groups within the company. Creating an architecture that is easy to extend should not be especially difficult either. The people involved need to be communicating, though.