All IM 2003 Tutorials are a half-day in length.


Tutorial Program for Monday morning, 24 March 2003


Program at-a-glance

Monday Morning
T1: Service Level Management and Quality Aspects in Service Management
T2: Management of Next-Generation Wireless Networks and Services
T3: Internet Traffic Monitoring and Analysis: Methods and Applications
T4: Web Service Management

Monday Afternoon
T5: QoS Management in IP Networks
T6: Internet Management Protocols: State-of-the-Art and Recent Developments
T7: Security Management: State-of-the-Art, Challenges and Myths
T8: Management of Optical Networks

Friday Morning
T9: Web-Services Primer: Architecture, Protocols and Standards
T10: Fast Track Introduction to Control Theory for Computer Scientists
T11: eTOM: The Business Process Framework for Information and Communications Service Providers
T12: Management of Pervasive Computing

Friday Afternoon
T13: NGOSS: What is it good for?
T14: Over-the-Air Device Management
T15: OSS/J: Building Real World Systems with J2EE
T16: Broadband Networks and Service Management

TUTORIAL 1: Service Level Management and Quality Aspects in Service Management
Presenter: Jeffrey S. Wheeler, Data Technical Services, Inc., USA

This tutorial is a guide for both Service Providers and companies looking to create or procure SLA-driven services and applications for their business and/or network environments. The technical detail provided will satisfy most network engineers, but the presentation material and form will entertain and enlighten business-focused attendees as well.

This tutorial will introduce a common Terms-of-Reference (TOR) platform and discuss its relevance in most business and technical environments, keeping the customer at the focal point and core at all times. The tutorial will contain guidelines for both Service Providers and Service Consumers to follow when creating or consuming SLA-driven services, whether these are driven by business policy or purely by the network. Quality of Service detail specific to Service Level Management will drop down into technical mechanisms suitable for an engineering audience. This low-level technical detail will then be mapped upwards to business policy and linked to the semantic and syntactic criteria of the actual Service Level Agreement, providing a common thread of logic throughout the tutorial material. Topics covered include (i) quality of service mechanisms in network services, (ii) Quality of Service components in the Service Level Architecture, (iii) SLA-driven network services, (iv) service level agreements and their basic components, (v) which service levels and components should or can be negotiated with the provider and which are fixed, and (vi) how SLAs are measured.
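
To make the notion of SLA components concrete, here is a minimal Python sketch modeling an agreement as a set of service level objectives, each with a metric, a committed target, and a flag marking whether it is negotiable or fixed by the provider. All names and fields are illustrative assumptions, not definitions from the tutorial.

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative SLA components; names are assumptions for the example.
    @dataclass
    class ServiceLevelObjective:
        metric: str              # e.g. "availability" or "one-way delay"
        target: float            # committed value, e.g. 99.9
        unit: str                # e.g. "%" or "ms"
        negotiable: bool = True  # some levels are fixed by the provider

    @dataclass
    class ServiceLevelAgreement:
        provider: str
        consumer: str
        objectives: List[ServiceLevelObjective] = field(default_factory=list)

    sla = ServiceLevelAgreement(
        provider="ExampleNet",
        consumer="ExampleCorp",
        objectives=[
            ServiceLevelObjective("availability", 99.9, "%"),
            ServiceLevelObjective("one-way delay", 50.0, "ms", negotiable=False),
        ],
    )
    for slo in sla.objectives:
        print(slo.metric, slo.target, slo.unit,
              "negotiable" if slo.negotiable else "fixed")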

Jeffrey S. Wheeler is currently the CTO of Data Technical Services, Inc. (DTS), a technical consulting firm located in the greater Seattle, Washington area. Prior to DTS, Wheeler was a recognized leader in networking technologies, having led Ahaza Systems, Inc.'s research and development efforts in the creation of IPv6 hardware and software network-based services and equipment. Wheeler was also active on the speaking circuit and maintained an active role in technology development through various roles in standards bodies such as the IETF, IEEE, and Internet2. Prior to his role at Ahaza Systems, Inc. he served as the CTO for PFN, a product research and development company based in the Boston area. Prior to joining PFN, Wheeler was the Network Architect for Microsoft's Internet Technology Group (ITG). He directed and oversaw the creation and adoption of many innovative and key technologies into and on behalf of Microsoft and its global networking system. His role included maintaining active involvement in several industry standards bodies. He is a leading advocate for the Directory Enabled Networks (DEN) initiative through his participation in the DMTF, and he has authored and participated in the development of many key new technologies within the Internet Engineering Task Force (IETF) and the Internet2 group. Wheeler has developed and currently teaches a workshop at Networld+Interop titled "Policy, QoS, and DEN." He has spoken at international symposiums and conferences on topics such as MPLS, SNMP for configuration, OSPF and routing technologies, and policy and QoS. He has authored many industry white papers, contributed to numerous RFCs and Internet-Drafts, and has served as an editor and reviewer for Morgan Kaufmann publishers.


TUTORIAL 2: Management of Next-Generation Wireless Networks and Services
Presenter: Mehmet Ulema, Manhattan College, USA

Next-generation wireless networks and services (3G and others) will be drastically more complex than today's so-called second-generation (2G, 2.5G) wireless systems in an impressive worldwide market. In this changing environment, operators will introduce new services and more powerful and efficient ways of doing business by integrating new technologies with existing ones. New wireless architectures will include high-capacity picocells, urban microcells, wide-area macrocells, and increasingly popular Wireless Local Area Networks, as well as satellite networks. IP and the Internet will play a key role in these wireless networks. Not only will the network elements and communication devices evolve, but so will the management systems and ways of managing them.

This tutorial will start with an overview of present and future wireless communications networks and services, including key components, interfaces, and procedures. It will then review the network management practices and technologies used for today's wireless networks, and discuss the technical and operational challenges facing the industry and technical community. The tutorial will then introduce current standardization and industry activities related to network management for wireless networks and services, and cover industry trends and the use of advanced approaches in managing wireless systems. Finally, the potential impact on service providers, equipment manufacturers, and network management system developers will be discussed.

Mehmet Ulema has more than 20 years of experience in the telecommunications field as a professor, director, project manager, systems engineer, network architect, researcher, and software developer. Currently, he is a professor at Manhattan College in New York and is involved in various research and consulting projects on wireless communications, including wireless intelligent networks, network management for wireless networks, wireless Internet access, and wireless local loop. He has held management and technical positions at Daewoo Telecom, Bellcore, AT&T Bell Laboratories, and Hazeltine Corporation. He was also an adjunct professor at the City College of New York and the Stevens Institute of Technology. He is a member of the editorial boards of the Wireless Networks Journal, the International Communications Journal, and IEEE Communications Magazine. He is the co-founder of the IEEE Communications Society's Information Infrastructure Technical Committee and currently serves as the chairman of the IEEE Communications Society's Radio Communications Technical Committee. He has been involved in many IEEE conferences as session organizer, session chair, and member of technical program and organizing committees. Recently, he was the Technical Program Chair of the IEEE Network Operations and Management Symposium (NOMS) 2002. He received his MS and PhD in Computer Science from Polytechnic University, New York, and his BS and MS degrees from the Technical University of Istanbul, Turkey.


TUTORIAL 3: Internet Traffic Monitoring and Analysis: Methods and Applications
Presenter: James W. Hong, POSTECH, Korea

Multi-gigabit networks are becoming common today in Internet service provider (ISP) and enterprise networks. The bandwidth of ISP backbone networks is evolving from OC-48 (2.5 Gbps) to OC-192 (10 Gbps) to support rapidly increasing Internet traffic, and enterprise networks are likewise evolving from 100 Mbps or 1 Gbps to multi-gigabit networks. Further, the types of traffic on these networks are changing from simple text- and image-based traffic to more sophisticated and higher-volume traffic (such as streaming rich media, voice, and peer-to-peer). Monitoring and analyzing such high-speed, high-volume, and complex network traffic is needed, but lies beyond the capabilities of most traditional monitoring systems. Various application areas require the information generated by such traffic monitoring and analysis. For example, such information can be used for 1) usage-based billing, 2) denial-of-service (DoS) attack analysis, 3) user network usage analysis, 4) network capacity planning, 5) customer relationship management, and so on. Many of these applications are critical to the business, operations, and management of ISPs and enterprises.

This tutorial will present the techniques involved in capturing and examining packets, generating and storing flows, and analyzing them for various purposes and applications. Active and passive packet monitoring techniques and tools are compared and discussed. Monitoring and analysis tools such as Cisco NetFlow, cflowd, CoralReef, argus, and NG-Mon are examined. Application areas of such monitoring and analysis tools will also be explored.
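
As a rough illustration of the flow-generation step, the Python sketch below aggregates packets sharing the classic five-tuple into flow records with packet and byte counters, in the spirit of the NetFlow-style tools surveyed; the packet field names are assumptions for the example, not the format of any particular tool.

    from collections import defaultdict

    # Toy flow generation: packets with the same five-tuple become one
    # flow record carrying packet and byte counters.
    def build_flows(packets):
        flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
        for pkt in packets:
            key = (pkt["src_ip"], pkt["dst_ip"],
                   pkt["src_port"], pkt["dst_port"], pkt["proto"])
            flows[key]["packets"] += 1
            flows[key]["bytes"] += pkt["length"]
        return flows

    packets = [
        {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
         "src_port": 1234, "dst_port": 80, "proto": "tcp", "length": 1500},
        {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
         "src_port": 1234, "dst_port": 80, "proto": "tcp", "length": 40},
    ]
    for key, record in build_flows(packets).items():
        print(key, record)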

James W. Hong is an associate professor in the Department of Computer Science and Engineering, POSTECH, Pohang, Korea. He received a PhD degree from the University of Waterloo, Canada in 1991, and MS and BS degrees from the University of Western Ontario, Canada in 1985 and 1983, respectively. He has worked on various research projects on network and systems management, with a special interest in Web, Java, CORBA, and XML technologies. Hong's research interests include network and systems management, distributed computing, and network monitoring and planning. He has published more than 100 international journal and conference papers. He served as Technical Chair of IEEE CNOM from 1998 to 2000, and was technical co-chair of NOMS 2000 and APNOMS'99. He is an editorial advisory board member of the International Journal of Network Management (IJNM), editor-in-chief of the KNOM Review Journal, and a member of IEEE, KICS, KNOM, and KISS.


TUTORIAL 4: Web Service Management
Presenter: Akhil Sahai, Hewlett Packard Laboratories, USA

Web services are becoming a well-accepted way of putting e-businesses on the web and of enabling users (either humans or other web services) to use them. A web service is a business or computational function delivered over the extranet, intranet, or Internet using standard web technologies such as HTTP (Hypertext Transfer Protocol), XML (Extensible Markup Language), WSDL (Web Services Description Language), and SOAP (Simple Object Access Protocol). According to many market research firms, it is likely that before the year 2005 many companies' offerings will be available as web services, and large corporations will deploy tens or hundreds of web services. These web services have to be interfaced with internal business processes, and a complex infrastructure is usually a reality in an e-business environment.

Traditionally, the problem of enterprise management has been limited to network and system management; the IT staff was concerned only with the smooth operation of networks and systems. More recently, as the role of IT transforms from "infrastructure support" to "service provisioning," more and more emphasis is placed on overall application and service management rather than on managing the bits and pieces of networks and systems. To manage the e-business infrastructure, it is necessary to have frameworks and tools that allow business managers, analysts, and users to define, measure, and analyze IT and business metrics. They also need management systems that can manage not only the business processes and web services that constitute their own enterprise e-business infrastructure, but also the relationships with other enterprises. This tutorial will describe how to build a good management solution in this context. A good web service management solution essentially requires (a) an adaptable network and system infrastructure that can provide for changing workloads; (b) defined service level agreements and metrics; (c) proper instrumentation (invasive and non-invasive) of the web services and business process infrastructure to generate the required measurements and events based on the defined metrics; and (d) an appropriate management system that can model and analyze the data for the evaluation of IT and business metrics.
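
As a small illustration of point (c), the Python sketch below instruments a web-service handler non-invasively with a decorator that measures response time and emits an event whenever a defined metric is exceeded; the operation, the threshold, and the event format are all hypothetical.

    import time
    from functools import wraps

    RESPONSE_TIME_SLO_MS = 500.0  # assumed SLA metric for the example

    def instrumented(handler):
        """Wrap a handler to measure response time against the SLO."""
        @wraps(handler)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return handler(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000.0
                if elapsed_ms > RESPONSE_TIME_SLO_MS:
                    # In a real system this event would feed the management
                    # system that evaluates IT and business metrics.
                    print(f"SLA event: {handler.__name__} "
                          f"took {elapsed_ms:.1f} ms")
        return wrapper

    @instrumented
    def get_order_status(order_id):  # hypothetical web-service operation
        time.sleep(0.6)              # simulate a slow back-end call
        return {"order": order_id, "status": "shipped"}

    get_order_status(42)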

Akhil Sahai is a senior scientist at Hewlett-Packard Laboratories, Palo Alto, where he is currently researching web service management solutions and architectures. Sahai was one of the initial members of the e-speak team that shaped Hewlett-Packard's pioneering web service technology; during this time, he designed the events, web e-speak components, and the management infrastructure for e-speak. He earlier led research in the Multi-Media Systems Group at Kent Ridge Digital Labs (formerly ISS), Singapore, and has about seven years of industrial experience. Sahai received his doctorate in computer science from INRIA-IRISA, France, and his Master's degree in computer science from the Indian Institute of Science, Bangalore. He has published widely in distributed systems, network/system/service management, and mobile computing. He is an associate member of the IEEE.


Tutorial Program for Monday afternoon, 24 March 2003


TUTORIAL 5: QoS Management in IP Networks
Presenter: Marcus Brunner, NEC Europe, Germany

QoS technologies are entering the Internet domain; among them are IntServ (RSVP), DiffServ, and MPLS. This tutorial introduces the basics of IntServ, DiffServ, and MPLS as the underlying QoS technologies, and discusses their scalability, manageability, and integration. We believe that the key to getting QoS into IP-based networks lies in the control and management of QoS-enabled networks and transport services. The tutorial therefore concentrates on management issues, particularly for the combination of DiffServ and MPLS. Today we face several problems in this area, e.g., how do we build edge-to-edge or end-to-end guaranteed services?
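
For readers new to the underlying mechanisms, the sketch below shows a minimal token-bucket meter in Python, roughly the kind of per-class metering a DiffServ edge router applies when deciding whether traffic is in or out of profile; the rate, burst size, and actions are invented for the illustration.

    import time

    class TokenBucket:
        """Minimal token-bucket meter (illustrative, single-threaded)."""
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0   # token fill rate in bytes/second
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def conforms(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True   # in profile: e.g. forward with its DiffServ mark
            return False      # out of profile: e.g. remark or drop

    meter = TokenBucket(rate_bps=1_000_000, burst_bytes=15_000)
    print(meter.conforms(1500))  # True while the burst allowance lasts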

This tutorial surveys management technologies developed and being standardized in the IETF to provide QoS solutions, including SNMP, SNMPconf, COPS, and the Policy Framework. Additionally, we will present higher-layer management architectures, including Bandwidth Brokers/QoS Servers, Policy Servers, QoS routing, and measurement. Finally, some issues and concepts for inter-domain QoS management will be addressed.

Marcus Brunner is a senior research staff member at the Network Laboratories of NEC Europe Ltd. in Heidelberg, Germany. He received his PhD from the Swiss Federal Institute of Technology (ETH Zurich), while working in the Computer Engineering and Networks Laboratory (TIK) of the Electrical Engineering Department, and his MS in computer science from ETH Zurich in 1994. Aside from his involvement in various national and international projects, his primary research interests include network architectures, programmability in networks, network and service management, and Quality of Service management in general. He actively participates in several IETF and DMTF working groups.


TUTORIAL 6: Internet Management Protocols: State-of-the-Art and Recent Developments
Presenter: Aiko Pras, University of Twente, The Netherlands

This tutorial discusses the Internet management standards being defined by the IETF. It starts with the history, goals, and principles of operation of Internet management. After that we will discuss in detail the many developments that have taken place with respect to the Structure of Management Information (SMI), Management Information Bases (MIBs), and the Simple Network Management Protocol (SNMP). In particular we will present the main concepts behind the SMI, the differences between SMIv1 and SMIv2, and recent developments such as SMIng/SMIv3. We will then briefly identify the various MIBs that have been defined within the IETF and discuss the new MIBs that have been derived from the original MIB-II; these MIBs are the most important ones for monitoring IP networks. After discussing these MIBs we will investigate the transfer of management information and the development from SNMPv1 via SNMPv2 to SNMPv3. Although SNMPv3 has just become a full standard, the Evolution of SNMP (EoS) WG is still proposing a number of additional improvements; in this tutorial we will briefly identify what these proposals are. Other topics to be addressed are extensible agent technology (AgentX) and distributed management (DisMan). The tutorial concludes with a discussion of recent developments within IETF and IRTF management groups, references to additional material, and some provocative statements concerning the future of Internet management standards.
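
As a taste of the protocol side, the sketch below issues an SNMPv2c GET for sysDescr.0 (from the SNMPv2-MIB derived from MIB-II), assuming the open-source pysnmp library; the agent address and community string are placeholders.

    from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData,
                              UdpTransportTarget, ContextData,
                              ObjectType, ObjectIdentity)

    # SNMPv2c GET of sysDescr.0; 192.0.2.1 and 'public' are placeholders.
    errorIndication, errorStatus, errorIndex, varBinds = next(
        getCmd(SnmpEngine(),
               CommunityData('public', mpModel=1),  # mpModel=1 -> SNMPv2c
               UdpTransportTarget(('192.0.2.1', 161)),
               ContextData(),
               ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))
    if errorIndication:
        print(errorIndication)
    else:
        for name, value in varBinds:
            print(f'{name} = {value}')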

Aiko Pras is a senior researcher at the Centre for Telematics and Information Technology (CTIT), a research institute of the University of Twente and one of the knowledge institutes within the Dutch Telematics Institute. He is a member of the Telematics Systems and Services (TSS) architecture group, the manager of the SimpleWeb, and co-editor of The Simple Times.


TUTORIAL 7: Security Management: State-of-the-Art, Challenges and Myths
Presenter: Mark Burgess, Oslo University College, Norway

"90% of security problems are solved by cryptography."

This statement, which I heard at a meeting recently, reflects a commonly held belief in today's ultra-hyped market of security products -- that security can be purchased as a software package. But the truth of the matter is that all of our security software is useless unless it is managed correctly, and maybe not even then. This tutorial will provide a half-day introduction to the central issues in computer security. You will learn why the statement above couldn't be more wrong. What does security mean, and how do we achieve it? Can it really be purchased as an accessory?

The central mantra of this tutorial is that "security is a property of systems" and that it is essential for security managers to involve everyone in their organization in the deployment of security. The recent standard ISO 17799, "Information Technology: Code of Practice for Information Security Management," contains a lot of good advice about implementing security. This tutorial uses that document as a springboard and provides the necessary background to understand its concepts. In addition to some basic theory, the course clarifies the role of encryption in security and provides some simple recipes for tightening security in an organization. Security is first and foremost about trust management for the protection of assets. Whether your organization uses Windows or Unix, whether it has a firewall or not, you will learn how to analyze your networks and systems to find their weak spots, using general principles. How should you authenticate employees and users? What should your priority list be for implementing security? Should security be outsourced? Do you need a firewall? What are IPSec and DNSSec, and what do they mean for information technology in the future? What is social engineering? Is SSL really secure?

Mark Burgess is Associate Professor at Oslo University College in Norway. He has a PhD in theoretical physics from the University of Newcastle upon Tyne and has been doing research in system administration and security for about ten years, particularly applying statistical methods to such diverse problems as resource management, system integrity, and anomaly detection. Burgess was program chair for the USENIX LISA 2001 conference, which profiled more theoretical approaches to system management. He is the author of cfengine and several books, including Principles of Network and System Administration (Wiley, 2000). He is a frequently invited speaker at international conferences.


TUTORIAL 8: Management of Optical Networks
Presenters: Ron Skoog and Brian Wilson, Telcordia Technologies, USA

Optical networking is emerging as a critical component of the telecommunications infrastructure, and the manageability of integrated data and optical networks will have a profound effect on the success of telecommunication service providers. Coincident with the rise of optical networking has been a push to develop new techniques for managing integrated telecommunications networks. A traditional, management-oriented approach is the development of the TMF MTNM interface. Other, more ambitious approaches, such as the ASTN and ASON frameworks along with the OIF UNI (and the associated GMPLS and LMP protocols), push for a control plane that moves many management functions into the network. This tutorial seeks to make sense of the many proposed management and control architectures by reviewing these proposals in light of the network management challenges of emerging optical networks. Key topic areas to be covered are: multi-layer network management; emerging network management architectures (e.g., MTNM, NGOSS, Web Services); management of the control plane; the control plane's relationship with the management plane; management of end-to-end connections through heterogeneous sub-networks; new service models (e.g., BoD, OVPN); and management challenges introduced by optical network transparency.

Ronald Skoog has been a Senior Scientist at Telcordia Applied Research for about four years, during which time he has worked in the areas of optical networking architectures, optical network management and control, SONET/SDH/WDM network design and network evolution, IP/WDM network architectures and evolution studies, emerging network technology studies (e.g., Gigabit Ethernet, next-generation SONET, RPR, optical 'on-ramps', etc.), and reliability studies for optical networks and optical network elements. Prior to joining Telcordia, he spent 29 years at Bell Laboratories/AT&T Bell Laboratories/AT&T Labs working in the areas of transport network design; signaling network design, protocols, and performance/reliability studies; and circuit-switched network systems engineering and performance/reliability studies. He was a supervisor/district manager at AT&T for 25 years. Skoog has a BS in electrical engineering from Oregon State University, and an MS and PhD in electrical engineering (control and systems theory) from the Massachusetts Institute of Technology. He is a member of the IEEE, the IEEE Communications Society, LEOS, and Sigma Xi.

Brian Wilson is a Senior Scientist in the Network Operations and Management Research area at Telcordia Technologies. He has more than 20 years of experience in the operations, management, and planning of telecommunications networks. His early experience was in the areas of switching system requirements and network planning tool requirements. For the past 10 years Wilson has focused on network management for ATM, SONET, and WDM networks. He has extensive experience in the design and implementation of network management systems for integrated multiple-technology networks and has led efforts to prototype network management capabilities that have been deployed in the ATDNet and MONET government test-bed networks. More recently, he has supported efforts to enhance the capabilities of Telcordia Technologies OSS products to support network discovery and optical networking technologies. Wilson has a BSE in Industrial and Operations Engineering from the University of Michigan and an MSE in Industrial Engineering and Operations Research from the University of California.

Tutorial Program for Friday morning, 28 March 2003


TUTORIAL 9: Web-Services Primer: Architecture, Protocols and Standards
Presenters: Alaa Youssef and Giovanni Pacifici, IBM T.J. Watson Research Center, USA

Built on top of HTTP and XML, Web Services have emerged as the leading vendor-neutral interoperability technology. Never before has such a large portion of the distributed computing and communications industry agreed on a single way to invoke services running on remote systems. Web Services are URL-addressable, self-describing software components that implement a set of functions or a service. Web services use open connectivity standards such as the Internet Protocol (IP), the Simple Object Access Protocol (SOAP), and the Web Services Description Language (WSDL). Businesses use Web Services to wrap existing software functionality and make it visible and accessible to other software applications. This tutorial presents the concept of Web services and explains how to incorporate Web services into a distributed application. We will present the architecture of the Web services run-time environment and describe the protocols and standards associated with Web services, such as SOAP, WSDL, and Universal Description, Discovery and Integration (UDDI). We will also review the most popular development tools and run-time middleware available today.

This tutorial will help participants to: (i) understand what a Web service is and how it can be used to integrate applications; (ii) learn a conceptual service-oriented framework and understand how the various technology components of the Web services approach fit together; (iii) learn about the SOAP protocol, its extensibility, intermediaries, and relation to HTTP; (iv) understand the architecture of the Axis (Apache Extensible Interaction System) infrastructure and how it implements the SOAP specification; (v) learn how to describe a Web service using WSDL and use WSDL to generate code; (vi) explore techniques for advertising and discovering Web services; (vii) understand Web services management concepts: Quality of Service, reliability, and security. We will use demonstrations and examples to illustrate the key aspects of the Web services technology.
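
To give a flavor of the SOAP mechanics in item (iii), the sketch below posts a hand-built SOAP 1.1 envelope over HTTP using only the Python standard library; the endpoint, namespace, and getQuote operation are invented for the example (a real service would describe them in its WSDL).

    import urllib.request

    # A minimal SOAP 1.1 request; everything beyond the envelope namespace
    # is hypothetical and would normally come from the service's WSDL.
    envelope = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <getQuote xmlns="urn:example:quotes">
          <symbol>IBM</symbol>
        </getQuote>
      </soap:Body>
    </soap:Envelope>"""

    req = urllib.request.Request(
        "http://example.com/quoteservice",  # hypothetical endpoint
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "urn:example:quotes#getQuote"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())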

Alaa Youssef joined IBM Research in 1998 and is a Research Staff Member in the Service Management Middleware group. Youssef is a member of the team that has recently developed the first prototype of a quality of service and performance management system for Web services, available for download as part of the IBM Web Services Toolkit at www.alphaworks.ibm.com. He received his BS and MS degrees in computer science from Alexandria University, Egypt, in 1991 and 1994, respectively. He received his PhD degree in computer science in 1998 from Old Dominion University, Virginia. His research interests include distributed systems, service management middleware, network and application level quality of service and content security. He is a member of the IEEE.



Giovanni Pacifici joined IBM Research in 1995 and is the Manager of its "Service Management Middleware" group. He led the team that recently developed the first prototype of a quality of service and performance management system for Web services. Before joining IBM Research, Pacifici was a Research Scientist at the Center for Telecommunications Research at Columbia University, where he led several research activities focused on the design and evaluation of real-time control and monitoring systems for high-speed networks with quality of service guarantees. Pacifici designed and implemented a monitoring and multimedia traffic generation system for MAGNET II, a high-speed multimedia network. He was Technical Program Co-Chair for IEEE Infocom 2001 and for the Fourth IFIP/IEEE International Conference on Management of Multimedia Networks and Services. Pacifici has been an editor for Elsevier Science's Computer Networks journal and the IEEE/ACM Transactions on Networking, and has served as Guest Editor for two special issues of the IEEE Journal on Selected Areas in Communications. He received the "Laurea" in Electrical Engineering and the "Research Doctorate" in Information Science and Telecommunications from the University of Rome "La Sapienza" in 1984 and 1989, respectively. As a student, his main research activities focused on the design and performance evaluation of access control protocols for local and metropolitan area networks. Pacifici is a senior member of the IEEE and a member of the ACM.


TUTORIAL 10: Fast Track Introduction to Control Theory for Computer Scientists
Presenter: Joseph L. Hellerstein, IBM Corporation, USA

Feedback control is central to network and systems management. It is employed to achieve service level objectives (e.g., target response times) by adjusting lower-level resource actions (e.g., scheduling priorities and bandwidth allocations). Feedback is used to optimize resource allocations for a workload mix, and it is sometimes employed in reporting, by selecting threshold values for measurement variables based on their effect on service level metrics. While computing systems in general and network management in particular make broad use of feedback control, this has traditionally been done in an ad hoc manner. In contrast, control theory provides a well-developed and systematic approach to the analysis and design of feedback systems. In particular, this theory provides a way to determine whether feedback loops are stable (e.g., avoid wild oscillations), accurate in their control (e.g., achieve the right resource allocation policies), and quick to settle to their steady-state values (e.g., adjust to workload dynamics). Unfortunately, existing books on control theory are not well suited to computer scientists because of their examples (e.g., electrical circuits, dashpots) and their emphasis on continuous-time instead of discrete-time systems.

This tutorial provides a fast-track introduction to control theory for computer scientists. It is divided into four parts. The tutorial begins with an introduction to key concepts, including control goals (e.g., regulation, optimization, disturbance rejection), the control architecture (with examples from the Apache web server and the Notes email server), and control objectives. The second part provides background on linear system theory; knowledge of high school algebra is sufficient. Key results and their justification are described in areas such as transfer functions, settling times, and steady-state gain. The third part describes control analysis: we show how to analyze a control system to assess its stability, accuracy, and settling time, with a running example using Lotus Notes in which we compare proportional and integral controllers. The fourth part provides case studies of applying control theory. Considered first is utility throttling in DB2, in which the execution rate of a utility (e.g., BACKUP, REORG) must be regulated so as to control its impact on production work; we describe the experiments employed to estimate the transfer functions, analyze the data obtained, and discuss the issues that arose (e.g., adaptation to changing workloads). The second study addresses optimization of the performance of the Apache web server using fuzzy control.
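
In the spirit of the running example, here is a minimal discrete-time simulation in Python of a proportional-integral (PI) control loop regulating a first-order system; the plant model and gains are invented for the sketch and are not the tutorial's Notes or DB2 models.

    # PI control of an assumed first-order plant y(k+1) = a*y(k) + b*u(k).
    # The controller drives the measured output y toward the reference r.
    def simulate(kp=0.5, ki=0.2, r=100.0, steps=30):
        a, b = 0.8, 2.0          # assumed plant parameters
        y, integral = 0.0, 0.0
        for k in range(steps):
            e = r - y            # control error
            integral += e        # integral action removes steady-state error
            u = kp * e + ki * integral   # PI control law
            y = a * y + b * u            # plant response
            print(f"k={k:2d}  u={u:8.2f}  y={y:8.2f}")

    simulate()

With these gains the loop is stable (the closed-loop poles lie inside the unit circle) and y settles at the reference; pushing kp or ki too high makes the same loop oscillate, which is exactly the kind of behavior the analysis in the third part predicts.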

Joseph L. Hellerstein is a research staff member and manager at the IBM Thomas J. Watson Research Center, where he manages the adaptive systems department. Hellerstein received his PhD from the University of California at Los Angeles. Since then his research has addressed various aspects of managing service levels, including predictive detection, automated diagnosis, expert systems, and the application of control theory to resource management. Hellerstein has published approximately 80 papers and an Addison-Wesley book on expert systems. He is co-author of Feedback Control of Computing Systems, which is under contract with Wiley.


TUTORIAL 11: eTOM: The Business Process Framework for Information and Communications Service Providers
Presenter: Enrico Ronco, Telecom Italia Lab, Italy

Amongst the many initiatives of the TeleManagement Forum (http://www.tmforum.org), one of the most interesting and valuable for Information and Communications Service Providers (ICSPs) is the definition and development of the eTOM Business Process Framework. The enhanced Telecom Operations Map™ (eTOM) is a business process model, or framework, that describes all the enterprise processes required by a Service Provider and analyzes them to different levels of detail according to their significance and priority for the business. For these companies it serves as the blueprint for process direction, and it provides a neutral reference point for internal process reengineering needs, partnerships, alliances, and general working agreements with other providers. For suppliers, the eTOM framework outlines potential boundaries of software components that align with customers' needs, and highlights the required functions, inputs, and outputs that must be supported by products.

This tutorial, designed in collaboration with the TeleManagement Forum, offers a detailed presentation of the eTOM Business Process Framework and its benefits for the enterprises that adopt it. It will analyze the rationale for an enterprise-wide process framework, provide an in-depth view of the framework's process structure and contents, and present some lessons learned on customizing the eTOM framework for an individual Service Provider.

Enrico Ronco graduated in Computer Science in 1991 and has since worked at Telecom Italia Lab (formerly known as CSELT). Since early 1993 he has been involved in "process management": he participated in process reengineering projects within Telecom Italia and collaborated on the definition of a process analysis and formalization methodology. During the 1998-2001 period he led three projects at Telecom Argentina aimed at reengineering operations and strategic processes and at defining an overall enterprise process model for that company. Since 2000 Ronco has also been involved in the activities of the TeleManagement Forum, serving as the eTOM (enhanced Telecom Operations Map) Team Leader. Currently he is leading a TILAB research project focused on innovation in the Operations and Management area.


TUTORIAL 12: Management of Pervasive Computing
Presenter: Nikos Anerousis, Voicemate, USA

The vision of pervasive and ubiquitous computing was set forth in the early '90s, calling for a world of people and environments augmented with computational resources that provide access to information and services anytime, anywhere. Since then, a part of this vision has been realized with the arrival of small smart devices (personal digital assistants, mobile phones, etc.) that have become an integral part of our everyday life. In addition to devices oriented towards humans, pervasive computing environments also include networks of devices (sensors) embedded in the physical environment that gather information, perform local computations, and communicate with other devices and centralized computing resources. Such systems are finding use in a variety of applications, including remote sensing, agriculture, military and aerospace, and disaster recovery.

This tutorial will present a comprehensive overview of pervasive computing and its evolution over the last 10 years. Particular attention is given to the three areas where most research and development work has focused: natural interfaces (displays, speech, input devices), context awareness, and automated capture and access of live experiences. In addition, we will explore a number of systems issues common in the development of pervasive applications: networking techniques for creating a pervasive communications infrastructure (especially in the presence of intermittent connectivity), middleware and operating system support, fault tolerance and disaster recovery, and security and privacy issues. The very nature of pervasive computing has also given rise to a new class of management and control problems related to size, scale, distribution, and security. The tutorial concludes with an overview of suitable management architectures for such environments. This tutorial is intended for practitioners and researchers who wish to enhance their understanding of pervasive computing environments, their applications, and their management infrastructure.

Nikos Anerousis is Chief Technology Officer at Voicemate, Inc., where he leads research and advanced product development in the areas of pervasive publishing and management, knowledge engineering, multimodal user interfaces, and mobile computing. Prior to Voicemate he was a senior member of the technical staff at AT&T Research, where he conducted extensive research on network and distributed systems management, packet telephony, routing and control architectures for the Internet, multimedia services, and more. In 1998 and 1999 he was also an adjunct assistant professor at Columbia University. He received a Diploma in Electrical Engineering from the National Technical University of Athens, Greece, and MSc and PhD degrees in Electrical Engineering from Columbia University, New York. His research interests include mobile and pervasive computing, control and management of next-generation networks and services, performance evaluation, knowledge engineering, and programmable networks. Anerousis is the author or co-author of numerous papers in refereed international conferences and journals. He serves on the editorial board of the Journal of Network and Systems Management and was technical program co-chair of IEEE/IFIP Integrated Management 2001.


Tutorial Program for Friday afternoon, 28 March 2003


TUTORIAL 13: NGOSS: What is it good for?
Presenter: John Strassner, Intelliden, USA

NGOSS is the TeleManagement Forum's business-oriented solution framework for defining a next-generation OSS. The NGOSS program consists of architectural, business, information modeling, and compliance efforts. In addition, it contains "Catalyst" programs whose purpose is to demonstrate multiple NGOSS principles using COTS hardware and software coupled with innovative technology. As such, NGOSS is delivering a framework for producing componentized solutions, and it is producing a repository of documentation, business and system models, and code to support these efforts. This tutorial will provide a "Hitchhiker's Guide" to the NGOSS program. It will start with the motivation for building a next-generation OSS and then provide an in-depth overview of the key areas of NGOSS: its architecture, its business process framework (the eTOM), its shared information and data modeling work, and how compliance is measured. Particular attention will be paid to the shared information and data model (a federated set of models including DEN-ng, IETF, and ITU work), since it is used to represent the business, system, and implementation viewpoints of an NGOSS system. Next, brief overviews of several innovative parts of NGOSS, such as its approach to unifying policy and process management, will be given. The tutorial will conclude with examples of how NGOSS is being used in the industry today, through overviews of several of its innovative Catalyst programs.

This tutorial will be of interest to system and software architects, developers, and project managers, as well as technical and business team leaders who want to gain an understanding of future OSS solutions.

John Strassner, the founder of Directory Enabled Networking (DEN) technology, currently serves as Chief Strategy Officer for Intelliden, providing the overall direction and strategy for the definition and development of the company's patent-pending software management suite. He is a former Cisco Fellow who was instrumental in setting the direction for directory- and policy-enabled products and technologies within Cisco and the industry, and he first developed DEN as a new paradigm for managing and provisioning networks and networked applications. Currently, he is rapporteur of the NGOSS metamodel working group (which extends the UML metamodel to incorporate NGOSS concepts), rapporteur of the NGOSS behavior and control working group (which defines how policy and process management can be used to manage the behavior of an NGOSS system), co-chair of the TMF Shared Information and Data modeling work group (where DEN-ng and other modeling activities live), and a member of the TMF NGOSS Steering Group (chartered to set the definition and direction of the NGOSS architecture). Strassner is currently leading the effort to define and implement the next version of the DEN specification, called DEN-ng, in both the TMF and in Intelliden products. DEN-ng was built to accommodate the needs of NGOSS architectures and features the use of patterns and roles. He is the author of the book Directory Enabled Networks and is currently writing a new book, Policy-Based Network Management. Strassner is a frequent speaker at many leading international industry conferences.


TUTORIAL 14: Over-the-Air Device Management
Presenter: Paul Oommen, Nokia Research, USA

As the functionality of mobile handsets grows at an increasing rate, configuring and maintaining services and features on these devices becomes a complex and time-consuming task. For instance, enabling WAP, GPRS, CDMA, and data connectivity requires configuration of multiple settings, and for data connectivity each access point must be configured separately. What is thus needed is a framework for managing mobile devices over-the-air (OTA). OTA Device Management will help the widespread adoption of mobile services, as it provides a mechanism for users to easily subscribe to new services. For operators it enables an efficient way to manage provisioned services, by dynamically adjusting to changes and ensuring a certain level of quality of service (QoS). Recently the SyncML Initiative developed an open technology called SyncML Device Management (DM) for OTA Device Management. SyncML DM provides an integrated and extensible framework for the OTA management needs of 3G mobile devices and beyond. The standard is optimized for OTA management; one of the foundations of the SyncML Initiative has been to take into account the resource and bandwidth limitations of small mobile devices.

This tutorial will introduce legacy (2G) methods for OTA service provisioning (OTASP) and parameter administration (OTAPA); in CDMA, legacy standards define the use of data bursts for OTASP and OTAPA. The evolution of IP-based methods for OTA management, especially the technical specifications from the CDMA Development Group (CDG), will be briefly covered. The tutorial will then cover the SyncML DM technology in detail. The standardization activities related to OTA management in the CDG, 3GPP, and 3GPP2 are addressed as well.
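
To illustrate the shape of the technology, below is a schematic SyncML DM package, embedded in a Python string, in which a management server asks a device to replace a node in its management tree; this is a simplified illustration, not a conformant message, and the node URI, identifiers, and values are invented.

    # Schematic (non-conformant) SyncML DM package: the server replaces
    # the APN setting at an invented management-tree node on the device.
    SYNCML_DM_REPLACE = """<SyncML>
      <SyncHdr>
        <VerDTD>1.1</VerDTD>
        <VerProto>DM/1.1</VerProto>
        <SessionID>1</SessionID>
        <Source><LocURI>https://dm.example.com</LocURI></Source>
        <Target><LocURI>IMEI:493005100592800</LocURI></Target>
      </SyncHdr>
      <SyncBody>
        <Replace>
          <CmdID>1</CmdID>
          <Item>
            <Target><LocURI>./Connectivity/GPRS/APN</LocURI></Target>
            <Data>internet.example.net</Data>
          </Item>
        </Replace>
        <Final/>
      </SyncBody>
    </SyncML>"""
    print(SYNCML_DM_REPLACE)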

Paul Oommen received his Bachelor's degree in electronics and communication engineering from the University of Kerala, India in 1992 and his Master's degree in electrical engineering, specializing in communication systems, from the Indian Institute of Technology, Kanpur, India in 1995. From 1995 to 1998 he worked on Networking and Network Management solutions for Cisco Systems and Intel. In 1998 he joined Nokia Research Center, where he is currently involved in the development and standardization of mobile management technology. His research interests are in mobile and network management protocols, mobile communications, and wireless data services.


TUTORIAL 15: OSS/J: Building Real World Systems with J2EE
Presenter: David Raymer, Motorola, USA

The construction of an end-to-end OSS from individual commercially available off-the-shelf components has, over the last ten years, proven to be a much more elusive goal than most realized. There are numerous reasons for the failure of component-based software to deliver on a plug-and-play-then-unplug-and-play-again solution. Two of the key reasons have been the lack of a platform-independent software component model (along with an associated component execution environment) and the lack of simple standards or specifications in terms of an OSS API. The development and continuing evolution of the Java™ 2 Platform, Enterprise Edition (J2EE) and the work of the OSS through Java™ Initiative have begun to address both of these issues and to enable the construction of component-based OSS solutions. The J2EE platform provides the component model and the component execution environment, for which the OSS through Java™ Initiative is defining precise standards-based APIs for the OSS marketplace.

This tutorial will provide in-depth coverage of the technical challenges in building very large-scale distributed OSS solutions using J2EE and the OSS through Java™ APIs. Individuals attending this tutorial will gain an understanding of the J2EE architecture, the OSS through Java™ APIs (in terms of the OSS/J design guidelines), and techniques for extensibility, as well as information on how to deploy and use the freely available OSS through Java™ reference implementations, test compatibility kits, and API specifications.

Dave Raymer is a Principal Software Engineer with Motorola's Global Telecommunications Solution Sector, working as part of the Operations Support Systems Division Systems Engineering Group with responsibility for standards and systems engineering for the iDEN product line. Raymer has 15 years of experience in the software industry building large-scale distributed systems with object-oriented technologies. He has participated as a member of the focus team for the OSS/J QoS API, the expert group for the OSS/J Service Activation API, the expert group for the OSS/J IP Billing API, and the expert group and focus team for the OSS/J Inventory Management API. Raymer is the principal technical representative from Motorola to the OSS/J Initiative and participates actively on the OSS/J Architecture Board. He is also the chair of the TeleManagement Forum NGOSS Red Team and is active in 3GPP SA5 SWG-C.


TUTORIAL 16: Broadband Networks and Service Management
Presenter: Joseph Ghetie, Consultant, TCOM & NET

This tutorial provides a comprehensive analysis of broadband networking technologies with a focus on network and service management. The broadband coverage includes access, metropolitan, and high-speed core backbone networks, including the latest developments in the next generation of the Internet. Current and emerging networking technologies that compose broadband networks are evaluated in terms of architectures, vendors, products, and implementations: Wavelength Division Multiplexing, Synchronous Optical Network, Packet over SONET, Resilient Packet Ring, Gigabit Ethernet, 10 Gigabit Ethernet, Passive Optical Networks, Digital Subscriber Line, Hybrid Fiber Coax, and Wireless Access. The tutorial also includes an analysis of the management platforms/systems and associated management applications used in commercial products that provide broadband network and service management.

The tutorial is addressed to management information systems, data communications, and telecommunications staff, software developers, network and service providers, analysts, consultants, and network operators and managers seeking an understanding of the management applications and management products associated with broadband networks and services.

Joseph Ghetie is currently a network and systems engineering consultant for TCOM & NET and a former systems engineer, training manager, and instructor for Telcordia Technologies (Bell Communications Research), which he joined in 1988. At Telcordia he was responsible for developing architectures, requirements, and solutions for network management integration, providing consulting, and supporting management standards development. He has also developed and taught numerous advanced technical courses in the areas of Internet, telecommunications, and data communications network management.
Ghetie is the author of the book Network and Systems Management Platforms Analysis (Kluwer Academic Publishers). Since 1993, he has taught over 25 tutorials at major international network management conferences and symposia, including INMS, NOMS, SICOM, EMS, APNOMS, LANOMS, SBRC, and ITC.
Ghetie is originally from Romania and has an MSEE in Electronics and Telecommunications from the Polytechnic Institute of Bucharest. He started in 1967 as an electronics and network engineer with the Institute for Railroad Research and Design, where he was involved in the design and implementation of data communications networks and complex process-oriented computer systems for centralized traffic control of mass transit. From 1985 he worked as a technical specialist at J.C. Penney Company, Inc., as part of the corporate headquarters Corporate Communications Systems Development group, where he was responsible, as project manager, for developing strategic plans and for the evaluation, selection, and implementation of new networking technologies.