Wednesday, September 26, 2007

Security

Security is always important.

Sunday, September 23, 2007

Infrastructure IX - Next Generation Network (NGN)

The Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T) has made recommendations on the Next Generation Network (NGN), which is conceived as a concrete implementation of the Global Information Infrastructure (GII). Over the years, different study groups have published the results of their research and development in this regard.

ITU-T (2004) notes that ‘the target of NGN is to ensure that all elements required for interoperability and network capabilities support applications globally across the NGN while maintaining the concept of the separation between transport, services and applications.’

I quote the definition of NGN from the same recommendation (ITU-T 2004, p.2):

‘A packet-based network able to provide telecommunication services and able to make use of multiple broadband, QoS-enabled transport technologies and in which service-related functions are independent from underlying transport related technologies. It enables unfettered access for users to networks and to competing service providers and/or services of their choice. It supports generalized mobility which will allow consistent and ubiquitous provision of services to users.’

Nowadays, there are many different communication networks, such as the PSTN (Public Switched Telephone Network), ISDN (Integrated Services Digital Network) and GSM (Global System for Mobile communications), and they are internetworked by means of gateways. In practice, the communication devices connected to the NGN will include analogue telephone sets, fax machines, ISDN sets, cellular mobile phones, GPRS (General Packet Radio Service) terminal devices, SIP (Session Initiation Protocol) terminals, IP phones through PCs (Personal Computers), digital set-top boxes, cable modems, etc.
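To make the SIP part concrete, here is a minimal sketch of the INVITE request a SIP terminal sends to start a call. All names and addresses below are made-up examples, not taken from any particular product or network.

```python
# A minimal sketch of the SIP INVITE request a SIP terminal sends to
# start a call. All names and addresses are made-up examples.

def build_invite(caller: str, callee: str, call_id: str) -> str:
    """Assemble a bare-bones SIP INVITE request (no SDP body)."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.com:5060;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        f"From: <sip:{caller}>;tag=1928301774",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        f"Contact: <sip:{caller}>",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

print(build_invite("alice@client.example.com",
                   "bob@gateway.example.net",
                   "a84b4c76e66710"))
```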

The NGN is characterised by the following fundamental aspects (ITU-T 2004, p.3):

  • Packet-based transfer

  • Separation of control functions among bearer capabilities, call/session, and application/service

  • Decoupling of service provision from transport, and provision of open interfaces

  • Support for a wide range of services, applications and mechanisms based on service building blocks (including real time/streaming/non-real time services and multi-media)

  • Broadband capabilities with end-to-end QoS and transparency

  • Interworking with legacy networks via open interfaces

  • Generalised mobility

  • Unfettered access by users to different service providers

  • A variety of identification schemes which can be resolved to IP addresses for the purposes of routing in IP networks (see the sketch after this list)

  • Unified service characteristics for the same service as perceived by the user

  • Converged services between Fixed and Mobile networks

  • Independence of service-related functions from underlying transport technologies

  • Support of multiple last mile technologies

  • Compliance with all regulatory requirements, for example concerning emergency communications and security/privacy, etc.
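As an illustration of the identification-scheme aspect above, here is a small sketch of ENUM-style mapping (RFC 3761), in which an E.164 telephone number is converted into a DNS domain name whose NAPTR records can point at a SIP URI and, ultimately, an IP address. The phone number is a made-up example.

```python
# A sketch of ENUM number mapping (RFC 3761): an E.164 telephone number
# becomes a DNS domain name under e164.arpa, which DNS can then resolve
# towards a SIP URI and an IP address. The number is a made-up example.

def e164_to_enum_domain(number: str) -> str:
    """Map an E.164 number such as '+852 1234 5678' to its ENUM domain."""
    digits = number.lstrip("+").replace(" ", "")
    reversed_digits = ".".join(reversed(digits))
    return f"{reversed_digits}.e164.arpa"

print(e164_to_enum_domain("+852 1234 5678"))
# -> 8.7.6.5.4.3.2.1.2.5.8.e164.arpa
```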
These aspects illustrate the characteristics of NGN. Needless to say, the scope of ITU-T's work is very wide and in-depth, but I myself find the most conclusive aspect of NGN to be generalised mobility, which will allow a consistent provision of services to a user. In other words, the user will be regarded as a unique entity when utilising different access technologies, regardless of their types (ITU-T 2004, p.3).

Here I focus the discussion on generalised mobility. In the future, mobility will be offered in a broader sense, where users may have the ability to use more access technologies, allowing movement between public wired access points and public wireless access points of various technologies. This means that such movement will not necessarily force an interruption of an application in use or a customer service. However, it requires significant evolution of current network architectures; enabling more transparent fixed-wireless broadband communications and mobility across various access technologies appears to be a major issue (ITU-T 2004, p.7).

The ICT industry is already working towards this objective. Gohring (2007) reports that RIM plans to issue a new model of BlackBerry with both cellular and Wi-Fi wireless capabilities, and that Motorola and Nokia were both selling phones with Wi-Fi and cellular aimed at business users last year. This indicates that the developers and manufacturers of mobile devices need to make their products compatible with multiple operators and multiple access technologies.

References

Gohring N 2007, ‘RIM plans Wi-Fi/cell phone BlackBerry’, Computerworld Hong Kong Daily, posted 28 May 2007, viewed 15 September 2007, <http://www.cw.com.hk/computerworldhk/article/articleDetail.jsp?id=429669>.

ITU 2005, ITU-T’s Definition of NGN, home page, updated 19 December 2005, viewed 13 September 2007, <http://www.itu.int/ITU-T/ngn/definition.html>.

ITU-T 2004, ITU-T Recommendation Y.2001 (12/2004) - General overview of NGN, Series Y: Global Information Infrastructure, Internet Protocol Aspects and Next-Generation Networks, Next Generation Networks – Frameworks and functional architecture models, Geneva, Switzerland.

ITU-T - see Telecommunication Standardization Sector of ITU

Monday, September 10, 2007

Infrastructure VIII - IEEE 802.11n

The IEEE (Institute of Electrical and Electronics Engineers, Inc.) and the ITU (International Telecommunication Union) both have task groups researching and developing new standards for networking protocols and mobile technologies. These indicate the directions for building up our infrastructure. Meanwhile, the ICT industries have been making great efforts to launch new products and services for the wireless era; I myself receive a few promotions via email or phone calls every week.

First of all, I would like to look into the developments at the IEEE. Without doubt, the Institute of Electrical and Electronics Engineers, Inc. (IEEE) has long-standing recognition in the field. It established the IEEE 802® family of standards, including ‘Wi-Fi’ (IEEE 802.11), one of the best-known mobile technologies, and has various working groups and study groups researching and developing these standards. Meanwhile, the ‘WirelessMAN’ IEEE 802.16 specifications support the development of fixed broadband wireless access systems to enable rapid worldwide deployment of innovative, cost-effective and interoperable multi-vendor broadband wireless access products (IEEE-SA 2007). More information regarding IEEE 802® can be found on the IEEE website. Another standard, IEEE 802.3™ for wired Ethernet LANs/WANs, is moving ahead steadily. In addition, the IEEE 802.11 specifications address both the Physical (PHY) and Media Access Control (MAC) layers and are tailored to resolve compatibility issues between manufacturers of wireless LAN equipment.

The Wi-Fi Alliance is a global, non-profit industry association that has certified Wi-Fi products since March 2000. According to the Wi-Fi Alliance (2007), it has certified over 3,400 products and has more than 300 member companies devoted to promoting the growth of wireless Local Area Networks (WLANs).

Nowadays, ‘Wi-Fi’ has been widely adopted by the ICT industries. TGn (2007) approved IEEE 802.11n Draft 2.05 in July 2007, and Draft 3.0 is on the way, expected to be finalised and approved in 2008. In fact, many ICT companies have competed with one another to issue Draft 2.0 compliant equipment such as wireless routers, wireless switches and wireless clients. The Wi-Fi Alliance (2007, p.4) claims that multiple-input, multiple-output (MIMO) technology multiplies the performance of the Wi-Fi signal, which is reflected in the two, three or even more antennas found on some 802.11n routers, and 802.11n also supports the 5GHz radio band. Additionally, its raw data rate is roughly five times that of 802.11g, rising from 54Mbit/s to 300Mbit/s, which is able to fulfil the demands of today's multimedia applications and products. This is a breakthrough in wireless technology. In practice, however, Judge (2007) reports that the full 300Mbit/s will often not be reached: access points powered over Ethernet (802.3af) cannot run both radio bands, 2.4GHz (802.11b/g/n) and 5GHz (802.11a/n), at full power, so the achievable rate may be only around half (i.e. 150Mbit/s).
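For the curious, those headline rates can be reproduced from standard OFDM arithmetic. The quick sketch below shows where 54Mbit/s (802.11g) and 300Mbit/s (802.11n MCS 15: 40MHz channel, two spatial streams, short guard interval) come from.

```python
# OFDM data rate = data subcarriers x bits per subcarrier x coding rate
# / symbol duration, times the number of spatial streams.

def phy_rate_mbps(subcarriers, bits_per_subcarrier, coding_rate,
                  symbol_us, spatial_streams=1):
    bits_per_symbol = subcarriers * bits_per_subcarrier * coding_rate
    return spatial_streams * bits_per_symbol / symbol_us  # Mbit/s

# 802.11g top rate: 48 data subcarriers, 64-QAM (6 bits), rate 3/4,
# 4.0us symbol, single stream.
print(phy_rate_mbps(48, 6, 3/4, 4.0))        # ~54 Mbit/s

# 802.11n MCS 15: 108 data subcarriers (40MHz), 64-QAM, rate 5/6,
# 3.6us symbol (short guard interval), 2 spatial streams.
print(phy_rate_mbps(108, 6, 5/6, 3.6, 2))    # ~300 Mbit/s
```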

The other concern about the 802.11n standard, and wireless LANs in general, is security. So (2007) mentions four major attacks on wireless LANs: intrusion, denial of service, phishing and eavesdropping. The riskiest is eavesdropping, because the attacker listens to the traffic on the wireless network and captures useful information, including passwords for online banking and e-commerce, while remaining very hard to detect.

I will discuss the ITU's research and development on mobile technologies tomorrow.

To be continued

References

Judge P 2007, ‘Aruba set to launch 802.11n access point’, Computerworld Hong Kong Daily, posted 14 September 2007, viewed 15 September 2007, <http://www.cw.com.hk/computerworldhk/article/articleDetail.jsp?id=457611>.

So R 2007, ‘Wi-Fi threats stay alive’, Computerworld Hong Kong Daily, posted 10 May 2007, viewed 15 September 2007, <http://www.cw.com.hk/computerworldhk/article/articleDetail.jsp?id=425942>.

TGn 2007, Status of Project IEEE 802.11n, Standard for Enhancements for Higher Throughput, updated July 2007, San Francisco, California, US, viewed 17 September 2007, <http://grouper.ieee.org/groups/802/11/>.

TGn - see IEEE P802.11 Task Group n 2007

Wi-Fi Alliance 2007, ‘Wi-Fi CERTIFIED™ 802.11n draft 2.0: Taking Wi-Fi® to the Next Level’, published May 2007, pp.1-2.

Infrastructure VII - Disaster Recovery in practice

People still think Disaster Recovery (DR) is important but not necessary, or not cost-effective. Many years ago, we proposed to management that the Uninterruptible Power Supply (UPS) in our server room be replaced. Management asked us when the last power failure had occurred; hence I understood that they did not see the need for it.

Fonseca (2007a) reported that:

‘Peltzman said he understands why corporate management puts constraints on disaster recovery spending even though business people are the ones who complain when their systems fail. However, many of his IT brethren are often at odds with business leaders on the importance of business continuity and disaster recovery technology.’

Steven Peltzman, the CIO of The Museum of Modern Art in New York, has also faced disagreement with business executives regarding IT spending on disaster recovery. In fact, many CIOs and business executives have very diverse points of view on Disaster Recovery/Business Continuity (DRBC), according to SunGard's survey (Fonseca 2007b), for which Harris Interactive Inc. polled 176 corporate executives and 351 IT managers in February and March. The brief results of the survey are listed below.

From these results, we can see that IT respondents are more likely to focus on back-end operations rather than front-end applications. When identifying which systems are essential to safeguard from disaster, business and IT executives are in agreement regarding the top systems that impact revenue: both groups identified e-mail, back-office applications, customer service and telecommunications in their lists of the top five systems that would affect the bottom line if they were unavailable. In other words, they have diverse views but also share some views.

As an IT specialist, I often see only one corner of the picture. To achieve a really successful disaster recovery solution, we should at least pull together all the key decision makers from all parts of the business, such as business development, finance, HR and IT, and work out the DR plan in advance, going through a business impact analysis to define the critical systems and applications, so that the plan is clearly defined before any interruption occurs.
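As a toy illustration of what a business impact analysis might produce, the sketch below ranks systems by estimated hourly downtime cost so the DR plan covers the most critical ones first. All systems and figures here are invented for the example.

```python
# A toy business impact analysis output: rank systems by estimated
# hourly downtime cost and note the recovery time objective (RTO).
# All systems and figures are made-up examples.

systems = {
    "e-mail":             {"hourly_cost": 50_000, "rto_hours": 4},
    "back-office apps":   {"hourly_cost": 80_000, "rto_hours": 8},
    "customer service":   {"hourly_cost": 60_000, "rto_hours": 2},
    "telecommunications": {"hourly_cost": 70_000, "rto_hours": 1},
}

# Highest downtime cost first: these get DR coverage before the rest.
for name, info in sorted(systems.items(),
                         key=lambda kv: kv[1]["hourly_cost"],
                         reverse=True):
    print(f"{name:20s} ${info['hourly_cost']:>7,}/hour  "
          f"RTO <= {info['rto_hours']}h")
```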

To be continued

References

Fonseca B 2007a, ‘IT trying to work with execs on disaster recovery’, Computerworld Hong Kong Daily, posted 2 May 2007, viewed 6 September 2007, <http://www.cw.com.hk/computerworldhk/article/articleDetail.jsp?id=423990>.

Fonseca B 2007b, ‘Survey: Business, IT differ on disaster recovery’, Computerworld Hong Kong Daily, posted 2 May 2007, viewed 6 September 2007, <http://www.cw.com.hk/computerworldhk/article/articleDetail.jsp?id=423976>.

Robins B 2007, ‘Survey Reveals Limiting IT Downtime Window Major Concern; Business And IT Executives Disagree On Importance Of Disaster Preparedness’, SunGard, <http://www.sungard.com/news/default.aspx?id=2&announceId=844>.

Infrastructure VI - Disaster Recovery

Last time I mentioned contingency plans. Today I take that discussion further in a practical sense and bring up Disaster Recovery (DR), which is actually a kind of contingency plan.

The purpose of DR is to ensure business continuity whenever a crisis arises, such as fire, power failure, storm, disease outbreak (e.g. SARS) or any other unexpected event that can damage your business and your precious data.

Smit (2007) reports that:

‘According to the Meta Group, the average cost of an hour of downtime for data centre applications is $330,000. According to the Strategic Research Corp., if that data centre belongs to a credit card authorization company, the loss jumps to $2.6 million. And if it belongs to a brokerage house, it climbs to $6.5 million. One day of lost productivity costs a company an average of $432 per employee.’

Without doubt this is a great loss to a company. Don't expect your clients to understand your difficulties and accept your apologies. The best solution is to plan ahead before the disaster occurs. Reducing the downtime means cutting down the loss. But how?
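To see the arithmetic, here is a small sketch using the figures quoted above. The four-hour outage and the 500-employee company are hypothetical.

```python
# The arithmetic behind "reducing the downtime means cutting down the
# loss", using the average figures quoted above from Smit (2007).

AVG_COST_PER_HOUR = 330_000      # average data-centre application loss
COST_PER_EMPLOYEE_DAY = 432      # lost productivity per employee per day

def outage_loss(hours_down: float, employees: int) -> float:
    application_loss = hours_down * AVG_COST_PER_HOUR
    productivity_loss = (hours_down / 24) * employees * COST_PER_EMPLOYEE_DAY
    return application_loss + productivity_loss

# A hypothetical 4-hour outage at a 500-person company:
print(f"${outage_loss(4, 500):,.0f}")   # -> $1,356,000
```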

Smit (2007) gives us directions for ensuring high availability and business continuance.

Protecting, replicating and backing up data

First of all, we need to build a high-capacity, low-latency data centre interconnected to a MAN (Metropolitan Area Network) and a WAN (Wide Area Network). This enables zero-data-loss data mirroring to protect user sessions, prevent transaction loss and support automatic failover between mirrored sites. SAN (Storage Area Network) technologies enhance the distance, security and bandwidth utilisation of replication and backup to remote sites, although they have not yet become really popular. In addition, technologies such as write acceleration, tape acceleration and server-less backup reduce latencies, extend distances and reduce the application impact of storage replication. Moreover, the data centre needs to support business continuance applications, especially those that provide replication and data protection.

[Diagram sourced from Javvin]
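To illustrate the idea behind zero-data-loss (synchronous) mirroring, here is a toy sketch in which a write is only kept if both the local and the remote copy succeed. Real SAN replication works at the block level; this only demonstrates the failure semantics.

```python
# A toy sketch of synchronous mirroring: a write is acknowledged only
# once both the local and the remote site have stored it, so the two
# copies never diverge.

class MirroredStore:
    def __init__(self, primary: dict, replica: dict):
        self.primary = primary
        self.replica = replica

    def write(self, key, value):
        # Write locally, then remotely; if the remote write fails,
        # roll back the local one.
        self.primary[key] = value
        try:
            self.replica[key] = value
        except Exception:
            del self.primary[key]
            raise

store = MirroredStore({}, {})
store.write("txn-0001", {"amount": 250.00})
print(store.primary == store.replica)   # -> True
```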


Enhancing application resilience

Companies can remove single points of server failure by deploying high-availability clusters or load-balancing technology across web and application servers, as sketched below. Apart from that, connectivity can be extended between clusters in different data centres to protect against major disruptions. Achieving this type of redundancy requires a high-speed, low-latency metro network.
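Here is a minimal sketch of the load-balancing idea: requests are spread round-robin over the servers currently marked healthy, so losing any one server does not interrupt service. The host names are made up.

```python
# A minimal round-robin load balancer over the healthy members of a
# server pool. Host names are made-up examples.

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)
        self._i = 0

    def mark_down(self, server):
        self.healthy.discard(server)

    def next_server(self):
        pool = [s for s in self.servers if s in self.healthy]
        if not pool:
            raise RuntimeError("no healthy servers available")
        server = pool[self._i % len(pool)]
        self._i += 1
        return server

lb = LoadBalancer(["app1.dc1.example.com", "app2.dc1.example.com",
                   "app1.dc2.example.com"])
lb.mark_down("app2.dc1.example.com")   # simulate a server failure
print([lb.next_server() for _ in range(4)])
# requests keep flowing across the remaining two servers
```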

Ensuring user access

Companies can employ technologies such as VPNs to allow users at branch offices and telecommuters to reconnect to applications quickly as soon as they are up and running. In addition, technologies such as global site selectors can allow users to connect, manually or automatically, to the most available application site at any given time. In the case of a disruption in any one application environment, users continue to have access via the alternate site.
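A global site selector essentially does something like the following hedged sketch: health-check each data centre hosting the application and direct the user to the first site that responds. The URLs are made-up examples, not a real product's API.

```python
# A hedged sketch of global site selection: probe each site's health
# endpoint and hand back the first one that answers. URLs are made up.

import urllib.request

SITES = [
    "https://app.dc-primary.example.com/health",
    "https://app.dc-backup.example.com/health",
]

def pick_site(sites=SITES, timeout=2):
    for url in sites:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue   # site down or unreachable -> try the next one
    raise RuntimeError("no site available")

# print(pick_site())   # would return the primary site while it is up
```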

Needless to say, we all realised the devastating impact of 9/11, yet it happened only once in the past six years. Do we really have to focus on such incidents so much and then spend tens of thousands of dollars on the above systems, some of which may not be used even once in ten years? The answer is: absolutely. I still remember a disaster that happened around six years ago. Due to a malfunction of the fire sprinklers in an office on a high floor, water flooded the whole commercial building and, as a result, the power was suspended for a day. At that time, all we could do was shut down all our mission-critical servers before the UPS (Uninterruptible Power Supply) batteries were exhausted, in order to protect our servers and data. Luckily, we had installed UPS units for all mission-critical servers.

You can probably imagine how big a loss this incident caused. DR is very unlikely to fully eliminate the loss, but at least it can lighten it. In the end, DR is entirely a choice of investment. What is your choice?

To be continued

References

Javvin, ‘Metropolitan Area Network and MAN Protocols’, Javvin Technologies, Inc, California, <http://www.javvin.com/protocolMAN.html>.

Javvin, ‘Storage Area Network and SAN Protocols’, Javvin Technologies, Inc, California, <http://www.javvin.com/protocolSAN.html>.

Javvin, ‘WAN: Wide Area Network’, Javvin Technologies, Inc, California, <http://www.javvin.com/networkingterms/WAN.html>.

Smit A 2007, ‘Data centre safety measures protect business’, Enterprise Innovation, Technology, posted 28 August 2007, viewed 8 September 2007, <http://www.enterpriseinnovation.net/article.php?cat1=2&id=1847>.

Sunday, September 9, 2007

Infrastructure V - Business Concern

In the business world, people have to be very dynamic and able to forecast changes. We IT people need this sense as well, to adopt new technologies and adapt to the changes. If we fail to see the problems, the consequences can be very costly and unbearable.

Last month, typhoon Pabuk made Hong Kong very chaotic: the Hong Kong Observatory (HKO) announced it would hoist Signal 8 within an hour, then changed its mind and raised the signal almost immediately. Due to this sudden change, the mobile networks were overloaded and landlines also crashed, while the HKO website kept going down under the load as people checked the latest typhoon conditions (Hammond 2007). Obviously, this was a communication breakdown. The question I would ask is: why didn't the infrastructure function properly in an urgent situation? Were there any contingency plans?

The other case happened in the US recently. Owing to the breakdown of a major switch at the LA airport, over 20,000 people were trapped on planes and 60 planes were left sitting on the tarmac (Hammond 2007). Hammond asks: ‘If that switch was mission-critical, why was there no backup system?’

Through these two incidents, we see that systems very often function properly under normal circumstances but fail when a crisis hits. In the service industry, this is neither acceptable nor bearable. The worst part is that you lose your credibility; you are actually paying a very high price for not investing in a contingency plan or disaster recovery solution.

To be continued

Reference

Hammond S 2007, ‘Communication breakdown’, Computerworld Hong Kong Daily, posted 1 September 2007, viewed 5 September 2007, <http://www.cw.com.hk/computerworldhk/Communication-breakdown/ArticleStandard/Article/detail/454130>.

Wednesday, September 5, 2007

Infrastructure IV - Mobile Solutions

I have mentioned ubiquitous computing previously in this blog. Very often, my friends ask me for advice on purchasing computers. I normally recommend that they buy notebook computers, even when the mobility of the machine is not their major concern. Why? First of all, computers are necessities and no longer a luxury for most households in Hong Kong. Take my family as an example: I have two notebook computers at home, one for me and the other for my wife and my son. All of us need Internet access, and the best and simplest solution is to set up a wireless router at home so that we can all share the access; not many people would cable their houses for this. Apartments in Hong Kong are normally relatively small, so an 802.11g wireless router should provide ample coverage for an apartment or even two. (I am actually still using an 802.11b model at home.)

Secondly, the Hong Kong government has started providing Wi-Fi facilities at government premises (GovHK 2007). Besides, over 2,300 wireless access points have been established in Hong Kong by registered licensees, including the Airport Authority. Apart from that, 3G has been available locally for a few years and the major mobile carriers are licensees, although 3G access is still quite costly at this stage. As a result, the notebook computer has become a powerful ubiquitous device, especially now that it can be very small.

Without a good infrastructure, there is no point in having the best mobile devices. We have to closely monitor the IT market and always give the best solution to our users. For example, we have mobile devices such as BlackBerries available for loan, and we are now upgrading them from GPRS (General Packet Radio Service) to 3G, which has been implemented very well in some Asian countries, including South Korea and Japan. The throughput rates of GPRS and 3G are up to 40kbit/s and 384kbit/s respectively, according to the GSM Association (2007). The following graph illustrates the development of mobile technologies.


[Graph: the development of mobile technologies, sourced from GSM World]
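To put those throughput rates in perspective, the quick sketch below estimates how long a 5MB file transfer would take at each headline rate, ignoring protocol overhead.

```python
# Time to transfer a 5MB file at the headline rates of GPRS and 3G
# (GSM Association 2007), ignoring protocol overhead.

RATES_KBITS = {"GPRS": 40, "3G (UMTS)": 384}
FILE_MBYTES = 5

for tech, kbps in RATES_KBITS.items():
    seconds = FILE_MBYTES * 8_000_000 / (kbps * 1_000)
    print(f"{tech:10s} ~{seconds / 60:5.1f} minutes")
# GPRS  ~16.7 minutes; 3G  ~1.7 minutes
```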

Certainly, from 2G to 3G is a big jump in terms of data rate, and we will probably see even bigger jumps in the near future. NTT DoCoMo, Japan's largest cellular carrier, has been working on Super 3G for some time and anticipates introducing the technology in Japan around 2009 as a stepping stone between current so-called 3.5G technology and future 4G systems; it has also been aggressively pursuing 4G system development. In experiments conducted in late December last year, the carrier came close to hitting a 5Gbit/s data transmission speed from an experimental 4G system to a receiver moving at 10 kilometres per hour (Williams 2007). 4G, and multi-gigabit rates, are on the way…

To be continued

References

GovHK 2007, ‘Public IT Facilities’, Hong Kong Government, <http://www.gov.hk/en/residents/communication/publicit/itfacilities/wifi/index.htm>.

GSM Association 2007, ‘GSM World’, viewed 1 September 2007, <http://www.gsmworld.com/technology/gprs/index.shtml>.

Williams M 2007, ‘NTT DoCoMo targets 300M bps in Super 3G experiment’, Computerworld Hong Kong Daily, posted 13 July 2007, Tokyo, viewed 18 August 2007, <http://www.cw.com.hk/computerworldhk/article/articleDetail.jsp?id=441398>.

Tuesday, September 4, 2007

Infrastructure III - Data Communications

As the IT trend moves toward e-commerce, m-commerce, web portals and so on, the back-end processing is far more complicated and substantial than a decade ago. The two-tier client-server model is retiring, and the multi-tier model has become dominant. Therefore, processing has shifted to the application servers, web servers and database servers. For more information about thin clients or multi-tier client-server systems, please refer to my previous posts in this blog.

Back in 2000, we started using gigabit (1000Mbit/s) switches. We have always focused on the infrastructure. A bad infrastructure hinders the performance of good systems and applications; conversely, a good infrastructure can mask the problems of bad systems and applications. For example, even if an application is coded badly and inefficiently, it will still run reasonably fast on a high-speed network.

Within the office, we are now running 1000BASE-T gigabit Ethernet over very high quality network cable, Category 5e or 6 shielded twisted pair (STP). We can certainly see the return on this luxury infrastructure in the high data rate and good transmission quality; so far our network has been much more stable. In the past (around 10 years ago) we were running a Fast Ethernet network (i.e. 100BASE-T), and the 10/100 Ethernet dumb hubs could not stabilise the network traffic: whenever one connection malfunctioned, for whatever reason, it would affect the operation of all other connections on the same hub. I do not really come across this situation any more. Basically our network is very stable, and we are planning to move forward to a 10-gigabit network once the technology is mature; of course, we won't extend it to every workstation.

Amul (2006) claims that 10G Ethernet will increase the capacity of the network backbone by eliminating some of the elements required to run TCP/IP and data traffic over an infrastructure originally designed to transport voice. This will definitely benefit the ISPs. How about us? I would say yes. Even though we are not an ISP and are not adopting an IP phone system, 10G will still ease the high volume of data traffic among the servers and hence reduce the response time and latency of user requests. This will be very effective for heavy back-end processing systems.
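As a back-of-the-envelope illustration of why backbone speed matters for heavy back-end traffic, the sketch below estimates the time to move a hypothetical 500GB nightly backup between servers at each link speed, ignoring protocol overhead.

```python
# Time to move a hypothetical 500GB nightly backup at each link speed,
# ignoring protocol overhead.

BACKUP_GBYTES = 500

for name, gbps in [("Fast Ethernet", 0.1),
                   ("Gigabit", 1),
                   ("10 Gigabit", 10)]:
    hours = BACKUP_GBYTES * 8 / gbps / 3600
    print(f"{name:14s} {hours:6.2f} hours")
# ~11.1 hours at 100Mbit/s, ~1.1 hours at 1Gbit/s, ~0.1 hours at 10Gbit/s
```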

The decentralisation of computing power has been the IT trend of this decade, and the mobile technologies are moving onto a super highway. The first third-generation (3G) wireless phone service was launched by NTT DoCoMo in Tokyo on 1 October 2001; at that time, the 3G service was confined to within Tokyo National Route 16, a 30km radius from central Tokyo (Purvis 2001). But six years later, NTT DoCoMo has developed a Super 3G cellular system which can transmit data at up to 300Mbit/s and is pursuing 3.5G and future fourth-generation (4G) systems (Williams 2007). I will go further into this issue tomorrow.

To be continued

References

Amul 2006, ‘10 Gigabit Ethernet’, Network World, viewed 5 September 2007, <http://www.networkworld.com/details/460.html>.

Hammond S 2007, ‘Consumer tech in the enterprise space’, Computerworld Hong Kong Daily, posted 1 August 2007, viewed 18 August 2007, <http://www.cw.com.hk/computerworldhk/content/printContentPopup.jsp?id=447180>.

Purvis J 2001, ‘World's first 3G launch on 1st October severely restricted’, International Market News, posted 4 October 2001, Hong Kong Trade Development Council, Tokyo, viewed 1 September 2007, <http://www.tdctrade.com/imn/01100401/info14.htm>.

Williams M 2007, ‘NTT DoCoMo targets 300M bps in Super 3G experiment’, Computerworld Hong Kong Daily, posted 13 July 2007, Tokyo, viewed 18 August 2007, <http://www.cw.com.hk/computerworldhk/article/articleDetail.jsp?id=441398>.