Can Christmas Tree Lights Really Play Havoc with Your Wi-Fi?

While many different factors can dull your wireless signal, it would take a lot of holiday twinkling to thwart your router

  • By Andrew Smith, The Conversation on December 7, 2015

Before the terrible jokes start and we all declare that this is a fit of “Bah Humbug!” from the telecoms regulator, the warning is correct: your fairy lights could indeed be a Wi-Fi downer. But then so could many other devices. Ultimately, it is a matter of how much of a problem they actually cause.

The science behind the warning
The Ofcom press release as a whole describes how microwave ovens, fluorescent lights and other devices can also play havoc with your wireless connection.

Casting your mind back to science at school, you may recall your teacher describing the electromagnetic spectrum. It covers radio waves, microwaves, infrared, visible light and higher-energy radiation such as X-rays, and it is around us all the time. Our phones, radios, televisions and desk lights all make use of different parts of it.

Wireless networks typically work in the 2.4-gigahertz microwave band. Hertz means cycles per second, so 1 hertz is one wave per second. Your FM radio station may broadcast at 100 megahertz, or 100,000,000 waves per second, while the 2.4 gigahertz used by Wi-Fi is 2,400,000,000 waves per second, making the radio waves used by Wi-Fi considerably shorter. In practice this makes them “weaker” than FM radio waves: they need more power to cover the same distance and are more easily blocked by obstacles.
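To make that comparison concrete, here is a quick back-of-the-envelope calculation (a sketch added for illustration, not part of the original article) using the relationship wavelength = speed of light / frequency:

```python
# Wavelength = speed of light / frequency
C = 299_792_458  # speed of light in metres per second

def wavelength_m(frequency_hz: float) -> float:
    """Return the wavelength in metres for a given frequency in hertz."""
    return C / frequency_hz

print(f"FM radio at 100 MHz: {wavelength_m(100e6):.2f} m")         # ~3.00 m
print(f"Wi-Fi at 2.4 GHz:    {wavelength_m(2.4e9) * 100:.1f} cm")  # ~12.5 cm
```

Roughly speaking, those three-metre FM waves pass around and through household obstacles more easily than Wi-Fi’s 12.5-centimetre waves, which is part of why the shorter waves come across as “weaker” indoors.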

Your wireless router also uses considerably less power than a public FM transmitter. We expect the maximum reach of a domestic Wi-Fi signal to be around 100 metres, while FM in the right conditions can easily be received 10km away and beyond. (There are also longer-range public wireless technologies such as WiMAX, which can cover much larger areas, but they are unrelated to the Ofcom press release.)

Because your wireless network is much less powerful than a big FM transmitter and its waves are weaker, where you place the router and what you have in your house will have an impact. Home electrics, microwaves, steel girders, concrete cladding and foil insulation can all have an effect. Older properties with their thicker walls make a difference, too, as the lower-powered, high-frequency Wi-Fi radio waves struggle to penetrate them.

But while many different factors can dull your Wi-Fi signal, I can’t recall anyone yet getting miffed about their festive laptop watching of Dr Who being affected as soon as the Christmas lights go on.

What should you do?
But it is possible. Most fairy lights have unshielded wires, which means there is no radio-frequency shielding to protect nearby radio-based devices from the electromagnetic interference given off by the power cables trailing around your tree.

Nevertheless, it would take a considerable volume of lights to create enough interference to seriously degrade your Wi-Fi network. In fact, you would have to be lighting up your tree like a small sun—which perhaps some of you are planning.

Do consider downloading the Wi-Fi checker app offered by Ofcom, however—it may help you discover that it’s the service provided by your phone company, rather than the fairy lights, that’s to blame for all that endless buffering.

You should also think about where you place your wireless router in your home. Hiding it under a tin can inside a cupboard insulated with tin foil will ruin your Facebook fun. As will decorating your wireless device with holly and fairy lights.

There are domestic devices that will degrade the wireless signal—although it’s not often you’ll be running your microwave 24 hours a day—but don’t rush to throw away your fairy lights just yet. Christmas is coming, after all.

Andrew Smith does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.

This article was originally published on The Conversation. Read the original article at https://theconversation.com/can-christmas-tree-lights-really-play-havoc-with-your-wi-fi-51606.

TPCPs Gain New Appreciation as IT Security Solution

Contemporary information technology isn’t adequate to secure the valuable information its systems are entrusted to manage, as recent security breaches at US corporations and government agencies demonstrate. There are two reasons for this. First, public and private networks, based on Ethernet and the TCP/IP protocol, were designed not to protect information but to make it easy to share. Second, the architecture of the modern IT infrastructure was established long before cybercrime became the global nemesis it is today, so the issues of security and trust weren’t well understood or taken into account.

Before widespread adoption of the Internet, there was no general means of interacting with public and private data centers throughout the world. Combine that global accessibility with the ability to provide remote access over the Internet, rather than simply publish information as the Web was originally intended to do, and you have the makings of an information security disaster.

Tamperproof computing platforms, or TPCPs, can help address the principal failings of contemporary IT, primarily the lack of means for adequately protecting encryption keys and for maintaining sustainable “chains of trust,” rooted in a known trusted entity, for software running on tamperproof hardware. Some have concluded from the absence of such systems that TPCPs aren’t feasible. While that may have been true in the past, the advent of hyperscale semiconductor integration and system-on-a-chip technologies now enables the construction of TPCPs. Today, it is possible to build TPCPs on tamperproof hardware and to create software capable of exploiting sustainable chains of trust.
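To make the idea of a chain of trust more tangible, here is a minimal, hypothetical sketch (not Suvola’s or the Trusted Computing Group’s actual implementation; the stage names, images and allowlist are invented for illustration) in which each boot stage is measured against known-good hashes before it is allowed to run:

```python
import hashlib

def measure(image: bytes) -> str:
    """Hash a component image, mimicking a TPM-style measurement."""
    return hashlib.sha256(image).hexdigest()

# Hypothetical boot stages and an allowlist of known-good digests.
# In a real TPCP the allowlist would be anchored in tamperproof hardware.
BOOT_STAGES = {
    "bootloader": b"bootloader-image-v1",
    "kernel": b"kernel-image-v1",
    "init": b"init-image-v1",
}
TRUSTED_DIGESTS = {name: measure(image) for name, image in BOOT_STAGES.items()}

def verify_chain(stages: dict) -> bool:
    """Verify each stage against the allowlist before handing over control.
    Any mismatch breaks the chain of trust and the boot halts."""
    for name, image in stages.items():
        if measure(image) != TRUSTED_DIGESTS.get(name):
            print(f"Chain of trust broken at: {name}")
            return False
        print(f"{name}: measurement verified, handing off")
    return True

verify_chain(BOOT_STAGES)                              # intact chain
verify_chain({**BOOT_STAGES, "kernel": b"tampered"})   # tampered kernel is caught
```

In a real TPCP the measurements and the reference values would themselves be protected by tamperproof hardware, so an attacker could not simply rewrite the list of trusted digests.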

From its inception in 1999, the Trusted Computing Platform Alliance and its successor, the Trusted Computing Group, focused on consumer devices and personal computers, Internet security’s Achilles heel at the time. A Trusted Network Connect (TNC) subgroup was added in 2004, and a Trusted Platform Module (TPM) specification for servers first became available in 2005, more than a decade after the Internet had taken hold. But it wasn’t until 2009 that the TPM was recognized as an international standard by ISO and the TNC shifted its focus to “pervasive security,” encompassing the broader IT infrastructure. All of this can be summed up simply as too little, too late. Data center security breaches were already rampant by then, although largely unpublicized and out of the public eye.

Whatever the reasons for the painfully slow evolution of the Trusted Computing Group, it’s clear in retrospect that the need for TPCPs was neither well understood nor fully appreciated until recently. This historical happenstance can best be described as a case of benign neglect, as it’s now evident that the Internet has become a global playground for cybercriminals, and any computing platform connected to it risks potentially massive, organized criminal cyberattacks.

Perhaps the biggest oversight has been the failure to recognize that, to be completely trustworthy, a dynamically changing computing platform requires that all hardware and software permitted to run be reliably identified, authenticated and verified at all times. Indeed, the analogy between computer “viruses” and physiological pathogens suggests that sustaining a TPCP requires nothing less than the equivalent of an autonomous “immune system,” one that can distinguish in real time between “what is me” (hardware and software that has been identified, authenticated and verified) and “what is not me” (hardware and software that has not).

Once this is recognized and acknowledged, it becomes abundantly clear that, to be completely trustworthy, all software permitted to run on a TPCP must be governed by an auditable and sustainable chain of trust anchored in tamperproof hardware, and that this, in turn, requires constant real-time surveillance of, and control over, all hardware and software in the system. Only once this has been achieved will truly tamperproof computing exist.
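As a loose illustration of the “immune system” analogy (a hypothetical sketch, not a description of any shipping product), a runtime monitor can continuously classify what is running into “me,” meaning identified, authenticated and verified, and “not me,” meaning everything else:

```python
import hashlib

# Hypothetical registry of verified ("me") components: name -> known digest.
SELF = {
    "webserver": hashlib.sha256(b"webserver-v2").hexdigest(),
    "database": hashlib.sha256(b"database-v5").hexdigest(),
}

def classify(running: dict) -> dict:
    """Split running components into 'me' (verified) and 'not me' (unknown
    or tampered), the real-time distinction the article calls for."""
    me, not_me = [], []
    for name, image in running.items():
        digest = hashlib.sha256(image).hexdigest()
        (me if SELF.get(name) == digest else not_me).append(name)
    return {"me": me, "not_me": not_me}

# A verified webserver, a tampered database and an unknown process.
snapshot = {
    "webserver": b"webserver-v2",
    "database": b"database-v5-backdoored",
    "cryptominer": b"???",
}
print(classify(snapshot))  # {'me': ['webserver'], 'not_me': ['database', 'cryptominer']}
```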

David L. R. Stein and Christopher M. Piedmonte are cofounders of Suvola Corporation, a company providing a full stack of secure and trusted Debian LINUX platform software for tamperproof hyperscale computing technology from Freescale Semiconductor and IBM.

Rise of the Alternative Network Provider

New competitors disrupt the telecom business model

By Chris Antlitz, Telecom Senior Analyst

Incumbent telecom operators in the U.S. face a new category of competitors that play by a different set of rules. These alternative network providers aim to disrupt the traditional telecom business model by lowering access costs and improving the user experience. Their motivations differ from those of incumbent telcos, which focus on monetizing their connectivity solutions. Rather, these alternative network providers view access as a sunk cost necessary to drive their other initiatives, such as digital advertising and e-commerce. The stakes are high because these market dynamics will shift the balance of power and money and reshape the competitive landscape in coming years. Only the strongest and most nimble incumbent operators will survive the coming shakeout.

There are three emerging segments of the alternative network provider space: Wi-Fi, cloud, and advertising. Each of these areas is driving increased interest by nontelecom companies in pursuing telecom endeavors.

Wi-Fi is a viable alternative to cellular

Wi-Fi is becoming a viable alternative to traditional cellular service, not only offering data, but also voice and text services. The prevalence of hotspots, in residential and commercial buildings as well as in public venues, is making Wi-Fi coverage nearly ubiquitous across large swaths of urban and suburban areas. A new breed of operator is emerging to capitalize on Wi-Fi, including cable operators, startups such as Republic Wireless and Internet companies such as Google.

Wi-Fi operators pose a significant challenge to incumbent telecom operators because Wi-Fi is relatively low cost to use and the quality of service has been greatly enhanced due to innovations in handover technology and seamless authentication. In many cases, Wi-Fi is being offered for free, with the cost being subsidized by new business models, such as analytics and advertising, which is why this is so disruptive to telcos.

Wi-Fi offers the lowest-cost, highest-impact way to deliver connectivity. The ability to leverage unlicensed (free) spectrum, the minimal backhaul requirements and the dense footprint of hotspots across the U.S. make Wi-Fi a considerable threat to cellular.

Advertising disrupts access model

Facebook (via its Internet.org initiative) and Google have taken on the seemingly insurmountable challenge of bringing low-cost Internet access to the world’s population. This is no small undertaking, as nearly two-thirds of the world’s population, particularly in emerging markets, still lacks Internet access.

Both companies are investing in and tinkering with new technologies aimed at solving this problem, including fiber, Wi-Fi and “space furniture” such as satellites, balloons, drones and blimps, to blanket the planet with wireless coverage. Google is also moving to become a mobile virtual network operator, offering its own branded wireless service in the U.S. market by piggybacking on the networks of Sprint and T-Mobile.

Facebook and Google are able to justify these endeavors to their stakeholders because they are indirectly driving growth in their core business, which is to sell digital advertising, by offering free or nearly free access. The more people using the Internet, the more opportunities there are for these companies to sell their ads. This model is highly disruptive to incumbent telcos because they are in the business of selling access. TBR believes that if companies like Facebook and Google are able to drastically reduce the cost of Internet access while still providing a “good enough” quality of service, it will render the traditional telecom business model obsolete.

Facebook and Google are going one step further than just access, however. They are also engaged in lowering device costs (i.e., not just smartphones, but also other connectable devices such as meters, wearables and the like) and making apps more data efficient. Tackling each of these areas in unison will help make devices and connectivity affordable for the mainstream world population.

Cloud builds out network backbone

Companies that are in the cloud business, or that rely on the cloud internally, are proactively ensuring they can support their business scale and provide optimal quality of service to their customers. Amazon’s key focus is to ensure it can support the exponential growth of its Amazon Web Services (AWS) business. Relying on incumbent telcos for bandwidth, low latency and reliable connectivity is not only expensive but also a business risk.

Therefore, Amazon is investing in its optical infrastructure to connect its data centers to better control its business, and Microsoft, Google, Facebook, IBM and Salesforce are doing the same (i.e., owning and controlling fiber links to ensure they optimize their cloud businesses). These companies are taking part in terrestrial and submarine optical projects and are involved in building out their infrastructure or leasing large portions of infrastructure from third parties to secure bandwidth. Buying dark fiber is another area of great interest to this segment of companies, as this infrastructure is built out and can be purchased inexpensively compared to the cost of deploying net-new fiber lines.

This movement by cloud providers is pushing down traffic carriage costs for traditional operators, making it harder for them to monetize their networks. The more nontelecom companies start owning and controlling their fiber backbones, the greater the disruption to traditional telecom operators.

Competition is good for network vendors

Network vendors are benefitting from the disruption occurring in the telecom market. Not only do they have new customers to sell infrastructure to, they are also selling more to their traditional customers as those customers fight to protect their core business and stay relevant.

Webscale 2.0 companies, including Google, Facebook, Amazon and Microsoft, comprise a significant portion of key network vendor revenues. In 2014 Webscale 2.0 companies represented around 20 percent of total revenue for some key vendors, including router supplier Juniper and optical transport suppliers Ciena and Infinera, and growth is accelerating. Cisco, Alcatel-Lucent, and other network suppliers are also citing increased activity from nontelecom customers, and Wi-Fi operators are becoming key customers for a range of network suppliers, including Ericsson, Aruba, Ruckus Wireless, and Cisco.

Some customers are buying off-the-shelf products, while others are having custom-made products manufactured by various OEMs. Either way, spend is flowing into this sector as this new segment of customers ramps up internal initiatives, resulting in opportunities to sell hardware as well as software and services.

This fact is underscored by IT services companies jumping into the fray and supporting these customers with a range of solutions, spanning from consulting and systems integration services to network design and planning services to back-office software support systems and platforms.

Conclusion

The balance of power in the telecom industry is shifting rapidly to content and Internet companies. Alternative network providers realize they need to be proactive to protect their market positions and blaze their own paths to growth. Relying on incumbent telecom operators for business-essential functions, such as providing ubiquitous Internet connectivity and 99.999 percent reliability, is a risky and costly proposition, and these companies are taking more control over the value chain to secure their destinies and ensure they can provide optimal service to their end customers.

Incumbent operators are in a precarious situation because the prevalence of alternative network providers is increasing downward pressure on access prices and will continue to shift value-added services to over-the-top players. Incumbent operators will need to accelerate their business transformation to regain their nimbleness and be able to operate profitably at lower access prices. This will require a focus on software-mediated technologies such as network functions virtualization (NFV) and software-defined networking (SDN), as well as leveraging cloud and analytics to streamline their networks and make them more flexible.

Suppliers are in an enviable position because their addressable market is growing as more companies enter the telecom space. Selling network infrastructure to content and Internet companies has become a significant contributor to vendor revenues while traditional customers increase investment to remain competitive. TBR believes incumbent telecom operators will accelerate their shift to software-mediated technologies to stay competitive. This will drive a windfall for network vendors because transformation projects tend to be large in scale and take multiple years to implement.

Technology Business Research, Inc. is a leading independent technology market research and consulting firm specializing in the business and financial analyses of hardware, software, professional services, telecom and enterprise network vendors, and operators. Serving a global clientele, TBR provides timely and actionable market research and business intelligence in a format that is uniquely tailored to clients’ needs. Our analysts are available to further address client-specific issues or information needs on an inquiry or proprietary consulting basis. TBR has been empowering corporate decision makers since 1996. For more information please visit www.tbri.com.

 

Use Private Cloud to Get the Enterprise Applications That You Want

What do companies want today? Well, many want to be able to power their business with the most advanced technologies and innovative enterprise applications and platforms.

They want fast and powerful applications that provide deep analytics, better business processes, and improved insight. And they want this as quickly and painlessly as possible – without adding complexity to their current IT infrastructure or bringing a lot of additional costs in equipment and management.

Looking at that list, the Rolling Stones’ song You Can’t Always Get What You Want comes to mind. I’m sure for some service providers, if a customer were to come to them with this wish list, the initial response would probably be something like, “Is that all? Sure you don’t want dragons and unicorns with that as well?”

But what do some, more forward-looking managed service providers say when a customer comes to them with a wishlist for fast and powerful applications that provide deep analytics, better business processes, and improved insight?

How about when they want all of this as quickly and painlessly as possible without adding complexity to their current IT infrastructure or additional costs in equipment and management? For these leading providers, the answer is, “No problem. How soon would you need that?”

That’s because these leading service providers understand that deploying critical enterprise applications and platforms on premise, using expensive hardware and complex processes, is not the only option out there. And by utilizing powerful and flexible private cloud technologies, these leaders are able to build agile infrastructures that let them give customers a simple and pain-free way to leverage enterprise platforms such as SAP S/4 HANA.

Instead of customized in-house deployments that have long implementation cycles, lots of complex steps and requirements, and, let’s face it, plenty of headaches, providers who use private cloud can quickly spin up enterprise systems for their customers that meet all of their needs, allowing them to focus on business – not enterprise application installations.

In fact, we researched how private cloud empowers organizations and enables them to take better advantage of their IT infrastructure (as seen in the report A Simple Path to Private Cloud), delivering a host of key benefits. For example, 71% of organizations benefit from simpler application management and administration after taking advantage of private cloud technologies.

Businesses today are often stuck in a catch-22. They want the latest and greatest technologies and applications, and they want to be a dynamic and agile organization that can move quickly to be competitive. But they also struggle with high technology costs and stressed IT staffs that don’t need additional projects or complexities thrown at them.

By offering enterprise applications based on private cloud technology, smart providers can give their customers the capabilities and innovations that they need without the added complexities and costs. Or, as the Stones would say, “If you try sometimes, you just might find, you get what you need.”

See more at: http://www.techproessentials.com/use-private-cloud-to-get-the-enterprise-applications-that-you-want/

The Value of Real User Measurements in the Cloud (With Examples!)


Guest article by Pete Mastin, Product Evangelist, Cedexis

First, we need to understand some of the emerging best practices for cloud deployments. These come from personal experience with hundreds of clients improving site performance by multi-homing their cloud infrastructure. What are the five irrefutable truths of public cloud infrastructure adoption? While these principles continue to evolve, they have solidified to the point where they are no longer negotiable:

1. Public infrastructure fails and underperforms at times, just like private infrastructure does. The only way to provide 100% availability for the enterprise is to multi-home in an active-active configuration. That is the responsibility of the enterprise.
2. Cloud-based technology must be deployed across multiple geo-locations (regions) to maximize uptime in case of “acts of God” such as hurricanes or earthquakes, or “acts of man” such as a backhoe cutting network lines. So, for example, if your user base is primarily in North America, you should have a cloud instance on both coasts at a minimum. More is typically better.
3. A multiple-vendor approach to public infrastructure dramatically reduces the chances of global outages. Unsurprisingly, vendor-specific outages are more common than ‘acts of God.’ Vendor diversity also helps when negotiating contracts with cloud providers.
4. Use of content delivery networks (CDNs) can dramatically improve the performance of web and mobile apps. Principle 1 above applies to this piece of infrastructure as well.
5. Monitoring every element of a multi-vendor, multi-homed, active-active Web app is critical to maintaining its availability and performance goals.
These five best practices, if followed, will unarguably give Web and mobile applications far better uptime. This has been demonstrated repeatedly over a number of years: when Hurricane Sandy hit the Northeastern United States a few years back, many sites that were singly homed in one data center or cloud simply stopped, but many did not, and those were the multi-homed ones.
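A minimal sketch of principles 1, 3 and 5 in combination (the endpoint URLs are hypothetical, and a production deployment would use a global traffic-management service rather than a script): probe every origin continuously and send traffic only to the fastest origin that is currently healthy.

```python
import time
import urllib.request

# Hypothetical health-check endpoints for the same app, multi-homed
# across two vendors and two regions.
ORIGINS = [
    "https://app-us-east.example-vendor-a.com/health",
    "https://app-us-west.example-vendor-b.com/health",
]

def probe(url: str, timeout: float = 2.0):
    """Return response time in seconds, or None if the origin is down or slow."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return None
        return time.monotonic() - start
    except OSError:
        return None

def pick_origin():
    """Choose the fastest healthy origin; None signals a total outage."""
    timings = {url: probe(url) for url in ORIGINS}
    healthy = {url: t for url, t in timings.items() if t is not None}
    return min(healthy, key=healthy.get) if healthy else None

if __name__ == "__main__":
    print(pick_origin())
```

Failover is then simply a matter of pointing DNS or a load balancer at whichever origin the check selects.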

Qualitative analysis of enterprise cloud deployment

It behooves us to review types of cloud deployments to determine the patterns seen in the marketplace. There are as many varieties of cloud deployments as there are cloud architects. No two are exactly alike.

However, there are some commonalities or broad categorizations that can be made about different types of deployments. Generally, the use of public clouds breaks into four groups:

1. Single Vendor / Single Instance
2. Single Vendor / Multi-Instance
3. Multi-Vendor / Multi-Instance
4. Hybrid Cloud
First, let’s explain what we mean by these four categories, so that it is clear what we are comparing and contrasting:

1. Single Vendor / Single Instance – Where most enterprises start in their move to the cloud. This refers to selecting a single cloud instance (e.g., AWS East Coast, SoftLayer Houston, or Rackspace Chicago) and deploying your services there alone. These services exist in a single data center on virtualized servers.
2. Single Vendor / Multi-Instance – A typical next step for many enterprises, which start to realize the performance penalty that users in other parts of the world pay and take steps to rectify it by pushing a portion of their services closer to their user base. Or they suffer an outage (micro or major) and mitigate the risk by deploying a second or third cloud instance with the same provider they already use. A simple example here is the enterprise that is deployed on AWS East Coast and then decides (based on performance complaints) to deploy similar or identical services at AWS Oregon, Frankfurt, and Tokyo.
3. Multi-Vendor / Multi-Instance – Enterprises with a more mature set of practices around vendor management and cost control are often found using this category of service. This is basically the same model as the previous one, but rather than deploy similar or identical services in other geographies, you deploy them using alternative vendors. An example is the enterprise that starts on AWS East Coast, but then, for its west coast services, chooses SoftLayer San Jose and, for its APAC cluster, perhaps Azure Asia East. The main advantage of this model (besides vendor management and cost control) is avoiding vendor-related outages. (A sketch of how real user measurements can steer traffic across such a deployment follows this list.)
4. Hybrid Cloud – Commonly seen in companies that have invested heavily in private data centers and want to get the most out of that sunk capital expenditure. In this case, portions of the traffic continue to flow to private data centers, and the cloud takes the rest. Some services also demand bare-metal servers, and to accommodate this while also taking advantage of the scalable capacity of the cloud, companies adopt this model.
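Tying these categories back to the article’s theme, here is a hedged sketch (the sample numbers, region names and provider names are made up, and a real RUM platform’s pipeline is far more involved) of how real user measurements might be aggregated to steer each user region toward the best-performing cloud instance in a Multi-Vendor / Multi-Instance deployment:

```python
from collections import defaultdict
from statistics import median

# Hypothetical real user measurements: (user_region, provider, latency_ms),
# as collected by a small beacon running in end users' browsers.
SAMPLES = [
    ("us-west", "aws-us-east", 118), ("us-west", "softlayer-san-jose", 42),
    ("us-west", "aws-us-east", 131), ("us-west", "softlayer-san-jose", 47),
    ("apac", "azure-asia-east", 61), ("apac", "aws-us-east", 196),
]

def best_provider_per_region(samples):
    """Aggregate RUM latencies and pick the lowest-median provider per region."""
    by_key = defaultdict(list)
    for region, provider, latency in samples:
        by_key[(region, provider)].append(latency)
    best = {}
    for (region, provider), latencies in by_key.items():
        m = median(latencies)
        if region not in best or m < best[region][1]:
            best[region] = (provider, m)
    return {region: provider for region, (provider, _) in best.items()}

print(best_provider_per_region(SAMPLES))
# {'us-west': 'softlayer-san-jose', 'apac': 'azure-asia-east'}
```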

For more information, visit http://www.computer.org/web/aberdeen-group/content?g=6012563&type=article&urlTitle=the-value-of-real-user-measurements-in-the-cloud-with-examples-