Friday, February 24, 2012
Smart grids are “happening” technology, and they are coming right into our homes. So what is the Smart Grid all about?
About two decades ago the electricity grid had three main elements: energy generation, energy transmission and energy distribution to the consumer. According to The Smart Grid, "the grid" refers to the electric grid, a network of transmission lines, substations, transformers and more that delivers electricity from the power plant to your home or business. It is what you plug into when you flip on your light switch or power up your computer. The issue with the traditional energy grid is that there are enormous losses in transmission, and the grid is strained during peak usage. Moreover, any outage would have a domino effect and could effectively cause a blackout across large areas. Remember the 2003 blackout in the US, the largest in US history (Biggest blackout in US history).
The Smart Grid tries to address these problems of the traditional energy grid. It has millions of sensors along the grid which measure and monitor it continuously and are equipped with two-way communication. The “smart” grid will be equipped with controls, sensors, automatic meters and computers that communicate with and control the grid. The smart meters and sensors constantly transmit data back to a central command center. The Smart Grid can quickly identify outages and isolate that part of the grid, preventing a cascading effect on other parts. It can identify potential network problems and re-route energy through other parts of the network. Moreover, the smart meters installed in every home can intelligently shift energy usage to non-peak hours when the cost of energy is low.
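The outage-isolation idea can be sketched in a few lines of Python. The segment names, the meter readings and the zero-voltage outage test below are illustrative assumptions, not a real utility protocol:

```python
# Sketch: a command center isolating a faulty grid segment.
# Segment names, voltage readings and the zero-voltage outage test
# are hypothetical, for illustration only.

def find_outages(meter_readings):
    """Return segments whose smart meters report no voltage."""
    return {seg for seg, volts in meter_readings.items() if volts == 0}

def isolate_and_reroute(topology, faulty):
    """Drop faulty segments and keep routing power over healthy links."""
    return {seg: [n for n in neighbors if n not in faulty]
            for seg, neighbors in topology.items() if seg not in faulty}

# A tiny grid: each segment lists the segments it can feed.
topology = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
readings = {"A": 230, "B": 0, "C": 229}   # volts reported by smart meters

faulty = find_outages(readings)           # {'B'}
healthy = isolate_and_reroute(topology, faulty)
# healthy == {'A': ['C'], 'C': ['A']} -- the outage does not cascade
```

The point of the sketch is the two-step pattern the post describes: detect from continuous meter data, then cut the faulty section out of the routing table so the rest of the grid keeps flowing.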
Some of the key advantages of smart grids are:
- Better resiliency to failures and quicker recovery times
- Automatic re-routing of energy transmission in case of network failures
- Faster response to outages with the ability to isolate the faults
- Better integration with renewable energy like wind, solar energy
- Reduced losses and more efficiency built into the grid
Some of the key aspects of the Smart Grid are:
Smart Home: As mentioned above, the Smart Grid will extend into your home, making it a “Smart Home”. Smart Homes will be equipped with smart meters instead of traditional meters. These meters will have two-way communication with your energy utility. All the appliances in your home will be networked into an “Energy Management System” (EMS). Through the EMS you will be able to monitor your energy usage and save money by running your appliances during off-peak hours. Smart appliances will be able to communicate with the utility and automatically turn off during peak periods and turn on when the cost of energy is low. This is known as “demand response”: consumers change their consumption patterns based on lower cost or other incentives offered by the utility companies. The energy price, like a stock ticker, fluctuates through the day, peaking during the periods of heaviest demand.
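Demand response is easy to picture as a tiny scheduling problem: given a time-of-use tariff, a smart appliance picks the cheapest window to run. The tariff numbers below are made up; real utilities publish their own time-of-use rates:

```python
# Sketch: a "smart appliance" shifting its run to the cheapest hours.
# The prices are illustrative: $0.30/kWh during the 17:00-21:00 peak,
# $0.10/kWh otherwise.

hourly_price = {h: (0.30 if 17 <= h <= 21 else 0.10) for h in range(24)}

def schedule_run(prices, duration_hours=2):
    """Pick the start hour minimizing total cost for a contiguous run."""
    return min(range(24 - duration_hours + 1),
               key=lambda h: sum(prices[h + i] for i in range(duration_hours)))

start = schedule_run(hourly_price)
# start == 0: the first off-peak window, so a dishwasher avoids the evening peak
```

A real EMS would add constraints (the laundry must finish by morning, the PEV must be charged by 7 a.m.), but the core mechanism is this price-driven shift.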
Home Power Generation: The homes of the future will have solar panels or wind turbines that generate power and sell the excess back to the Smart Grid.
Distribution Intelligence: The smart grid’s transformers, switches and substations will be fitted with sensors that measure and monitor the energy flow through the grid. These sensors will be able to quickly detect faults and isolate the faulty section from the rest of the network. The Smart Grid will have software that gives it the capacity to self-heal after outages and provides better resiliency. Security systems will also play a key role in the Smart Grid.
Grid Operation Centers: The energy grid consists of transformers, power lines and transmission towers. It is absolutely essential that only as much power as needed is generated. Otherwise, like water sloshing through pipes, excess generated power can cause oscillations and make the grid unstable, eventually leading to a blackout. The Smart Grid will have sensors all along the way which measure and monitor energy usage and can respond quickly to any instability. It will have the power to self-heal.
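Why must generation track load so closely? In an AC grid, any imbalance shows up as frequency drift away from the setpoint. The toy model below captures the idea; the sensitivity constant and tolerance are assumed values, not real grid parameters:

```python
# Toy model of grid stability: a generation/load mismatch drifts the
# frequency away from its 50 Hz setpoint. SENSITIVITY is an assumed
# constant for illustration, not a real grid figure.

NOMINAL_HZ = 50.0
SENSITIVITY = 0.01   # assumed: Hz of drift per MW of imbalance

def grid_frequency(generation_mw, load_mw):
    return NOMINAL_HZ + SENSITIVITY * (generation_mw - load_mw)

def is_stable(freq_hz, tolerance=0.5):
    """Protection relays trip if the frequency strays too far."""
    return abs(freq_hz - NOMINAL_HZ) <= tolerance

balanced = grid_frequency(1000, 1000)   # 50.0 Hz -- stable
surplus  = grid_frequency(1100, 1000)   # 51.0 Hz -- outside tolerance
```

The operation center’s job, in these terms, is to keep the mismatch term near zero fast enough that the frequency never leaves the tolerance band.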
Plug-in Electric Vehicles (PEVs): Plug-in electric vehicles like the Chevy Volt, the Ford Focus Electric, the Nissan Leaf and Tesla’s electric cars run entirely on electricity and will eventually help reduce carbon emissions, leading to a greener future. PEVs will plug into the grid and charge during off-peak periods. Their advantage is that the Smart Grid can redirect the energy stored in PEVs to the parts of the network that need it most. PEVs can serve as a distributed source of stored energy, supplying isolated regions during blackouts.
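A dispatch from parked PEVs might be sketched like this. The battery sizes and the 20% driving-reserve floor are assumptions for illustration, not figures from any real vehicle-to-grid scheme:

```python
# Sketch: drawing stored energy from plugged-in PEVs during an outage,
# leaving each vehicle a driving reserve. All numbers are hypothetical.

def dispatch_from_pevs(pevs, needed_kwh, reserve_fraction=0.2):
    """Draw energy from PEVs in order, keeping a reserve in each battery."""
    supplied = {}
    for name, (charge_kwh, capacity_kwh) in pevs.items():
        if needed_kwh <= 0:
            break
        available = max(0.0, charge_kwh - reserve_fraction * capacity_kwh)
        take = min(available, needed_kwh)
        if take > 0:
            supplied[name] = take
            needed_kwh -= take
    return supplied, needed_kwh   # per-vehicle draw, unmet remainder

# (current charge, battery capacity) in kWh -- illustrative values
pevs = {"volt": (12.0, 16.0), "leaf": (20.0, 24.0)}
supplied, unmet = dispatch_from_pevs(pevs, needed_kwh=15.0)
# volt gives 12 - 3.2 = 8.8 kWh, leaf covers the remaining 6.2 kWh
```

The interesting property is the one the post points at: the stored energy is distributed, so a neighborhood of parked cars behaves like a small, dispatchable power plant.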
Smart Grids truly herald a smart future.
Friday, February 17, 2012
Published in Telecom Asia- Mar 13,2012 - Re-imagining the web portal
Web portals had their heyday in the 1990s. Remember Lycos, AltaVista, Yahoo and Excite – portals which had neatly partitioned the web into compartments, e.g. Autos, Beauty, Health, Games and so on. Enter Google, with a webpage containing a single search bar. At a single stroke Google pushed all the portals into virtual oblivion.
It became obvious to the user that all information was just a “search away”. There was no longer the need for neat categorization of all the information on the web. There was no need to work your way through links only to find your information at the “bottom of the heap”. The user was content to search their way to needed information.
That was then, in the late 1990s. But much has changed since. Countless pages have been uploaded to the millions of servers that make up the internet. There is so much more information on the worldwide web: news articles, wikis, blogs, tweets, webinars, podcasts, photos, YouTube content, social networks and more.
Here are some fun facts about the internet: it contains 8.11 billion pages (Worldwidewebsize) and has more than 1.97 billion users and 266 million websites (State of the Internet). We can expect it to keep growing as the rate of information generation and our thirst for information keep increasing.
In this world of exploding information the “humble search” will no longer be sufficient. As users we would like to browse the web in a much more efficient, effective and personalized way. Nor will site aggregators like StumbleUpon, Digg, Reddit and the like suffice. We need a smarter way to navigate this information deluge.
It is here, I think, that there is a great opportunity for re-imagining the web portal. It would be great if the user were shown a view of the web personalized to his own tastes and interests. What I am proposing is a web portal that metamorphoses dynamically based on the user’s click stream and browsing preferences, with his interests and inclinations at the focal center. Besides the user’s own interests, the portal would also analyze the click streams of the user’s close friends, colleagues and associates. Finally, it would include inputs from what the world at large is interested in and following. The portal would analyze these preferences and then construct a page based on its analysis of what the user would like to see.
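The blending of the three signals can be sketched as a weighted scoring function. The weights, topic affinities and content items below are all invented to show the shape of the idea:

```python
# Sketch: ranking content by blending the user's own click history with
# friends' interests and global trends. Weights and data are assumptions.

def score(item_topics, user, friends, world, w=(0.6, 0.3, 0.1)):
    """Weighted affinity of one content item for one user."""
    wu, wf, ww = w
    return sum(wu * user.get(t, 0) + wf * friends.get(t, 0) + ww * world.get(t, 0)
               for t in item_topics)

user    = {"cricket": 0.9, "telecom": 0.6}   # derived from the user's click stream
friends = {"movies": 0.8, "telecom": 0.4}    # aggregated from friends' streams
world   = {"elections": 1.0}                 # what the world at large follows

items = {"5G rollout": ["telecom"],
         "Oscar buzz": ["movies"],
         "Test match report": ["cricket"]}

portal = sorted(items, key=lambda i: score(items[i], user, friends, world),
                reverse=True)
# ['Test match report', '5G rollout', 'Oscar buzz'] for this user
```

Tuning the weight vector is exactly the knob the post describes: raise the friends’ weight and the page drifts social, raise the global weight and it drifts toward the zeitgeist.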
This can be represented in the diagram below
We have all heard of Google’s Zeitgeist, a massive database of the world’s inclinations and tendencies. Similar databases are probably also held by Yahoo, Microsoft, Facebook, Twitter and others.
The web portal in its new incarnation would present content tailored specifically to each user’s browsing patterns. A single page would include all the news, status updates, latest YouTube videos, tweets and so on that he would like to see.
In fact this whole functionality could be integrated into the web browser. In its new avatar the web portal would have content that is dynamic, current and personalized to each individual user. Besides, every user would also be able to view what his friends, colleagues and the world at large are browsing.
A few years down the line we may see “the return of the dynamic, re-invented web portal”.
Thursday, February 16, 2012
We are headed towards a more connected, more instrumented and more data-driven world. This fact is underscored once again in Cisco’s latest Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2011–2016. The statistics from this report are truly mind-boggling.
By 2016, 130 exabytes (130 × 2^60 bytes) of mobile data will traverse the internet annually. The number of mobile devices will exceed the human population this year, 2012. By 2016 the number of connected devices will touch almost 10 billion.
The devices connected to the net range from mobiles, laptops, tablets and sensors to the millions of devices on the “internet of things”. All these devices will constantly spew data onto the internet, and business and strategic decisions will be made by finding patterns, trends and outliers among mountains of data.
Predictive analytics will be a key discipline of our future, and its experts will be much sought after. Predictive analytics uses statistical methods to mine information and patterns from structured data, unstructured data and data streams. The data can be anything from click streams and browsing patterns to tweets and sensor readings, and it can be static or dynamic. Predictive analytics will have to identify trends in streams such as mobile call records and retail purchasing patterns.
Predictive analytics will be applied across many domains: banking, insurance, retail, telecom, energy and more. In fact predictive analytics will be the new language of the future, akin to what C was a couple of decades ago. C was used in all sorts of applications spanning the whole gamut from finance to telecom.
In this context it is worthwhile to mention the R language, which is used for statistical computing and graphics. Wikipedia notes that “R provides a wide variety of statistical and graphical techniques, including linear and nonlinear modeling, classical statistical tests, time-series analysis, classification, clustering, and others”.
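The kernel of predictive analytics, fit a model to past data and extrapolate, is a one-liner in R (`lm`). The same idea, sketched here in plain Python with only the standard library and made-up sales figures:

```python
# Sketch: the core predictive-analytics loop -- fit a trend to history,
# then extrapolate. The monthly sales figures are invented and perfectly
# linear so the forecast is easy to check by hand.
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    mx, my = mean(xs), mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

months = [1, 2, 3, 4, 5]
sales  = [10.0, 12.0, 14.0, 16.0, 18.0]   # hypothetical data

a, b = fit_line(months, sales)             # a = 8.0, b = 2.0
forecast = a + b * 6                       # predict month 6 -> 20.0
```

Real workloads replace the toy regression with time-series models, classifiers and clustering over far messier data, but the fit-then-predict shape is the same.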
Predictive analytics is already being used in traffic management in identifying and preventing traffic gridlocks. Applications have also been identified for energy grids, for water management, besides determining user sentiment by mining data from social networks etc.
One very ambitious undertaking is the Data-Scope Project, which holds that the universe is made of information and that we need a “new eye” to look at this data. The Data-Scope is described as “a new scientific instrument, capable of ‘observing’ immense volumes of data from various scientific domains such as astronomy, fluid mechanics, and bioinformatics. The system will have over 6PB of storage, about 500GBytes per sec aggregate sequential IO, about 20M IOPS, and about 130TFlops. The Data-Scope is not a traditional multi-user computing cluster, but a new kind of instrument, that enables people to do science with datasets ranging between 100TB and 1000TB”. The project is based on the premise that new discoveries will come from the analysis of large amounts of data. Analytics is all about analyzing large datasets, and predictive analytics takes it one step further by making intelligent predictions based on available data.
Predictive analytics does open up a whole new universe of possibilities and the applications are endless. Predictive analytics will be the key tool that will be used in our data intensive future.
I started to wonder whether predictive analytics could be used for some of the problems confronting the world today. Here are a few where it could be employed:
- Can predictive analytics be used to analyze outbreaks of malaria, cholera or AIDS and help prevent outbreaks in other places?
- Can analytics analyze economic trends and predict an upward or downward trend ahead of time?
Wednesday, February 8, 2012
In today’s globalized environment, organizations are spread geographically across the globe. Such globalization brings multiple advantages, from quicker penetration into foreign markets to the cost advantage of local workforces. It also results in the organization having data centers spread across different geographical areas. Besides, mergers and acquisitions of businesses spread across the globe add to hardware and server sprawl.
Applications on these dispersed servers tend to be siloed, running on legacy hardware with different OSes and disparate software.
The cost of maintaining different data centers can be a prickly problem. There are several costs in managing a data center, chief among them operational costs, real estate costs, and power and cooling costs. Hardware and server sprawl is a real problem, and the enterprise must look for ways to solve it.
There are two techniques to manage hardware and server sprawl.
The first method is to use virtualization so that hardware and server sprawl can be reduced. Virtualization abstracts the raw hardware through special software called the hypervisor. Any guest OS – Windows, Linux or Solaris – can execute on top of the hypervisor. The key benefit virtualization brings to the enterprise is that it abstracts the hardware, storage and network and creates a shared pool of compute, storage and network resources for different applications to utilize. Hence server sprawl can be mitigated to some extent through virtualization software such as VMware, Xen or Hyper-V.
The second method is rationalization and server consolidation. This requires taking a hard look at the hardware infrastructure, the applications and their computing needs, and coming up with a solution in which more powerful mainframes or servers replace the existing, less powerful infrastructure. Consolidation has multiple benefits. Many distributed data centers can be replaced with a single consolidated data center built on today’s powerful multi-core, multi-processor servers. This results in greatly reduced operational costs, easier management, savings from reduced power and cooling requirements, and real estate savings. Consolidation truly appears to be the “silver bullet” for server sprawl.
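The consolidation argument is ultimately arithmetic, and a back-of-envelope version can be sketched. The utilization and capacity figures below are assumptions chosen for illustration; real sizing exercises measure actual workloads:

```python
# Back-of-envelope consolidation: how many modern servers are needed to
# absorb a fleet of underutilized legacy boxes. All figures are assumed.
import math

def servers_needed(old_servers, old_util, new_capacity, headroom=0.8):
    """New servers required so aggregate demand fits within a headroom cap."""
    demand = old_servers * old_util        # total demand, in legacy-server units
    usable = new_capacity * headroom       # keep 20% spare on each new box
    return math.ceil(demand / usable)

# 200 legacy servers averaging 15% utilization; each new server is
# assumed to be 10x as capable as a legacy one.
needed = servers_needed(200, 0.15, 10.0)
# 30 units of demand / 8 usable units per new box -> 4 new servers
```

Numbers like these, 200 boxes collapsing into 4, are why consolidation looks like a silver bullet before the WAN enters the picture.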
However this brings us to what I call “the data center paradox”. While a consolidated data center does away with the operational expenses of multiple data centers, reduces power and cooling costs and saves on real estate, it introduces WAN latencies. When geographically dispersed data centers are replaced with a single consolidated data center in one location, access from distant geographical areas can suffer poor response times. Besides, there is an inherent cost to data access over the WAN.
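The latency penalty can be estimated from first principles: light in fiber covers roughly 200 km per millisecond, and the per-request overhead and distances below are illustrative assumptions:

```python
# Back-of-envelope WAN latency for a consolidated data center.
# The routing overhead allowance and distances are assumptions.

C_FIBER_KM_PER_MS = 200.0   # light in fiber covers roughly 200 km per ms

def round_trip_ms(distance_km, overhead_ms=10.0):
    """Propagation RTT plus a rough allowance for routing and queuing."""
    return 2 * distance_km / C_FIBER_KM_PER_MS + overhead_ms

local   = round_trip_ms(50)       # nearby data center: 10.5 ms
distant = round_trip_ms(12000)    # e.g. US <-> India:  130.0 ms

# A chatty application making 20 sequential round trips pays ~2.6 s
# against the distant site versus ~0.2 s locally.
chatty_distant_s = 20 * distant / 1000
```

This is the paradox in numbers: no amount of bandwidth removes the propagation term, because it is bounded by the speed of light in fiber.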
WAN latencies are difficult to eliminate, though technologies such as WAN optimization can lessen the bandwidth problem to some extent.
In fact, e-commerce sites and many web applications intentionally spread their applications across geographical regions to provide better response times.
So while on the one hand consolidation yields cost savings, the efficiencies of managing a single data center, reduced power and cooling costs and real estate savings, on the other it brings WAN latencies and associated bandwidth costs.
Unless there is a breakthrough innovation in WAN technologies this will be a paradox that architects and CIOs will have to contend with.