
Infrastructure Re-imagined Panel


October 1, 2014, ARM TechCon, Santa Clara, CA—A panel moderated by Ian Ferguson of ARM looked at the changing functions and needs of current infrastructure. For example, five years ago Google started offering high-speed data on fiber in select areas. Now, they are delivering video, connectivity, and network functionality in those areas. The panelists were Benjamin Wesson from Oracle, Masood Ui Amin from Avicent, Pere Monclus from PLUMgrid, and Robert Hormuth from Dell.

Wesson noted that fast data requires management of complexity, and the ability to identify the key bits of data to use for insights and action.
Monclus added that data capture and management is critical.
Amin considered that integration and security matter.

How to establish context and develop insights that provide value within a reasonable time to act?
Amin suggested that services and performance are becoming more deterministic, but cloud performance is different. For example, telecommunications networks map poorly to the cloud, because the telcos are geared for high availability, scalability, and hardware abstraction. The cloud works with packets, where the focus is on improving quality of service and traffic management. The cloud needs many other changes to be useful for the telcos.
Monclus offered that infrastructure is moving to more open architectures, driving changes in consumption. The value is moving from hardware to software. The users of open networks are concerned about getting locked into particular hardware, so standards and communities matter. The boxes are becoming full platforms, creating new use cases and changing value propositions.

The new networks are now systems of systems. These changing environments are working to get higher levels of abstraction into the systems in areas like software-defined compute, storage, and networks. Developers are decoupling the physical implementations from the software abstraction layers. Functions are moving from hardware into software for greater flexibility and more creative solutions.

Hormuth invoked the many laws driving the industry: Moore for increases in functionality over time, Amdahl for serialization limits on throughput, Bell for an economic view of the decreasing cost of compute per decade, and Brooks for the observation that management overhead prevents unlimited scaling. Modernizing IT requires that compute move to a scale-out plus micro-server configuration. The storage area network (SAN) is moving toward software-defined storage and even virtualized SAN. Discrete networks are becoming converged and aggregated fabrics as general-purpose systems become workload optimized. New server configurations are entering the lexicon, and ARM servers are a part of that change.
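To illustrate the serialization limit Hormuth attributed to Amdahl, here is a minimal sketch in Python (not from the panel; the parallel fraction and worker count below are hypothetical):

def amdahl_speedup(parallel_fraction, workers):
    # Amdahl's law: overall speedup is capped by the portion of work that must run serially.
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

# Even with 95% of a workload parallelized, 1,000 workers give less than a 20x speedup,
# because the 5% that must run serially dominates.
print(amdahl_speedup(0.95, 1000))   # ~19.6

The point is that scaling out only pays off while the serial fraction of the workload stays small.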

White box servers, etc. and the place for OEMs?
Hormuth paraphrased Mark Twain: reports of the death of the OEMs are greatly exaggerated. The OEMs are moving up the value chain to deliver quality and reliability in their hardware, and are adding new functions and features to it.
Monclus commented that software-defined everything and the movement from hardware to software are indications of changes in value. In the past, the emphasis was on the boxes. Now, the emergence of software-defined networks is moving that hardware into software. The move is accelerating due to the increasing length of the hardware cycles. The next systems will use multiple approaches, a departure from the incremental changes in pipes and processors we see now.

The advent of infrastructure as a service provides on-demand capabilities, leading to software and security as services. These are not really new, but reflect changes in business models. The technology is not all from a single company as these software platforms emerge.
Amin stated that many opportunities are opening up as physical implementations move to the cloud. These moves will highlight the many gaps and problems in software-defined systems. The platforms need orchestration layers and management. The cloud has a strong business case, so many companies are forming to address the shortcomings.
Wesson added that the advent of fast data and of Internet of Things product management will generate new IoT cloud services. The key will be the hardware at the gateway or network edge, which will drive more compute to the edge rather than to more servers.

Distributed intelligence to move the cloud to the data?
Wesson responded that there is a tradeoff between the expediency of acting quickly and the amount of data needed to make a considered decision. Data doesn't have to be correlated and can be treated independently. Not all of the data are necessary, just enough to take action.
Monclus noted that everything depends upon the app; the response depends on the use case. In some cases, a cloud provider is contributing an island of compute, versus a service provider delivering services and latencies. The ideal system configuration differs for storing at the edge, computing at the data, or other use cases.
Hormuth declared that the nature of the data matters. Fast big data is memory and compute intensive, while slow big data is not. The question of compute-intensive versus data-intensive workloads will determine the directions for systems designs. An app could be FLOP intensive or could handle non-time-sensitive big data. It is possible to use simpler operators on all of the data and accelerate those functions in an FPGA.
Amin acknowledged the wide distribution of big data. In some cases, latency matters, while other cases have other functions as drivers. All of these functions can be virtualized.

Predictions and surprises?
Wesson offered that the importance of data is increasing. Software is moving from an investment to free or freemium, and the analysis is becoming more valuable than the software or the data.
Amin suggested that everything physical will become more virtual. The next step is to separate the hardware and the software development kits to increase uniformity. The industry needs more standards.
Monclus considered the question of the next computer generation. The possibility of subsidized compute à la Google may lead to better cloud functionality, but the limiting factor is the cost of the investments. The other costs of the changes may be too high to overcome. Moving more into the cloud takes the industry back to mainframe configurations, but with compute approaching free. The question is who will control this and who will develop the next generation of infrastructure.
Hormuth agreed that software-defined everything will take many forms, including workload optimization, right sizing, and better overall optimization. The big idea is that compute and storage will be essentially free. The next big thing will be economics driven, and the new functions will make monetizing more important and easier. The winner will be the one who finds a way to benefit from the reduced costs of compute and storage.
 

 
