Today, AI and its offshoots, machine learning and deep learning, are dominating the Silicon Valley agenda in both wallet share and mindshare. Gartner listed AI at the top of its 10 strategic technologies to watch in 2018. According to PwC, AI is already having a positive impact, with more than half of the business executives it surveyed reporting that AI solutions have increased productivity in their companies.
The AI/ML moment is, of course, driven by multiple business and productivity factors. Beyond these, however, AI/ML is the foundation for a new era of IT: the cognitive era. It succeeds the era of programmability that defined mainframes, client/server, and mobile-cloud architectures. The era of programmability was characterized by human programmers laboriously crafting software that instructed machines to perform specific tasks. The cognitive era is defined by machines feeding on data, self-discovering and fine-tuning predictive models – in effect, programming themselves. These models – whether running in the core or at the edge – are able to intuit a different level of insight from data.
The implications for application development, management, and operations are mind-boggling. So are the implications for the underlying data center and multicloud infrastructure that will support these new classes of apps, data, and devices. IT organizations are struggling to keep pace with these trends and their ramifications. Peter Levine, a general partner at Andreessen Horowitz, described this as both good news and bad news for IT in his edge computing presentation. The good news is that trends such as AI, ML, intelligent apps, and data analytics make everything in business the domain of IT. That’s also the bad news.
Where Apps and Data Go, the Infrastructure Must Follow
The fact is that IT is in uncharted territory, trying to curate and support fragmented, unfamiliar, and rapidly evolving AI/ML software stacks. These stacks power new apps that support use cases both imagined and as yet unimagined. More apps, of course, mean more data. Traditional approaches to data collection, transport, storage, and management were not designed to handle the data volume, velocity, variability, and locality of AI at production scale.
For the data center to follow the data, IT must extend accelerated computing at the right scale to the right locations across an increasingly distributed landscape. In other words, the data center (and the multicloud infrastructure) is where the data is – and there is an increasing need for data to be processed across a spectrum of locations, from the core of the data center to the edge.
The promise is a world of data center and edge environments where smart machines operate freely and autonomously, generating massive amounts of data that are collected, analyzed, and distilled. Data centers and connected multicloud environments consume this data, feeding deep learning and machine learning models. The resultant models in turn drive better decisions and spawn new apps and services – which are then deployed back out to the enterprise edge, completing a virtuous cycle.
Cisco UCS: Powering AI Workloads at Scale
To address this emerging opportunity and its associated challenges, Cisco has partnered with NVIDIA, the leader in AI computing, to expand the UCS portfolio with a new accelerated computing platform. Cisco’s strategy with this UCS platform is to make it easy for our customers to consume AI/ML and to shorten time to insight. This innovation will contribute to the democratization of AI by enabling enterprises to more easily run powerful machine learning and deep learning analytics on infrastructure that fits seamlessly into their on-prem data centers. With the Cisco UCS C480 ML M5 for deep learning, we can now offer a complete array of computing options right-sized to each element of the AI/ML lifecycle: from data collection and analysis near the edge, to data preparation and training in the data center core, to real-time inference at the heart of AI. The goal is to enable IT organizations to capitalize on the adaptability, programmability, and manageability of Cisco UCS to power AI at any scale and location. Further, with Cisco Intersight – our SaaS infrastructure management offering – only Cisco offers the simplicity and reach of cloud-based infrastructure management, for consistent and unified operations across the entire AI landscape.
For the past several years, Cisco has been working with big data software partners such as Cloudera and Hortonworks to help customers integrate data sources into UCS-based data lakes and gain insights from their data through contemporary analytics techniques. We continue to work with these partners so customers can easily incorporate AI/ML methods into these analytics environments and connect their data lakes to deep learning infrastructure. Further, we are demystifying this complex ecosystem and reducing risk with Cisco validated AI/ML solutions that combine a broad set of technologies and applications to help ensure faster, more reliable, and more predictable deployments. In parallel, we are easing the move toward containerized apps and multicloud computing models. To that end, Cisco has contributed code to Kubeflow, the Google open source project that integrates TensorFlow with Kubernetes. Cisco is also working with Anaconda to ensure that data scientists have a ready programming environment, a set of curated libraries and models, and a training and deployment environment. However, software is only half the story – the hitherto missing ingredient, which we are bringing to market, is validated and optimized stacks on our next-gen C480 ML infrastructure platform.
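To give a flavor of what the Kubeflow integration looks like in practice, here is a minimal sketch of a TFJob manifest that asks Kubernetes to run a small distributed TensorFlow training job. The job name and container image below are hypothetical placeholders, and the exact `apiVersion` varies by Kubeflow release:

```yaml
# Sketch of a Kubeflow TFJob custom resource for distributed
# TensorFlow training (names and image are hypothetical).
apiVersion: kubeflow.org/v1        # varies by Kubeflow release
kind: TFJob
metadata:
  name: example-train              # hypothetical job name
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2                  # two worker pods
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow
              image: example.com/train:latest   # hypothetical training image
              resources:
                limits:
                  nvidia.com/gpu: 1             # one GPU per worker
```

Applied with `kubectl apply -f`, Kubeflow’s operator schedules the worker pods and injects the cluster configuration TensorFlow needs to coordinate distributed training – illustrating how containerized AI workloads can be expressed declaratively on Kubernetes-managed infrastructure.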
The addition of the C480 ML to our portfolio also creates new opportunities with our strategic storage partners like Pure and NetApp to attach converged infrastructure solutions to machine learning infrastructure.
For digital businesses to reach their full potential, a tight synergy must exist among apps, data, and infrastructure. This is where Cisco, in collaboration with its ecosystem partners, shines – whether that be in traditional domains or, as we are talking about today, in the brave new world of AI/ML. Getting the entire stack right, as Cisco is envisioning, is what makes data centers, multicloud, and edge infrastructures go from digitization promise to digitization reality.
The ecosystem is on fire around AI and we’re thrilled about our announcement this week and the work we’re taking on with our ecosystem partners. The Cisco team is incredibly excited to accelerate the AI journey with our customers and solution partners.
Visit us on the web to learn more about Cisco AI/ML computing solutions.