I have seen that most OTT providers, like Akamai and Google, are currently pushing towards higher bandwidths and dumb pipes in the Internet.
I think this is too narrow a view from the OTT companies; most of them think that fat dumb network pipes are the only thing they need from the network. It may be a wise decision to rack and stack commodity PCs as an alternative to supercomputers, but building overlay application/service routing on top of the network is not a scalable, long-term solution. It is like solving one problem and creating another. Applications need to talk to the network in order to make the best and most efficient use of the technology.

At the moment most of these companies are looking only at content, and especially video, but the network (the Internet) is more than just video. We may be seeing growth driven by video right now, but the network architecture has to address even unknown future killer applications. I am not suggesting replacing the applications altogether so that everything lives in the network (as some people think), but there is a part of the intelligence that belongs in the network, and it can help applications concentrate on their core business instead of solving network problems.

We need a better interface between application and network in order to solve some of these issues. For example, the network side has been asking applications for years to use SSM for content multicast, but you know the result: it is still missing from most of the major applications (MSTV etc.). The other way around, applications need routing information in order to make better distribution decisions, but most Service Providers are not willing to open up their network topology.
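To illustrate how small the SSM gap really is on the application side: joining an SSM (S,G) channel is only a few lines at the socket level. The sketch below is a minimal Python example, assuming Linux socket constants (the `IP_ADD_SOURCE_MEMBERSHIP` value 39 is the Linux ABI, used as a fallback where Python does not export it) and an entirely hypothetical group/source pair.

```python
import socket

# Linux value for IP_ADD_SOURCE_MEMBERSHIP; Python's socket module does
# not export this constant on every platform (assumption: Linux ABI).
IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

def ssm_join_request(group: str, source: str, iface: str = "0.0.0.0") -> bytes:
    """Pack a struct ip_mreq_source (Linux layout): multicast group,
    local interface, then the single permitted source address --
    together they name the (S,G) channel that SSM subscribes to."""
    return (socket.inet_aton(group)
            + socket.inet_aton(iface)
            + socket.inet_aton(source))

# Hypothetical channel: group 232.1.1.1 (inside the SSM 232/8 range),
# content source 192.0.2.10.
req = ssm_join_request("232.1.1.1", "192.0.2.10")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # Ask the kernel to join the (S,G) channel; this sends an IGMPv3
    # source-specific report. May fail on hosts with no usable route.
    sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, req)
except OSError as exc:
    print(f"SSM join not possible here: {exc}")
```

The point is that the plumbing already exists in the operating systems; what is missing is the will on the application side to use it.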
Some of this work has already started in the IETF with the newly formed ALTO WG/BOF. Another example, on the QoS side, is explicit congestion notification (ECN). Still, a lot of work is required at the standardization bodies, and Service Providers need to open up their network topology and quality information to the Internet community.
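The idea behind ALTO is exactly the interface argued for above: instead of exposing raw topology, a provider publishes an abstract cost map between groups of endpoints (PIDs), and applications consult it when choosing between candidate peers or mirrors. The toy Python sketch below shows the principle only; the PID names and cost values are entirely hypothetical, and this is not the ALTO wire format.

```python
# A toy provider-published cost map: abstract PIDs (groups of endpoints)
# and the relative routing cost between them. The provider reveals
# guidance, not its internal topology. All names/values are hypothetical.
cost_map = {
    "pid-eu":   {"pid-eu": 1,  "pid-us": 10, "pid-asia": 20},
    "pid-us":   {"pid-us": 1,  "pid-eu": 10, "pid-asia": 15},
    "pid-asia": {"pid-asia": 1, "pid-eu": 20, "pid-us": 15},
}

def best_peer(my_pid: str, candidate_pids: list[str]) -> str:
    """Pick the candidate PID the network says is cheapest to reach,
    treating unlisted destinations as unreachable (infinite cost)."""
    costs = cost_map[my_pid]
    return min(candidate_pids, key=lambda pid: costs.get(pid, float("inf")))

# An application in pid-eu choosing where to fetch content from:
print(best_peer("pid-eu", ["pid-us", "pid-asia"]))  # -> pid-us
```

With this kind of interface the application keeps its own distribution logic, and the provider influences the choice without giving away topology, which is the compromise both sides have been missing.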