2017 for developers

Plenty of developments happened over 2016, and quite a few trends were over-hyped, Big Data being one of them. Now that the hype has died down, the industry is making actual progress. Lacking the right infrastructure and affordable resources, these past few years were good only for the conglomerates that had the money to actually put big data to use. With really fast memory having arrived and compute hardware with large caches becoming more affordable, 2017 looks to be the year we see the democratisation of big data. Anyone can pick up a consumer-grade compute card from NVIDIA or AMD and get started with the number crunching. Moreover, we now have access to Machine Learning tools with a much more beginner-friendly learning curve.

So what do we expect from 2017? Read on…

There shall be one

This last year, quite a lot of proprietary Machine Learning software was open sourced by the Silicon Valley giants. OpenAI, an Elon Musk-backed non-profit, was set up with the aim of building AI that is human friendly. Microsoft open sourced its Cognitive Toolkit (CNTK), Facebook has been putting its weight behind Caffe and Torch, while Google has open sourced its TensorFlow and DeepMind projects. Baidu, one of the biggest entities working on Machine Learning, has also open sourced its set of libraries under the name PaddlePaddle. And lastly, Amazon has integrated MXNet into its AWS ML platform.

Google open sourced DeepMind

With so many tools getting open sourced, the community is slowly going to get its hands dirty, and from among all these will arise only one. While every Machine Learning library has its pros and cons, other factors will come into play once developers start playing around with ML. These libraries will proliferate through the developer community, and by the end of 2017 we shall see one particular library become the gold standard. This gold standard will then be adopted by all the major XaaS providers and be better integrated with their platforms. Data sets for these ML platforms will also become easier to get your hands on as more and more companies release non-private data. Even those starting out can obtain voluminous data sets pertaining to, say, Amazon book reviews, Dota 2 game matchups/results, Facebook comment volume data, etc.
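The core of what all these libraries automate – computing gradients of a loss and nudging parameters towards a fit – can be sketched in a few lines of plain Python. This is purely illustrative and uses no particular library's API:

```python
# A toy gradient-descent fit of y = 2x + 1 in plain Python, illustrating
# the training loop that libraries like TensorFlow, Torch or CNTK
# automate and accelerate on GPUs.
data = [(x, 2 * x + 1) for x in range(10)]  # synthetic training set
w, b = 0.0, 0.0   # parameters to learn
lr = 0.01         # learning rate

for _ in range(2000):
    dw = db = 0.0
    for x, y in data:
        err = (w * x + b) - y          # prediction error
        dw += 2 * err * x / len(data)  # gradient of MSE w.r.t. w
        db += 2 * err / len(data)      # gradient of MSE w.r.t. b
    w -= lr * dw
    b -= lr * db

print(w, b)  # converges close to 2.0 and 1.0
```

The libraries named above do this differentiation automatically, batch the arithmetic, and push it onto GPU hardware.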

Hardware is affordable

High-performance GPUs that can really accelerate the pace of Machine Learning, like NVIDIA's Tesla and AMD's Radeon Instinct, have been announced.

GPGPU is getting a massive push thanks to NVIDIA and AMD

While both companies have had their GPUs used for OpenCL and proprietary GPGPU applications for a while, the move to smaller process nodes for their transistors has resulted in a tremendous rise in compute power. Needless to say, the basic solutions from either company have become really affordable. While industrial applications will still require spending ridiculous amounts of money to set up Machine Learning servers, it will be way easier for startups to dive into the deep end without fear of sinking.

This widespread adoption will lead to several off-the-shelf solutions becoming a reality and thereafter, becoming even more affordable. Be it GPUs, services on the cloud or neural chipsets for the embedded market, the cost will go down.

Blockchain for all

Unless you’ve been living under a rock, you’d have heard about blockchain. It’s a distributed database that maintains a steadily growing list of ordered records. Blockchains are inherently resistant to past records getting modified and have high fault tolerance.

Banks are championing the development of Blockchain

Conceptualised by Satoshi Nakamoto for the now infamous Bitcoin – a form of cryptocurrency – blockchain is being readily considered by banks for the issuance of bank-backed legal currency. R3, a distributed database technology company that leads a consortium of over 70 financial institutions, has been investing heavily in blockchain technology. R3 is seeking $150 million from its member companies to fund the development of blockchain-based technologies for the banking sector, and it has already secured more than half of that amount. While a few top names have left the consortium – Goldman Sachs, Morgan Stanley, Santander and NAB – more have stepped in to take their place.
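The tamper-resistance described above comes from chaining hashes: each block commits to its predecessor's hash, so altering any past record breaks every later link. A minimal sketch in Python (illustrative only, nothing like a production blockchain node):

```python
# A minimal hash-chain: each block stores the hash of its predecessor,
# so editing any past record invalidates every subsequent hash link.
import hashlib

def block_hash(record, prev_hash):
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # dummy hash for the genesis block
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    return all(
        b["hash"] == block_hash(b["record"], b["prev"])
        and (i == 0 or b["prev"] == chain[i - 1]["hash"])
        for i, b in enumerate(chain)
    )

ledger = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(is_valid(ledger))                     # True
ledger[0]["record"] = "alice pays bob 500"  # tamper with history
print(is_valid(ledger))                     # False
```

Real blockchains add proof-of-work and distributed consensus on top of this basic structure, which is what makes them fault tolerant as well.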

Aside from traditional applications, blockchains are also being targeted at recording Smart Contracts. Simply put, Smart Contracts are a bunch of conditional statements that form the basis of a legal agreement between parties. The moment these conditions are met, the Smart Contract gets executed. Ethereum is a well-known platform that deals in Smart Contracts, and there are many others, such as Maidsafe, Ardor, Synereo, etc., all working towards creating the decentralised internet. Their efforts should start bearing fruit towards the end of 2017, when we will hopefully see blockchain evolve.
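The "bunch of conditional statements" idea can be made concrete with a toy escrow agreement. This is a hypothetical sketch; platforms like Ethereum actually compile and execute such logic on-chain, which this plain function does not attempt:

```python
# A toy escrow "smart contract": funds are released automatically
# the moment every agreed condition is met.
def escrow_contract(state):
    """Release funds to the seller only once all conditions hold."""
    if state["goods_delivered"] and state["payment_deposited"]:
        return "release funds to seller"
    return "hold funds"

print(escrow_contract({"goods_delivered": True, "payment_deposited": True}))
# -> release funds to seller
print(escrow_contract({"goods_delivered": False, "payment_deposited": True}))
# -> hold funds
```

The difference on a real platform is that the conditions and the execution are verified by the network rather than trusted to either party.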

IoT wreaks havoc

The Internet of Things is going to be huge, with over 6 billion devices already connected to the internet – a figure expected to rise exponentially in the coming years and hit the 50 billion mark by 2020. However, there is no uniform security framework for IoT. Every manufacturer implements its own proprietary solution, which has led to an extremely fragmented scenario. And in the midst of all this, we had the very first massive IoT-based botnet attack in October 2016. Now known as the Mirai botnet, the attack crippled the servers of companies like Netflix, NYTimes, PayPal and Twitter.

Mirai Botnet was only the first

Cisco, AT&T, GE, Intel and IBM formed the Industrial Internet Consortium about two years back to focus on the IoT industry and to create a unified framework, but it will take a few more attacks by massive IoT botnets like Mirai before the industry switches into high gear and comes out with a solution. It would be a lot better if government agencies stepped in to keep the corporates in check while these frameworks are designed.

Mixed Reality gathers steam

While VR and AR applications will continue to grow steadily, Mixed Reality will gain a foothold. Among the well-known players, currently only Microsoft's HoloLens and a yet-to-be-revealed Magic Leap project are working on Mixed Reality. Both devices are being readied for launch and should be out before H2 2017.

Mixed Reality is going to be big

The myriad applications involving mixed reality showcase its true potential. And since enterprise use cases are likely to take the first step, we will see Mixed Reality being used for visualising Big Data. This, in turn, will improve decision making wherever large, complex data sets have to be factored in. We should see plenty of such data visualisation services pop up thanks to the Big Data industry.

PaaS will dominate cloud services

If one thing needs to be made clear, it is that with the rise in growth and accessibility of Machine Learning comes the realisation of the kind of hardware that applications need. While in-house development can easily be carried out on off-the-shelf solutions, going into production is an entirely different ball game. Companies are beginning to realise that such hardware comes at a steep price and that setting it all up incurs additional expenditure. This is why cloud-based services are doing so well, and machine learning is no different. All the major XaaS providers have already started setting up GPGPU clusters. Towards the end of the year we should see more pricing tiers in PaaS offerings for GPGPU use cases. Along with that, we'll also see a push for cognitive computing, streaming media analytics and predictive analytics.


More GPGPU PaaS services to come in 2017

With the rise of newer technologies to analyse big data, we've seen a sharp fall in Hadoop usage. The decline started towards the end of 2015, but in 2016 the big data community left Hadoop behind. Rival Apache Spark experienced similar ups and downs all through 2016. As programmers gain data science skills on their own, those who prided themselves on being big data gurus are going to be left behind as well.

Data Science is really going to heat up

These roles, i.e. Data Scientists, Data Architects and Data Officers, are all going to get better defined, and thus, a year down the line, we're going to see a spike in demand for such roles as well.

Some of the things mentioned above might feel good to you as a developer, some not so much. But what cannot be denied is that 2017 is going to be a momentous year for the technology community.

Digit: What is CodeCuriosity and what are the reasons behind its inception?

Gautam: Open source is being used everywhere today. If you're using an Android phone, you're probably using some open source component. Even applications made for the iPhone probably use some open source library at the core. Microsoft open sourced .NET, Apple open sourced Swift 2 – these are languages and frameworks prevalently used to make applications. These large companies are moving towards open source, so open source has become a very common aspect today. Now, moving to the developer side: how many people consume open source and how many contribute to it? About 99.3 per cent consume open source and maybe less than 0.7 per cent – I'm being optimistic here – actively contribute back.

Now this is a problem that we're trying to solve. The roots of CodeCuriosity go back about 3-4 years at Josh, when we wanted our team to start contributing back to open source. They were finding it difficult to get out of their daily grind to contribute, and the majority treated it as optional. So after years of experimenting, we had something called Open Source Friday, where nobody worked on their daily tasks and just contributed to open source projects. It worked for some months before dying down.

We started brainstorming and figured that if open source contributions could be made fun, motivating and rewarding, we'd have a solution. So we started a competition, and participants who ranked among the top 5-6 were rewarded. We tried this internally at Josh for the last one year and got astounding results.

Once we started tracking people's activities on GitHub, we were able to reward them for their work, and once we started doing that, we saw contributions increase. So we set up a platform called CodeCuriosity to track open source contributions. Through this platform, we tracked every activity being done every day, and we rewarded users.

We wanted it to be motivating and not competitive. So we started working towards the Fitbit model. Everyone knows running is healthy, but it was only after wearables came into the picture and started telling you how much you've benefitted that you got motivated to run more.

CodeCuriosity does that for you. You can choose goals and then start contributing on a monthly basis. The platform tells you how much you have contributed and how much is left to reach your target, and when you do hit your targets, you are rewarded.

Digit: So how do you score a user?

Gautam: We have an automated algorithm, which is open source and continues to evolve based on feedback. We try to score every open source contribution – it could be a code commit, raising an issue, etc. We've also been trying to integrate StackExchange into CodeCuriosity.

So once you're able to track all these activities, you get rewarded with points. And 10 points = $1, so you can actually start making money from your open source contributions. Not only are we making it fun, motivational and rewarding, we're also getting people to contribute back, so that open source contributions become part of your daily routine. This is our aim: to tilt the consumers-to-contributors balance from 99.3:0.7 to 95:5. Even if we are able to convert 10 per cent of the consumers into contributors, the results will be astounding, because the benefits are huge!
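The point model described can be sketched in a few lines. The activity weights below are invented for illustration (the real scoring algorithm is open source and more nuanced), but the 10-points-per-dollar rate is as stated in the interview:

```python
# Hypothetical per-activity weights; only the 10-points-per-dollar
# conversion comes from the interview itself.
POINTS = {"commit": 5, "issue": 2, "comment": 1}

def score(activities):
    """Total points for a list of activity types."""
    return sum(POINTS.get(kind, 0) for kind in activities)

def payout_dollars(points):
    return points / 10  # 10 points = $1

pts = score(["commit", "commit", "issue", "comment"])
print(pts, payout_dollars(pts))  # 13 1.3
```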
