How many of you still like to watch old English movies? If so, which one would you want to start with? Which one suits your interests? And many of us are uncertain about which movies we would like to watch. Ever wondered whether you could have all the above issues addressed in one place? Here we come with the Recommendation Engine, which gives you the best match for your movie interests.


What question strikes your mind when you are planning to buy a new gadget? There is one common question on everyone's mind: how long will this technology sustain in the market? We will know the fate of today's technology only when we know what is coming next.


Ever wondered what's the best way to get real audience opinions about the movie you are about to watch? We wondered too. What we have now is either opinionated critic reviews or the fan-dominated IMDb.


In the software industry, with the adoption of Hadoop, data scientists are in high demand. It is a well-known fact that people from a data science background face difficulty applying data science to big data due to a lack of big data knowledge, and people from a programming background face the same difficulty when they try data science on big data due to a lack of data science knowledge. Here we see two different sets of people whose end goal, "Machine Learning on Big Data", is the same. So we try to solve this and give you the right steps to get started.


Problem:

Assuming there was a full day of outage, calculate the revenue loss for a particular day next year by finding the Average Revenue Per Day (ARPD) of the household, using the tariff plan below.

Time of Use (TOU) tariff plan:

Time Period      Tariff (Rupees per kWh)
12 AM to 5 AM    4
5 AM to 7 AM     6
7 AM to 10 AM    12
10 AM to 4 PM    4
4 PM to 8 PM     6
8 PM to 10 PM    10
10 PM to 12 AM   6
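To make the tariff plan concrete, here is a minimal sketch (in plain Python, not the Spark code from our actual solution) of how one day's revenue could be computed from this plan. The flat 1 kWh-per-hour load at the end is a hypothetical placeholder, not real household data.

```python
# TOU tariff slabs: (start_hour, end_hour, rupees_per_kwh), end exclusive,
# hours on a 24-hour clock. Mirrors the tariff table above.
TARIFF_SLABS = [
    (0, 5, 4),
    (5, 7, 6),
    (7, 10, 12),
    (10, 16, 4),
    (16, 20, 6),
    (20, 22, 10),
    (22, 24, 6),
]

def tariff_for_hour(hour):
    """Return the tariff (Rs/kWh) applicable at a given hour of the day."""
    for start, end, rate in TARIFF_SLABS:
        if start <= hour < end:
            return rate
    raise ValueError("hour must be in 0..23")

def revenue_for_day(hourly_kwh):
    """Revenue for one day, given a list of 24 hourly kWh consumption values."""
    return sum(kwh * tariff_for_hour(h) for h, kwh in enumerate(hourly_kwh))

# Hypothetical flat load of 1 kWh every hour:
flat_day = [1.0] * 24
print(revenue_for_day(flat_day))  # prints 148.0 (each slab's rate x its hours)
```

With the ARPD estimated this way from predicted usage, the revenue loss for a full-day outage is simply that day's forgone revenue.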

This is a sequel to the previous blogs from our team members, where the solution to the problem related to energy consumption was explained. In this blog, let me explain our solution to another problem on the same data, whose schema is mentioned here.

Problem

What would be the household's peak-time load (peak time is between 7 AM and 10 AM) for the next month:

  • During Weekdays?
  • During Weekends?
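As a rough illustration of the weekday/weekend split, the following sketch (plain Python, assuming hypothetical `(timestamp, kilowatts)` tuples rather than the actual dataset's format) averages the load in the 7 AM to 10 AM window separately for weekdays and weekends.

```python
from datetime import datetime

def peak_time_averages(readings):
    """Average 7-10 AM load, computed separately for weekdays and weekends.

    `readings` is an iterable of (datetime, kilowatts) tuples.
    """
    buckets = {"weekday": [], "weekend": []}
    for ts, kw in readings:
        if 7 <= ts.hour < 10:  # keep only the peak window
            key = "weekend" if ts.weekday() >= 5 else "weekday"  # Sat/Sun
            buckets[key].append(kw)
    return {k: (sum(v) / len(v) if v else 0.0) for k, v in buckets.items()}

# Hypothetical sample readings, not real data:
sample = [
    (datetime(2012, 11, 5, 8, 0), 2.0),    # Monday, inside peak window
    (datetime(2012, 11, 5, 12, 0), 5.0),   # Monday, outside window (ignored)
    (datetime(2012, 11, 10, 9, 30), 4.0),  # Saturday, inside peak window
]
print(peak_time_averages(sample))  # {'weekday': 2.0, 'weekend': 4.0}
```

The actual solution applies the same grouping at scale with Spark; these per-group averages then feed the next-month prediction.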

It's our pleasure to briefly share our approach to solving the hackathon problems. So, let's start the exploration with details about the data we handled. The given data represents the energy consumption recorded every minute for a household. The data contains 2,075,259 measurements gathered between December 2008 and November 2012 (almost four years). You can find the schema for the data here.


This series of blogs will walk you through our complete solution, designed and implemented for predicting future energy demand using Spark.

This problem was solved within 24 hours at the hackathon.

Problem statement

Predict the global energy demand for the next year using the energy usage data available for the last four years, in order to enable utility companies to handle the energy demand effectively.
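One minimal way to frame such a forecast is a least-squares trend over yearly totals. The sketch below uses hypothetical yearly figures, not the actual data, and plain Python rather than the Spark pipeline described in this series.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical yearly demand totals (kWh), stand-ins for aggregates that the
# real solution would compute from the minute-level data:
years = [2009, 2010, 2011, 2012]
totals = [9500.0, 9800.0, 10100.0, 10400.0]

a, b = fit_line(years, totals)
print(a * 2013 + b)  # extrapolated demand for the next year: 10700.0
```

A real solution would use richer features (seasonality, peak windows) and a distributed regression, but the extrapolation idea is the same.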


Team

This weekend, more than 20 companies came together at the BigData Conclave to explore big ideas. It was a two-day event held at the Sheraton hotel, Bangalore. Developers from across the country flew to Bangalore to participate in the event.

Flutura, a big data company, hosted a hackathon to crack a big data problem within 24 hours. More than 53 teams from various companies participated in the event. We were able to crack the problem and win the hackathon. It was a great team performance.


Hadoop 2.0 is here. Five and a half years after the initial proposal, the Hadoop community has delivered the next major version of the world's most popular big data stack. Though it looks like a single-number upgrade, it is going to redefine how we use Hadoop.

It's the right time to get on the next-generation platform. If you are still not convinced, the following reasons should get you excited.