Google billionaire Eric Schmidt says this is the skill employers will look for in the future
If you want to pick up the skill more employers will be looking for in the future, heed the advice of executives from a global leader in technology. In an interview with CNBC, both Eric Schmidt, executive chairman of Google's parent company Alphabet, and Jonathan Rosenberg, adviser to CEO Larry Page, say that data analytics will become increasingly important in workplaces.

"I think a basic understanding of data analytics is incredibly important for this next generation of young people," Schmidt tells CNBC. "That's the world you're going into." "By data analytics," the executive chairman says, "I mean a basic knowledge of how statistics works, a basic knowledge of how people make conclusions over big data."

Focusing more on data analytics will help businesses too, the executives say. Hiring professionals with the right skills and a penchant for bold, creative thinking was a strategy that drove Google's innovation, Schmidt and Rosenberg write in a recently updated version of their book, "How Google Works."

According to the Bureau of Labor Statistics (BLS), the number of roles for individuals with this skill set is expected to grow by 30 percent over the next seven years, well above average. Data analysts make sense of large amounts of information using statistical tools and techniques. They're able to pinpoint trends and correlations using programs such as Excel, SAS, SQL and Tableau. They typically study statistics, data science or math.

Schmidt says that being able to use calculus would be a great asset to an employee, but an understanding of how to approach big data would still be very helpful in finding a job. Rosenberg agrees. "My favorite statement that echoes Eric's," he says, "is 'Data is the sword of the 21st century, those who wield it well, the samurai.'" The quote comes from an internal memo Rosenberg sent to employees in 2009, following the inauguration of President Barack Obama. "Everyone should be able to defend arguments with data," he writes in the memo. "Information transparency helps people [...] determine who is telling the truth."

Source: https://au.finance.yahoo.com/news/google-billionaire-eric-schmidt-says-171636125.html

Disclaimer: The above article is reproduced here, in addition to being linked from other pages of the Big Data Space website, so that visitors can still read it if the link to the original article is broken.
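To make the "basic data analytics" Schmidt describes a little more concrete, here is a minimal sketch in Python of spotting a trend and a correlation in a small dataset. The numbers and column names are made up purely for illustration; they are not from the article.

```python
# A minimal data-analytics sketch: find a correlation and a trend.
# The dataset and column names are invented for illustration only.
import numpy as np
import pandas as pd

data = pd.DataFrame({
    "month":    [1, 2, 3, 4, 5, 6],
    "ad_spend": [10, 12, 15, 14, 18, 21],        # $ thousands (made up)
    "sales":    [95, 101, 110, 108, 122, 130],   # units sold (made up)
})

# Pearson correlation between ad spend and sales.
r = data["ad_spend"].corr(data["sales"])

# Simple linear trend of sales over time (slope in units per month).
slope, intercept = np.polyfit(data["month"], data["sales"], deg=1)

print(f"correlation = {r:.2f}, sales trend = {slope:.1f} units/month")
```

Nothing more than this is needed to start "making conclusions over big data": the same two operations (a correlation and a fitted trend) scale up from six rows to millions.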
Due to a busy schedule, I did not post any blogs in March 2017.
Today, I want to share an article titled "Wide vs Long Data." It is an eye-opener that there are so many ways to describe and present data. An abstract of the article:

Wide & Long Data
Contents: Wide versus long data; A case for long data; Wide to long conversion; Exercises; Summary
This tutorial has three purposes: to explain the difference between long and wide form datasets, to show why it's generally preferable to use long form datasets, and to go over how to convert datasets from wide to long form.
Click here to read the full article. (A minimal sketch of a wide-to-long conversion appears at the end of this entry.)

Using Big Data to Optimise Road Improvement Spend
Data collection and analysis that would have taken months can now be performed in seconds, and big data can help planners and funders ensure that road infrastructure spending is optimised and society gains the most. Economics is the study of scarce resources, and what is more scarce than our roadways? For example, in England the Strategic Road Network represents just 2% of all roads but carries over a third of all traffic and two thirds of all freight. [ Read more ]
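As promised above, here is a minimal sketch of the wide-to-long conversion discussed in the Wide vs Long Data article, using pandas. The table and column names are made up for illustration and are not taken from the linked tutorial.

```python
# Wide-to-long reshaping with pandas (illustrative data only).
import pandas as pd

# Wide form: one row per country, one column per year.
wide = pd.DataFrame({
    "country":  ["Australia", "Singapore"],
    "pop_2015": [23.8, 5.5],
    "pop_2016": [24.1, 5.6],
})

# Long form: one row per (country, year) observation.
long_df = wide.melt(id_vars="country", var_name="year", value_name="population")
long_df["year"] = long_df["year"].str.replace("pop_", "").astype(int)

print(long_df)
```

The long form is usually easier to filter, group and plot, which is the article's main argument for preferring it.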
Is The Location of Where Your Data Is Stored Important To You?

Kilo, Mega, and Giga. What's next?
As we deal with more data, we deal with more zeroes. We have kilo, mega, giga and so on, but how big are they, and how many zeroes do they carry? Will we run out of words to deal with more zeroes? I found a table of SI prefixes to share with all (the standard prefixes and their powers of ten):

kilo (k) = 10^3
mega (M) = 10^6
giga (G) = 10^9
tera (T) = 10^12
peta (P) = 10^15
exa (E) = 10^18
zetta (Z) = 10^21
yotta (Y) = 10^24
Big Data at a Top 5 Property and Casualty Insurer

Started in 1922 by a handful of military officers who offered to insure each other's vehicles when no one else would, the insurer has become a financial services powerhouse, offering a broad range of insurance and banking services to military members and their families. The size of its customer base and breadth of products make big data a natural next step in the company's already-advanced technology portfolio.

Consistently named one of the country's best places to work and lauded for its many customer service awards, the insurer has made understanding customer behaviors and preferences core to its mission. "We have a strategy of continuing our evolution as a 'relationship' company," explained the Lead Information Architect in the insurer's BI Lab, one of the visionaries behind the company's big data roadmap. "This means taking into account as many data sources as possible, and being able to harness as many new types of data as we need."

In addition to cultivating a deeper view into customers' product needs and service preferences, the insurer is using a new crop of big data solutions for fraud detection (monitoring data patterns to pinpoint "points of compromise"), for in-vehicle services based on telematics data, and for sensory telemetry from its mobile apps.

Source: Big Data in Big Companies, Thomas H. Davenport and Jill Dyché, May 2013 (go to Suggested Readings to view the full article)
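The "points of compromise" idea mentioned above can be illustrated with a toy example: given a set of cards later reported as compromised, look for merchants those cards have in common. This is only a minimal sketch on made-up data, not the insurer's actual method.

```python
# Toy "point of compromise" search: which merchant do compromised cards share?
# All data below is invented for illustration.
import pandas as pd

transactions = pd.DataFrame({
    "card_id":  ["c1", "c1", "c2", "c2", "c3", "c3", "c4"],
    "merchant": ["A",  "B",  "B",  "C",  "B",  "D",  "A"],
})

# Cards later reported as compromised.
compromised = {"c1", "c2", "c3"}

# For each merchant, count how many distinct compromised cards transacted there.
hits = (transactions[transactions["card_id"].isin(compromised)]
        .groupby("merchant")["card_id"].nunique()
        .sort_values(ascending=False))

print(hits)  # merchant "B" stands out as a candidate point of compromise
```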
Big Data at Bank of America

Given Bank of America's large size in assets (over $2.2 trillion in 2012) and customer base (50 million consumers and small businesses), it was arguably in the big data business many years ago. Today the bank is focusing on big data, but with an emphasis on an integrated approach to customers and an integrated organizational structure. It thinks of big data in three different "buckets": big transactional data, data about customers, and unstructured data. The primary emphasis is on the first two categories.

With a very large amount of customer data across multiple channels and relationships, the bank historically was unable to analyze all of its customers at once and relied on systematic samples. With big data technology, it can increasingly process and analyze data from its full customer set. Other than some experiments with analysis of unstructured data, the primary focus of the bank's big data efforts is on understanding the customer across all channels and interactions, and presenting consistent, appealing offers to well-defined customer segments.

For example, the bank utilizes transaction and propensity models to determine which of its primary relationship customers may have a credit card, or a mortgage loan at a competitor that could benefit from refinancing. When the customer comes online, calls a call center, or visits a branch, that information is available to the online app or the sales associate so the offer can be presented. The various sales channels can also communicate with each other, so a customer who starts an application online but doesn't complete it could get a follow-up offer in the mail, or an email to set up an appointment at a physical branch location. A new program, "BankAmeriDeals," provides cash-back offers to holders of the bank's credit and debit cards based on analyses of where they have made payments in the past. There is also an effort to understand the nature of, and satisfaction from, customer journeys across a variety of distribution channels, including online, call center, and retail branch interactions.

The bank has historically employed a number of quantitative analysts, but for the big data era they have been consolidated and restructured, with matrixed reporting lines to both a central analytics group and to business functions and units. The consumer banking analytics group, for example, made up of quantitative analysts and data scientists, reports to Aditya Bhasin, who also heads Consumer Marketing and Digital Banking. It is working more closely with business line executives than ever before.

Source: Big Data in Big Companies, Thomas H. Davenport and Jill Dyché, May 2013 (go to Suggested Readings to view the full article)
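The case study mentions "propensity models" for deciding which customers to approach with an offer. As a rough illustration of what such a model can look like, here is a minimal sketch using logistic regression on synthetic data; the features, labels and thresholds are invented and have nothing to do with Bank of America's actual models.

```python
# A minimal propensity-model sketch on synthetic data (illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic customer features: [monthly card spend ($k), years as customer].
X = rng.normal(loc=[3.0, 5.0], scale=[1.0, 3.0], size=(500, 2))

# Synthetic label: 1 if the customer responded to a past credit-card offer.
y = (0.8 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=500) > 3.5).astype(int)

model = LogisticRegression().fit(X, y)

# Propensity score for a new customer: probability of responding to the offer.
new_customer = [[4.2, 2.0]]
print(model.predict_proba(new_customer)[0, 1])
```

In practice the score would be computed across the full customer base and surfaced to the online app or a sales associate at the moment of contact, as the article describes.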
Big Data at an International Financial Services Firm

For one multinational financial services institution, cost savings is not only a business goal, it's an executive mandate. The bank is historically known for its experimentation with new technologies, but after the financial crisis it is focused on building its balance sheet and is a bit more conservative with new technologies. The current strategy is to execute well at lower cost, so the bank's big data plans need to fit into that strategy.

The bank has several objectives for big data, but the primary one is to exploit "a vast increase in computing power on a dollar-for-dollar basis." The bank bought a Hadoop cluster, with 50 server nodes and 800 processor cores, capable of handling a petabyte of data. IT managers estimate an order of magnitude in savings over a traditional data warehouse. The bank's data scientists (though most were hired before that title became popular) are busy taking existing analytical procedures and converting them into the Hive scripting language to run on the Hadoop cluster.

According to the executive in charge of the big data project, "This was the right thing to focus on given our current situation. Unstructured data in financial services is somewhat sparse anyway, so we are focused on doing a better job with structured data. In the near to medium term, most of our effort is focused on practical matters, those where it's easy to determine ROI, driven by the state of technology and expense pressures in our business. We need to self-fund our big data projects in the near term. There is a constant drumbeat of 'we are not doing "build it and they will come"'; we are working with existing businesses, building models faster, and doing it less expensively. This approach is more sustainable for us in the long run. We expect we will generate value over time and will have more freedom to explore other uses of big data down the road."

Source: Big Data in Big Companies, Thomas H. Davenport and Jill Dyché, May 2013 (go to Suggested Readings to view the full article)
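To give a flavour of what "converting analytical procedures to run on the Hadoop cluster" can look like, here is a minimal sketch, shown with PySpark's SQL interface over Hive tables rather than a raw Hive script. The table and column names ("accounts", "product_line", "balance") are hypothetical and not from the case study.

```python
# A minimal sketch of a warehouse-style aggregation run on a Hadoop cluster
# via Spark SQL with Hive support. Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("daily_balances")
         .enableHiveSupport()   # makes existing Hive tables visible to Spark
         .getOrCreate())

# Average account balance per product line, computed on the cluster.
result = spark.sql("""
    SELECT product_line, AVG(balance) AS avg_balance
    FROM accounts
    GROUP BY product_line
""")

result.show()
```

The appeal described in the article is exactly this: the same familiar SQL-style logic, but executed on cheap commodity hardware instead of a traditional data warehouse.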
Big Data at Schneider National

Schneider National, one of North America's largest truckload, logistics and intermodal services providers, has been pursuing various forms of analytical optimization for a couple of decades. What has changed in Schneider's business over the past several years is the availability of low-cost sensors for its trucks, trailers and intermodal containers. The sensors monitor location, driving behaviors, fuel levels and whether a trailer/container is loaded or empty. Schneider has been transitioning to a new technology platform over the last five years, but leaders there don't draw a bright line between big data and more traditional data types. However, the quality of the optimized decisions it makes with the sensor data (dispatching of trucks and containers, for example) is improving substantially, and the company's use of prescriptive analytics is changing job roles and relationships.

New sensors are constantly becoming available. For example, fuel-level sensors, which Schneider is beginning to implement, allow better fueling optimization, i.e., identifying the optimal location at which a driver should stop for fuel based on how much is left in the tank, the truck's destination and fuel prices along the way. In the past, drivers have entered the data manually, but sensor data is both more accurate and free of bias.

Safety is a core value at Schneider. Driving sensors are triggering safety discussions between drivers and their leaders. Hard braking in a truck, for example, is captured by sensors and relayed to headquarters. This data is tracked in dashboard-based safety metrics and initiates a review between the driver and his/her leader. Schneider is piloting a process where the sensor data, along with other factors, goes into a model that predicts which drivers may be at greater risk of a safety incident. The use of predictive analytics produces a score that initiates a pre-emptive conversation with the driver and leads to fewer safety-related incidents.

Source: Big Data in Big Companies, Thomas H. Davenport and Jill Dyché, May 2013 (go to Suggested Readings to view the full article)
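The fueling-optimization idea above is easy to picture with a toy example: given the truck's remaining range and the fuel stops ahead, pick the cheapest stop that is still reachable. This is only a minimal sketch with made-up stops and prices, not Schneider's actual optimization model, which also weighs the destination and prices along the whole route.

```python
# Toy fueling optimization: choose the cheapest reachable fuel stop.
# Stops, distances and prices are invented for illustration.

def cheapest_reachable_stop(stops, remaining_range_miles):
    """stops: list of (name, miles_ahead, price_per_gallon) tuples."""
    reachable = [s for s in stops if s[1] <= remaining_range_miles]
    if not reachable:
        return None  # nothing reachable; the driver must stop sooner
    return min(reachable, key=lambda s: s[2])

stops = [
    ("Stop A", 40, 3.89),
    ("Stop B", 120, 3.49),
    ("Stop C", 210, 3.05),  # cheapest, but too far on the current tank
]

print(cheapest_reachable_stop(stops, remaining_range_miles=150))
# -> ('Stop B', 120, 3.49)
```

Accurate fuel-level sensor data is what makes this calculation trustworthy; the article notes that manually entered fuel readings were both less accurate and biased.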
Big Data at UPS

UPS is no stranger to big data, having begun to capture and track a variety of package movements and transactions as early as the 1980s. The company now tracks data on 16.3 million packages per day for 8.8 million customers, with an average of 39.5 million tracking requests from customers per day. The company stores over 16 petabytes of data.

Much of its recently acquired big data, however, comes from telematics sensors in over 46,000 vehicles. The data on UPS package cars (trucks), for example, includes their speed, direction, braking, and drive train performance. The data is not only used to monitor daily performance, but to drive a major redesign of UPS drivers' route structures. This initiative, called ORION (On-Road Integrated Optimization and Navigation), is arguably the world's largest operations research project. It also relies heavily on online map data, and will eventually reconfigure a driver's pickups and drop-offs in real time. By 2011 the project had already saved more than 8.4 million gallons of fuel by cutting 85 million miles off daily routes. UPS estimates that saving only one daily mile driven per driver saves the company $30 million, so the overall dollar savings are substantial. The company is also attempting to use data and analytics to optimize the efficiency of its 2,000 aircraft flights per day.

Source: Big Data in Big Companies, Thomas H. Davenport and Jill Dyché, May 2013 (go to Suggested Readings to view the full article)
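ORION itself is a large-scale operations research system, but the underlying idea of reordering stops to cut miles can be illustrated with a toy heuristic: always drive to the closest unvisited stop next. The sketch below uses made-up coordinates and a simple nearest-neighbour rule; it is not UPS's algorithm.

```python
# Toy route ordering with a nearest-neighbour heuristic (illustration only).
import math

# Made-up delivery stops as (x, y) coordinates, with the depot at (0, 0).
stops = {"A": (2, 3), "B": (5, 1), "C": (1, 7), "D": (6, 6)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_route(depot, stops):
    """Greedily visit the closest unvisited stop next."""
    route, here, remaining = [], depot, dict(stops)
    while remaining:
        name = min(remaining, key=lambda n: dist(here, remaining[n]))
        route.append(name)
        here = remaining.pop(name)
    return route

print(nearest_neighbour_route((0, 0), stops))  # -> ['A', 'B', 'D', 'C']
```

Even a crude reordering like this shortens total miles compared with visiting stops in an arbitrary order, which is why shaving a single daily mile per driver is worth so much at UPS's scale.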
Author: We are writing to share what we read about Big Data and related subjects with readers from around the world.