Chinese telecoms giant Huawei posted an eye-popping 70% growth in 2015. It now beats HTC and Sony on market share in Europe and trails only Apple and Samsung in global smartphone sales.
Huawei’s growth is another indication of how Chinese companies are successfully moving away from their traditional strategy of producing cheaper products to attack the low-end of the market.
The world of smartphones, tablets, smart watches and connected devices of the internet of things is the new battleground to provide digital services. And Huawei has quickly announced itself as a serious player in it. Last year’s success was built on a strategy that rival Western firms have excelled in – marketing, brand building and customer service.
The race to engage
Existing brands such as BlackBerry and Sony are already engaged in competitive marketing in both product and customer engagement to grab a piece of the large but finite customer base, so Huawei’s rapid rise to overtake them is very impressive. Huawei increased its global share of the smartphone market from 6.8% in 2014 to 9% in 2015 – a massive 50% gain compared to Apple’s growth of 27%.
It suggests Huawei now understands how to play the smartphone market. New organisational and workforce strategies that have a laser focus on a customer-first mindset, tightly driven by the needs of the local market, have been put together. In a crowded, price-sensitive Android market, where choice and brand awareness are critical, this matters greatly.
Just competing on product functionality and a brand name is not enough when the nature of “smart” means you have connected consumers and feedback on social media is instant. Huawei understands that the mobile market is now all about the customer service experience and no longer just a telecoms commodity. Getting hold of the consumer and personalising the digital world for them with a good experience and price point that fits their needs and lifestyle choices is also critical.
Huawei is a prime example of a modern commercial mindset emerging from Chinese industries. Its marketing embraces local markets, and it aggressively targets consumers with sponsorship deals that include a host of big football clubs across Europe, including Arsenal, Paris Saint-Germain and AC Milan. And its product portfolio spans the complete modern telecoms offering, from hardware to software.
Customers cannot be taken for granted in this tough market. History shows that customers lack pure brand loyalty – they are more loyal to the experience, the community and the ecosystem of services that best fits their needs. With so much choice out there, listening to and socially engaging with the customer is what counts.
At least three key strategies seem to be emerging in the growth of the telecoms players in the new mobile, wearables and connected internet of things services.
A strong focus on a “customer first” philosophy from the CEO down through the whole organisation, to effectively manage the customer experience 24/7.
Using third parties to sell your products and a service that adds value and extends penetration into new markets. Huawei has a partner programme that drives sales across its portfolio. Recent awards in Asia in 2015 follow expansions into Australia in 2011 and similar regional strategies across Latin America, the Middle East and Europe. This federated supply chain model allows companies to extend their workforce; third-party sales like these have accounted for more than 55% of Huawei’s growth. It is increasingly essential for scaling up sales regionally.
Embracing international standards to get into thought leadership positions. For example, joining The Open Group, a major software standards consortium, and its Digital Business and Customer Experience (DBCX) workgroup helps Huawei shape initiatives on the connected customer and product design. Taking part in these kinds of groups enables firms to raise their game and their influence in new markets by getting smarter in the way they interact with existing and potential customers and partners. This can then be translated into improving working practices – from managing the supply chain to service delivery – across the board.
What Huawei has done well is realise that consumers are always connected. Companies that exploit this will start to gain more ground in the battle to own the digital market. It requires more than these specific strategies, though: it means thinking holistically about how to transform to a digital operating model in this new world of connected things.
Will 2016 be the year you start building real wealth? It can be if you set your mind to it. Every year, the personal finance site GOBankingRates asks the world’s most famous financial experts for their tips for the coming year. Here are some of the best, which you can follow no matter how much or how little money you have at the moment. Take this advice and you’ll end 2016 with more money in the bank (or investments) than you have now.
These are the seven best tips from the full list on how to save money:
1. Never lose money.
This is one bit of wisdom that Warren Buffett likes to repeat. He puts it this way: “Rule No. 1: Never lose money. Rule No. 2: Never forget rule No. 1.” What does this actually mean? Especially coming from the Oracle of Omaha, who has taken some fairly public losses on some very big investments himself, while remaining one of the most successful investors ever known?
Interpretations differ, but I think it means to carefully consider the downside of any investment, and to avoid investing in anything you don’t thoroughly understand or that doesn’t inspire high confidence in its value. (This is why Buffett has often said he doesn’t invest in tech, and when he broke that rule by investing in IBM, he broke his rule about never losing money as well.)
2. Build a carefully balanced portfolio.
Angered by the losses ordinary people incurred due to banker misbehavior in the financial crisis, Tony Robbins went on a mission to learn what he could about finance from the best minds in the business. His advice is to create a mix of investments that adheres to the following four principles: Never lose money (see above); find investments that offer potential rewards greater than their potential risks; create a tax-efficient portfolio so you get to keep your money instead of having to give it to the government; and diversify your investments. Do that, he says, and “you’re protected no matter what.”
3. Save. Any amount.
Bestselling author and analyst Whitney Johnson advises people to invest – no matter what. Even saving a few dollars a week can amount to a surprisingly large amount of money if you do it over many years. And to be safe in case of an unexpected financial setback, she says, you should have “at least six months of what you spend monthly in the bank. Period.”
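Johnson’s claim that a few dollars a week adds up is easy to check with the standard future-value-of-an-annuity formula. Here is a quick illustrative sketch; the $10-a-week amount and the 5% annual return are my own assumptions, not figures from the article:

```python
def future_value_of_weekly_savings(weekly_amount, annual_rate, years):
    """Future value of a fixed weekly deposit with weekly compounding."""
    r = annual_rate / 52          # periodic (weekly) interest rate
    n = 52 * years                # total number of weekly deposits
    if r == 0:
        return weekly_amount * n  # no growth: just the sum of deposits
    # Future value of an ordinary annuity: P * ((1 + r)^n - 1) / r
    return weekly_amount * ((1 + r) ** n - 1) / r

# $10 a week at an assumed 5% annual return for 30 years:
# about $15,600 deposited grows to roughly $36,000.
print(round(future_value_of_weekly_savings(10, 0.05, 30)))
```

The point of the sketch is simply that, over decades, the growth on the deposits ends up rivaling the deposits themselves.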
4. Plan for how you’ll reach your financial goals.
Setting a financial goal is the easy part, says former Buffalo Bills wide receiver and personal finance author Chris Hogan. It’s like the difference between wishing you could go to the beach and loading up the car with towels and putting gas in the tank. “The necessity of a plan sounds simple, but it is the one thing that many people overlook when it comes to their money,” he says. “And a dream without a plan is simply a wish.”
5. Negotiate everything.
Everything from your cable plan to your medical expenses can be negotiated, advises plainspoken financial expert Nicole Lapin, author of Rich Bitch. All it takes is a small investment of time and a little bit of guts.
“The worst thing they can say is no. And they usually won’t,” she says. So, she advises, call all your providers right now and ask for better pricing. “It’s the best way to start a financially fabulous New Year.”
6. Stop spending your future wealth.
Yup, that Apple Watch is awfully tempting. But the more you give in to short-term splurges, the less wealth you’ll save in the long term, says financial coach and serial entrepreneur Josh Felber. For purchases large and small, consider whether you’d rather have that item right now or whether you can make do with something less expensive, older, or used, or something you already have.
“To create real wealth, you must quit spending your future wealth on goods and services that you want today but that deprive you of wealth long term,” he says.
7. Learn about finances.
Don’t let someone else make the decisions about your money just because you feel like you can’t understand finance, advises Rich Dad, Poor Dad author Robert Kiyosaki.
“Don’t wait for the government, a financial adviser, or your boss to take care of you,” he says. Instead, he says, become financially educated so that you can make informed decisions for yourself. “Take responsibility for your life and your future,” he says. “Don’t give that right away.”
Warren Buffett’s 23 Most Brilliant Insights About Investing
Warren Buffett, the billionaire “Oracle of Omaha,” continues to be involved in some of the biggest investment plays in the world.
Buffett is undoubtedly the most successful investor in history. His investment philosophy is no secret, and he has repeatedly shared bits and pieces of it through a lifetime of quips and memorable quotes.
His insights are timeless, and we find ourselves referring back to them over and over again.
We compiled a few of Buffett’s best quotes from his TV appearances, newspaper op-eds, magazine interviews, and of course his annual letters.
Buying a stock is about more than just the price.
Coca-Cola is one of Buffett’s most successful investments.
“It’s far better to buy a wonderful company at a fair price than a fair company at a wonderful price.”
“To invest successfully, you need not understand beta, efficient markets, modern portfolio theory, option pricing or emerging markets. You may, in fact, be better off knowing nothing of these. That, of course, is not the prevailing view at most business schools, whose finance curriculum tends to be dominated by such subjects. In our view, though, investment students need only two well-taught courses – How to Value a Business, and How to Think About Market Prices.”
“None of this means, however, that a business or stock is an intelligent purchase simply because it is unpopular; a contrarian approach is just as foolish as a follow-the-crowd strategy. What’s required is thinking rather than polling. Unfortunately, Bertrand Russell’s observation about life in general applies with unusual force in the financial world: ‘Most men would rather die than think. Many do.’”
“I have pledged – to you, the rating agencies and myself – to always run Berkshire with more than ample cash. We never want to count on the kindness of strangers in order to meet tomorrow’s obligations. When forced to choose, I will not trade even a night’s sleep for the chance of extra profits.”
“Over the long term, the stock market news will be good. In the 20th century, the United States endured two world wars and other traumatic and expensive military conflicts; the Depression; a dozen or so recessions and financial panics; oil shocks; a flu epidemic; and the resignation of a disgraced president. Yet the Dow rose from 66 to 11,497.”
“The line separating investment and speculation, which is never bright and clear, becomes blurred still further when most market participants have recently enjoyed triumphs. Nothing sedates rationality like large doses of effortless money. After a heady experience of that kind, normally sensible people drift into behavior akin to that of Cinderella at the ball. They know that overstaying the festivities – that is, continuing to speculate in companies that have gigantic valuations relative to the cash they are likely to generate in the future – will eventually bring on pumpkins and mice. But they nevertheless hate to miss a single minute of what is one helluva party. Therefore, the giddy participants all plan to leave just seconds before midnight. There’s a problem, though: They are dancing in a room in which the clocks have no hands.”
“Your goal as an investor should simply be to purchase, at a rational price, a part interest in an easily-understandable business whose earnings are virtually certain to be materially higher five, ten and twenty years from now. Over time, you will find only a few companies that meet these standards – so when you see one that qualifies, you should buy a meaningful amount of stock. You must also resist the temptation to stray from your guidelines: If you aren’t willing to own a stock for ten years, don’t even think about owning it for ten minutes. Put together a portfolio of companies whose aggregate earnings march upward over the years, and so also will the portfolio’s market value.”
“Investors should remember that excitement and expenses are their enemies. And if they insist on trying to time their participation in equities, they should try to be fearful when others are greedy and greedy only when others are fearful.”
“The stock market is a no-called-strike game. You don’t have to swing at everything – you can wait for your pitch. The problem when you’re a money manager is that your fans keep yelling, ‘Swing, you bum!’”
Ignore politics and macroeconomics when picking stocks.
“We will continue to ignore political and economic forecasts, which are an expensive distraction for many investors and businessmen. Thirty years ago, no one could have foreseen the huge expansion of the Vietnam War, wage and price controls, two oil shocks, the resignation of a president, the dissolution of the Soviet Union, a one-day drop in the Dow of 508 points, or treasury bill yields fluctuating between 2.8% and 17.4%.
“But, surprise – none of these blockbuster events made the slightest dent in Ben Graham’s investment principles. Nor did they render unsound the negotiated purchases of fine businesses at sensible prices. Imagine the cost to us, then, if we had let a fear of unknowns cause us to defer or alter the deployment of capital. Indeed, we have usually made our best purchases when apprehensions about some macro event were at a peak. Fear is the foe of the faddist, but the friend of the fundamentalist.”
“Long ago, Sir Isaac Newton gave us three laws of motion, which were the work of genius. But Sir Isaac’s talents didn’t extend to investing: He lost a bundle in the South Sea Bubble, explaining later, ‘I can calculate the movement of the stars, but not the madness of men.’ If he had not been traumatized by this loss, Sir Isaac might well have gone on to discover the Fourth Law of Motion: For investors as a whole, returns decrease as motion increases.”
“Long ago, Ben Graham taught me that ‘Price is what you pay; value is what you get.’ Whether we’re talking about socks or stocks, I like buying quality merchandise when it is marked down.”
There are no bonus points for complicated investments.
“Our investments continue to be few in number and simple in concept: The truly big investment idea can usually be explained in a short paragraph. We like a business with enduring competitive advantages that is run by able and owner-oriented people. When these attributes exist, and when we can make purchases at sensible prices, it is hard to go wrong (a challenge we periodically manage to overcome).
“Investors should remember that their scorecard is not computed using Olympic-diving methods: Degree-of-difficulty doesn’t count. If you are right about a business whose value is largely dependent on a single key factor that is both easy to understand and enduring, the payoff is the same as if you had correctly analyzed an investment alternative characterized by many constantly shifting and complex variables.”
“Suppose that an investor you admire and trust comes to you with an investment idea. ‘This is a good one,’ he says enthusiastically. ‘I’m in it, and I think you should be, too.’
“Would your reply possibly be this? ‘Well, it all depends on what my tax rate will be on the gain you’re saying we’re going to make. If the taxes are too high, I would rather leave the money in my savings account, earning a quarter of 1 percent.’ Only in Grover Norquist’s imagination does such a response exist.”
“Our approach is very much profiting from lack of change rather than from change. With Wrigley chewing gum, it’s the lack of change that appeals to me. I don’t think it is going to be hurt by the Internet. That’s the kind of business I like.”
With the launch of the Apple Watch, the iPhone 6s and the 6s Plus, the new Apple TV, and the iPad Pro, 2015 was a major year for Apple. The Apple Watch introduced a whole new category, the iPhone 6s and 6s Plus saw the debut of 3D Touch, and the iPad Pro brought Apple’s largest iOS device yet.
iOS 9, watchOS 2, and OS X 10.11 El Capitan brought refinements to Apple’s operating systems, and the fourth-generation Apple TV came with a brand new operating system, tvOS. 2015 saw a huge number of new products and software updates, and 2016 promises to be just as exciting.
A second-generation Apple Watch is in the works and could launch in early 2016, while new flagship iPhones, the iPhone 7 and the iPhone 7 Plus, are coming in late 2016. Those who love smaller devices will be excited to hear a 4-inch iPhone 6c may be coming early in 2016, and Apple’s Mac lineup is expected to gain Skylake chip updates.
New software, including iOS 10, OS X 10.12, watchOS 3, and an upgraded version of tvOS are all expected in 2016, and Apple will undoubtedly work on improving services like HomeKit, Apple Pay, and Apple Music.
As we did for 2014 and 2015, we’ve highlighted Apple’s prospective 2016 product plans, outlining what we might see from Apple over the course of the next 12 months based on current rumors, past releases, and logical upgrade choices.
Apple Watch 2 (Early 2016)
A second-generation Apple Watch is rumored to be debuting in March of 2016, approximately one year after the launch of the first Apple Watch. A March event could see the introduction of the device, with shipments beginning in April 2016.
Early rumors suggest the Apple Watch 2 will perhaps include some of the sensors that were nixed from the first version, including skin conductivity, blood oxygen level, and blood pressure. The device may be thinner than the first Apple Watch, and it could include features like a FaceTime camera to allow Apple Watch users to make and receive FaceTime calls and an upgraded Wi-Fi chip that may allow the Apple Watch to do more without an iPhone.
The Apple Watch 2 could be thinner than the existing Apple Watch, with new sensors and a camera.
It is not clear if the new Apple Watch will continue to use the same lugs and bands as the first-generation Apple Watch, but given the large number of bands owned by Apple Watch users, it seems likely the device won’t require users to purchase all new hardware. There have been no rumors on the prospective hardware, aside from early analyst predictions pointing towards the thinner size.
Regardless, the second-generation Apple Watch is likely to be accompanied by the launch of bands in new colors and designs as Apple has set a precedent of changing the available bands multiple times per year.
iPhone 7 and iPhone 7 Plus (Late 2016)
The iPhone 7 and the iPhone 7 Plus will come at the tail end of 2016, likely making their debut in September in line with past iPhone launches. Apple is expected to continue offering the phones in 4.7 and 5.5-inch sizes, but we can count on a redesigned external chassis because 2016 marks a major upgrade year.
Details about the exterior of the phone and its internal updates are largely unknown at this early date, but based on past upgrades, we can expect a thinner body, an improved processor, and a better camera. Flagship features like 3D Touch and Touch ID will continue to be available, and Apple likely has additional features planned to make its latest iPhone stand out.
Taking into account past rumors and acquisitions, the camera is one area that could see significant improvements, perhaps incorporating a dual-lens system that offers DSLR quality in a compact size. Some of these rumors were originally attached to the iPhone 6s, but could have been delayed for later devices especially given the 2015 acquisition of Israeli camera company LinX.
The current iPhone 6s and 6s Plus. The iPhone 7 is rumored to be slimmer with no antenna bands and a new material composition.
Apple is expected to continue using in-cell display panels for the iPhone 7, which will allow it to shrink the thickness of the device, perhaps making it as thin as the 6.1mm iPod touch. The iPhone 7 is also likely to include a TFT-LCD display as the AMOLED technology Apple is rumored to be working on is not yet ready for use in iOS devices.
Analyst Ming-Chi Kuo, who often accurately predicts Apple’s plans, has said RAM could be a differentiating factor between the two iPhone 7 models. The smaller 4.7-inch iPhone 7 may continue to ship with 2GB RAM, while the larger 5.5-inch iPhone 7 Plus may ship with 3GB RAM.
Other rumors about the iPhone 7 have pointed towards the removal of the headphone jack in favor of headphones that attach to the device using the Lightning port, a change that may also help Apple shave 1mm off of the thickness of the iPhone.
Some early rumors out of the Asian supply chain have suggested the iPhone 7 may include a strengthened, waterproof frame that ditches Apple’s traditional aluminum casing for an all new material and does away with the prominent rear antenna bands on the iPhone 6, iPhone 6 Plus, iPhone 6s, and iPhone 6s Plus. The rumors of a waterproof, dust-proof casing are from somewhat unreliable sources and should not be viewed as fact until further evidence becomes available.
iPhone 6c (Early 2016)
Since the launch of the larger-screened iPhone 6 and iPhone 6 Plus, Apple has been rumored to be working on an upgraded 4-inch iPhone for customers who prefer smaller screens. The “iPhone 6c” is rumored to be launching during the first months of 2016, and it’s another device that could potentially make an appearance at Apple’s rumored March event. If the 4-inch iPhone launches in early 2016, it will be the first iPhone to launch outside of the fall months since 2011.
Apple’s 4-inch iPhone is described as a cross between an iPhone 5s and an iPhone 6, with an aluminum body and iPhone 6-style curved cover glass. There have been some sketchy rumors suggesting it will come in multiple colors like the iPod touch, but that has not yet been confirmed. KGI Securities analyst Ming-Chi Kuo has pointed towards “two or three” color options for the device, but he did not specify which colors.
Rumors have disagreed over whether the iPhone 6c will include an A8 processor or an A9 processor, but Kuo believes Apple will use the same A9 processor that’s used in the iPhone 6s. Other rumors out of the Asian supply chain suggest Apple could also include 2GB RAM in the device, and with an A9 processor and 2GB RAM, the iPhone 6c could be on par with the iPhone 6s when it comes to raw performance.
Other features rumored for the iPhone 6c include a 1,642 mAh battery that’s somewhat larger than the battery used in the iPhone 5s, an 8-megapixel rear-facing camera with an ƒ/2.2 aperture, a 1.2-megapixel front-facing camera, 802.11ac Wi-Fi, and Bluetooth 4.1. The iPhone 6c is not expected to include 3D Touch, as it is a flagship feature of the iPhone 6s, but it is likely to include NFC to enable Apple Pay functionality.
iPad Air 3 (Early 2016)
Since the iPad launched in 2010, Apple has upgraded the tablet on a yearly basis, producing a new version each fall. In 2015, Apple did not upgrade the iPad Air 2, instead focusing on releasing the iPad Pro and the iPad mini 4. Combined with the minor update the iPad mini 2 received in 2014, Apple may be signaling its intention to update its iPads on an 18-month to two-year schedule going forward.
Recent rumors have suggested that Apple is developing an iPad Air 3 that will launch during the first half of 2016. Little is known about the third-generation iPad Air at this time, but it will include an upgraded processor to improve performance. It may also offer RAM upgrades and camera improvements, but it will not include the 3D Touch feature introduced with the iPhone 6s and the iPhone 6s Plus due to manufacturing difficulties expanding the technology to a larger screen size.
Apple likely has something planned to make the iPad Air 3 stand out, but it is not yet clear what that might be.
MacBook Air (2016)
Following the launch of the Retina MacBook in April of 2015, the future of the MacBook Air became uncertain. There has been speculation that the MacBook line will subsume the MacBook Air line as component prices decrease, but some recent rumors have led to hope that the MacBook Air will continue to exist alongside the Retina MacBook and the Retina MacBook Pro, offering a compromise between performance, portability, and cost.
Though it lacks the power of the Retina MacBook Pro and the Retina display of the MacBook, the MacBook Air continues to be popular with consumers for its low price point.
Current rumors suggest Apple will continue producing the MacBook Air, with plans to launch 13 and 15-inch MacBook Air models during the third quarter of 2016, perhaps unveiling the machines around the annual Worldwide Developers Conference.
The MacBook Air’s design has remained unchanged since 2010, so a 2016 redesign that focuses on a slimmer chassis with bigger screens and revamped internals is not out of the realm of possibility. Apple has been increasing the sizes of its devices, introducing a larger 5.5-inch iPhone and a 12.9-inch iPad Pro, so a 15-inch MacBook Air also seems reasonable. The rumor does not mention an 11-inch MacBook Air, suggesting it will potentially be phased out in favor of larger screen sizes and to let the 12-inch Retina MacBook stand out as the sole ultraportable machine.
Current 11 and 13-inch MacBook Air compared to 15-inch Retina MacBook Pro
If Apple does introduce a 2016 MacBook Air, it will likely include Intel’s next-generation Skylake chips, which will offer 10 percent faster CPU performance, 34 percent faster Intel HD graphics, and 1.4 hours of additional battery life compared to the equivalent Broadwell chips in current MacBook Air models. Skylake U-Series 15-watt chips appropriate for the MacBook Air will be shipping in early 2016.
While the current rumor has suggested the new MacBook Air models will launch in the third quarter of 2016, they could potentially be ready to debut earlier in the year. The last MacBook Air update was in March of 2015 and Apple may not want to wait more than a full year before introducing a refresh.
As there haven’t been many rumors about a new MacBook Air at this time, an update should not be viewed as a sure thing. Supply chain information is not always accurate, and there’s a chance the information shared about the alleged 13 and 15-inch MacBook Air could instead apply to the Retina MacBook Pro.
Retina MacBook Pro (Early 2016)
Over the course of the past two years, Intel’s chip delays have significantly impacted Apple’s Retina MacBook Pro release plans, especially for the 15-inch model. Broadwell delays resulted in staggered update timelines for 13 and 15-inch models, which were last updated in March and May of 2015, respectively.
While the 13-inch Retina MacBook Pro was updated with Broadwell chips, the 15-inch machine has continued to offer Haswell processors, and Apple’s upgrade path for the 15-inch Retina MacBook Pro isn’t quite clear.
Broadwell chips appropriate for a 15-inch Retina MacBook Pro update became available in June of 2015, so Apple could release an updated 15-inch Retina MacBook Pro in early 2016 using these chips. Alternatively, and more likely, Apple could bypass Broadwell altogether in favor of a Skylake update for both the 13 and 15-inch Retina MacBook Pro.
Skylake U-Series 28-watt chips appropriate for the 13-inch Retina MacBook Pro will begin shipping from Intel in early 2016, as will 45-watt H-Series chips with Intel Iris Pro graphics appropriate for the 15-inch Retina MacBook Pro. Exact shipping timelines for the chips are not yet known, but with an early 2016 release timeline, new Retina MacBook Pro models could come within the first few months of the year, perhaps being unveiled at the aforementioned rumored March event. Should the chips come at different times, Apple could stagger the 2016 MacBook Pro updates as it did in 2015.
Aside from prospective chip updates, little is known about the next-generation Retina MacBook Pro. Given that it’s been four years since the machine was redesigned, it’s possible we could see a refreshed, slimmer body and an improved Retina display, but there have been no rumors to suggest this is the case.
Retina MacBook (Early 2016)
Skylake Core M chips appropriate for a second-generation Retina MacBook are already available, meaning a refreshed Retina MacBook could be introduced at any moment. The new Core M chips offer 10 hours of battery life and 10 to 20 percent faster CPU performance compared to the Broadwell chips used in the first-generation machine.
The most notable upgrade in a second-generation Retina MacBook that uses Skylake chips would come in the form of graphics improvements, as the Skylake Core M chips offer up to 40 percent faster graphics performance.
Beyond Skylake chips, it is not known what other improvements Apple might offer in a second-generation Retina MacBook. Given that the design was just introduced in April of 2015, the new machine will undoubtedly use the same chassis, but a Rose Gold color option to match the new Rose Gold iPhone 6s is a possibility.
If Apple is planning to introduce new Macs at a rumored Apple Watch-centric event in March, that may be when the new Retina MacBook will debut.
iMac (Late 2016)
Apple’s iMac, like its MacBook Pro, has been impacted by Intel’s chip delays. Current higher-end models already use Skylake graphics but lower-end models continue to use Broadwell chips. Given that the iMac lineup was just refreshed in October of 2015, another update may not come until late in 2016.
Apple’s future chip plans for the iMac are difficult to decipher, as Intel does not plan to introduce desktop-class socketed Skylake chips with integrated Iris or Iris Pro graphics that would be appropriate for lower-end iMacs, which use integrated graphics instead of discrete graphics.
With no prospective chips available for the lower-end iMacs, it is not clear what Apple is going to do in terms of processor upgrades, making it nearly impossible to predict when we might see the next iMac update or what it might include. Intel plans to release Kaby Lake processors in late 2016, but details on Kaby Lake chips appropriate for the iMac are not available, and it’s possible Kaby Lake could see delays.
There are also no rumors on other features that could be included with a next-generation iMac update, but going forward, Apple may fully drop non-Retina 21.5-inch models as hardware prices come down in favor of an all-Retina lineup.
iOS 10 (Late 2016)
Each September, Apple launches an updated version of iOS to accompany its latest iPhones. In 2016, the company is expected to debut iOS 10, the successor to iOS 9. iOS 8 and iOS 9 both focused more on features than design, so it is quite possible iOS 10 will be an update that introduces more significant design changes, similar to iOS 7.
Because iOS 9 just launched three and a half months ago, iOS 10 rumors have not yet begun. As the year progresses, we’ll get a glimpse at what to expect in September, but for now, all we know is that there’s an update coming.
OS X 10.12 (Late 2016)
Along with iOS, OS X is also updated on a yearly basis, with an update coming each fall around September or October. In 2016, we expect to see the debut of OS X 10.12, the followup to OS X 10.11 El Capitan.
El Capitan was an update designed to introduce bug fixes and build on the features that debuted with OS X 10.10 Yosemite, so it’s likely OS X 10.12 will be a bigger standalone update that includes design tweaks and new features.
watchOS 3 (Early 2016)
watchOS is the software that runs on the Apple Watch, and in 2016, Apple is expected to launch a third version of the software. watchOS debuted alongside the Apple Watch in April, while watchOS 2 came out just months later in September with iOS 9.
Apple has thus far tied its watchOS releases to iOS releases, but it’s quite possible that watchOS 3 will launch alongside an updated second-generation Apple Watch rather than alongside iOS 10 in September. A second-generation Apple Watch will potentially require some significant software updates if major hardware changes like new sensors or cameras are introduced.
New versions of the iPhone ship with new versions of iOS, so it’s logical to expect the same thing to happen with the Apple Watch, but thus far there are no rumors about the watchOS 3 update or what features might be included.
tvOS 10?
Apple TV software traditionally has not seen the same major software updates as iOS devices and the Apple Watch, so Apple’s plans for tvOS are not clear. So far, there have been some minor tvOS updates, but it is not yet known if Apple will push major version upgrades with new features and design changes on a yearly basis.
If Apple is planning to offer iOS-style updates for tvOS, the first major tvOS software update could come in the fall, perhaps alongside iOS 10.
Other Possibilities
Fifth-generation Apple TV
Shortly after the launch of the fourth-generation Apple TV, there was a sketchy rumor suggesting development and production had already begun on a fifth-generation Apple TV with an upgraded CPU. While it’s possible Apple has plans to release an updated Apple TV in 2016, it’s highly unlikely such a device is already in production and it’s equally unlikely Apple would release it before the fall of 2016.
Prior to the launch of the fourth-generation Apple TV, the set-top box went multiple years without a significant update. It is not clear how often Apple will update the Apple TV now that a new version has been released, so we will need to wait until later in the year for more information on the Apple TV upgrade schedule.
iPad Pro 2
The iPad Pro was released in November of 2015 and Apple’s plans for a second-generation device are not yet known. For several years, Apple was updating its iPads on a yearly basis, but its more recent update timelines suggest it is potentially moving to an 18- or 24-month upgrade cycle for iPads, making it unclear when we might see an iPad Pro 2.
With the iPad Air line, for example, Apple introduced an iPad Air 2 in 2014 but neglected to upgrade it to an iPad Air 3 in 2015. The iPad mini followed a similar pattern: the 2014 iPad mini 3 added only Touch ID to the 2013 iPad mini 2, while the 2015 iPad mini 4 featured a more significant revamp.
An iPad Pro 2 could potentially debut in 2016 with an updated processor and other improved features, but it’s also just as likely Apple will wait until mid-to-late 2017 to introduce a second-generation iPad Pro. More information on Apple’s iPad Pro plans will come later in 2016, firming up potential release timelines.
iPad mini 5
Apple introduced the iPad mini 4 in late 2015, following the launch of the iPad mini 2 in 2013 and the minor iPad mini 3 update in 2014. With Apple seemingly shifting away from a yearly upgrade cycle for its iPad lineup, we may not see an iPad mini 5 in 2016.
Instead, 2016 may see the launch of an updated iPad Air 3, followed by an iPad mini update in 2017. Apple’s iPad sales have been flagging in recent years as customers do not update their tablets as often as their phones, which has led Apple to try different upgrade strategies and cycles. With Apple’s shifting plans, it is not yet clear when the iPad mini will see another update.
Ahead of the launch of the iPad mini 4, there were some rumors that Apple would discontinue its smallest tablet, but with the iPad mini 4, Apple has signaled its intention to continue offering the iPad in three screen sizes to meet different customer needs.
Mac Pro
The Mac Pro launched in late 2013, and since then, it has not seen an update. It’s quite possible 2016 will be the year Apple will refresh the machine, as potential references to an updated Mac Pro were discovered in OS X El Capitan.
Grantley Xeon E5 V3 Haswell-EP processors appropriate for a high-end Mac Pro upgrade were introduced in 2014, but Apple may be waiting on E5 V4 Broadwell-EP chips for the top-of-the-line Mac Pro that are set to launch in the first half of 2016. E3 V4 chips appropriate for lower-end machines are already available, as are Skylake E3 V5 chips.
If this is the case, a Mac Pro launch will happen after the chips become available, with the machine perhaps seeing a mid-to-late 2016 debut.
Updated AMD FirePro graphics cards were introduced in 2015, as were cards built on AMD’s Fury platform, both of which could potentially be used in a next-generation Mac Pro. Fury graphics are more likely, and an updated Mac Pro could also include faster memory, improved storage, and Thunderbolt 3 connectivity introduced through a shift to USB-C.
In the past, prior to its 2013 redesign, the Mac Pro was updated in 2006, 2008, 2009, 2010, and 2012.
Mac mini
The Mac mini was last updated in 2014, introducing Haswell processors and features like 802.11ac WiFi and Thunderbolt 2. Given that it’s now been two years since the update, Apple could introduce new Mac mini models with Skylake processors in 2016. Two years is the longest the Mac mini has gone without a refresh.
Apple’s Mac mini line uses the same U-Series chips that are found in the MacBook Air and the 13-inch Retina MacBook Pro, and Skylake chips appropriate for an updated Mac mini will be shipping in the first months of 2016. A new Mac mini may debut in early-to-mid 2016 alongside a refreshed MacBook Air and MacBook Pro.
In the past, the Mac mini saw upgrades in 2006, 2007, 2009, 2010, 2011, and 2012 before going without an upgrade for two years after the late 2012 update.
Presidential hopefuls are arguing about it. Officials like FBI Director James Comey have publicly criticized tech companies for their encryption practices. Facebook-owned WhatsApp was temporarily banned in Brazil last week for failing to hand over user info it claims it didn’t have.
Almost all messaging companies encrypt messages en route between a user’s device and company servers, where the company could then read them if needed. The problem arises, though, when messages are end-to-end encrypted, which means they are only readable on the sender’s and receiver’s devices. That means the messaging companies can’t read them. Companies like Apple offer this level of security to satisfy users looking for total privacy. Law enforcement officials hate it because it puts messages beyond their reach, even with a warrant.
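The distinction comes down to where the key lives. Here is a deliberately simplified sketch (a one-time pad built on Python’s standard library, not anything a real messaging app uses) of why end-to-end encryption leaves the relaying server blind: the key exists only at the two endpoints, so the server can store and forward ciphertext but never recover the plaintext.

```python
import secrets

# Toy illustration only (a one-time pad, NOT real messenger crypto):
# the key is shared by sender and receiver alone, so the relaying
# server holds ciphertext it cannot read.
def xor_bytes(key: bytes, data: bytes) -> bytes:
    return bytes(k ^ d for k, d in zip(key, data))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # held only at the endpoints

ciphertext = xor_bytes(key, message)     # all the server ever sees
recovered = xor_bytes(key, ciphertext)   # XOR is its own inverse

assert recovered == message
print("server stores:", ciphertext.hex())
```

Server-side encryption looks the same on the wire; the difference is that the company, not just the endpoints, holds a copy of the key.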
Who can read your private messages? We checked in with some of the most popular messaging companies out there, and here’s what we found.
These Companies Can’t Always Read Your Messages
Apple: Apple’s iMessages are end-to-end encrypted, which means they can only be read on users’ phones and the company can’t read them. There’s a caveat here, though. If you back up your messages in iCloud, then Apple can read them and could be forced to hand them over to authorities if provided with an appropriate warrant.
WhatsApp*: WhatsApp gets an asterisk here because while it’s almost done rolling out end-to-end encryption to all of its users, it’s not officially there yet. Either way, the company claims that it does not store messages on its servers, which means it can’t hand over messages if approached by law enforcement officials. (This is what got WhatsApp into trouble in Brazil.)
Telegram**: Telegram messages can be totally private if you want them to be. The company offers end-to-end encryption if users turn on the app’s “secret chat” feature and thus can’t read those user messages. Regular messages are stored on Telegram’s servers. The app benefited immensely from Brazil’s temporary WhatsApp ban. Telegram claims that it added 5.7 million new users on the day WhatsApp was blocked.
Signal: Owned by Open Whisper Systems, Signal is also end-to-end encrypted. The company explicitly states on its website that it “does not have access to the contents of any messages sent by Signal users.”
Line*: Line offers end-to-end encryption, but only if both the sender and recipient of a message turn on a feature called “Letter Sealing.” This will encrypt your messages so the company can’t read them, but regular messages without the feature are not end-to-end encrypted and Line may have to hand them over if required by Japanese law.
These Companies Can Read Your Messages
Kik*: Kik also gets an asterisk here. Messages are not end-to-end encrypted, so the company can theoretically read them. But Kik claims it deletes user messages from its servers as soon as they’re delivered to a user’s device. That means it wouldn’t be able to share your messages with authorities if requested, and the length of time during which it could read your messages is extremely short.
Facebook (Messenger and Instagram): Both Facebook Messenger and Facebook-owned Instagram encrypt messages only when they are en route between a user’s device and company servers where they are stored. This means Facebook might have to hand over private messages if required by law.
Google: Messages sent via Google Hangouts are also encrypted en route and even on the company’s servers, but Google can still read them if needed. Encrypting the messages while on Google servers is intended to keep others from jacking in and reading them, but Google itself has the encryption key. This means Google might have to hand over private messages if required by law.
Snapchat: Like Google, Snapchat messages are encrypted while at rest on Snapchat’s servers, though the company holds the encryption key. Snaps are deleted from the servers as soon as they’re opened by the intended recipients, and Snapchat claims these delivered messages “typically cannot be retrieved from Snapchat’s servers by anyone, for any reason.” But unopened Snaps are kept on the servers for 30 days before being deleted. That means Snapchat might have to hand over unopened, private messages if required by law.
Twitter: Direct messages on Twitter are not end-to-end encrypted. The company might have to hand over private messages if required by law.
Skype: Microsoft-owned Skype does not offer end-to-end encryption for instant messages. They are stored on Skype’s servers for a “limited time,” which means Skype might have to hand over private messages if required by law.
**The Telegram section was updated to include the distinction that end-to-end encryption is only available for the app’s “secret chats.”
Tesla CEO Elon Musk has made a bold prediction: Tesla Motors will have a self-driving car within two years.
“I think we have all the pieces,” Musk told Fortune, “and it’s just about refining those pieces, putting them in place, and making sure they work across a huge number of environments — and then we’re done. It’s a much easier problem than people think it is.”
Although Musk’s comments to Fortune came Monday, The Street pegged a rise in Tesla’s shares to the comments on Tuesday. The ambitious timeframe appeared to be offering support to the stock again today, with shares trading up $1.47, or 0.64 percent, at $231.42 around 7:18 a.m. PST.
This is the most aggressive timeline Musk has mentioned. While Musk claims the problem is easier than people think it is, he doesn’t think the tech is so accessible that any hacker could create a self-driving car. Musk took the opportunity to call out hacker George Hotz, who claimed via a Bloomberg article last week that he had developed self-driving car technology that could compete with Tesla’s. Musk said he wasn’t buying it.
“But it’s not like George Hotz, a one-guy-and-three-months problem,” Musk said to Fortune. “You know, it’s more like, thousands of people for two years.”
The company went so far as to post a statement last week about Hotz’s achievement.
“We think it is extremely unlikely that a single person or even a small company that lacks extensive engineering validation capability will be able to produce an autonomous driving system that can be deployed to production vehicles,” the company stated. “It may work as a limited demo on a known stretch of road — Tesla had such a system two years ago — but then requires enormous resources to debug over millions of miles of widely differing roads.”
While Tesla is unconcerned about Hotz, the company’s new timeline may have other autonomous car developers hitting the accelerator. Tech companies like Google and Apple, in addition to automakers such as Volvo and General Motors, are all competing to be among the first to offer some form of self-driving tech. Many believe the early 2020s would be a realistic timeframe to expect to see the public engaging with self-driving cars.
Just yesterday, it was reported that Google and Ford will enter into a joint venture to build self-driving vehicles with Google’s technology, according to Yahoo Autos, citing sources familiar with the plans. The official announcement is expected to come during the Consumer Electronics Show in January, but there is no manufacturing timeline.
But even if Tesla moves quickly on self-driving cars, are consumers ready for them? The Palo Alto-based carmaker’s recent Firmware 7.1 Autopilot update includes restrictions on self-driving features. The update only allows its Autosteer feature to engage when the Model S is traveling below the posted speed limit. The update came shortly after it was reported that drivers were involved in dangerous activities while the Autopilot features were engaged.
Elon Musk’s Billion-Dollar AI Plan Is About Far More Than Saving the World
Elon Musk and Sam Altman worry that artificial intelligence will take over the world. So, the two entrepreneurs are creating a billion-dollar not-for-profit company that will maximize the power of AI—and then share it with anyone who wants it.
At least, this is the message that Musk, the founder of electric car company Tesla Motors, and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI. In an interview with Steven Levy of Backchannel, timed to the company’s launch, Altman said they expect this decades-long project to surpass human intelligence. But they believe that any risks will be mitigated because the technology will be “usable by everyone instead of usable by, say, just Google.”
Naturally, Levy asked whether their plan to freely share this technology would actually empower bad actors, if they would end up giving state-of-the-art AI to the Dr. Evils of the world. But they played down this risk. They feel that the power of the many will outweigh the power of the few. “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements,” said Altman, “we think it’s far more likely that many, many AIs will work to stop the occasional bad actors.”
It’ll be years before we know if this counterintuitive argument holds up. Super-human artificial intelligence is an awfully long way away, if it arrives at all. “This idea has a lot of intuitive appeal,” Miles Brundage, a PhD student at Arizona State University who studies the human and social dimensions of science and technology, says of OpenAI. “But it’s not yet an open-and-shut argument. At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”
But in the creation of OpenAI, there are more forces at work than just the possibility of super-human intelligence achieving world domination. In the shorter term, OpenAI can directly benefit Musk and Altman and their companies (Y Combinator backed such unicorns as Airbnb, Dropbox, and Stripe). After luring top AI researchers from companies like Google and setting them up at OpenAI, the two entrepreneurs can access ideas they couldn’t get their hands on before. And in pooling online data from their respective companies as they’ve promised to, they’ll have the means to realize those ideas. Nowadays, one key to advancing AI is engineering talent, and the other is data.
If OpenAI stays true to its mission of giving everyone access to new ideas, it will at least serve as a check on powerful companies like Google and Facebook. With Musk, Altman, and others pumping more than a billion dollars into the venture, OpenAI is showing how the very notion of competition has changed in recent years. Increasingly, companies and entrepreneurs and investors are hoping to compete with rivals by giving away their technologies. Talk about counterintuitive.
Yes, such sharing is a way of competing. If a company like Google or Facebook openly shares software or hardware designs, it can accelerate the progress of AI as a whole. And that, ultimately, advances their own interests as well. For one, as the larger community improves these open source technologies, Google and Facebook can push the improvements back into their own businesses. But open sourcing is also a way of recruiting and retaining talent. In the field of deep learning in particular, researchers—many of whom come from academia—are very much attracted to the idea of openly sharing their work, of benefiting as many people as possible. “It is certainly a competitive advantage when it comes to hiring researchers,” Altman tells WIRED. “The people we hired … love the fact that [OpenAI is] open and they can share their work.”
This competition may be more direct than it might seem. We can’t help but think that Google open sourced its AI engine, TensorFlow, because it knew OpenAI was on the way—and that Facebook shared its Big Sur server design as an answer to both Google and OpenAI. Facebook says this was not the case. Google didn’t immediately respond to a request for comment. And Altman declines to speculate. But he does say that Google knew OpenAI was coming. How could it not? The project nabbed Ilya Sutskever, one of its top AI researchers.
That doesn’t diminish the value of Google’s open source project. Whatever the company’s motives, the code is available to everyone to use as they see fit. But it’s worth remembering that, in today’s world, giving away tech is about more than magnanimity. The deep learning community is relatively small, and all of these companies are vying for the talent that can help them take advantage of this extremely powerful technology. They want to share, but they also want to win. They may release some of their secret sauce, but not all. Open source will accelerate the progress of AI, but as this happens, it’s important that no one company or technology becomes too powerful. That’s why OpenAI is such a meaningful idea.
His Own Apollo Program
You can also bet that, on some level, Musk too sees sharing as a way of winning. “As you know, I’ve had some concerns about AI for some time,” he told Backchannel. And certainly, his public fretting over the threat of an AI apocalypse is well known. But he also runs Tesla, which stands to benefit from the sort of technology OpenAI will develop. Like Google, Tesla is building self-driving cars, which can benefit from deep learning in enormous ways.
Deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain. Feed enough photos of a cat into a neural net, and it can learn to recognize a cat. Feed it enough human dialogue, and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react, and it can learn to drive.
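The “feed it examples and it learns” loop can be shown at toy scale. The sketch below is not deep learning—it is a single perceptron learning the AND function, with all numbers chosen for illustration—but the mechanics it shows (predict, measure the error, nudge the weights) are the same ones a large neural network applies across millions of parameters:

```python
import random

# A single neuron learning AND stands in for "enough photos of a cat":
# show examples, measure the error, nudge the weights.
random.seed(1)
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

for _ in range(100):  # AND is linearly separable, so this converges
    errors = 0
    for (x1, x2), target in data:
        err = target - predict(x1, x2)
        if err:
            errors += 1
            # Perceptron rule: shift weights toward the target.
            w[0] += 0.1 * err * x1
            w[1] += 0.1 * err * x2
            b += 0.1 * err
    if errors == 0:
        break

print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Swap the four hand-written examples for millions of labeled photos, and the single neuron for stacked layers of them, and you have the outline of the systems described above.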
Yes, Musk could just hire AI researchers to work at Tesla. And he is. But with OpenAI, he can hire better researchers (because it’s open, and because it’s not constrained by any one company’s business model or short-term interest). He can even lure researchers away from Google. Plus, he can create a far more powerful pool of data that can help feed the work of these researchers. Altman says that Y Combinator companies will share their data with OpenAI, and that’s no small thing. Pair their data with Tesla’s, and you start to rival Google—at least in some ways.
“It’s probably better in some dimensions and worse in others,” says Chris Nicholson, the CEO of deep learning startup called Skymind, which was recently accepted into the Y Combinator program. “I’m sure Airbnb has great housing data that Google can’t touch.”
Musk was an early investor in a company called DeepMind—a UK-based outfit that describes itself as “an Apollo program for AI.” And this investment gave him a window into how this remarkable technology was developing. But then Google bought DeepMind, and that window closed. Now, Musk has started his own Apollo program. He once again has the inside track. And OpenAI’s other investors are in a similar position, including Amazon, an Internet giant that trails Google and Facebook in the race to AI.
Pessimistic Optimists
But, no, this doesn’t diminish the value of Musk’s open source project. He may have selfish as well as altruistic motives. But the end result is still enormously beneficial to the wider world of AI. In sharing its tech with the world, OpenAI will nudge Google, Facebook, and others to do so as well—if it hasn’t already. That’s good for Tesla and all those Y Combinator companies. But it’s also good for everyone who’s interested in using AI.
Of course, in sharing its tech, OpenAI will also provide new ammunition to Google and Facebook. And Dr. Evil, wherever he may lurk. He can feed anything OpenAI builds back into his own systems. But the biggest concern isn’t necessarily that Dr. Evil will turn this tech loose on the world. It’s that the tech will turn itself loose on the world. Deep learning won’t stop at self-driving cars and natural language understanding. Top researchers believe that, given the right mix of data and algorithms, its understanding can extend to what humans call common sense. It could even extend to super-human intelligence.
“The fear is of a super-intelligence that recursively improves itself, reaches an escape velocity, and becomes orders of magnitude smarter than any human could ever hope to be,” Nicholson says. “That’s a long ways away. And some people think it might not happen. But if it did, that will be scary.”
This is what Musk and Altman are trying to fight. “Developing and enabling and enriching with technology protects people,” Altman tells us. “Doing this is the best way to protect all of us.” But at the same time, they’re shortening the path to super-human intelligence. And though Altman and Musk may believe that giving access to super-human intelligence to everyone will keep any rogue AI in check, the opposite could happen. As Brundage points out: If companies know that everyone is racing towards the latest AI at breakneck speed, they may be less inclined to put safety precautions in place.
How necessary those precautions really are depends, ironically, on how optimistic you are about humanity’s ability to accelerate technological progress. Based on their past successes, Musk and Altman have every reason to believe the arc of progress will keep bending upward. But others aren’t so sure that AI will threaten humanity in the way that Musk and Altman believe it will. “Thinking about AI is the cocaine of technologists: it makes us excited, and needlessly paranoid,” Nicholson says.
Either way, the Googles and the Facebooks of the world are rapidly pushing AI towards new horizons. And at least in small ways, OpenAI can help keep them—and everyone else—in check. “I think that Elon and that group can see AI is unstoppable,” Nicholson says, “so all they can hope to do is affect its trajectory.”
Today, Porsche announced it’s investing more than a billion dollars to bring the Mission E to production. As in, you’ll be able to buy one. We’re light on details—like the size of the battery, or when we’ll actually see one on the road—but we’ve got the most important numbers. The motor (or motors, Porsche hasn’t said) will produce more than 600 horsepower. The four-seater Mission E will go from 0 to 62 mph in under 3.5 seconds. And it will go 310 miles on a charge.
Porsche, which faces increasingly strict fuel emission standards from US and European authorities, has been working with batteries for a few years now, with top-notch results. It already offers plug-in hybrid versions of the Panamera and Cayenne, and it has successfully raced a 911 hybrid. Then there’s the flat-out amazing gas-electric 918 Spyder supercar and the 919 Hybrid that won at Le Mans this year. So it makes sense to make the next step a full electric.
Compared to Tesla’s current range-topper, the excellent Model S P90D, the Mission E will offer a bit less power and a slower acceleration time. But Porsche wins on range—the longest-legged Tesla goes roughly 286 miles on a charge. Here, the Germans have a second advantage: They’re working on an 800-volt charger that will power the car up to 80 percent in just 15 minutes, half the time it takes the Tesla.
Porsche plans to build the battery into the floor of the car, like Tesla does, so you can expect a very low center of gravity, great news for performance. But really, the Mission E wins on looks. The Model S and Model X SUV are lovely designs, but the Porsche is simply gorgeous, in the way only a Porsche can be. We’ve only seen the concept version, but hopefully Porsche will be smart enough to change as little as possible on the way to production.
Apple is known for being one of the most challenging and exciting places to work, so it’s not surprising to learn that getting a job there is no easy task.
Like Google and other big tech companies, Apple asks both technical questions based on your past work experience and some mind-boggling puzzles.
We combed through recent posts on Glassdoor to find some of the toughest interview questions candidates have been asked.
Some require solving tricky math problems, while others are simple but vague enough to keep you on your toes.
“If you have 2 eggs, and you want to figure out what’s the highest floor from which you can drop the egg without breaking it, how would you do it? What’s the optimal solution?” — Software Engineer candidate
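The classic answer balances the two eggs: the first egg narrows the range in decreasing steps (14, 13, 12, …) while the second scans floor by floor, giving a worst case of 14 drops for 100 floors. A short dynamic-programming sketch (an illustration of the standard solution, not anything from the interview itself) confirms that bound:

```python
# Minimum worst-case number of drops needed to find the highest safe
# floor with `eggs` eggs and `floors` floors (two eggs in the puzzle).
def min_drops(eggs: int, floors: int) -> int:
    # dp[e] = how many floors are distinguishable with e eggs at the
    # current number of drops.
    dp = [0] * (eggs + 1)
    drops = 0
    while dp[eggs] < floors:
        drops += 1
        # Iterate high-to-low so dp[e - 1] still holds the value for
        # the previous drop count when dp[e] is updated.
        for e in range(eggs, 0, -1):
            dp[e] = dp[e - 1] + dp[e] + 1
    return drops

print(min_drops(2, 100))  # → 14: drop at floors 14, 27, 39, … then scan
```

With two eggs the answer follows the triangular numbers: the smallest n with n(n+1)/2 ≥ 100 is 14.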
“You have 100 coins lying flat on a table, each with a head side and a tail side. 10 of them are heads up, 90 are tails up. You can’t feel, see or in any other way find out which side is up. Split the coins into two piles such that there are the same number of heads in each pile.” — Software Engineer candidate
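The trick is that flipping is free information: take any 10 coins as a second pile and flip every one of them. If that handful originally held k heads, the remaining 90 coins hold 10 − k heads, and after the flips the handful also shows 10 − k heads. A quick simulation of this well-known solution:

```python
import random

def split_coins(coins):
    # coins: list of booleans, True = heads up (exactly 10 of them).
    # Pile B is any 10 coins with every one flipped; pile A is the rest.
    pile_b = [not c for c in coins[:10]]
    pile_a = coins[10:]
    return pile_a, pile_b

random.seed(0)
coins = [True] * 10 + [False] * 90
random.shuffle(coins)            # we can't see which side is up anyway
pile_a, pile_b = split_coins(coins)
print(sum(pile_a), sum(pile_b))  # head counts are always equal
```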
“Describe yourself, what excites you?” — Software Engineer candidate
“There are three boxes, one contains only apples, one contains only oranges, and one contains both apples and oranges. The boxes have been incorrectly labeled such that no label identifies the actual contents of the box it labels. Opening just one box, and without looking in the box, you take out one piece of fruit. By looking at the fruit, how can you immediately label all of the boxes correctly?” — Software QA Engineer candidate
“Scenario: You’re dealing with an angry customer who was waiting for help for the past 20 minutes and is causing a commotion. She claims that she’ll just walk over to Best Buy or the Microsoft Store to get the computer she wants. Resolve this issue.” — Specialist candidate
“Have you ever disagreed with a manager’s decision, and how did you approach the disagreement? Give a specific example and explain how you rectified this disagreement, what the final outcome was, and how that individual would describe you today.” — Software Engineer candidate
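The key is to draw from the box labeled “apples and oranges”: since every label is wrong, that box holds only one kind of fruit, and the other two boxes then follow by elimination. The standard deduction, sketched in code:

```python
def relabel(fruit_drawn: str) -> dict:
    # fruit_drawn: the fruit taken from the box labeled "both".
    # Every label is wrong, so that box holds only `fruit_drawn`.
    labels = {"both": fruit_drawn}
    other = "oranges" if fruit_drawn == "apples" else "apples"
    # The box labeled `other` can't hold `other` (wrong label) and
    # can't hold only `fruit_drawn` (already found), so it's the mix.
    labels[other] = "both"
    # The remaining box holds the remaining fruit.
    labels[fruit_drawn] = other
    return labels

print(relabel("apples"))
# → {'both': 'apples', 'oranges': 'both', 'apples': 'oranges'}
```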
“You put a glass of water on a record turntable and begin slowly increasing the speed. What happens first — does the glass slide off, tip over, or does the water splash out?” — Mechanical Engineer candidate
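The intended reasoning compares the critical speed for each failure mode. With made-up but plausible numbers (the friction coefficient, glass geometry, and distance from the spindle below are all assumptions, and the splash mode is left out for brevity), a rough model looks like:

```python
import math

g = 9.81    # m/s^2
r = 0.12    # m, glass's distance from the spindle (assumption)
mu = 0.4    # glass-on-platter friction coefficient (assumption)
base = 0.035  # m, radius of the glass's base (assumption)
h = 0.06    # m, height of the center of mass (assumption)

# Slides once the needed centripetal force beats friction: w^2 r > mu g
w_slide = math.sqrt(mu * g / r)
# Tips once the inertial torque about the base's outer edge beats
# gravity's restoring torque: w^2 r h > g * base
w_tip = math.sqrt(g * base / (r * h))
print(f"slides at {w_slide:.1f} rad/s, tips at {w_tip:.1f} rad/s")
# With these numbers the glass slides off before it tips.
```

The point of the question is the comparison, not the numbers: a grippy platter or a tall, narrow glass flips which threshold comes first.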
“Tell me something that you have done in your life which you are particularly proud of.” — Software Engineering Manager candidate
“Given an iTunes type of app that pulls down lots of images that get stale over time, what strategy would you use to flush disused images over time?” — Software Engineer candidate
“If you’re given a jar with a mix of fair and unfair coins, and you pull one out and flip it 3 times, and get the specific sequence heads heads tails, what are the chances that you pulled out a fair or an unfair coin?” — Lead Analyst candidate
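As posed, the question leaves the unfair coin’s bias and the mix in the jar unspecified, so any numeric answer rests on assumptions — which is likely the point. A Bayes’ rule sketch with an assumed 50/50 mix shows the mechanics; note that if “unfair” means two-headed, the single tail settles it outright:

```python
def posterior_fair(seq, p_fair_prior=0.5, unfair_heads=1.0):
    # seq: observed flips as a string of 'H'/'T'.
    # unfair_heads: ASSUMED heads probability of the unfair coin.
    def likelihood(p_heads):
        like = 1.0
        for flip in seq:
            like *= p_heads if flip == "H" else 1 - p_heads
        return like

    # Bayes' rule: P(fair | seq) ∝ P(seq | fair) · P(fair).
    lf = likelihood(0.5) * p_fair_prior
    lu = likelihood(unfair_heads) * (1 - p_fair_prior)
    return lf / (lf + lu)

print(posterior_fair("HHT"))  # → 1.0 (a two-headed coin can't show T)
print(posterior_fair("HHT", unfair_heads=0.75))  # ≈ 0.471
```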
“What was your best day in the last 4 years? What was your worst?” — Engineering Project Manager candidate
Creative thinking, tricky logic puzzles: Depending on the area they are applying for, candidates are asked questions that test their technical understanding. Some have to demonstrate empathy, solve logic puzzles, or show creative thinking.
Question for a software engineer: You are holding two eggs and want to find out from what height you can drop them without breaking them. How would you approach this?
Question for a hardware engineer: You place a glass of water on a turntable that spins faster and faster. What happens first: does the glass slide off, does the water spill over, or does the glass tip over?
Question for a phone support candidate: Explain to an eight-year-old how a modem/router works.
Question for a global sales applicant: How many children are born each day?
Question for a Family Room candidate: You come across as very positive; what puts you in a bad mood?
Question for an Apple Specialist candidate: Why did Apple change its name from Apple Computer Incorporated to Apple Inc.?
Question for a software engineer: There are 100 coins lying on a table, ten with the head side up and 90 with the tail side up. You can’t feel, see, or in any other way find out which side faces up. How do you split the coins into two piles so that both contain the same number of heads-up coins?
Question for a software engineer: How would you test a toaster?
Question for a global sales applicant: How would you calculate the cost of a ballpoint pen?
Question for an Apple Specialist candidate: You are dealing with an angry customer who has been waiting for help for 20 minutes and is causing a commotion. She says she will now go to Best Buy or a Microsoft Store to buy the computer she wants. Resolve this problem.
Question for an AppleCare phone support applicant: A man calls in whose computer is basically junk at this point. What do you do?
Several evenings a week, after a day’s work at Google headquarters in Mountain View, California, Sergey Brin drives up the road to a local pool. There, he changes into swim trunks, steps out on a 3-meter springboard, looks at the water below, and dives.
Brin is competent at all four types of springboard diving—forward, back, reverse, and inward. Recently, he’s been working on his twists, which have been something of a struggle. But overall, he’s not bad; in 2006 he competed in the master’s division world championships. (He’s quick to point out he placed sixth out of six in his event.)
The diving is the sort of challenge that Brin, who has also dabbled in yoga, gymnastics, and acrobatics, is drawn to: equal parts physical and mental exertion. “The dive itself is brief but intense,” he says. “You push off really hard and then have to twist right away. It does get your heart rate going.”
There’s another benefit as well: With every dive, Brin gains a little bit of leverage—leverage against a risk, looming somewhere out there, that someday he may develop the neurodegenerative disorder Parkinson’s disease. Buried deep within each cell in Brin’s body—in a gene called LRRK2, which sits on the 12th chromosome—is a genetic mutation that has been associated with higher rates of Parkinson’s.
Not everyone with Parkinson’s has an LRRK2 mutation; nor will everyone with the mutation get the disease. But it does increase the chance that Parkinson’s will emerge sometime in the carrier’s life to between 30 and 75 percent. (By comparison, the risk for an average American is about 1 percent.) Brin himself splits the difference and figures his DNA gives him about 50-50 odds.
That’s where exercise comes in. Parkinson’s is a poorly understood disease, but research has associated a handful of behaviors with lower rates of disease, starting with exercise. One study found that young men who work out have a 60 percent lower risk. Coffee, likewise, has been linked to a reduced risk. For a time, Brin drank a cup or two a day, but he can’t stand the taste of the stuff, so he switched to green tea. (“Most researchers think it’s the caffeine, though they don’t know for sure,” he says.) Cigarette smokers also seem to have a lower chance of developing Parkinson’s, but Brin has not opted to take up the habit. With every pool workout and every cup of tea, he hopes to diminish his odds, to adjust his algorithm by counteracting his DNA with environmental factors.
“This is all off the cuff,” he says, “but let’s say that based on diet, exercise, and so forth, I can get my risk down by half, to about 25 percent.” The steady progress of neuroscience, Brin figures, will cut his risk by around another half—bringing his overall chance of getting Parkinson’s to about 13 percent. It’s all guesswork, mind you, but the way he delivers the numbers and explains his rationale, he is utterly convincing.
Brin, of course, is no ordinary 36-year-old. As half of the duo that founded Google, he’s worth about $15 billion. That bounty provides additional leverage: Since learning that he carries an LRRK2 mutation, Brin has contributed some $50 million to Parkinson’s research, enough, he figures, to “really move the needle.” In light of the uptick in research into drug treatments and possible cures, Brin adjusts his overall risk again, down to “somewhere under 10 percent.” That’s still 10 times the average, but it goes a long way to counterbalancing his genetic predisposition.
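Brin’s chain of adjustments is simple multiplicative arithmetic. A sketch of the numbers as he states them (his off-the-cuff estimates, not clinical figures):

```python
baseline = 0.50  # Brin's "split the difference" estimate for an LRRK2 carrier
after_lifestyle = baseline * 0.5        # diet, exercise, caffeine: "down by half"
after_research = after_lifestyle * 0.5  # anticipated progress of neuroscience

print(f"after lifestyle changes: {after_lifestyle:.0%}")  # 25%
print(f"after research progress: {after_research:.1%}")   # 12.5%, his "about 13 percent"
```

His $50 million in research funding is, in this framing, just one more factor pushing the product below 0.10.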
It sounds so pragmatic, so obvious, that you can almost miss a striking fact: Many philanthropists have funded research into diseases they themselves have been diagnosed with. But Brin is likely the first who, based on a genetic test, began funding scientific research in the hope of escaping a disease in the first place.
His approach is notable for another reason. This isn’t just another variation on venture philanthropy—the voguish application of business school practices to scientific research. Brin is after a different kind of science altogether. Most Parkinson’s research, like much of medical research, relies on the classic scientific method: hypothesis, analysis, peer review, publication. Brin proposes a different approach, one driven by computational muscle and staggeringly large data sets. It’s a method that draws on his algorithmic sensibility—and Google’s storied faith in computing power—with the aim of accelerating the pace and increasing the potential of scientific research. “Generally the pace of medical research is glacial compared to what I’m used to in the Internet,” Brin says. “We could be looking lots of places and collecting lots of information. And if we see a pattern, that could lead somewhere.”
In other words, Brin is proposing to bypass centuries of scientific epistemology in favor of a more Googley kind of science. He wants to collect data first, then hypothesize, and then find the patterns that lead to answers. And he has the money and the algorithms to do it.
Brin’s faith in the power of numbers—and the power of knowledge, more generally—is likely something he inherited from his parents, both scientists. His father, Michael, is a second-generation mathematician; his mother, Eugenia, is trained in applied mathematics and spent years doing meteorology research at NASA. The family emigrated from Russia when Brin was 6. At 17, he took up mathematics himself at the University of Maryland, later adding a second major in computer science. When he reached Stanford for his PhD—a degree he still hasn’t earned, much to his parents’ chagrin—he focused on data mining. That’s when he began thinking about the power of large data sets and what might come of analyzing them for unexpected patterns and insights.
Around the same time, in 1996, Brin’s mother started to feel some numbness in her hands. The initial diagnosis was repetitive stress injury, brought on by years of working at a computer. When tests couldn’t confirm that diagnosis, her doctors were stumped. Soon, though, Eugenia’s left leg started to drag. “It was just the same as my aunt, who had Parkinson’s years ago,” she recalls. “The symptoms started in the same way, at the same age. To me, at least, it was obvious there was a connection.”
At the time, scientific opinion held that Parkinson’s was not hereditary, so Brin didn’t understand his mother’s concern. “I thought it was crazy and completely irrational,” he says. After further tests at Johns Hopkins and the Mayo Clinic, though, she was diagnosed with Parkinson’s in 1999.
Even after the LRRK2 connection was made in 2004, Brin still didn’t connect his mother’s Parkinson’s to his own health. Then, in 2006, his wife-to-be, Anne Wojcicki, started the personal genetics company 23andMe (Google is an investor). As an alpha tester, Brin had the chance to get an early look at his genome. He didn’t find much of concern. But then Wojcicki suggested he look up a spot known as G2019S—the notch on the LRRK2 gene where an adenine nucleotide, the A in the ACTG code of DNA, sometimes substitutes for a guanine nucleotide, the G. And there it was: He had the mutation. His mother’s 23andMe readout showed that she had it, too.
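A single-letter substitution like G2019S is easy to picture in code: compare a sample sequence against a reference, position by position. A toy sketch (the sequences below are invented stand-ins, not actual LRRK2 DNA):

```python
# A SNP is a position where one base substitutes for another.
# G2019S on LRRK2 is a G -> A substitution; these strings are hypothetical.
reference = "ATGGCAGATT"
sample    = "ATGACAGATT"

variants = [(i, r, s)
            for i, (r, s) in enumerate(zip(reference, sample))
            if r != s]
print(variants)  # [(3, 'G', 'A')]: the sample carries the substitution
```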
Brin didn’t panic; for one thing, his mother’s experience with the disease has been reassuring. “She still goes skiing,” he says. “She’s not in a wheelchair.” Instead, he spent several months mulling over the results. He began to consult experts, starting with scientists at the Michael J. Fox Foundation and at the Parkinson’s Institute, which is not far from Google’s headquarters. He quickly realized it was going to be impractical to keep his risk from the public. “I can’t talk to 1,000 people in secret,” he says. “So I might as well put it out there to the world. It seemed like information that was worthy of sharing and might even be interesting.”
So one day in September 2008, Brin started a blog. His first post was called simply “LRRK2.”
“I know early in my life something I am substantially predisposed to,” Brin wrote. “I now have the opportunity to adjust my life to reduce those odds (e.g., there is evidence that exercise may be protective against Parkinson’s). I also have the opportunity to perform and support research into this disease long before it may affect me. And, regardless of my own health, it can help my family members as well as others.”
Brin continued: “I feel fortunate to be in this position. Until the fountain of youth is discovered, all of us will have some conditions in our old age, only we don’t know what they will be. I have a better guess than almost anyone else for what ills may be mine—and I have decades to prepare for it.”
In a sense, we’ve been using genetics to foretell disease risk forever. When we talk about “family history,” we’re largely talking about DNA, about how our parents’ health might hint at our own. A genetic scan is just a more modern way to link our familial past with our potential future. But there’s something about the precision of a DNA test that can make people believe that chemistry is destiny—that it holds dark, implacable secrets. This is why genetic information is sometimes described as “toxic knowledge”: Giving people direct access to their genetic information, in the words of Stanford bioethicist Hank Greely, is out and out “reckless.”
It’s true that in the early days of the science, genetic testing meant learning about a dreaded degenerative disease like Huntington’s or cystic fibrosis. But these diseases, although easy to identify, are extremely rare. Newer research has shown that when it comes to getting sick, a genetic predisposition is usually just one factor. The vast majority of conditions are also influenced by environment and day-to-day habits, areas where we can actually take some action.
But, surprisingly, the concept of genetic information as toxic has persisted, possibly because it presumes that people aren’t equipped to learn about themselves. But research shows this presumption to be unfounded. In 2009, The New England Journal of Medicine published results of the Risk Evaluation and Education for Alzheimer’s Disease study, an 11-year project that sought to examine how people react to finding out that they have a genetic risk for Alzheimer’s. Like Parkinson’s, Alzheimer’s is a neurodegenerative condition centering on the brain. But unlike Parkinson’s, Alzheimer’s has no known treatment. So learning you have a genetic predisposition should be especially toxic.
In the study, a team of researchers led by Robert Green, a neurologist and geneticist at Boston University, contacted adults who had a parent with Alzheimer’s and asked them to be tested for a variation in a gene known as ApoE. Depending on the variation, an ApoE mutation can increase a person’s risk for Alzheimer’s from three to 15 times the average. One hundred sixty-two adults agreed; 53 were told they had the mutation.
The results were delivered to the participants with great care: A genetic counselor walked each individual through the data, and all the subjects had follow-up appointments with the counselor. Therapists were also on call. “People were predicting catastrophic reactions,” Green recalls. “Depression, suicide, quitting their jobs, abandoning their families. They were anticipating the worst.”
But that isn’t what happened. People told that they were at dramatically higher risk for developing Alzheimer’s later in life seemed to process the information and integrate it into their lives, often choosing to lead more healthy lifestyles. “People are handling it,” Green says. “It doesn’t seem to be producing any clinically apparent distress.”
In other experiments, Green has further challenged the conventional wisdom about the toxicity of genetic information: He has begun questioning the need for counselors and therapists. “We’re looking at what happens if you don’t do this elaborate thing. What if you do it like a lab test in your doctor’s office? We’re treating it more like cholesterol and less like Huntington’s disease.”
In other words, given what seems like very bad news, most of us would do what Sergey Brin did: Go over our options, get some advice, and move on with life. “Everyone’s got their challenges; everyone’s got something to deal with,” Brin says. “This is mine. To me, it’s just one of any number of things that I could get in old age. And the most important factor is that I can do something about it.”
High-Speed Science
Can a model fueled by data sets and computational power compete with the gold standard of research? Maybe: Here are two timelines—one from an esteemed traditional research project run by the NIH, the other from the 23andMe Parkinson’s Genetics Initiative. They reached almost the same conclusion about a possible association between Gaucher’s disease and Parkinson’s disease, but the 23andMe project took a fraction of the time.—Rachel Swaby
Traditional Model
1. Hypothesis: An early study suggests that patients with Gaucher’s disease (caused by a mutation to the GBA gene) might be at increased risk of Parkinson’s.
2. Studies: Researchers conduct further studies, with varying statistical significance.
3. Data aggregation: Sixteen centers pool information on more than 5,500 Parkinson’s patients.
4. Analysis: A statistician crunches the numbers.
5. Writing: A paper is drafted and approved by 64 authors.
6. Submission: The paper is submitted to The New England Journal of Medicine. Peer review ensues.
7. Acceptance: NEJM accepts the paper.
8. Publication: The paper notes that people with Parkinson’s are 5.4 times more likely to carry the GBA mutation.
Total time elapsed: 6 years
Parkinson’s Genetics Initiative
1. Tool Construction: Survey designers build the questionnaire that patients will use to report symptoms.
2. Recruitment: The community is announced, with a goal of recruiting 10,000 subjects with Parkinson’s.
3. Data aggregation: Community members get their DNA analyzed. They also fill out surveys.
4. Analysis: Reacting to the NEJM paper, 23andMe researchers run a database query based on 3,200 subjects. The results are returned in 20 minutes.
5. Presentation: The results are reported at a Royal Society of Medicine meeting in London: People with GBA are 5 times more likely to have Parkinson’s, which is squarely in line with the NEJM paper. The finding will possibly be published at a later date.
Total time elapsed: 8 months
If Brin’s blog post betrayed little fear about his risk for Parkinson’s, it did show a hint of disappointment with the state of knowledge on the disease. (His critique was characteristically precise: “Studies tend to have small samples with various selection biases.”)
His frustration is well founded. For decades, Parkinson’s research has been a poor cousin to the study of Alzheimer’s, which affects 10 times as many Americans and is therefore much more in the public eye. What is known about Parkinson’s has tended to emerge from observing patients in clinical practice, rather than from any sustained research. Nearly all cases are classified as idiopathic, meaning there’s no known cause. Technically, the disease is a result of the loss of brain cells that produce the neurotransmitter dopamine, but what causes those cells to die is unclear. The classic symptoms of the condition—tremors, rigidity, balance problems—come on gradually and typically don’t develop until dopamine production has declined by around 80 percent, meaning that a person can have the disease for years before experiencing the first symptom.
As far as treatments go, the drug levodopa, which converts to dopamine in the brain, remains the most effective. But the drug, developed in 1967, has significant side effects, including involuntary movements and confusion. Other interventions, like deep-brain stimulation, are invasive and expensive. Stem cell treatments, which generated great attention and promise a decade ago, “didn’t really work,” says William Langston, director of the Parkinson’s Institute. “Transferring nerve cells into the brain and repairing the brain has been harder than anybody thought.”
There are, however, some areas of promise—including the 2004 discovery of the LRRK2 connection. It’s especially common among people of Ashkenazi descent, like Brin, and appears in just about 1 percent of Parkinson’s patients. Rare as the mutation is, however, LRRK2 cases of Parkinson’s appear indistinguishable from other cases, making LRRK2 a potential window onto the disease in general.
LRRK2 stands for leucine-rich repeat kinase 2. Kinases are enzymes that activate proteins in cells, making them critical to cell growth and death. In cancer, aberrant kinases are known to contribute to tumor growth. That makes them a promising target for research. Drug companies have already developed kinase inhibitors for cancer; it’s a huge opportunity for Parkinson’s treatment, as well: If overactive kinases interfere with dopamine-producing cells in all Parkinson’s cases, then a kinase inhibitor may be able to help not just the LRRK2 carriers but all people with the disease.
Another promising area for research is that delay between the loss of dopamine-producing cells and the onset of symptoms. As it stands, this lag makes treatment a much more difficult problem. “By the time somebody has full-blown Parkinson’s, it’s way too late,” Langston says. “Any number of promising drugs have failed, perhaps because we’re getting in there so late.” But doctors can’t tell who should get drugs earlier, because patients are asymptomatic. If researchers could find biomarkers—telltale proteins or enzymes detected by, say, a blood or urine test—that were produced before symptoms emerged, a drug regimen could be started early enough to work.
And indeed, Brin has given money to both these areas of research, predominantly through gifts to the Parkinson’s Institute and to the Michael J. Fox Foundation, which is committed to what’s called translational research—getting therapies from researchers to the clinic as quickly as possible. Last February the Fox Foundation launched an international consortium of scientists working on LRRK2, with a mandate for collaboration, openness, and speed. “The goal is to get people to change their behavior and share information much more quickly and openly,” says Todd Sherer, head of the Fox Foundation’s research team. “We need to change the thinking.”
As Brin’s understanding of Parkinson’s grew, though, and as he talked with Wojcicki about research models, he realized that there was an even bolder experiment in the offing.
In 1899, scientists at Bayer unveiled Aspirin, a drug the company offered as an effective remedy for colds, lumbago, and toothaches, among other ills. How aspirin—or acetylsalicylic acid—actually worked was a mystery. All people knew was that it did (though a discouraging side effect, gastric bleeding, emerged in some people).
It wasn’t until the 1960s and ’70s that scientists started to understand the mechanism: Aspirin inhibits the production of chemicals in the body called prostaglandins, fatty acids that can cause inflammation and pain. That insight proved essential to understanding the later discovery, in 1988, that people who took aspirin every other day had remarkably reduced rates of heart attack—cases in men dropped by 44 percent. When the drug inhibits prostaglandins, it seems, it inhibits the formation of blood clots, as well—reducing the risk of heart attack or stroke.
The second coming of aspirin is considered one of the triumphs of contemporary medical research. But to Brin, who spoke of the drug in a talk at the Parkinson’s Institute last August, the story offers a different sort of lesson—one drawn from that period after the drug was introduced but before the link to heart disease was established. During those decades, Brin notes, surely “many millions or hundreds of millions of people who took aspirin had a variety of subsequent health benefits.” But the association with aspirin was overlooked, because nobody was watching the patients. “All that data was lost,” Brin said.
In Brin’s way of thinking, each of our lives is a potential contribution to scientific insight. We all go about our days, making choices, eating things, taking medications, doing things—generating what is inelegantly called data exhaust. A century ago, of course, it would have been impossible to actually capture this information, particularly without a specific hypothesis to guide a researcher in what to look for. Not so today. With contemporary computing power, that data can be tracked and analyzed. “Any experience that we have or drug that we may take, all those things are individual pieces of information,” Brin says. “Individually, they’re worthless, they’re anecdotal. But taken together they can be very powerful.”
In computer science, the process of mining such large data sets for useful associations is known as a market-basket analysis. Conventionally, it has been used to divine patterns in retail purchases. It’s how Amazon.com can tell you that “customers who bought X also bought Y.”
But a problem emerges as the data in a basket become less uniform. This was the focus of much of Brin’s work at Stanford, where he published several papers on the subject. One, from 1997, argued that given the right algorithms, meaningful associations can be drawn from all sorts of unconventional baskets—”student enrollment in classes, word occurrence in text documents, users’ visits of Web pages, and many more.” It’s not a stretch to say that our experiences as patients might conceivably be the next item on the list.
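The “customers who bought X also bought Y” association is, at bottom, a co-occurrence count plus a conditional probability. A minimal sketch with an invented basket list (not Brin’s published algorithms, which handle far larger and noisier sets):

```python
from itertools import combinations
from collections import Counter

# Toy transaction data: each set is one customer's "basket".
baskets = [
    {"aspirin", "bandages"},
    {"aspirin", "coffee"},
    {"aspirin", "bandages", "coffee"},
    {"coffee"},
]

item_counts = Counter()
pair_counts = Counter()
for basket in baskets:
    for item in basket:
        item_counts[item] += 1
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# confidence(X -> Y) = support(X and Y) / support(X)
for (x, y), n in pair_counts.items():
    print(f"{x} -> {y}: confidence {n / item_counts[x]:.2f}")
```

The same counting works for any “basket” — classes a student enrolls in, words in a document, or symptoms in a patient history.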
This is especially true given the advances in computational power since 1997, when Brin and his fellow Stanford comp-sci student Larry Page were starting Google. “When Larry and I started the company,” Brin says, “we had to get some hard drives to, you know, store the entire Web. We ended up in a back alley in San Jose, dealing with some shady guy. We spent $10,000 or $20,000, all our life savings. We got these giant stacks of hard drives that we had to fit in our cars and get home. Just last week I happened to go to Fry’s and I picked up a hard drive that was 1 terabyte and cost like $100. And it was bigger than all those hard drives put together.”
This computing power can be put to work to answer questions about health. As an example, Brin cites a project developed at his company’s nonprofit research arm, Google.org. Called Google Flu Trends, the idea is elegantly simple: Monitor the search terms people enter on Google, and pull out those words and phrases that might be related to symptoms or signs of influenza, particularly swine flu.
In epidemiology, this is known as syndromic surveillance, and it usually involves checking drugstores for purchases of cold medicines, doctor’s offices for diagnoses, and so forth. But because acquiring timely data can be difficult, syndromic surveillance has always worked better in theory than in practice. By looking at search queries, though, Google researchers were able to analyze data in near real time. Indeed, Flu Trends can point to a potential flu outbreak two weeks faster than the CDC’s conventional methods, with comparable accuracy. “It’s amazing that you can get that kind of signal out of very noisy data,” Brin says. “It just goes to show that when you apply our newfound computational power to large amounts of data—and sometimes it’s not perfect data—it can be very powerful.” The same, Brin argues, would hold with patient histories. “Even if any given individual’s information is not of that great quality, the quantity can make a big difference. Patterns can emerge.”
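Stripped to its essence, a Flu Trends-style check is a correlation between query volume and reported case counts. A toy version with invented weekly figures:

```python
# Weekly counts: flu-related search queries vs. (later-reported) cases.
# The figures are invented for illustration only.
queries = [120, 150, 300, 800, 950, 600, 320]
cases   = [10,  14,  31,  85, 101,  64,  35]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(queries, cases)
print(f"correlation: {r:.3f}")  # close to 1: query volume tracks cases
```

The real system’s hard part is not this arithmetic but picking query terms that stay predictive across seasons and regions.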
Brin’s tolerance for “noisy data” is especially telling, since medical science tends to consider it poisonous. Biomedical researchers often limit their experiments to narrow questions that can be rigorously measured. But the emphasis on purity can mean fewer patients to study, which results in small data sets. That limits the research’s “power”—a statistical term for the probability that a study will detect an effect that truly exists. And by design it means the data almost never turn up insights beyond what the study set out to examine.
Increasingly, though, scientists—especially those with a background in computing and information theory—are starting to wonder if that model could be inverted. Why not start with tons of data, a deluge of information, and then wade in, searching for patterns and correlations?
This is what Jim Gray, the late Microsoft researcher and computer scientist, called the fourth paradigm of science, the inevitable evolution away from hypothesis and toward patterns. Gray predicted that an “exaflood” of data would overwhelm scientists in all disciplines, unless they reconceived their notion of the scientific process and applied massive computing tools to engage with the data. “The world of science has changed,” Gray said in a 2007 speech—from now on, the data would come first.
Gray’s longtime employer, Bill Gates, recently made a small wager on the fourth paradigm when he invested $10 million in Schrödinger, a Portland, Oregon-based firm that’s using massive computation to rapidly simulate the trial and error of traditional pharmaceutical research.
And Andy Grove, former chair and CEO of Intel, has likewise called for a “cultural revolution” in science, one modeled on the tech industry’s penchant for speedy research and development. Grove, who was diagnosed with Parkinson’s in 2000 and has since made the disease his casus belli, shakes his fist at the pace of traditional science: “After 10 years in the Parkinson’s field, we may finally have three drugs in Phase I and Phase II trials next year—that’s more than ever before. But let’s get real. We’ll get the results in 2012, then they’ll argue about it for a year, then Phase III results in 2015, then argue about that for a year—if I’m around when they’re done …” He doesn’t finish his thought. “The whole field is not pragmatic enough. They’re too nice to themselves.”
Grove disagrees somewhat with Brin’s emphasis on patterns over hypothesis. “You have to be looking for something,” he says. But the two compare notes on the disease from time to time; both are enthusiastic and active investors in the Michael J. Fox Foundation. (Grove is even known to show up on the online discussion forums.)
In the world of traditional drug research, however, there’s more than a little skepticism about swapping out established biomedical approaches for technological models. Derek Lowe, a longtime medicinal chemist and author of a widely read drug industry blog, grants that big hardware and big data can be helpful. But for a disease as opaque as Parkinson’s, he argues, the challenge of drug development will always come down to basic chemistry and biology. “I don’t have a problem with data,” Lowe says. “The problem is that the data is tremendously noisy stuff. We just don’t know enough biology. If Brin’s efforts will help us understand that, I’m all for it. But I doubt they will.”
To be sure, biomedicine, and pharmaceutical research in particular, is not the same as software or computer chips. It’s a much more complicated process, and Brin acknowledges as much: “I’m not an expert in biological research. I write a bunch of computer code and it crashes, no big deal. But if you create a drug and it kills people, that’s a different story.” Brin knows that his method will require follow-up research to get through the traditional hoops of drug discovery and approvals. But, he adds, “in my profession you really make progress based on how quick your development cycle is.”
So, with the cooperation of the Parkinson’s Institute, the Fox Foundation, and 23andMe, he has proposed a new development cycle. Brin has contributed $4 million to fund an online Parkinson’s Disease Genetics Initiative at 23andMe: 10,000 people who’ve been diagnosed with the disease and are willing to pour all sorts of personal information into a database. (They’ve tapped about 4,000 so far.) Volunteers spit into a 23andMe test tube to have their DNA extracted and analyzed. That information is then matched up with surveys that extract hundreds of data points about the volunteers’ environmental exposures, their family history, disease progression, and treatment response. The questions range from the mundane (“Are you nearsighted?”) to the perplexing (“Have you had trouble staying awake?”). It is, in short, an attempt to create the always-on data-gathering project that Brin believes could aid all medical research—and, potentially, himself. “We have no grand unified theory,” says Nicholas Eriksson, a 23andMe scientist. “We have a lot of data.”
It’s hard to overstate the difference between this approach and conventional research. “Traditionally, an experiment with 10 or 20 subjects was big,” says the Parkinson’s Institute’s Langston. “Then it went up to the hundreds. Now 1,000 subjects would be a lot—so with 10,000, suddenly we’ve reached a scale never seen before. This could dramatically advance our understanding.”
Langston offers a case in point. Last October, the New England Journal of Medicine published the results of a massive worldwide study that explored a possible association between people with Gaucher’s disease—a genetic condition in which fatty substances build up excessively in the internal organs—and a risk for Parkinson’s. The study, run under the auspices of the National Institutes of Health, hewed to the highest standards and involved considerable resources and time. After years of work, it concluded that people with Parkinson’s were five times more likely to carry a Gaucher mutation.
Langston decided to see whether the 23andMe Research Initiative might be able to shed some insight on the correlation, so he rang up 23andMe’s Eriksson, and asked him to run a search. In a few minutes, Eriksson was able to identify 350 people who had the mutation responsible for Gaucher’s. A few clicks more and he was able to calculate that they were five times more likely to have Parkinson’s disease, a result practically identical to the NEJM study. All told, it took about 20 minutes. “It would’ve taken years to learn that in traditional epidemiology,” Langston says. “Even though we’re in the Wright brothers early days with this stuff, to get a result so strongly and so quickly is remarkable.”
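The calculation behind a query like Eriksson’s reduces to comparing disease rates between mutation carriers and non-carriers. A sketch with invented counts, chosen only so the arithmetic lands on a fivefold figure (not 23andMe’s actual data):

```python
# Hypothetical 2x2 table from a genotype-database query.
carriers_with_pd, carriers_total = 50, 350
noncarriers_with_pd, noncarriers_total = 1_000, 35_000

risk_carriers = carriers_with_pd / carriers_total
risk_noncarriers = noncarriers_with_pd / noncarriers_total
relative_risk = risk_carriers / risk_noncarriers

print(f"relative risk: {relative_risk:.1f}x")  # 5.0x in this toy table
```

Once genotypes and survey answers sit in one database, this comparison is a single query rather than a multi-year study.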
Mark Hallett, chief of the Human Motor Control section at the National Institute of Neurological Disorders and Stroke, saw Langston present his results at a recent conference and came away very impressed. “The quality of the data is probably not as good as it could be, since it’s provided by the patient,” he says. “But it’s an impressive research tool. It sounds like it’d be useful to generate new hypotheses as opposed to prove anything.”
But hypotheses are what Parkinson’s research needs more of, especially now that we can study people who, like Brin, have an LRRK2 mutation. Since some of these carriers don’t get the disease, we should try to discern why. “This is an information-rich opportunity,” Brin says. “It’s not just the genes—it could be environment or behaviors, it could be that they take aspirin. We don’t know.”
This approach—huge data sets and open questions—isn’t unknown in traditional epidemiology. Some of the greatest insights in medicine have emerged from enormous prospective projects like the Framingham Heart Study, which has followed 15,000 citizens of one Massachusetts town for more than 60 years, learning about everything from smoking risks to cholesterol to happiness. Since 1976, the Nurses’ Health Study has tracked more than 120,000 women, uncovering risks for cancer and heart disease. These studies were—and remain—rigorous, productive, fascinating, even lifesaving. They also take decades and demand hundreds of millions of dollars and hundreds of researchers. The 23andMe Parkinson’s community, by contrast, requires fewer resources and demands far less manpower. Yet it has the potential to yield just as much insight as a Framingham or a Nurses’ Health. It automates science, making it something that just … happens. To that end, later this month 23andMe will publish several new associations, drawn from its main database of 50,000 individuals, that hint at the power of this new scientific method.
“The exciting thing about this sort of research is the breadth of possibilities that it tests,” Brin says. “Ultimately many medical discoveries owe a lot to just some anecdotal thing that happened to have happened, that people happened to have noticed. It could have been the dime they saw under the streetlight. And if you light up the whole street, it might be covered in dimes. You have no idea. This is trying to light up the whole street.”
Sergey Brin is different. Few people have the resources to bend the curve of science; fewer still have spouses who run genetics companies. Given these circumstances and his data-driven mindset, Brin is likely more comfortable with genetic knowledge than most of us. And few people are going to see their own predicament as an opportunity to forge a new sort of science. So yeah, he’s different.
Ask Brin whether he’s a rare breed, and you won’t get much; on-the-record self-reflection doesn’t come easily to him. “Obviously I’m somewhat unusual in the resources that I can bring to bear,” he allows. “But all the other things that I do—the lifestyle, the self-education, many people can do that. So I’m not really that unique. I’m just early. It’s more that I’m on the leading edge of something.”
A decade ago, scientists spent $3 billion to sequence one human genome. Today, at least 20 people have had their whole genomes sequenced, and anyone with $48,000 can add their name to the list. That cost is expected to plummet still further in the next few years. (Brin is in line to have his whole genome sequenced, and 23andMe is considering offering whole-genome tests, though the company hasn’t determined a price.)
As the cost of sequencing drops and research into possible associations increases, whole genome sequencing will become a routine part of medical treatment, just as targeted genetic tests are a routine part of pregnancy today. The issue won’t be whether to look; it will be what to do with what’s found.
Today, the possibility of a rudimentary genetic test appearing on the shelves of Walgreens is headline news—delivered, inevitably, with the subtext that ordinary people will come undone upon learning about their genetic propensities. But other tests have gone from incendiary to innocuous. (Walgreens already stocks at-home paternity tests and HIV tests.) And other disclosures have gone from radical to routine. (In 1961, 90 percent of physicians said they wouldn’t tell their patients if they had cancer.) And other data points have gone from baffling to banal. (Blood pressure, LDL cholesterol, and blood sugar are now the stuff of watercooler chats.)
So, too, will it go with DNA. We’ll all find out about our propensities for disease in great detail and be compelled to work our own algorithms to address that risk. In many cases, this will be straightforward. There will be things we can do today and treatments we can undergo tomorrow.
But in some cases, undoubtedly, we may find ourselves in a circumstance like Brin’s, with an elevated risk for a disease with no cure. So we’ll exercise more, start eating differently, and do whatever else we can think of while we wait for science to catch up. In that way, Brin’s story isn’t just a billionaire’s tale. It’s everyone’s.