New iPad 2018: release date, news and leaks

We're just days away from the first big Apple launch of 2018, and word on the web is that it's going to be another new iPad.

But this one is going to be different to the iPads from before, in that it's going to be even cheaper and – if you read into the most recent Apple invite – it's not going to be aimed at consumers.

That's right, there's an outside chance you won't even be able to buy this iPad, although it's probably going to be available for sale in certain locations and could just be the new budget way to get into Apple's tablet ecosystem.

But before we go too far into that – and give away one of the most surprising features of this new device – let's break it down bit-by-bit so you can get a proper taste of what we expect from Apple on March 27.

Cut to the chase

  • What is it? A new, low-cost iPad for schools and business
  • When is it out? Likely April 2018
  • What will it cost? Probably at least $259 / £249 / AU$350

New iPad 2018 release date and price

The invite above, for an event in Chicago on March 27, drops a lot of hints about what we can expect from the event.

The main thing is the actual location: a fancy high school in Chicago, and a note saying that we're going to see new creative ideas for teachers and students.

There's no mention of new hardware – come on, this is an Apple invite, after all – but the invitation has clearly been written in the style of Apple's Pencil swipes and swooshes, so it's pretty clear there's a new iPad approaching.

That date means we'll be getting the new iPad sometime in April if the usual 10-days-to-two-weeks model is followed, with pre-orders opening somewhere in between… if indeed you can buy this from retail stores, as the new iPad 2018 might be for education only.

In terms of price, we're hearing rumors that it could be pretty cheap, with the cost coming all the way down to US$259 (that converts to around £190 or AU$340, but it's more likely to be £249 / AU$350 based on the way Apple's been pricing things).

The current model starts at US$329, so that's a drop of more than 20 percent.

The Apple Pencil

Here's the interesting thing – we've been hearing that Apple is gearing up to increase the volume of its Apple Pencil production, almost doubling it up to 10 million units… so it's going to need to put them somewhere.

Where better than alongside a new iPad that's going to be used by more and more schools (if Apple actually makes this move a success)?

That theory looks more robust as it seems the new iPad 2018 will indeed support the Apple Pencil, giving it more scope to be used beyond the iPad Pro range.

There are rumors that the Apple Pencil support will even extend to future iPhones, but that's not on the cards for now (and we're not sure it's part of the vision Steve Jobs had for the iPhone…)

New iPad 2018 screen

Details are starting to get a little thinner here, but given the new iPad 2018 is supposed to be a little cheaper, we can extrapolate some ideas.

Firstly, there's going to have to be a digitiser layer underneath the glass that can read the Apple Pencil – that's not going to make a difference to the look of the iPad, but it's another layer and does add to the cost.

To help offset that extra cost, we probably won't see any of the True Tone display technology that's been coming to the iPad Pro range, where sensors match the white balance of the screen with the surrounding light.

Resolution on the likely LCD screen will probably match that of the entry level iPad from last year at 1536×2048, and we'd anticipate it won't be the highest-quality color reproduction Apple has ever offered in an iPad as the focus will be slightly more on function.

But the screen will still be in the standard 4:3 ratio and offer Apple's staple 9.7-inch display size, with larger bezels all around if everything appears as expected.


New iPad 2018 design

Again, we've had no leaks about the design of the new iPad, but given the way Apple is adept at repurposing older designs for cheaper models (think the iPhone SE and iPhone 5C), it's fairly easy to imagine that the model we see on March 27 will be something quite familiar.

In fact, we're willing to bet that the event will be more about what you can do with the device than the specs on board, so expect something that looks almost identical to the iPad 2017, with a metal back and rounded corners.

The 2017 iPad's thickness didn't exactly impress us, and we expect that to continue – and don't expect masses of storage in there either, as the cloud is more likely to be the destination for all the content on these devices.

We'd expect Apple to unveil more iCloud storage for students – so if this does also get sold as a retail unit, it'll be a pretty basic one, in the same way we see Chromebooks these days.

New iPad 2018 power and OS

The operating system is the easy one here – it'll be iOS 11.3, as Apple always uses an event to debut some new feature of what its devices can do.

There's word that the new software contains something called ClassKit, which doesn't need a lot of analysis given we're expecting these iPads to be used by students and they'll need new software.

The question is which processor Apple will chuck in the new iPad – it could well stick with the A9 chipset that powered the iPad last year.

That would leave it quite underpowered (although would help with the cost reduction) and we can see Apple making a huge deal about the new things you can do to learn with these iPads – including 3D rendering of items for more interactive education.

We're going to guess at the A10 chip from last year being used, but don't be surprised if the teardown reveals a poorer engine and less RAM than we're used to.

What else should I know?

Well, the first thing you should know is that TechRadar is going to be liveblogging this event for you as there's no stream to watch it from… so you're going to want to check back on the site on Tuesday March 27, when the event will be covered in depth from when it kicks off at 8AM PT, 11AM ET and 4PM GMT.

Beyond that, the main difference on this iPad is its use in the classroom, so there could well be an appearance from the Smart Connector for low-power accessories, turning the tablet into a word processor with a snap-on keyboard.

There could also be new options on show, which would please iPad Pro users, but again this would add cost to a device Apple will be looking to lower the price of.

So, make sure you keep it locked to TechRadar to get all you need about the new iPad 2018 – we'll be doing our utmost to be among the very first on the web to bring you information on the new tablet, so you can decide whether it's your next purchase (if you can, that is).


Monarch is a new platform from surgical robot pioneer Frederic Moll

Auris Health (née Auris Surgical Robots) has done a pretty good job flying under the radar, in spite of raising a massive amount of capital and listing one of the key people behind the da Vinci surgical robot among its founders. With FDA clearance finally out of the way, however, the Redwood City-based medical startup is ready to start talking.

This week, Auris revealed the Monarch Platform, which swaps the da Vinci’s surgical approach for something far less invasive. The system utilizes the common endoscopy procedure to insert a flexible robot into hard-to-reach places inside the human body. A doctor trained on the system uses a video game-style controller to navigate inside, with help from 3D models.

Monarch’s first target is lung cancer, which tops the list of deadliest cancers. More deaths could be prevented if doctors were able to catch the disease in its early stages, but the lung’s complex structures, combined with current techniques, make the process difficult. According to the company, “More than 90-percent of people diagnosed with lung cancer do not survive, in part because it is often found at an advanced stage.”

“A CT scan shows a mass or a lesion,” CEO Frederic Moll tells TechCrunch. “It doesn’t tell you what it is. Then you have to get a piece of lung, and if it’s a small lesion, it isn’t that easy — it can be quite a traumatic procedure. So you’d like to do it in a very systematic and minimally invasive fashion. Currently it’s difficult with manual techniques and 40-percent of the time, there is no diagnosis. This has been a problem for many years and [inhibits] the ability of a clinician to diagnose and treat early-stage cancer.”

Auris was founded half a dozen years ago, in which time the company has managed to raise a jaw-dropping $500 million, courtesy of Mithril Capital Management, Lux Capital, Coatue Management and Highland Capital. The company says the large VC raise and long runway were necessary factors in building its robust platform.

“We are incredibly fortunate to have an investor base that is supportive of our vision and committed to us for the long-term,” CSO Josh DeFonzo tells TechCrunch. “The investments that have been made in Auris are to support both the development of a very robust product pipeline, as well as successful clinical adoption of our technology to improve patient outcomes.”

With that funding and FDA approval for Monarch out of the way, the company has an aggressive timeline. Moll says Auris is hoping to bring the system to hospitals and outpatient centers by the end of the year. And once it’s out in the wild, Monarch’s disease detecting capabilities will eventually extend beyond lung cancer.

“We have developed what we call a platform technology,” says Moll. “Bronchoscopy is the first application, but this platform will do other robotic endoscopies.”


Facebook was warned about app permissions in 2011

Who’s to blame for the leaking of 50 million Facebook users’ data? Facebook founder and CEO Mark Zuckerberg broke several days of silence in the face of a raging privacy storm to go on CNN this week to say he was sorry. He also admitted the company had made mistakes; said it had breached the trust of users; and said he regretted not telling Facebookers at the time their information had been misappropriated.

Meanwhile, shares in the company have been taking a battering. And Facebook is now facing multiple shareholder and user lawsuits.

Pressed on why he didn’t inform users, in 2015, when Facebook says it found out about this policy breach, Zuckerberg avoided a direct answer — instead fixing on what the company did (asked Cambridge Analytica and the developer whose app was used to suck out data to delete the data) — rather than explaining the thinking behind the thing it did not do (tell affected Facebook users their personal information had been misappropriated).

Essentially Facebook’s line is that it believed the data had been deleted — and presumably, therefore, it calculated (wrongly) that it didn’t need to inform users because it had made the leak problem go away via its own backchannels.

Except of course it hadn’t. Because people who want to do nefarious things with data rarely play exactly by your rules just because you ask them to.

There’s an interesting parallel here with Uber’s response to a 2016 data breach of its systems. In that case, instead of informing the ~57M affected users and drivers that their personal data had been compromised, Uber’s senior management also decided to try and make the problem go away — by asking (and in their case paying) hackers to delete the data.

Aka the trigger response for both tech companies to massive data protection fuck-ups was: Cover up; don’t disclose.

Facebook denies the Cambridge Analytica instance is a data breach — because, well, its systems were so laxly designed as to actively encourage vast amounts of data to be sucked out, via API, without the check and balance of those third parties having to gain individual level consent.

So in that sense Facebook is entirely right; technically what Cambridge Analytica did wasn’t a breach at all. It was a feature, not a bug.

Clearly that’s also the opposite of reassuring.

Yet Facebook and Uber are companies whose businesses rely entirely on users trusting them to safeguard personal data. The disconnect here is gapingly obvious.

What’s also crystal clear is that rules and systems designed to protect and control personal data, combined with active enforcement of those rules and robust security to safeguard systems, are absolutely essential to prevent people’s information being misused at scale in today’s hyperconnected era.

But before you say hindsight is 20/20 vision, the history of this epic Facebook privacy fail is even longer than the under-disclosed events of 2015 suggest — i.e. when Facebook claims it found out about the breach as a result of investigations by journalists.

What the company very clearly turned a blind eye to is the risk posed by its own system of loose app permissions that in turn enabled developers to suck out vast amounts of data without having to worry about pesky user consent. And, ultimately, for Cambridge Analytica to get its hands on the profiles of ~50M US Facebookers for dark ad political targeting purposes.

European privacy campaigner and lawyer Max Schrems — a long-time critic of Facebook — was actually raising concerns about Facebook’s lax attitude to data protection and app permissions as long ago as 2011.

Indeed, in August 2011 Schrems filed a complaint with the Irish Data Protection Commission exactly flagging the app permissions data sinkhole (Ireland being the focal point for the complaint because that’s where Facebook’s European HQ is based).

“[T]his means that not the data subject but “friends” of the data subject are consenting to the use of personal data,” wrote Schrems in the 2011 complaint, fleshing out consent concerns with Facebook’s friends’ data API. “Since an average facebook user has 130 friends, it is very likely that only one of the user’s friends is installing some kind of spam or phishing application and is consenting to the use of all data of the data subject. There are many applications that do not need to access the users’ friends personal data (e.g. games, quizzes, apps that only post things on the user’s page) but Facebook Ireland does not offer a more limited level of access than “all the basic information of all friends”.

“The data subject is not given an unambiguous consent to the processing of personal data by applications (no opt-in). Even if a data subject is aware of this entire process, the data subject cannot foresee which application of which developer will be using which personal data in the future. Any form of consent can therefore never be specific,” he added.
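
To see why consent from the installing user is so much weaker than consent from each data subject, here is a purely conceptual sketch. It is not Facebook's actual code or API, just an illustration of the permission model Schrems is criticizing:

```python
# Conceptual illustration only: under an installer-consent model, one user
# installing an app exposes data about friends who never opted in; a
# per-subject check does not.
USERS = {
    "alice": {"friends": ["bob", "carol"], "likes": ["hiking"]},
    "bob":   {"friends": ["alice"],        "likes": ["jazz"]},
    "carol": {"friends": ["alice"],        "likes": ["chess"]},
}
APP_CONSENT = {"quiz_app": {"alice"}}  # only alice installed the app

def friends_data_installer_consent(app, installer):
    # The installer's consent alone unlocks data about everyone they know.
    if installer not in APP_CONSENT[app]:
        return []
    return [(f, USERS[f]["likes"]) for f in USERS[installer]["friends"]]

def friends_data_per_subject_consent(app, installer):
    # Each data subject must have consented to this app themselves.
    if installer not in APP_CONSENT[app]:
        return []
    return [(f, USERS[f]["likes"])
            for f in USERS[installer]["friends"]
            if f in APP_CONSENT[app]]

print(friends_data_installer_consent("quiz_app", "alice"))    # bob's and carol's likes leak
print(friends_data_per_subject_consent("quiz_app", "alice"))  # [] (no consent, no data)
```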

As a result of Schrems’ complaint, the Irish DPC audited and re-audited Facebook’s systems in 2011 and 2012. The result of those data audits included a recommendation that Facebook tighten app permissions on its platform, according to a spokesman for the Irish DPC, who we spoke to this week.

The spokesman said the DPC’s recommendation formed the basis of the major platform change Facebook announced in 2014 — aka shutting down the Friends data API — albeit too late to prevent Cambridge Analytica from being able to harvest millions of profiles’ worth of personal data via a survey app because Facebook only made the change gradually, finally closing the door in May 2015.

“Following the re-audit… one of the recommendations we made was in the area of the ability to use friends data through social media,” the DPC spokesman told us. “And that recommendation that we made in 2012, that was implemented by Facebook in 2014 as part of a wider platform change that they made. It’s that change that they made that means that the Cambridge Analytica thing cannot happen today.

“They made the platform change in 2014, their change was for anybody new coming onto the platform from 1st May 2014 they couldn’t do this. They gave a 12 month period for existing users to migrate across to their new platform… and it was in that period that… Cambridge Analytica’s use of the information for their data emerged.

“But from 2015 — for absolutely everybody — this issue with CA cannot happen now. And that was following our recommendation that we made in 2012.”

Given his 2011 complaint about Facebook’s expansive and abusive historical app permissions, Schrems has this week raised an eyebrow and expressed surprise at Zuckerberg’s claim to be “outraged” by the Cambridge Analytica revelations — now snowballing into a massive privacy scandal.

In a statement reflecting on developments he writes: “Facebook has millions of times illegally distributed data of its users to various dodgy apps — without the consent of those affected. In 2011 we sent a legal complaint to the Irish Data Protection Commissioner on this. Facebook argued that this data transfer is perfectly legal and no changes were made. Now after the outrage surrounding Cambridge Analytica the Internet giant suddenly feels betrayed seven years later. Our records show: Facebook knew about this betrayal for years and previously argued that these practices are perfectly legal.”

So why did it take Facebook from September 2012 — when the DPC made its recommendations — until May 2014 and May 2015 to implement the changes and tighten app permissions?

The regulator’s spokesman told us it was “engaging” with Facebook over that period of time “to ensure that the change was made”. But he also said Facebook spent some time pushing back — questioning why changes to app permissions were necessary and dragging its feet on shuttering the friends’ data API.

“I think the reality is Facebook had questions as to whether they felt there was a need for them to make the changes that we were recommending,” said the spokesman. “And that was, I suppose, the level of engagement that we had with them. Because we were relatively strong that we felt yes we made the recommendation because we felt the change needed to be made. And that was the nature of the discussion. And as I say ultimately, ultimately the reality is that the change has been made. And it’s been made to an extent that such an issue couldn’t occur today.”

“That is a matter for Facebook themselves to answer as to why they took that period of time,” he added.

Of course we asked Facebook why it pushed back against the DPC’s recommendation in September 2012 — and whether it regrets not acting more swiftly to implement the changes to its APIs, given the crisis its business now faces having breached user trust by failing to safeguard people’s data.

We also asked why Facebook users should trust Zuckerberg’s claim, also made in the CNN interview, that it’s now ‘open to being regulated’ — when its historical playbook is packed with examples of the polar opposite behavior, including ongoing attempts to circumvent existing EU privacy rules.

A Facebook spokeswoman acknowledged receipt of our questions this week — but the company has not responded to any of them.

The Irish DPC chief, Helen Dixon, also went on CNN this week to give her response to the Facebook-Cambridge Analytica data misuse crisis — calling for assurances from Facebook that it will properly police its own data protection policies in future.

“Even where Facebook have terms and policies in place for app developers, it doesn’t necessarily give us the assurance that those app developers are abiding by the policies Facebook have set, and that Facebook is active in terms of overseeing that there’s no leakage of personal data. And that conditions, such as the prohibition on selling on data to further third parties is being adhered to by app developers,” said Dixon.

“So I suppose what we want to see change and what we want to oversee with Facebook now and what we’re demanding answers from Facebook in relation to, is first of all what pre-clearance and what pre-authorization do they do before permitting app developers onto their platform. And secondly, once those app developers are operative and have apps collecting personal data what kind of follow up and active oversight steps does Facebook take to give us all reassurance that the type of issue that appears to have occurred in relation to Cambridge Analytica won’t happen again.”

Firefighting the raging privacy crisis, Zuckerberg has committed to conducting a historical audit of every app that had access to “a large amount” of user data around the time that Cambridge Analytica was able to harvest so much data.

So it remains to be seen what other data misuses Facebook will unearth — and have to confess to now, long after the fact.

But any other embarrassing data leaks will sit within the same unfortunate context — which is to say that Facebook could have prevented these problems if it had listened to the very valid concerns data protection experts were raising more than six years ago.

Instead, it chose to drag its feet. And the list of awkward questions for the Facebook CEO keeps getting longer.


JASK and the future of autonomous cybersecurity

There is a familiar trope in Hollywood cyberwarfare movies. A lone whiz kid hacker (often with blue, pink, or platinum hair) fights an evil government. Despite combatting dozens of cyber defenders, each of whom appears to be working around the clock and has very little need to use the facilities, the hacker is able to defeat all security and gain access to the secret weapon plans or whatever have you. The weapon stopped, the hacker becomes a hero.

The real world of security operations centers (SOCs) couldn’t be further from this silver screen fiction. Today’s hackers (who are the bad guys, by the way) don’t have the time to custom hack a system and play cat-and-mouse with security professionals. Instead, they increasingly build a toolbox of automated scripts and simultaneously hit hundreds of targets using, say, a newly discovered zero-day vulnerability and trying to take advantage of it as much as possible before it is patched.

Security analysts working in a SOC are increasingly overburdened and overwhelmed by the sheer number of attacks they have to process. Yet, despite the promises of automation, they are often still using manual processes to counter these attacks. Fighting automated attacks with manual actions is like fighting mechanized armor with horses: futile.

Nonetheless, that’s the current state of things in the security operations world. But as V.Jay LaRosa, VP of Global Security Architecture at payroll and HR company ADP, explained to me, “The industry, in general from a SOC operations perspective, it is about to go through a massive revolution.”

That revolution is automation. Many companies have claimed that they are bringing machine learning and artificial intelligence to security operations, and the buzzword has been a mainstay of security startup pitch decks for some time. Results in many cases have been lackluster at best. But a new generation of startups is now replacing soaring claims with hard science, and focusing on the time-consuming, low-hanging fruit of the security analyst’s work.

One of those companies, as we will learn shortly, is JASK. The company, which is based in San Francisco and Austin, wants to create a new market for what it calls the “autonomous security operations center.” Our goal is to understand the current terrain for SOCs, and how such a platform might fit into the future of cybersecurity.

Data wrangling and the challenge of automating security

The security operations center is the central nervous system of corporate security departments today. Borrowing concepts from military organizational design, the modern SOC is designed to fuse streams of data into one place, giving security analysts a comprehensive overview of a company’s systems. Those data sources typically include network logs, an incident detection and response system, web application firewall data, internal reports, antivirus, and many more. Large companies can easily have dozens of data sources.

Once all of that information has been ingested, it is up to a team of security analysts to evaluate that data and start to “connect the dots.” These professionals are often overworked since the growth of the security team is generally reactive to the threat environment. Startups might start with a single security professional, and slowly expand that team as new threats to the business are discovered.

Given the scale and complexity of the data, investigating a single security alert can take significant time. An analyst might spend 50 minutes just pulling and cleaning the necessary data to be able to evaluate the likelihood of a threat to the company. Worse, alerts are sufficiently variable that the analyst often has to repeatedly perform this cleanup work for every alert.

Data wrangling is one of the most fundamental problems that every SOC faces. All of those streams of data need to be constantly managed to ensure that they are processed properly. As LaRosa from ADP explained, “The biggest challenge we deal with in this space is that [data] is transformed at the time of collection, and when it is transformed, you lose the raw information.” The challenge then is that “If you don’t transform that data properly, then … all that information becomes garbage.”

The challenges of data wrangling aren’t unique to security — teams across the enterprise struggle to design automated solutions. Nonetheless, just getting the right data to the right person is an incredible challenge. Many security teams still manually monitor data streams, and may even write their own ad-hoc batch processing scripts to get data ready for analysis.
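
To make that concrete, here's a minimal, hypothetical sketch (in Python) of the kind of ad-hoc normalization script the article is describing. The log formats, field names and file paths are invented for illustration, not taken from any particular product:

```python
# Hypothetical example of a hand-rolled SOC normalization script: read raw
# events from two assumed sources (a firewall log and an auth log), coerce
# them into one common schema, and skip anything that can't be parsed.
import csv
import json
from datetime import datetime, timezone

COMMON_FIELDS = ["timestamp", "source", "src_ip", "user", "event_type"]

def parse_firewall_line(line):
    # Assumed raw format: "2018-03-22T10:15:00Z DENY 10.0.0.5 bob"
    ts, action, ip, user = line.split()
    return {"timestamp": ts, "source": "firewall", "src_ip": ip,
            "user": user, "event_type": action.lower()}

def parse_auth_record(record):
    # Assumed JSON-lines schema: {"epoch": ..., "client_ip": ..., "username": ..., "result": ...}
    return {"timestamp": datetime.fromtimestamp(record["epoch"], tz=timezone.utc).isoformat(),
            "source": "auth", "src_ip": record.get("client_ip", ""),
            "user": record["username"], "event_type": record["result"]}

def normalize(firewall_path, auth_path, out_path):
    rows = []
    with open(firewall_path) as fw:
        for line in fw:
            try:
                rows.append(parse_firewall_line(line.strip()))
            except ValueError:
                continue  # drop malformed lines rather than crash the batch
    with open(auth_path) as auth:
        for line in auth:
            try:
                rows.append(parse_auth_record(json.loads(line)))
            except (ValueError, KeyError):
                continue
    rows.sort(key=lambda r: r["timestamp"])
    with open(out_path, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=COMMON_FIELDS)
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    normalize("firewall.log", "auth.jsonl", "normalized_events.csv")
```

Every new source means another parser, another schema decision and another script to babysit, which is exactly the busywork the article says analysts are drowning in.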

Managing that data inside the SOC is the job of a security information and event management system (SIEM), which acts as a system of record for the activities and data flowing through security operations. Originally focused on compliance, these systems allow analysts to access the data they need, and also log the outcome of any alert investigation. Products like ArcSight and Splunk, among many others, have owned this space for years, and the market is not going anywhere.

Due to their compliance focus though, security management systems often lack the kinds of automated features that would make analysts more efficient. One early response to this challenge was a market known as user entity behavior analytics (UEBA). These products, which include companies like Exabeam, analyze typical user behavior and search for anomalies. In this way, they are meant to integrate raw data together to highlight activities for security analysts, saving them time and attention. This market was originally standalone, but as Gartner has pointed out, these analytics products are increasingly migrating into the security information management space itself as a sort of “smarter SIEM.”
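
As a toy illustration of that UEBA idea, using a single made-up behavioral feature (daily login counts per user) and an arbitrary threshold, neither of which comes from any specific vendor, the baseline-and-outlier logic might look something like this:

```python
# Toy UEBA-style check: build a per-user baseline of daily login counts and
# flag the latest day if it deviates sharply from that baseline. Real
# products model far richer behavior; this is illustrative only.
from statistics import mean, stdev

def flag_anomalies(daily_logins, z_threshold=3.0):
    """daily_logins maps a username to a list of daily login counts,
    oldest first, with the most recent day last."""
    findings = []
    for user, counts in daily_logins.items():
        if len(counts) < 8:
            continue  # not enough history to build a baseline
        history, today = counts[:-1], counts[-1]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # perfectly flat history, nothing to compare against
        z = (today - mu) / sigma
        if abs(z) >= z_threshold:
            findings.append((user, today, round(z, 1)))
    return findings

# 'carol' suddenly logs in 40 times against a quiet baseline and gets flagged.
history = {
    "alice": [3, 4, 2, 5, 3, 4, 3, 4, 3],
    "carol": [2, 3, 2, 2, 3, 2, 3, 2, 40],
}
print(flag_anomalies(history))
```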

These analytics products added value, but they didn’t solve the comprehensive challenge of data wrangling. Ideally, a system would ingest all of the security data and start to automatically detect correlations, grouping disparate data together into a cohesive security alert that could be rapidly evaluated by a security analyst. This sort of autonomous security has been a dream of security analysts for years, but that dream increasingly looks like it could become reality quite soon.

LaRosa of ADP told me that “Organizationally, we have got to figure out how we help our humans to work smarter.” David Tsao, Global Information Security Officer of Veeva Systems, was more specific, asking “So how do you organize data in a way so that a security engineer … can see how these various events make sense?”

JASK and the future of “autonomous security”

That’s where a company like JASK comes in. Its goal, simply put, is to take all the disparate data streams entering the security operations center and automatically group them into attacks. From there, analysts can then evaluate each threat holistically, saving them time and allowing them to focus on the sophisticated analytical part of their work, instead of on monotonous data wrangling.
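
A deliberately simplified sketch of that kind of correlation, assuming alerts get grouped by a shared source IP within a fixed time window (our assumptions for illustration, not details of JASK's product), could look like this:

```python
# Group raw alerts that share an entity (here, a source IP) and fall close
# together in time, so an analyst reviews one grouped incident instead of
# several isolated alerts. The grouping key and window size are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)

def group_alerts(alerts):
    """alerts: list of dicts with 'time' (ISO 8601), 'src_ip' and 'signal'."""
    by_ip = defaultdict(list)
    for alert in alerts:
        by_ip[alert["src_ip"]].append(alert)

    incidents = []
    for ip, items in by_ip.items():
        items.sort(key=lambda a: a["time"])
        current = [items[0]]
        for prev, nxt in zip(items, items[1:]):
            gap = datetime.fromisoformat(nxt["time"]) - datetime.fromisoformat(prev["time"])
            if gap <= WINDOW:
                current.append(nxt)
            else:
                incidents.append({"entity": ip, "alerts": current})
                current = [nxt]
        incidents.append({"entity": ip, "alerts": current})
    return incidents

raw = [
    {"time": "2018-03-22T09:00:00", "src_ip": "10.0.0.5", "signal": "port scan"},
    {"time": "2018-03-22T09:10:00", "src_ip": "10.0.0.5", "signal": "failed logins"},
    {"time": "2018-03-22T09:20:00", "src_ip": "10.0.0.5", "signal": "malware beacon"},
    {"time": "2018-03-22T14:00:00", "src_ip": "10.0.0.9", "signal": "failed logins"},
]
for incident in group_alerts(raw):
    print(incident["entity"], [a["signal"] for a in incident["alerts"]])
```

The first three alerts collapse into a single incident an analyst can read as one story, while the stray afternoon alert stays separate.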

The startup was founded by Greg Martin, a security veteran who previously founded threat intelligence platform ThreatStream (now branded Anomali). Before that, he worked as an executive at ArcSight, a company that is one of the incumbent behemoths in security information management.

Martin explained to me that “we are now far and away past what we can do with just human-led SOCs.” The challenge is that every single security alert coming in has to go through manual review. “I really feel like the state of the art in security operations is really how we manufactured cars in the 1950s — hand-painting every car,” Martin said. “JASK was founded to just clean up the mess.”

Machine learning is one of those abused terms in the startup world, and cybersecurity is certainly no exception. Visionary security professionals wax poetic about automated systems that instantly detect a hacker as they attempt to gain access to the system and immediately respond with tested actions designed to thwart them. The reality is much less exciting: just connecting data from disparate sources is a major hurdle for AI researchers in the security space.

Martin’s philosophy with JASK is that the industry should walk before it runs. “We actually look to the autonomous car industry,” he said to me. “They broke the development roadmap into phases.” For JASK, “Phase one would be to collect all the data and prepare and identify it for machine learning,” he said. LaRosa of ADP, talking about the potential of this sort of automation, said that “you are taking forty to fifty minutes of busy work out of that process and allow [the security analysts] to get right to the root cause.”

This doesn’t mean that security analysts are suddenly out of a job, indeed far from it. Analysts still have to interpret the information that has been compiled, and even more importantly, they have to decide on what is the best course of action. Today’s companies are moving from “runbooks” of static response procedures to automated security orchestration systems. Machine learning realistically is far from being able to accomplish the full lifecycle of an alert today, although Martin is hopeful that such automation is coming in later phases of the roadmap.
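
As a rough sketch of what that shift can look like, with invented alert types and placeholder actions rather than any vendor's real API, a runbook encoded as orchestration logic might be as simple as this:

```python
# Encode a static runbook's "if X, then do Y" steps as callable actions so
# routine responses fire automatically. The actions below only print; a real
# orchestrator would call firewall, EDR or ticketing APIs, and anything
# destructive would still wait for human approval.

def isolate_host(incident):
    print(f"[action] isolating host {incident['host']} from the network")

def disable_account(incident):
    print(f"[action] disabling account {incident['user']}")

def open_ticket(incident):
    print(f"[action] opening ticket for {incident['type']}")

PLAYBOOKS = {
    "ransomware_detected": [isolate_host, open_ticket],
    "credential_stuffing": [disable_account, open_ticket],
}

def respond(incident):
    # Fall back to a plain ticket for alert types without a playbook.
    for action in PLAYBOOKS.get(incident["type"], [open_ticket]):
        action(incident)

respond({"type": "ransomware_detected", "host": "laptop-042", "user": "dave"})
```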

Martin tells me that the technology is being used by twenty customers today. The company’s stack is built on technologies like Hadoop, allowing it to process significantly higher volumes of data compared to legacy security products.

JASK is essentially carving out a unique niche in the security market today, and the company is currently in beta. The company raised a $2m seed from Battery in early 2016, and a $12m series A led by Dell Technologies Capital, which saw its investment in security startup Zscaler IPO last week.

There are thousands of security products in the market, as any visit to the RSA conference will quickly convince you. Unfortunately though, SOCs can’t just be built with tech off the shelf. Every company has unique systems, processes, and threat concerns that security operations need to adapt to, and of course, hackers are not standing still. Products need to constantly change to adapt to those needs, which is why machine learning and its flexibility is so important.

Martin said that “we have to bias our algorithms so that you never trust any one individual or any one team. It is a careful controlled dance to build these types of systems to produce general purpose, general results that applies across organizations.” The nuance around artificial intelligence is refreshing in a space that can see incredible hype. Now the hard part is to keep moving that roadmap forward. Maybe that blue-haired silver screen hacker needs some employment.


Hip hop finds its beat in the startup scene

Hip hop stars are taking their reputations to Wall Street and Sand Hill Road.

Unlike their rock star brethren, who’ve historically been uninterested in dabbling with startups, quite a few hip hop artists have amassed good-sized portfolios. They’ve seen a few big hits too, most recently including a massive up round for zero-commission stock trading platform Robinhood, which counted Jay-Z, Nas and Snoop Dogg among its earlier backers.

But just how deep does the hip hop-startup relationship go and where is it headed? To shed some light on that question, we put together a review of Crunchbase data on the startup investment activity of famous musicians. We looked at both hip hop and pop stars, culling a list of 21 artists who are either active investors or have joined one or more rounds in recent years.

The general conclusion: Artists are doing more deals, raising more funds and backing more companies that graduate to up rounds and exits. Here are a few examples:

  • Besides getting a slice of Robinhood, Jay-Z and his entertainment company, Roc Nation, also saw an early portfolio company, flight club startup JetSmarter, go on to raise financing a year ago at a reported valuation of more than $1.5 billion. Roc Nation also made headlines this week for investing in Promise, a startup providing alternatives to incarceration for people who can’t afford bail.
  • QueensBridge Venture Partners, the investment fund co-founded by Nas, was an early-stage investor in video doorbell maker Ring, which Amazon just bought for $1.1 billion. The firm could also see some paper gains this week in the much-anticipated market debut of Dropbox, which it backed in a 2014 Series C round. In addition, QueensBridge participated in a $25 million Series B round for cryptocurrency trading platform Coinbase back in 2013. Coinbase’s last reported valuation was around $1.6 billion.
  • Casa Verde Capital, a cannabis-focused venture fund co-founded by Snoop Dogg, has closed its debut fund with $45 million. Just this week it backed a $3.5 million round for vape manufacturer Green Tank.

That’s not to say everything a star touches turns multi-platinum. We found quite a few flops in their portfolios and assembled a list here of 10 startups now shuttered that counted a hip hop or pop star among their backers.


Of course, flops are part of life for early-stage investors, so there’s no reason we’d expect celebrities to be an exception. Moreover, most of the now-shuttered companies were not heavily capitalized by venture standards.

However, there are some higher-profile or more heavily funded companies on the flop list. One is Washio, a laundry delivery service, which raised $17 million from Nas and 20 other investors before hanging itself out to dry in 2016. Another is Viddy, an app for shooting and sharing video clips backed by Roc Nation.

Why the rich, hip and famous like startups

A number of venture pundits and pop culture mavens have previously pontificated on why celebrities, and hip hop stars in particular, are drawn to startups.

One possibility is that rap music and startups resemble each other at the earliest stages, postulates Cam Houser, CEO of the 3 Day Startup Program. Rap music starts with a rapper and a producer. This duality, he says, is similar to the beginning stages of a startup, which commonly also brings together two people, a business and a technical co-founder.

Rap and startup entrepreneurship are also both longshot career tracks that celebrate raw ambition and unabashed self-promotion. To make it, however, both require an excellent grasp of what sells in the real world.

Branding is perhaps the most common rationale provided for the celebrity-startup connection. With their massive fan bases, swooning coverage and millions of social media followers, celebrities can certainly help get the word out about a new product or app. That said, the attention usually works only if said product also has compelling attributes of its own.

One of the less controversial explanations is that becoming and remaining famous requires many of the same skills and qualities as running an entrepreneurial venture, including an exceptional degree of tenacity.

It’s also true that in venture capital and the music business, it’s the hits that matter. It helps that we’re seeing plenty of those. 
