#Tech


Canada’s Telus says partner Huawei is ‘reliable’: reports

Posted: 20 Jan 2019 11:04 PM PST

The US-China tension over Huawei is leaving telecommunications companies around the world at a crossroads, but one spoke out last week. Telus, one of Canada’s largest phone companies, showed support for its Chinese partner despite a global backlash against Huawei over cybersecurity threats.

“Clearly, Huawei remains a viable and reliable participant in the Canadian telecommunications space, bolstered by globally leading innovation, comprehensive security measures, and new software upgrades,” said an internal memo signed by a Telus executive that The Globe and Mail obtained.

The Vancouver-based firm is among a handful of Canadian companies that could potentially leverage the Shenzhen-based company to build out 5G systems, the technology that not only speeds up mobile connections but, more crucially, powers emerging fields like low-latency autonomous driving and 8K video streaming. TechCrunch has contacted Telus for comment and will update the article when more information becomes available.

The United States has long worried that China’s telecom equipment makers could be beholden to Beijing and thus pose espionage risks. As fears heighten, President Donald Trump is reportedly mulling a boycott of Huawei and ZTE this year, according to Reuters. The Wall Street Journal reported last week that US federal prosecutors may bring criminal charges against Huawei for stealing trade secrets.

Australia and New Zealand have both blocked local providers from using Huawei components. The United Kingdom has not officially banned Huawei but its authorities have come under pressure to take sides soon.

Canada, which is part of the Five Eyes intelligence-sharing network alongside Australia, New Zealand, the UK and the US, is still conducting a security review ahead of its 5G rollout but has been urged by neighboring US to steer clear of Huawei in building the next-gen tech.

China has hit back at spy claims against its tech crown jewel over the past months. Last week, its ambassador to Canada Lu Shaye warned that blocking the world’s largest telecom equipment maker may yield repercussions.

“I always have concerns that Canada may make the same decision as the US, Australia and New Zealand did. And I believe such decisions are not fair because their accusations are groundless," Lu said at a press conference. “As for the consequences of banning Huawei from 5G network, I am not sure yet what kind of consequences will be, but I surely believe there will be consequences.”

Last week also saw Huawei chief executive officer Ren Zhengfei appear in a rare interview with international media. At the roundtable, he denied security charges against the firm he founded in 1987 and cautioned that excluding Chinese firms may delay plans in the US to deliver ultra-high-speed networks to rural populations.

“If Huawei is not involved in this, these districts may have to pay very high prices in order to enjoy that level of experience,” argued Ren. “Those countries may voluntarily approach Huawei and ask Huawei to sell them 5G products rather than banning Huawei from selling 5G systems.”

The Huawei controversy comes as the US and China are locked in a trade war that’s sending reverberations across countries that rely on the US for security protection and China for investment and increasingly skilled — not just cheap — labor.

Canada got caught between the feuding giants after it arrested Huawei’s chief financial officer Meng Wanzhou, who’s also Ren’s daughter, at the request of US authorities. The US now faces a deadline at the end of January to file a formal extradition request for Meng. Meanwhile, Canadian Prime Minister Justin Trudeau and Trump are urging Beijing to release two Canadian citizens detained following Meng’s arrest.

Invoice finance platform MarketInvoice raises $33.5M from Barclays, Santander

Posted: 20 Jan 2019 04:00 PM PST

London, with its huge fintech hub, continues to attract investment, and that is well illustrated today by the news that MarketInvoice, arguably Europe's largest online invoice finance platform, has raised £26M ($33.5M) in a Series B funding round led by Barclays and fintech fund Santander InnoVentures, with participation from existing investor, European VC Northzone. In August last year Barclays took a minority equity stake in the company and rolled out the service to its large SME client base.
Technology credit fund Viola Credit, which also participated, will additionally provide a debt facility of up to £30m to help scale the MarketInvoice business loans solution which is part of its core invoice finance solutions.
The funding will be used to deepen MarketInvoice's reach in the UK and to launch what it calls "cross-border fintech-bank partnerships", which it would be reasonable to conclude will include expanding into new markets.
Established in 2011, MarketInvoice has funded invoices and business loans worth more than £2 billion to UK companies, and it claims to be Europe's largest online invoice finance platform.
Anil Stocker, co-founder and CEO, told me: “For us, strategic partnerships, especially those where we can use new sources of data, are becoming increasingly important. Our mission is to help as many entrepreneurs as possible gain access to finance… Barclays realises it's a good way of upgrading their offering to SMEs, and get more lending out to help these businesses. For us, by working with Barclays' network and presence in the market, we're able to educate more businesses on our funding solutions, something which would take much more time if we were to do it on our own.”
Ian Rand, CEO of Barclays Business Bank, said: "This investment demonstrates our commitment to the partnership we announced last summer which offers hundreds of thousands of our SME clients access to even more innovative forms of finance, boosting cash flow and competition in the market." Manuel Silva Martínez, Managing Partner and Head of Investments at Santander InnoVentures commented: “MarketInvoice is helping UK businesses access much needed funding to keep their businesses and ideas thriving in a very competitive market."

The case against behavioral advertising is stacking up

Posted: 20 Jan 2019 01:00 PM PST

No one likes being stalked around the Internet by adverts. It’s the uneasy joke you can’t enjoy laughing at. Yet vast people-profiling ad businesses have made pots of money off of an unregulated Internet by putting surveillance at their core.

But what if creepy ads don’t work as claimed? What if all the filthy lucre that’s currently being sunk into the coffers of ad tech giants — and far less visible but no less privacy-trampling data brokers — is literally being sunk, and could both be more honestly and far better spent?

Case in point: This week Digiday reported that the New York Times managed to grow its ad revenue after it cut off ad exchanges in Europe. The newspaper did this in order to comply with the region’s updated privacy framework, GDPR, which includes a regime of supersized maximum fines.

The newspaper business decided it simply didn’t want to take the risk, so first blocked all open-exchange ad buying on its European pages and then nixed behavioral targeting. The result? A significant uptick in ad revenue, according to Digiday’s report.

“NYT International focused on contextual and geographical targeting for programmatic guaranteed and private marketplace deals and has not seen ad revenues drop as a result, according to Jean-Christophe Demarta, SVP for global advertising at New York Times International,” it writes.

“Currently, all the ads running on European pages are direct-sold. Although the publisher doesn't break out exact revenues for Europe, Demarta said that digital advertising revenue has increased significantly since last May and that has continued into early 2019.”

It also quotes Demarta summing up the learnings: “The desirability of a brand may be stronger than the targeting capabilities. We have not been impacted from a revenue standpoint, and, on the contrary, our digital advertising business continues to grow nicely."

So while (of course) not every publisher is the NYT, publishers that have or can build brand cachet, and pull in a community of engaged readers, must and should pause for thought — and ask who is the real winner from the notion that digitally served ads must creep on consumers to work?

The NYT’s experience puts fresh taint on long-running efforts by tech giants like Facebook to press publishers to give up more control and ownership of their audiences by serving and even producing content directly for the third party platforms. (Pivot to video anyone?)

Such efforts benefit platforms because they get to make media businesses dance to their tune. But the self-serving nature of pulling publishers away from their own distribution channels (and content convictions) looks to have an even more base string to its bow — as a cynical means of weakening the link between publishers and their audiences, thereby making them falsely reliant on adtech intermediaries squatting in the middle of the value chain.

There are other signs that behavioral advertising might be a gigantically self-serving con, too.

Look at non-tracking search engine DuckDuckGo, for instance, which has been making a profit by serving keyword-based ads and not profiling users since 2014, all the while continuing to grow usage — and doing so in a market that’s dominated by search giant Google.

DDG recently took in $10M in VC funding from a pension fund that believes there’s an inflection point in the online privacy story. These investors are also displaying strong conviction in the soundness of the underlying (non-creepy) ad business, again despite the overbearing presence of Google.

Meanwhile, Internet users continue to express widespread fear and loathing of the ad tech industry’s bandwidth- and data-sucking practices by running into the arms of ad blockers. Figures for usage of ad blocking tools step up each year, with between a quarter and a third of U.S. connected device users estimated to be blocking ads as of 2018 (rates are higher among younger users).

Ad blocking firm Eyeo, maker of the popular AdBlock Plus product, has achieved such a position of leverage that it gets Google et al to pay it to have their ads whitelisted by default — under its self-styled ‘acceptable ads’ program. (Though no one will say how much they’re paying to circumvent default ad blocks.)

So the creepy ad tech industry is not above paying other third parties for continued — and, at this point, doubly grubby (given the ad blocking context) — access to eyeballs. Does that sound even slightly like a functional market?

In recent years expressions of disgust and displeasure have been coming from the ad spending side too — triggered by brand-denting scandals attached to the hateful stuff algorithms have been serving shiny marketing messages alongside. You don’t even have to be worried about what this stuff might be doing to democracy to be a concerned advertiser.

Fast moving consumer goods giants Unilever and Procter & Gamble are two big spenders which have expressed concerns. The former threatened to pull ad spend if social network giants didn’t clean up their act and prevent their platforms algorithmically accelerating hateful and divisive content.

While the latter has been actively reevaluating its marketing spending — taking a closer look at what digital actually does for it. And last March Adweek reported it had slashed $200M from its digital ad budget yet had seen a boost in its reach of 10 per cent, reinvesting the money into areas with “‘media reach’ including television, audio and ecommerce”.

The company’s CMO, Marc Pritchard, declined to name which companies it had pulled ads from but in a speech at an industry conference he said it had reduced spending "with several big players" by 20 per cent to 50 per cent, and still its ad business grew.

So chalk up another tale of reduced reliance on targeted ads yielding unexpected business uplift.

At the same time, academics are digging into the opaquely shrouded question of who really benefits from behavioral advertising. And perhaps getting closer to an answer.

Last fall, at an FTC hearing on the economics of big data and personal information, Carnegie Mellon University professor of IT and public policy Alessandro Acquisti teased a piece of yet-to-be-published research — conducted with a large U.S. publisher that provided the researchers with millions of transactions to study.

Acquisti said the research showed that behaviorally targeted advertising had increased the publisher’s revenue, but only marginally. At the same time, they found that marketers were having to pay orders of magnitude more to buy these targeted ads, despite the minuscule additional revenue they generated for the publisher.

“What we found was that, yes, advertising with cookies — so targeted advertising — did increase revenues — but by a tiny amount. Four per cent. In absolute terms the increase in revenues was $0.000008 per advertisement,” Acquisti told the hearing. “Simultaneously we were running a study, as merchants, buying ads with a different degree of targeting. And we found that for the merchants sometimes buying targeted ads over untargeted ads can be 500% as expensive.”

“How is it possible that for merchants the cost of targeted ads is so much higher whereas for publishers the return in increased revenues from targeted ads is just 4%?” he wondered, posing a question that publishers should really be asking themselves — given, in this example, they’re the ones doing the dirty work of snooping on (and selling out) their readers.
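To see why that asymmetry matters, the quoted figures can be laid out as a quick sketch. Only the 4% uplift, the $0.000008 absolute gain, and the roughly 5x cost multiple come from Acquisti's remarks above; the baseline per-ad prices are invented placeholders for illustration.

```python
# Illustrative arithmetic only: baseline prices are assumed, not from the study.
untargeted_revenue_per_ad = 0.0002   # assumed publisher revenue per untargeted ad, USD
publisher_uplift = 0.04              # +4% revenue with targeted (cookie-based) ads
extra_revenue_per_ad = 0.000008      # absolute per-ad uplift quoted by Acquisti, USD

targeted_revenue_per_ad = untargeted_revenue_per_ad * (1 + publisher_uplift)

# On the buy side, targeted ads could cost merchants up to ~5x the untargeted price.
untargeted_cost_per_ad = 0.001       # assumed merchant price per untargeted ad, USD
targeted_cost_per_ad = untargeted_cost_per_ad * 5

print(f"Publisher gains {publisher_uplift:.0%} per targeted ad "
      f"(+${extra_revenue_per_ad} absolute)")
print(f"Merchant pays {targeted_cost_per_ad / untargeted_cost_per_ad:.0f}x "
      "for the same targeted impression")
```

Whatever the true baselines, the shape of the result is the same: a single-digit percentage gain on the sell side set against a multiple-of-the-price premium on the buy side, with the gap pocketed somewhere in the middle of the adtech chain.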

Acquisti also made the point that a lack of data protection creates economic winners and losers, arguing this is unavoidable — and thus qualifying the oft-parroted tech industry lobby line that privacy regulation is a bad idea because it would benefit an already dominant group of players. The rebuttal is that a lack of privacy rules also does that. And that’s exactly where we are now.

“There is a sort of magical thinking happening when it comes to targeted advertising [that claims] everyone benefits from this,” Acquisti continued. “Now at first glance this seems plausible. The problem is that upon further inspection you find there is very little empirical validation of these claims… What I’m saying is that we actually don’t know very well to what extent these claims are true or false. And this is a pretty big problem because so many of these claims are accepted uncritically.”

There’s clearly far more research that needs to be done to robustly interrogate the effectiveness of targeted ads against platform claims and vs more vanilla types of advertising (i.e. which don’t demand reams of personal data to function). But the fact that robust research hasn’t been done is itself interesting.

Acquisti noted the difficulty of researching “opaque blackbox” ad exchanges that aren’t at all incentivized to be transparent about what’s going on. Also pointing out that Facebook has sometimes admitted to having made mistakes that significantly inflated its ad engagement metrics.

His wider point is that much current research into the effectiveness of digital ads is problematically narrow and so is exactly missing a broader picture of how consumers might engage with alternative types of less privacy-hostile marketing.

In a nutshell, then, the problem is the lack of transparency from ad platforms; and that lack serving the self same opaque giants.

But there’s more. Critics of the current system point out it relies on mass scale exploitation of personal data to function, and many believe this simply won’t fly under Europe’s tough new GDPR framework.

They are applying legal pressure via a set of GDPR complaints, filed last fall, that challenge the legality of a fundamental piece of the (current) adtech industry’s architecture: Real-time bidding (RTB); arguing the system is fundamentally incompatible with Europe’s privacy rules.

We covered these complaints last November but the basic argument is that bid requests essentially constitute systematic data breaches because personal data is broadcast widely to solicit potential ad buys and thereby poses an unacceptable security risk — rather than, as GDPR demands, people’s data being handled in a way that “ensures appropriate security”.

To spell it out, the contention is the entire behavioral advertising business is illegal because it’s leaking personal data at such vast and systematic scale it cannot possibly comply with EU data protection law.

Regulators are considering the argument, and courts may follow. But it’s clear adtech systems that have operated in opaque darkness for years, with no fear of major compliance fines, no longer have the luxury of taking their architecture as a given.

Greater legal risk might be catalyst enough to encourage a market shift towards less intrusive targeting; ads that aren’t targeted based on profiles of people synthesized from heaps of personal data but, much like DuckDuckGo’s contextual ads, are only linked to a real-time interest and a generic location. No creepy personal dossiers necessary.

If Acquisti’s research is to be believed — and here’s the kicker for Facebook et al — there’s little reason to think such ads would be substantially less effective than the vampiric microtargeted variant that Facebook founder Mark Zuckerberg likes to describe as “relevant”.

The ‘relevant ads’ badge is of course a self-serving concept which Facebook uses to justify creeping on users while also pushing the notion that its people-tracking business inherently generates major extra value for advertisers. But does it really do that? Or are advertisers buying into another puffed up fake?

Facebook isn’t providing access to internal data that could be used to quantify whether its targeted ads are really worth all the extra conjoined cost and risk. While the company’s habit of buying masses of additional data on users, via brokers and other third party sources, makes for a rather strange qualification. Suggesting things aren’t quite what you might imagine behind Zuckerberg’s drawn curtain.

Behavioral ad giants are facing growing legal risk on another front. The adtech market has long been referred to as a duopoly, on account of the proportion of digital ad spending that gets sucked up by just two people-profiling giants: Google and Facebook (the pair accounted for 58% of the market in 2018, according to eMarketer data) — and in Europe a number of competition regulators have been probing the duopoly.

Earlier this month the German Federal Cartel Office was reported to be on the brink of partially banning Facebook from harvesting personal data from third party providers (including but not limited to some other social services it owns). Though an official decision has yet to be handed down.

While, in March 2018, the French Competition Authority published a meaty opinion raising multiple concerns about the online advertising sector — and calling for an overhaul and a rebalancing of transparency obligations to address publisher concerns that dominant platforms aren’t providing access to data about their own content.

The EC’s competition commissioner, Margrethe Vestager, is also taking a closer look at whether data hoarding constitutes a monopoly. And has expressed a view that, rather than breaking companies up in order to control platform monopolies, the better way to go about it in the modern ICT era might be by limiting access to data — suggesting another potentially looming legal headwind for personal data-sucking platforms.

At the same time, the political risks of social surveillance architectures have become all too clear.

Whether microtargeted political propaganda works as intended or not is still a question mark. But few would support letting attempts to fiddle elections just go ahead and happen anyway.

Yet Facebook has rushed to normalize what are abnormally hostile uses of its tools; aka the weaponizing of disinformation to further divisive political ends — presenting ‘election security’ as just another day-to-day cost of being in the people farming business. When the ‘cost’ for democracies and societies is anything but normal. 

Whether or not voters can be manipulated en masse via the medium of targeted ads, the act of targeting itself certainly has an impact — by fragmenting the shared public sphere which civilized societies rely on to drive consensus and compromise. Ergo, unregulated social media is inevitably an agent of antisocial change.

The solution to technology threatening democracy is far more transparency; so regulating platforms to understand how, why and where data is flowing, and thus get a proper handle on impacts in order to shape desired outcomes.

Greater transparency also offers a route to begin to address commercial concerns about how the modern adtech market functions.

And if and when ad giants are forced to come clean — about how they profile people; where data and value flows; and what their ads actually deliver — you have to wonder what if anything will be left unblemished.

People who know they’re being watched alter their behavior. Similarly, platforms may find behavioral change enforced upon them, from above and below, when it becomes impossible for everyone else to ignore what they’re doing.

Thanks to Hulu, Disney lost $580 million last fiscal year

Posted: 20 Jan 2019 11:00 AM PST

The streaming media business is tough. Disney, which has a 30 percent stake in Hulu, saw losses of $580 million last fiscal year, according to an SEC filing.

This was, the SEC filing states, “primarily due to a higher loss from our investment in Hulu, partially offset by a favorable comparison to a loss from BAMTech in the prior year.”

BAMTech is the streaming technology that powers ESPN+ and other services. In total, streaming accounted for more than $1 billion in losses for Disney last fiscal year.

Meanwhile, Disney has yet to release its own streaming service, Disney+, which is slated for late 2019. Disney is also planning to increase its investment in Hulu, focusing more on original content and international expansion.

As part of Disney’s buyout of 21st Century Fox, Disney will soon own another 30 percent of Hulu. If the business goes similarly for Hulu this fiscal year, that will only increase Disney’s losses.
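The stake arithmetic implied by those figures can be sketched as a back-of-the-envelope calculation. This is illustrative only: the filing attributes Disney's loss "primarily" (not entirely) to Hulu, and nets out a BAMTech comparison, so treating the full $580 million as Hulu's contribution overstates it somewhat.

```python
# Equity-method sketch: if Disney books Hulu losses in proportion to its stake,
# the stake size implies Hulu's total loss. Figures from the article above.
disney_stake = 0.30
disney_share_of_loss = 580_000_000  # USD, last fiscal year (approximation, see above)

implied_hulu_total_loss = disney_share_of_loss / disney_stake
print(f"Implied Hulu total loss: ${implied_hulu_total_loss / 1e9:.2f}B")

# After the Fox deal closes, Disney's stake roughly doubles to 60 percent,
# which is why an unchanged Hulu loss would hit Disney about twice as hard.
post_fox_stake = 0.60
share_at_60_percent = implied_hulu_total_loss * post_fox_stake
print(f"Disney's share at 60% of the same loss: ${share_at_60_percent / 1e9:.2f}B")
```

Under these rough assumptions, Hulu's total annual loss would be on the order of $1.9 billion, and Disney's exposure would roughly double once the Fox stake transfers.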

Uber is exploring autonomous bikes and scooters

Posted: 20 Jan 2019 10:40 AM PST

Uber is looking to integrate autonomous technology into its bike and scooter-share programs. Details are scarce, but according to 3D Robotics CEO Chris Anderson, who said Uber announced this at a DIY Robotics event over the weekend, the division will live inside Uber’s JUMP group, which is responsible for shared electric bikes and scooters.

The new division, Micromobility Robotics, will explore autonomous scooters and bikes that can drive themselves to be charged, or drive themselves to locations where riders need them. The Telegraph has since reported Uber has already begun hiring for this team.

“The New Mobilities team at Uber is exploring ways to improve safety, rider experience, and operational efficiency of our shared electric scooters and bicycles through the application of sensing and robotics technologies,” Uber’s ATG wrote in a Google Form seeking information from people interested in career opportunities.

Back in December, Uber unveiled its next generation of JUMP bikes, with self-diagnostic capabilities and swappable batteries. The impetus for the updated bikes was the need to improve JUMP’s overall unit economics.

"That is a major improvement to system utilization, the operating system, fleet uptime and all of the most critical metrics about how businesses are performing with running a shared fleet," JUMP Head of Product Nick Foley told TechCrunch last month. "Swappable batteries mean you don't have to take vehicles back to wherever you charge a bike or scooter, and that's good for the business."

Autonomous bikes and scooters would make Uber’s shared micromobility business less reliant on humans to charge the vehicles. You could envision a scenario where Uber deploys freshly-charged bikes and scooters to areas where other vehicles are low on juice. Combine that with swappable batteries (think about Uber quickly swapping in a new battery once the vehicle makes it back to the warehouse and then immediately re-deploying that bike or scooter), and Uber has itself a well-oiled machine that increases vehicle availability and improves the overall rider experience.
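The deployment scenario described above — moving freshly charged vehicles to areas where the fleet is low on juice — can be sketched as a simple greedy dispatch heuristic. Everything here (zone names, the 25% threshold, the heuristic itself) is a hypothetical illustration of the idea, not Uber's actual system.

```python
# Hypothetical greedy dispatcher: send freshly charged vehicles to the zones
# whose fleets have the lowest average battery level, neediest zone first.
from statistics import mean

def dispatch(zone_batteries, charged_pool, low_threshold=0.25):
    """Return {zone: number of fresh vehicles to deploy}.

    zone_batteries: {zone: [battery levels 0..1 of vehicles in that zone]}
    charged_pool:   count of freshly charged vehicles at the warehouse
    """
    # Rank low-charge zones by average battery level, neediest first.
    needy = sorted(
        (z for z, levels in zone_batteries.items() if mean(levels) < low_threshold),
        key=lambda z: mean(zone_batteries[z]),
    )
    plan = {}
    for zone in needy:
        if charged_pool == 0:
            break
        # Aim to replace every vehicle below the threshold, while supply lasts.
        need = sum(1 for lvl in zone_batteries[zone] if lvl < low_threshold)
        send = min(need, charged_pool)
        plan[zone] = send
        charged_pool -= send
    return plan

plan = dispatch(
    {"downtown": [0.1, 0.15, 0.3], "campus": [0.6, 0.7], "harbor": [0.2, 0.05]},
    charged_pool=3,
)
print(plan)
```

A real system would fold in demand forecasts, travel time, and swappable-battery logistics, but the core loop — rank zones by need, spend a limited pool of charged vehicles greedily — captures the operational win the article describes.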

Uber declined to comment.

Facebook launches petition feature, its next battlefield

Posted: 20 Jan 2019 10:21 AM PST

Gather a mob and Facebook will now let you make political demands. Tomorrow Facebook will encounter a slew of fresh complexities with the launch of Community Actions, its News Feed petition feature. Community Actions could unite neighbors to request change from their local and national elected officials and government agencies. But it could also provide vocal interest groups a bully pulpit from which to pressure politicians and bureaucrats with their fringe agendas.

Community Actions embodies the central challenge facing Facebook. Every tool it designs for positive expression and connectivity can be subverted for polarization and misinformation. Facebook’s membership has swelled into such a ripe target for exploitation that it draws out the worst of humanity. You can imagine misuses like “Crack down on [minority group]” that are offensive or even dangerous but some see as legitimate. The question is whether Facebook puts in the forethought and aftercare to safeguard its new tools with proper policy and moderation. Otherwise each new feature is another liability.

Community Actions start to roll out to the US tomorrow after several weeks of testing in a couple of markets. Users can add a title, description, and image to their Community Action, and tag relevant government agencies and officials who’ll be notified. The goal is to make the Community Action go viral and get people to hit the “Support” button. Community Actions have their own discussion feed where people can leave comments, create fundraisers, and organize Facebook Events or Call Your Rep campaigns. Facebook displays the numbers of supporters behind a Community Action, but you’ll only be able to see the names of those you’re friends with or that are Pages or public figures.

Facebook is purposefully focusing Community Actions on spurring government action rather than on just any random cause. That means it won’t immediately replace Change.org petitions, which can range from the civilian to the absurd. But one-click Support straight from the News Feed could massively reduce the friction of signing up, and thereby attract organizations and individuals seeking to maximize the size of their mob.

You can check out some examples of Community Actions here, like the non-profit Colorado Rising calling for the governor to put a moratorium on oil and gas drilling, citizens asking a Florida mayor and state officials to build a performing arts center, and a Philadelphia neighborhood association requesting that the city put in crosswalks by the library. I fully expect one of the first big Community Actions will be the social network’s users asking Senators to shut down Facebook or depose Mark Zuckerberg.

The launch follows other civic-minded Facebook features like its Town Hall and Candidate Info for assessing politicians, Community Help for finding assistance after a disaster, and local news digest Today In. A Facebook spokesperson who gave us the first look at Community Actions provided this statement:

"Building informed and civically engaged communities is at the core of Facebook’s mission. Every day, people come together on Facebook to advocate for causes they care about, including by contacting their elected officials, launching a fundraiser, or starting a group. Through these and other tools, we have seen people marshal support for and get results on issues that matter to them. Community Action is another way for people to advocate for changes in their communities and partner with elected officials and government agencies on solutions."

The question will be where Facebook’s moderators draw the line on what’s appropriate as a Community Action, and the ensuing calls of bias that line will trigger. Facebook is employing a combination of user flagging, proactive algorithmic detection, and human enforcers to manage the feature. But what the left might call harassment, the right might call free expression. If Facebook allows controversial Community Actions to persist, it could be viewed as complicit with their campaigns, but could be criticized for censorship if it takes one down. Like fake news and trending topics, the feature could become the social network’s latest can of worms.

Facebook is trying to prioritize local Actions where community members have a real stake. It lets users display “constituent” badges so their elected officials know they aren’t just a distant rabble-rouser. It’s why Facebook will not allow President Donald Trump or Vice President Mike Pence to be tagged in Community Actions. But you’re free to tag all your state representatives demanding nude parks, apparently.

Another issue is how people can stand up against a Community Action. Only those who Support one may join its discussion feed. That might lead trolls to falsely pledge their backing just to stir up trouble in the comments. Otherwise, Facebook tells me users will have to share a Community Action to their own feed with a message of disapproval, or launch their own in protest. My concern is that an agitated but niche group could drive a sense of false equivalence by using Facebook Groups or message threads to make it look like there’s as much or more support for a vulgar cause, or against a just one. A politician could be backed into a corner and forced to acknowledge radicals or bad-faith actors lest they look negligent.

While Facebook’s spokesperson says initial tests didn’t surface many troubles, the company is trying to balance safety with efficiency and it will consider how to evolve the feature in response to emergent behaviors. The trouble is that open access draws out the trolls and grifters seeking to fragment society. Facebook will have to assume the thorny responsibility of shepherding the product towards righteousness and defining what that even means. If it succeeds, there’s an amazing opportunity here for citizens to band together to exert consensus upon government. A chorus of voices carries much further than a single cry.

The AI market is growing, but how quickly is tough to pin down

Posted: 20 Jan 2019 10:10 AM PST

If you work in tech, you've heard about artificial intelligence: how it's going to replace us, whether it's over-hyped or not, and which nations will leverage it to prevent, or instigate, war.

Our editorial bent is more clear-cut: How much money is going into startups? Who is putting that money in? And what trends can we suss out about the health of the market over time?

So let's talk about the state of AI startups and how much capital is being raised. Here's what I can tell you: funding totals for AI startups are growing year-over-year; I just don't know precisely how quickly. Regardless, startups are certainly raising massive sums of money off the buzzword.

To make that point, here are just a few of the biggest rounds announced and recorded by Crunchbase in 2018:

  • SenseTime, a China-based startup that is quite good at tracking your face wherever it may be, raised a $1 billion Series D round. It was the largest round of the year in the AI category, according to Crunchbase. But what's more mind-blowing is that the company raised a total of $2.2 billion in just one year across three rounds. A picture is worth a thousand words, but a face is worth billions of dollars.
  • UBTech Robotics, another China-based startup focusing on robotics, raised an $820 million Series C. Just a cursory look at its website, however, makes UBTech appear to be a high-end toy maker rather than an AI innovator.
  • And biotech startup Zymergen, which "manufactures microbes for Fortune 500 companies," according to Crunchbase, raised a $400 million Series C.

Now, this is the part where I normally include a chart and 400 words of copy to contextualize the AI market. But if you read the above descriptions closely, you'll see our problem: What the hell does "AI" mean?

Take Zymergen as an example. Crunchbase tags it with the AI marker. Bloomberg, citing data from CB Insights, agrees. But if you were making the decision, would you demarcate it as an AI company?

Zymergen's own website doesn't employ the phrase. Rather, it uses buzzwords commonly associated with AI — machine learning, automation. Zymergen's home page, technology page and careers page are devoid of the term.

Instead, the company focuses on molecular technology. Artificial intelligence is not, in fact, what Zymergen is selling. We also know that Zymergen uses some AI-related tools to help it understand its data sets (check its jobs page for more). But is that enough to call it an AI startup? I don't think so. I would call it biotech.

That brings us back to the data. In the spirit of transparency, CB Insights reports a 72 percent boost in 2018 AI investment over 2017 funding totals. Crunchbase data pegs 2018's AI funding totals at a more modest 38 percent increase over the preceding year.
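Those growth figures are simple year-over-year ratios. As a minimal sketch of the arithmetic, using hypothetical funding totals (the dollar amounts below are invented for illustration, not actual CB Insights or Crunchbase figures):

```python
def yoy_growth(prev_total: float, curr_total: float) -> float:
    """Year-over-year growth of a funding total, as a percentage."""
    return (curr_total - prev_total) / prev_total * 100

# Hypothetical 2017 -> 2018 totals, in billions of dollars.
# The same 2017 baseline with two different 2018 tallies shows how
# divergent inclusion criteria produce divergent growth rates.
print(round(yoy_growth(10.0, 17.2), 1))  # a 72 percent rise
print(round(yoy_growth(10.0, 13.8), 1))  # a 38 percent rise
```

The gap between the two data providers' figures can thus come entirely from which startups each one counts as "AI," not from any disagreement about the rounds themselves.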

So we know that AI fundraising for private companies is growing. The two numbers make that plain. But it's increasingly clear to me after nearly two years of staring at AI funding rounds that there's no market consensus over exactly what counts as an AI startup. Bloomberg in its coverage of CB Insights' report doesn't offer a definition. What would yours be?

If you don't have one, don't worry; you're not alone. Professionals constantly debate what AI actually means, and who actually deserves the classification. There's no taxonomy for startups the way there is for animals. The category is flexible, and with PR, you can bend perception past reality.

I have a suspicion there are startups that overstate their proximity to AI. For instance, is employing Amazon's artificial intelligence services in your back end enough to call yourself an AI startup? I would say no. But after perusing Crunchbase data, you can see plenty of startups that classify themselves on such slippery grounds.

And the problem we're encountering rhymes well with a broader definitional crisis: What exactly is a tech company? In the case of Blue Apron, public investors certainly differed with private investors over the definition, as Alex Wilhelm has touched on before.

So what I can tell you is that AI startup funding is up. By how much? A good amount. But the precise figure is hard to pin down until we all agree what counts as an AI startup.

Stung by criticism, Facebook’s Sandberg outlines new plans to tackle misinformation

Posted: 20 Jan 2019 07:32 AM PST

Stung by criticism of its widely reported role as a platform capable of spreading disinformation and being used by state actors to skew democratic elections, Facebook's COO Sheryl Sandberg unveiled five new ways the company would be addressing these issues at the annual DLD conference in Munich, staged ahead of the World Economic Forum. She also announced that Facebook would fund a German university to investigate the ethics of AI, and a new partnership with Germany’s Federal Office for Information Security.
Sandberg laid out Facebook's five-step plan to regain trust:
1. Investing in safety and security
2. Protections against election interference
3. Cracking down on fake accounts and misinformation
4. Making sure people can control the data they share about themselves
5. Increasing transparency

Public backlash mounted last year after Facebook was accused of losing track of its users’ personal data and allowing the now-defunct Cambridge Analytica agency to target advertising at millions of Facebook users, without their explicit consent, during the US elections.

On safety and security, she said Facebook now employed 30,000 people to check its platform for hate posts and misinformation, five times more than in 2017.
She admitted that in 2016 Facebook's cybersecurity policies were centered on protecting users' data from hacking and phishing. However, these were not adequate to deal with how state actors would try to "sow disinformation and dissent into societies."
Over the last year, she said, Facebook has removed thousands of individual accounts and pages designed to coordinate disinformation campaigns. She said they would be applying all these lessons learned to the EU parliamentary elections this year, as well as working more closely with governments.
Today, she said Facebook was announcing a new partnership with the German government's office for information and security to help guide policymaking in Germany and across the EU ahead of its parliamentary elections this year.
Sandberg also revealed the sheer scale of the problem. She said Facebook was now cracking down on fake accounts and misinformation, blocking “more than one million Facebook accounts every day, often as they are created." She did not elucidate further on which state actors were involved in this sustained assault on the social network.
She said Facebook was now working with fact checkers around the world and had tweaked its algorithm to show related articles, allowing users to see both sides of a news story posted on the platform. It was also taking down posts with the potential to create real-world violence, she said. However, she neglected to mention that Facebook also owns WhatsApp, which has been widely blamed for spreading false rumors linked to a spate of murders in India.
She cited independent studies from Stanford University and the newspaper Le Monde which have shown that Facebook user engagement with unreliable sites has declined by half since 2015.
In a subtle attack on critics, she noted that in 2012 Facebook was often attacked because it was a "walled garden," and that the platform had subsequently bent to demands to open up to third-party apps built on the service, allowing greater sharing, such as for game-play. However, the company was now in a “very different place.” “We did not do a good job managing our platform," she admitted, acknowledging that this data sharing had led to abuse by bad actors.
She said Facebook had now dramatically cut down on the information about users which apps can access, appointed independent data protection officers, bowed to GDPR rules in the EU and created similar user controls globally.
She said the company was also increasing transparency, allowing other organizations to hold them accountable. "We want you to be able to judge our progress," she said.
Last year it published its first community standards enforcement report and Sandberg said this would now become an annual event, and given as much status as its annual financial results.
She repeated previous announcements that Facebook would be instituting new standards for advertising transparency, allowing people to see all the adverts a page is running and launching new tools ahead of EU elections in May.
She also announced a new partnership with the Technical University of Munich (TUM) to support the creation of an independent AI ethics research center.
The Institute for Ethics in Artificial Intelligence, which is supported by an initial funding grant from Facebook of $7.5 million over five years, will help advance the growing field of ethical research on new technology and will explore fundamental issues affecting the use and impact of AI.

Technology’s dark forest

Posted: 20 Jan 2019 06:00 AM PST

We used to be such optimists. Technology would bring us a world of wealth in harmony with the environment, and even bring us new worlds. The Internet would erase national boundaries, replace gatekeepers with a universal opportunity for free expression, and bring us all closer together. Remember when we looked forward to every advance?

I just finished Liu Cixin’s magisterial science-fiction trilogy Remembrance of Earth’s Past. It is very much a bracingly pessimistic story for our era. Without spoiling it too much, I’ll just say that it’s a depiction of a transition from optimistically anticipating contact with other worlds … to a bleak realization that we haven’t done so yet because the universe is a “dark forest,” the title of the trilogy’s second book. “Dark forest theory” holds that civilizations fear one another so much that they don’t dare to reveal themselves lest they immediately be considered a potential threat and destroyed.

There are certain analogies here. We’ve grown to fear technology, to mistrust everything it offers us, to assume its every new offering has a dark side. Consider the recent mini-viral-storm around the “10 Year Challenge” meme, and the resulting Wired piece suggesting it’s a Trojan Horse designed to manipulate us into training Facebook’s AI to improve recognition of aging faces.

I strongly doubt that that is actually the case. Not because I have any faith in Facebook’s transparent benevolence, but because they already have a way-past-enormous cornucopia of such data, more accurately (implicitly) tagged. Even if explicit tags were helpful rather than counterproductive — which I doubt, given the stripping of metadata, the jokes riffing on the meme, etc. — they wouldn’t move the needle. As Max Read puts it:

But I find it a striking example of how so many of us have grown to treat technology as a dark forest. Everything tech does seems to now be considered a threat until proved a blessing, and maybe even then. It wasn’t long ago that the reverse was true. How and why did this happen?

Part of it is probably resentment. The fabulously wealthy and influential tech industry has become one of the world’s premier power centers, and people (correctly) suspect tech is now more likely to reify this new hierarchy than disrupt or undercut it. But it’s hard to shake the sense that it’s not really technology’s job to improve human hierarchies; it’s democracy’s. It’s true that democracy seems to have been doing a shockingly poor job over the last few years, but it’s hard to blame that entirely on technology.

Rather, I think a lot of this dark-forest attitude towards tech is because, to most people, technology is now essentially magic. In AI’s case, as we see from that Wired piece, even experts can’t agree on what the technology needs, much less exactly how it works, much less explain step-by-step how it arrives at its (not always reproducible) results.

(Possibly implicitly biased results! you may shout. Yes, that’s true and important. But I find it bizarre how everyone outside of the business keeps hammering the table shouting about how the tech industry needs to stop ignoring the fact that AI may reinforce implicit bias, while all the AI people I know are deeply aware of this risk, describe it as one of their primary concerns, talk about it constantly, and are doing all kinds of work to mitigate or eliminate it. Why the implicit assumption that all AI researchers and engineers are blithely ignoring this risk? Again: technology has become a dark forest.)

Tech-as-magic is not just limited to AI, though. How many people really understand what happens when you flick a switch and a light comes on? How many fewer really understand how text messaging works, or why a change of a mere few degrees in global temperatures is likely to be catastrophic for billions? Not many. What do we fear? We fear the unknown. Tech is a dark forest because to most people tech is dark magic.

The problem is, this dark magic happens to be our only hope to solve our immediate existential problems, such as global warming. We already live in a dark forest full of terrible but subtle and ill-defined threats, and they aren’t caused by new technologies, they’re caused by the consequences of exceeding the carrying capacity of our planet with our old technologies. Climate change is a grue coming through the trees for us with terrifying speed, and technology is the one torch which might lead us out.

Fine, granted, that fire might, theoretically, in the long run, and/or in the wrong hands, eventually become some kind of threat. It’s used by a lot of bad actors to manipulate people, reify oppression, and siphon wealth its users don’t deserve. In some parts of the planet it’s being horrifically misused in far worse ways yet. All true. But just because fire is dangerous doesn’t mean every new use of it is a malevolent threat. Let’s get past the knee-jerk backlash and try to restore a little optimism, a little hope, a little potential belief that new technological initiatives are not automatically a bad-faith misuse, even if they do come from Facebook.

(I’m the first to admit that Facebook does a lot of bad things, and condemn them for it! But that does not mean that everything they do is bad. Companies are like people; it is possible, hard as this may be to believe in this Death Of Nuance era, that they can do some good things and some bad things at the same time. Most shocking of all, this is even true of Elon Musk.)

I’m not just saying that this would be nice. I’m saying it’s something we probably need to do, because like it or not, it seems that we have, as a species, already collectively wandered into a very real dark forest, and a cascading series of better technologies is the only plausible route out. It’ll be awfully hard to build that route if we start assuming it’s been deliberately filled with pitfalls and quicksand. Let’s be skeptical, by all means; but let’s not assume guilt and bad faith as our default stance.