It is now mid-2018 and things have changed. Not on the surface yet: the NIPS conference is still oversold, corporate PR still has AI all over its press releases, Elon Musk still keeps promising self-driving cars, and Google's CEO keeps repeating Andrew Ng's slogan that AI is bigger than electricity. But this narrative is beginning to crack. And as I predicted in my older post, the place where the cracks are most visible is autonomous driving – an actual application of the technology in the real world.
I have a stated aim on this site of avoiding drama and negativity. In this case I felt compelled to make an exception and link to a post for the explicit purpose of essentially calling bullshit. The lead evidence for the onset of AI winter is that… Andrew Ng is tweeting less than he used to? The “biggest pin punching through the AI bubble” (his words, not mine) is that Uber’s engineering practices are as shady as their business practices? Everything in between these data points is just as weak.
He celebrates a tweet in which someone cherry-picks results from a different data set in order to “prove” that a system Andrew Ng promoted actually “sucks”.
He links to this article, which, based on the non-paywalled paragraphs, can easily be refuted by watching the Google I/O keynote. Apparently Google doesn’t know what to do with DeepMind – despite using its tech for text-to-speech in Google Assistant and for data center power management.
The conclusion I’m inclined to draw here is that this is Deep Learning FUD. Why do I think that? The sidebar on the right of the site gives it away: the author has an alternative approach he’s trying to promote.
Exactly how the largest protocol-level hack of a cryptocurrency in recent memory could precede said cryptocurrency increasing in price and then announcing a partnership with the most trafficked porn site on the internet is a question I’m forced to leave open-ended; my personal pet theory is that it has something to do with the fact that the world makes no sense and human beings are all completely out of their goddamn minds.
A really well-written and easy-to-follow explanation of how exactly the Verge hack happened. Aside from the above (completely bonkers) observation, I found this to be a good takeaway:
That software doesn’t always work the way we want it to, and that such malfunctions can lead to the loss of funds, shouldn’t be particularly shocking to anyone in 2018. But when that software is, in fact, money, it’s worth an extra layer of precaution.
I suppose, at this point, that probably isn’t something that should need to be said…
Warning: references to sex. Possibly NSFW.
The sex industry has long held this tenuous place in the vanguard of technology, adopting new protocols early but at risk for punishment. As soon as people could exchange things online, they exchanged porn — copyrighted pictures, through Usenet and elsewhere, beginning in the late 1980s; and then streaming video. The industry was among the first to test online credit card processing. When payment apps began to appear, sex workers flocked to those, too. “When PayPal was starting out, a lot of their money originally came from sex work, so it’s frustrating that they treat us this way now,” said Goddess Venus.
The Outline tends to fall squarely into the “source of negativity and drama” classification of site which I tend to avoid posting here. This is an interesting article, though. It’s also pretty obvious in hindsight.
To some extent I feel as though this use case could be served by just using Ethereum or Bitcoin directly. But as it stands they don’t really have the needed user experience yet.
Walmart Inc. is getting suppliers to put food on the blockchain to help reduce waste, better manage contamination cases and improve transparency.
This is a really short article which is very light on details. It’s so short, in fact, that it’s hard for me to quote anything of substance without just including the whole thing. So instead I’ll just say this: blockchain seems pretty ideal for something like this.
There’s a standard called BES 6001, which relates to the responsible sourcing of construction products. It mandates, essentially, that a single girder can be traced back through where it was smelted to the site the ore was quarried. There are similar standards for food and other materials.
That sounds onerous, and it is. But it’s also really useful. It makes it hard to cheat and use substandard materials. It also means that if a girder fails, you can find the entire batch and retest or decommission it. The flaw, of course, is that the records are only as good as the systems which are used to store them. So a tamper-proof, trust-minimised, distributed ledger is just the thing.
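The tamper-evidence part is the easy bit to demonstrate. Here’s a minimal Python sketch (my own toy, not anything from the article) of a hash chain: each record embeds the hash of the record before it, so silently rewriting an old entry invalidates everything downstream. The record fields are made up for illustration.

```python
import hashlib
import json

def _entry_hash(record, prev_hash):
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps({"record": record, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_record(chain, record):
    """Append a record, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record,
                  "prev_hash": prev_hash,
                  "hash": _entry_hash(record, prev_hash)})

def verify(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != _entry_hash(entry["record"], prev_hash):
            return False
        prev_hash = entry["hash"]
    return True

chain = []
add_record(chain, {"girder": "G-1138", "smelter": "Plant A", "quarry": "Site 7"})
add_record(chain, {"girder": "G-1139", "smelter": "Plant A", "quarry": "Site 7"})
assert verify(chain)                        # untouched records check out
chain[0]["record"]["quarry"] = "Site 9"     # falsify a sourcing record...
assert not verify(chain)                    # ...and the chain no longer verifies
```

The “distributed” and “trust-minimised” parts are what an actual blockchain adds on top: the chain is replicated across parties who don’t trust each other, so you can’t just recompute all the hashes after your edit.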
I’ve recommended this podcast multiple times, because I think it’s fantastic. Rob Reid needs help making more of it. I’ll be signing on at the $5 level. If you listened and you think it’s worth your time, please consider whether you think it’s worth your money, as well. If you haven’t listened: seriously, it’s great.
I’ve been satisfied with Python for almost 10 years. But I don’t think I’ll still be using it another decade from now. I think I’ll be using Swift.
I promise I’ll stop beating this drum soon. But I thought this piece by an active practitioner was worth posting.
Source: TensorFlow @ Medium.com
The TensorFlow Team:
Swift for TensorFlow provides a new programming model that combines the performance of graphs with the flexibility and expressivity of Eager execution, with a strong focus on improved usability at every level of the stack. This is not just a TensorFlow API wrapper written in Swift — we added compiler and language enhancements to Swift to provide a first-class user experience for machine learning developers.
No surprises here. I think it looks awesome. I’m still reading the material which was released with the announcement, but everything so far seems very clear and well thought out.
“Why Swift for TensorFlow?” is an interesting read, for example. It doesn’t shy away from the disadvantages of using Swift. One of these is that Python already has a great data science ecosystem, but Swift does not. The new Swift / Python interoperability should alleviate that somewhat, but the doc makes a longer term point I hadn’t even thought of:
Given that most of these Python libraries are implemented as C code wrapped by Python, it is possible that the Swift ecosystem will eventually grow to include Swift wrappers for the same libraries.
Back in 2014, Amazon filed a patent for a Streaming Data Marketplace which would allow them to gather online data streams, analyze and combine them with other data sources, and sell the results as a finished product. Although this is a very generic sounding idea, one of the clauses in the patent filing has many in the security and cryptocurrency community worried.
The patent, which was granted this week, outlines a method for analyzing the Bitcoin blockchain, correlating transactions and Bitcoin addresses with the people that made them, and then selling the result to telecom providers or government agencies.
That sounds pretty nefarious. To what end would the government be using these transaction details? Let’s ask the patent itself:
Government agencies may be able to subscribe downstream and correlate tax transaction data to help identify transaction participants.
So essentially, Amazon has a patent on helping the government make it harder for people to use Bitcoin to dodge taxes. Honestly, I think that’s pretty reasonable. It seems the government of Finland is doing this already. Two things to remember:
- Bitcoin is not anonymous. It’s pseudonymous, which is not the same thing at all;
- If you buy something and that thing gains value, you probably have a tax burden related to that gain. This applies to stocks and shares, it applies to paintings, and it applies to digital goods.
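That second point is simple arithmetic: the taxable gain is the difference between what you paid and what you got when you disposed of the asset. A toy illustration with made-up numbers (real tax rules – lots, holding periods, rates – vary by jurisdiction):

```python
def capital_gain(buy_price, sell_price, quantity):
    """Taxable gain (or loss, if negative) on a disposed asset."""
    return (sell_price - buy_price) * quantity

# Hypothetical: buy 0.5 BTC at $6,000, later sell at $9,000.
gain = capital_gain(6_000.0, 9_000.0, 0.5)
print(gain)  # 1500.0 -- a taxable gain, same as with stocks or paintings
```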
From the abstract:
Recent medical progress is quickly advancing our ability to induce torpor, a deep sleep hibernation-like state, in humans for extended periods of time. The authors propose to place crew and passengers in a prolonged hypothermic state during space-mission transit phases (outbound and Earth-return) to significantly reduce the system mass, power, habitable volume, and medical challenges associated with long-duration space exploration.
This is really cool, but there’s no way I’d sign up to be one of the first to test it. Seriously, the ethical concerns of actually proving this technology are going to be huge. I hope it works out, though. I’m inclined to think we need something like this in order to reach even destinations as close as Mars.
Our geometric intuition developed in our three-dimensional world often fails us in higher dimensions. Many properties of even simple objects, such as higher dimensional analogs of cubes and spheres, are very counterintuitive. Below we discuss just a few of these properties in an attempt to convey some of the weirdness of high dimensional space.
This makes a nice follow-up to my previous posting about performing gradient descent on the surface of the earth. State spaces for machine learning algorithms tend to have a lot more dimensions than two. There’s a reasonable chance they have millions, in fact. So while the “surface of the earth” explanation is a really good way of introducing the concept of gradient descent, it might not actually help you when using the algorithm in practice.
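The algorithm itself doesn’t care about the number of dimensions, which is part of what makes the 2-D intuition so seductive. A quick sketch (a toy of mine, minimising the convex objective f(x) = ‖x‖², whose gradient is 2x) runs identically in a million dimensions:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=500):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Same update rule as the 2-D "surface of the earth" picture,
# just in a 1,000,000-dimensional state space.
rng = np.random.default_rng(0)
x_min = gradient_descent(lambda x: 2 * x, rng.standard_normal(1_000_000))
```

What changes in high dimensions isn’t the code but the landscape: the geometry of critical points and distances behaves nothing like a hillside, which is exactly the article’s point.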
Also: higher dimensional spaces are just fun to think about.
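For one concrete bit of weirdness (my own quick sketch, not from the article): the ball inscribed in a unit cube occupies about 79% of the square in 2-D, but its share of the cube’s volume collapses towards zero as the dimension grows – almost all of a high-dimensional cube’s volume is out in the corners.

```python
import math

def ball_fraction(d):
    """Fraction of the unit d-cube occupied by its inscribed d-ball.

    Uses the standard volume formula pi^(d/2) / Gamma(d/2 + 1) * r^d
    with radius r = 1/2.
    """
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1) * 0.5 ** d

for d in (2, 3, 10, 20):
    # the inscribed ball all but vanishes as d grows
    print(d, ball_fraction(d))
```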