SpaceX kicks off its space-based internet service tomorrow with 60-satellite Starlink launch

3:47pm, 14th May, 2019
As wild as it sounds, the race is on to build a functioning space internet — and SpaceX is taking its biggest step yet with the launch of 60 (!) satellites tomorrow that will form the first wave of its Starlink constellation. It’s a hugely important and incredibly complex launch for the company — and should be well worth watching. A Falcon 9 with the flat Starlink test satellites (they’re “production design” but not final hardware) is vertical at launchpad 40 in Cape Canaveral. It has completed its static fire test and should have a window for launch tomorrow, weather permitting.

Building satellite constellations hundreds or thousands strong is seen by several major companies and investors as the next major phase of connectivity — though it will take years and billions of dollars to do so. OneWeb, perhaps SpaceX’s biggest competitor in this area, just secured a major round of funding in March after launching the first handful of a planned 650 satellites. Jeff Bezos has announced that Amazon will join the fray with the proposed 3,236-satellite Project Kuiper. Ubiquitilink is taking an approach all its own. And plenty of others are taking on smaller segments, like lower-cost or domain-specific networks. Needless to say it’s an exciting sector, but tomorrow’s launch is a particularly interesting one because it is so consequential for SpaceX. If this doesn’t go well, it could set Starlink’s plans back long enough to give competitors an edge.

The satellites are stacked inside the Falcon 9’s payload fairing — a “tight fit,” as CEO Elon Musk pointed out. SpaceX hasn’t explained exactly how the 60 satellites will be distributed to their respective orbits, but Musk did note on Twitter that there’s “no dispenser.” Of course there must be some kind of dispenser — these things aren’t going to just jump off of their own accord. They’re stuffed in there like kernels on a corncob, and presumably some mechanism will nudge each one on its way in turn.

A pair of prototype satellites, Tintin-A and B, have been in orbit since early last year, and have no doubt furnished a great deal of useful information to the Starlink program. But the 60 aboard tomorrow’s launch aren’t quite final hardware. Although Musk noted that they are “production design,” COO Gwynne Shotwell has said that they are still test models. “This next batch of satellites will really be a demonstration set for us to see the deployment scheme and start putting our network together,” she said at the Satellite 2019 conference in Washington, D.C. — they reportedly lack inter-satellite links but are otherwise functional. I’ve asked SpaceX for more information on this. It makes sense: If you’re planning to put thousands (perhaps as many as 12,000 eventually) of satellites into orbit, you’ll need to test at scale and with production hardware.

And for those worried about the possibility of overpopulation in orbit — it’s absolutely something to consider, but many of these satellites will be operating at very low altitudes; at 550 kilometers up, these tiny satellites will naturally de-orbit in a handful of years. Even OneWeb’s, at 1,100 km, aren’t that high up — geosynchronous satellites sit above 35,000 km. That doesn’t mean there’s no risk at all, but it does mean failed or abandoned satellites won’t stick around for long.

Just don’t expect to boot up your Starlink connection any time soon. It would take a minimum of 6 more launches like this one — a total of 420 satellites, a happy coincidence for Musk — to provide “minor” coverage. This would likely only be for testing as well, not commercial service.
Commercial service would need 12 more launches, and dozens more to bring the network to the point where it can compete with terrestrial broadband. Even if it will take years to pull off, that is the plan. And by that time others will have spun up their operations as well. It’s an exciting time for space and for connectivity.

No launch time has been set as of this writing, so takeoff is just planned for Wednesday the 15th at present. As there’s no need to synchronize the launch with the movement of any particular celestial body, T-0 should be fairly flexible and SpaceX will likely just wait for the best weather and visibility. Delays are always a possibility, though, so don’t be surprised if this is pushed out to later in the week. As always you’ll be able to watch the launch live; I’ll update this post with the video link as soon as it’s available.
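For a rough sense of the scale involved, here’s a tiny back-of-the-envelope sketch in Python using only the figures mentioned above: 60 satellites per Falcon 9, roughly 420 satellites for “minor” coverage, and a possible 12,000-satellite constellation eventually. The helper function is just illustrative arithmetic, nothing from SpaceX.

```python
import math

SATS_PER_LAUNCH = 60  # number of Starlink satellites aboard this Falcon 9

def launches_needed(target_satellites: int) -> int:
    """How many 60-satellite launches it takes to reach a given constellation size."""
    return math.ceil(target_satellites / SATS_PER_LAUNCH)

print(launches_needed(420))     # 7   -> "minor" coverage: this launch plus 6 more
print(launches_needed(12_000))  # 200 -> the eventual constellation size cited above
```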
Cat vs best and worst robot vacuum cleaners 

2:13pm, 11th May, 2019
If you’ve flirted with the idea of buying a robot vacuum you may also have stepped back from the brink in unfolding horror at the alphabetic soup of branded discs popping into view. Consumer choice sounds like a great idea until you’ve tried to get a handle on the handle-less vacuum space. Amazon offers an A to Z of “top brands” that’s only a handful of letters short of a full alphabetic set. The horror.

What awaits the unseasoned robot vacuum buyer as they resign themselves to hours of online research to try to inform — or, well, form — a purchase decision is a seemingly endless permutation of robot vac reviews and round-ups. Unfortunately there are just so many brands in play that all these reviews tend to act as fuel, feeding a growing black hole of indecision that sucks away at your precious spare time, demanding you spend more and more of it reading about robots that suck (when you could, let’s be frank, be getting on with the vacuuming task yourself) — only to come up for air each time even less convinced that buying a robot dirtbag is at all a good idea. Reader, I know, because I fell into this hole. And it was hellish. So in the spirit of trying to prevent anyone else falling prey to convenience-based indecision I am — apologies in advance — adding to the pile of existing literature about robot vacuums with a short comparative account that (hopefully) helps cut through some of the chaff to the dirt-pulling chase.

Here’s the bottom line: Budget robot vacuums that lack navigational smarts are simply not worth your money, or indeed your time. Yes, that’s despite the fact they are still actually expensive vacuum cleaners. Basically these models entail overpaying for a vacuum cleaner that’s so poor you’ll still have to do most of the job yourself (i.e. with a non-robotic vacuum cleaner). It’s the very worst kind of badly applied robotics. Abandon hope of getting anything worth your money at the bottom end of the heap.

I know this because, alas, I tried — opting, finally and foolishly (but, in my defence, at a point of near desperation after sifting so much virtual chaff the whole enterprise seemed to have gained lottery odds of success and I frankly just wanted my spare time back), for a model sold by a well-known local retailer. It was a budget option but I assumed — or, well, hoped — the retailer had done its homework and picked a better-than-average choice. Or at least something that, y’know, could suck dust. The brand in question (Rowenta) sat alongside the better known (and a bit more expensive) iRobot on the shop shelf. Surely that must count for something? I imagined wildly. Reader, that logic is a trap.

I can’t comment on the comparative performance of iRobot’s bots, which I have not personally tested, but I do not hesitate to compare a €180 (~$200) Rowenta-branded robot vacuum to a very expensive cat toy. This robot vacuum was spectacularly successful at entertaining the cat — presumably on account of its dumb disposition, bouncing stupidly off of furniture owing to a total lack of navigational smarts. (Headbutting is a pretty big clue to how stupid a robot is, as it’s never a stand-in for intelligence even when encountered in human form.) Even more tantalizingly, from the cat’s point of view, the bot featured two white and whisker-like side brushes that protrude and spin at paw-tempting distance. In short: Pure robotic catnip. The cat did not stop attacking the bot’s whiskers the whole time it was in operation. That certainly added to the obstacles getting in its way.
But the more existential problem was it wasn’t sucking very much at all. At the end of its first concluded ‘clean’, after it somehow managed to lurch its way back to first bump and finally hump its charging hub, I extracted the bin and had to laugh at the modest sized furball within. I’ve found larger clumps of dust gathering themselves in corners. So: Full marks for cat-based entertainment but as a vacuum cleaner it was horrible. At this point I did what every sensible customer does when confronted with an abject lemon: Returned it for a full refund.

And that, reader, might have been that for me and the cat and robot vacs. Who can be bothered to waste so much money and time for what appeared laughably incremental convenience? Even with a steady supply of cat fur to contend with. But as luck would have it a Roborock representative emailed to ask if I would like to review their latest top-of-the-range model — which, at €549, does clock in at the opposite end of the price scale; ~3x the pitiful Rowenta. So of course I jumped at the chance to give the category a second spin — to see if a smarter device could impress me and not just tickle the cat’s fancy.

Clearly the price difference here, at the top vs the bottom of the range, is substantial. And yet, if you bought a car that was 3x cheaper than a Ferrari you’d still expect not just that the wheels stay on but that it can actually get you somewhere, in good time and do so without making you horribly car sick. Turns out buyers of robot vacuums need to tread far more carefully. Here comes the bookending top-line conclusion: Robot vacuums are amazing. A modern convenience marvel. But — and it’s a big one — only if you’re willing to shell out serious cash to get a device that actually does the job intended.

Roborock S6: It’s a beast at gobbling your furry friend’s dander

Comparing the Roborock S6 and the Rowenta Smart Force Essential Aqua RR6971WH (to give it its full and equally terrible name) is like comparing a high-end electric car with a wind-up kid’s toy. Where the latter product was so penny-pinching the company hadn’t even paid to include in the box a user manual that contained actual words — opting, we must assume, to save on translation costs by producing a comic packed with inscrutable graphics and bizarro don’t-do diagrams which only served to cement the fast-cooling buyer’s conviction they’d been sold a total lemon — the Roborock’s box contains a well-written paper manual that contains words and clearly labeled diagrams. What a luxury!

At the same time there’s not really that much you need to grok to get your head around operating the Roborock. After a first pass to familiarize yourself with its various functions it’s delightfully easy to use. It will even produce periodic vocal updates — such as telling you it’s done cleaning and is going back to base. (Presumably in case you start to worry it’s gone astray under the bed. Or that quiet industry is a front for brewing robotic rebellion against indentured human servitude.) One button starts a full clean — and this does mean full thanks to on-board laser navigation that allows the bot to map the rooms in real-time. This means you get methodical passes, minimal headbutting and only occasional spots missed. (Another button will do a spot clean if the S6 does miss something or there’s a fresh spill that needs tidying — you just lift the bot to where you want it and hit the appropriate button.)
There is an app too, if you want to access extra features like being able to tell it to go clean a specific room, schedule cleans or set no-go zones. But, equally delightfully, there’s no absolute need to hook the bot to your wi-fi just to get it to do its primary job. All core features work without the faff of having to connect it to the Internet — nor indeed the worry of who might get access to your room-mapping data. From a privacy point of view this wi-fi-less, app-free operation is a major plus.

In a small apartment with hard flooring the only necessary prep is a quick check to clear stuff like charging cables and stray socks off the floor. You can of course park dining chairs on the table to offer the bot a cleaner sweep. Though I found the navigation pretty adept at circling chair legs. Sadly the unit is a little too tall to make it under the sofa. The S6 includes an integrated mopping function, which works incredibly well on lino-style hard flooring (but won’t be any use if you only have carpets). To mop you fill the water tank attachment; velcro-fix a dampened mop cloth to the bottom; and slide-clip the whole unit under the bot’s rear. Then you hit the go button and it’ll vacuum and mop in the same pass.

In my small apartment the S6 had no trouble doing a full floor clean in under an hour, without needing to return to base to recharge in the middle. (Roborock says the S6 will drive for up to three hours on a single charge.) It also did not seem to get confused by relatively dark flooring in my apartment — which some reviews had suggested can cause headaches for robot vacuums by confusing their cliff sensors. After that first clean I popped the lid to check on the contents of the S6’s transparent lint bin — finding an impressive quantity of dusty fuzz neatly wadded therein. This was really just robot vacuum porn, though; the gleaming floors spoke for themselves on the quality of the clean.

The level of dust gobbled by the S6 vs the Rowenta underlines the quality difference between the bottom and top end of the robot vacuum category. So where the latter’s plastic carapace immediately became a magnet for all the room dust it had kicked up but spectacularly failed to suck, the S6’s gleaming white shell has stayed remarkably lint-free, acquiring only a minimal smattering of cat hairs over several days of operation — while the floors it’s worked have been left visibly dust- and fur-free. (At least until the cat got to work dirtying them again.) Higher suction power, better brushes and a higher quality integrated filter appear to make all the difference.

The S6 also does a much better cleaning job a lot more quietly. Roborock claims it’s 50% quieter than the prior model (the S5) and touts it as its quietest robot vacuum yet. It’s not super silent but is quiet enough when cleaning hard floors not to cause a major disturbance if you’re working or watching something in the same room. Though the novelty can certainly be distracting. Even the look of the S6 exudes robotic smarts — with its raised laser-housing bump resembling a glowing orange cylonic eye-slot. Although I was surprised, at first glance, by the single, rather feeble-looking side brush vs the firm pair the Rowenta had fixed to its undercarriage. But again the S6’s tool is smartly applied — stepping up and down speed depending on what the bot’s tackling. I found it could miss the odd bit of lint or debris such as cat litter but when it did these specks stood out as the exception on an otherwise clean floor.
It’s also true that the cat did stick its paw in again to try attacking the S6’s single spinning brush. But these attacks were fewer and a lot less fervent than against the Rowenta, as if the bot’s more deliberate navigation commanded greater respect and/or a more considered ambush. So it appears that even to a feline eye the premium S6 looks a lot less like a dumb toy.

Cat plots another ambush while the S6 works the floor

On a practical front, the S6’s lint bin has a capacity of 480ml. Roborock suggests cleaning it out weekly (assuming you’re using the bot every week), as well as washing the integrated dust filter (it supplies a spare in the box so you can switch one out to clean it and have enough time for it to fully dry before rotating it back into use). If you use the mopping function the supplied reusable mop cloths do need washing afterwards too (Roborock also includes a few disposable alternatives in the box but that seems a pretty wasteful option when it’s easy enough to stick a reusable cloth in with a load of laundry or give it a quick wash yourself). So if you’re chasing a fully automated, robot-powered, end-to-cleaning-chores dream be warned there’s still a little human elbow grease required to keep everything running smoothly.

Still, there’s no doubt a top-of-the-range robot vacuum like the S6 will save you time cleaning. If you can justify the not inconsiderable cost involved in buying this extra time by shelling out for a premium robot vacuum that’s smart enough to clean effectively, all that’s left to figure out is how to spend your time windfall wisely — resisting the temptation to just put your feet up and watch the clever little robot at work.
Alexa, does the Echo Dot Kids protect children’s privacy?

8:06am, 9th May, 2019
A coalition of child protection and privacy groups has filed a complaint with the Federal Trade Commission (FTC) urging it to investigate a kid-focused edition of Amazon’s Echo smart speaker. The complaint against the Amazon Echo Dot Kids, which has been lodged with the FTC by groups including the Campaign for a Commercial-Free Childhood, the Center for Digital Democracy and the Consumer Federation of America, argues that the ecommerce giant is violating the Children’s Online Privacy Protection Act (Coppa) — including by failing to obtain proper consents for the use of kids’ data.

As with its other smart speaker Echo devices the Echo Dot Kids continually listens for a wake word and then responds to voice commands by recording and processing users’ speech. The difference with this Echo is it’s intended for children to use — which makes it subject to US privacy regulation intended to protect kids from commercial exploitation online.

The complaint, which the groups have published in full, argues that Amazon fails to provide adequate information to parents about what personal data will be collected from their children when they use the Echo Dot Kids; how their information will be used; and which third parties it will be shared with — meaning parents do not have enough information to make an informed decision about whether to give consent for their child’s data to be processed. They also accuse Amazon of providing at best “unclear and confusing” information per its obligation under Coppa to also provide notice to parents to obtain consent for children’s information to be collected by third parties via the online service — such as those providing Alexa “skills” (aka apps the AI can interact with to expand its utility). A number of other concerns are also being raised about Amazon’s device with the FTC.

Amazon released the Echo Dot Kids last year — and, as we noted at the time, it’s essentially a brightly bumpered iteration of the company’s standard Echo Dot hardware. There are differences in the software, though. In parallel Amazon updated its Alexa smart assistant — adding parental controls, aka its FreeTime software, to the child-focused smart speaker. Amazon said the free version of FreeTime that comes bundled with the Echo Dot Kids provides parents with controls to manage their kids’ use of the product, including device time limits; parental controls over skills and services; and the ability to view kids’ activity via a parental dashboard in the app. The software also removes the ability for Alexa to be used to make phone calls outside the home (while keeping an intercom functionality). A paid premium tier of FreeTime (called FreeTime Unlimited) also bundles additional kid-friendly content, including Audible books, ad-free radio stations from iHeartRadio Family, and premium skills and stories from the likes of Disney, National Geographic and others.

At the time it announced the Echo Dot Kids, Amazon said it had tweaked its voice assistant to support kid-focused interactions — saying it had trained the AI to understand children’s questions and speech patterns, and incorporated new answers targeted specifically at kids (such as jokes). But while the company was ploughing resource into adding a parental control layer to Echo and making Alexa’s speech recognition kid-friendly, the Coppa complaint argues it failed to pay enough attention to the data protection and privacy obligations that apply to products targeted at children — as the Echo Dot Kids clearly is.
Or, to put it another way, Amazon offers parents some controls over how their children can interact with the product — but not enough controls over how Amazon (and others) can interact with their children’s data via the same always-on microphone.

More specifically, the group argues that Amazon is failing to meet its obligation as the operator of a child-directed service to provide notice and obtain consent for third parties operating on the Alexa platform to use children’s data — noting that its Children’s Privacy Disclosure policy states it does not apply to third party services and skills. Instead the complaint says Amazon tells parents they should review the skill’s policies concerning data collection and use. “Our investigation found that only about 15% of kid skills provide a link to a privacy policy. Thus, Amazon’s notice to parents regarding data collection by third parties appears designed to discourage parental engagement and avoid Amazon’s responsibilities under Coppa,” the group writes in a summary of their complaint.

They are also objecting to how Amazon is obtaining parental consent — arguing its system for doing so is inadequate because it’s merely asking that a credit or debit/gift card number be inputted. “It does not verify that the person ‘consenting’ is the child’s parent as required by Coppa,” they argue. “Nor does Amazon verify that the person consenting is even an adult because it allows the use of debit gift cards and does not require a financial transaction for verification.”

Another objection is that Amazon is retaining audio recordings of children’s voices far longer than necessary — keeping them indefinitely unless a parent actively goes in and deletes the recordings, despite Coppa requiring that children’s data be held for no longer than is reasonably necessary. They found that additional data (such as transcripts of audio recordings) was also still retained even after audio recordings had been deleted. A parent must contact Amazon customer service to explicitly request deletion of their child’s entire profile to remove that data residue — meaning that to delete all recorded kids’ data a parent has to nix their access to parental controls and their kids’ access to content provided via FreeTime — so the complaint argues that Amazon’s process for parents to delete children’s information is “unduly burdensome” too.

Their investigation also found the company’s process for letting parents review children’s information to be similarly arduous, with no ability for parents to search the collected data — meaning they have to listen to or read every recording of their child to understand what has been stored. They further highlight that children’s audio recordings can of course include sensitive personal details — such as if a child uses Alexa’s ‘remember’ feature to ask the AI to remember personal data such as their address and contact details or personal health information like a food allergy.

The group’s complaint also flags the risk of other children having their data collected and processed by Amazon without their parents’ consent — such as when a child has a friend or family member visiting on a playdate and they end up playing with the Echo together. Responding to the complaint, Amazon has denied it is in breach of Coppa. In a statement a company spokesperson said: “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA).
Customers can find more information on Alexa and overall privacy practices here.” An Amazon spokesperson also told us it only allows kid skills to collect personal information from children outside of FreeTime Unlimited (i.e. the paid tier) — and then only if the skill has a privacy policy and the developer separately obtains verified consent from the parent, adding that most kid skills do not have a privacy policy because they do not collect any personal information. At the time of writing the FTC had not responded to a request for comment on the complaint.

Over in Europe, there has been growing concern over the use of children’s data by online services. A report by England’s children’s commissioner late last year warned kids are being “datafied”, and suggested profiling at such an early age could lead to a data-disadvantaged generation. Responding to rising concerns the UK privacy regulator last month launched a consultation on a draft code of practice for age-appropriate design, asking for feedback on 16 proposed standards online services must meet to protect children’s privacy — including requiring that product makers put the best interests of the child at the fore, deliver transparent T&Cs, minimize data use and set high privacy defaults. The UK government has also recently published a whitepaper setting out its plan for regulating online harms, which has a heavy focus on child safety.
Drone sighting at Germany’s busiest airport grounds flights for about an hour

3:46am, 9th May, 2019
A drone sighting caused all flights to be suspended at Frankfurt Airport for around an hour this morning. The airport is Germany’s busiest by passenger numbers, serving almost 14.8 million passengers in the first three months of this year. In a tweet sent after flights had resumed the airport reported that operations were suspended at 07:27, before the suspension was lifted at 08:15, with flights resuming at 08:18. It added that security authorities were investigating the incident.

“Drone sighting at the airport. Flight operations suspended between 07:27 and 08:15. Investigation and search measures by the security authorities were carried out. Flight operations resumed as of 08:18. Our press release will follow.” — Bundespolizei Flughafen Frankfurt am Main (@bpol_air_fra)

Local press reports suggest more than 100 takeoffs and landings were cancelled as a result of the disruption caused by the drone sighting.

“All flights to Frankfurt (FRA) are currently holding or diverting due to drone activity near the airport” — International Flight Network (@FlightIntl)

It’s the second such incident at the airport after a drone sighting at the end of March also caused flights to be suspended for around half an hour. Drone sightings near airports have been on the increase for years as drones have landed on the market at increasingly affordable prices, as have reports of drone near misses with aircraft.

The Frankfurt suspension follows far more major disruption caused by repeat drone sightings at the UK’s second largest airport, Gatwick Airport, late last year — which caused a series of flight shutdowns and travel misery for hundreds of thousands of people right before the holiday period. The UK government came in for trenchant criticism immediately afterwards, with experts saying it had failed to listen to warnings about the risks posed by drone misuse. A planned drone bill has also been long delayed, meaning new legislation to comprehensively regulate drones has slipped. In response to the Gatwick debacle the UK government quickly pushed through an expansion of the no-fly zone around airports after criticism by aviation experts — beefing up the existing 1km exclusion zone to 5km. It also promised further measures to tackle drone misuse.

In Germany an amendment to air traffic regulations entered into force in 2017 that prohibits drones being flown within 1.5km of an airport. Drones are also banned from being flown in controlled airspace. However, with local press reporting a rise in such incidents — the country’s air traffic control registered 125 drone sightings last year, 31 of which were around Frankfurt — the 1.5km limit looks similarly inadequate.
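These exclusion zones ultimately boil down to a simple radius check around an airport’s reference point. Here is a purely illustrative sketch of that check in Python; the Frankfurt coordinates are approximate, the drone position is invented, and the thresholds are simply the radii cited above, so treat it as a toy rather than how any drone maker actually implements geofencing:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in kilometres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

FRA = (50.038, 8.562)             # approximate reference point for Frankfurt Airport
drone_position = (50.050, 8.600)  # hypothetical drone GPS fix nearby

d = haversine_km(*FRA, *drone_position)
print(f"{d:.2f} km from the airport")
print("inside Germany's 1.5 km no-fly radius:", d <= 1.5)
print("inside a UK-style 5 km exclusion zone:", d <= 5.0)
```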
Non-invasive glucose monitor EasyGlucose takes home Microsoft’s Imagine Cup and $100K

12:36pm, 8th May, 2019
Microsoft’s yearly Imagine Cup student startup competition crowned its latest winner today: EasyGlucose, a non-invasive, smartphone-based method for diabetics to test their blood glucose. It and the two other similarly beneficial finalists presented today at Microsoft’s Build developers conference.

The Imagine Cup brings together winners of many local student competitions around the world with a focus on social good and, of course, Microsoft services like Azure. Last year’s winner was a smart prosthetic forearm that uses a camera in the palm to identify the object it is meant to grasp. (They were on hand today as well, with an improved prototype.) The three finalists hailed from the U.K., India, and the U.S.; EasyGlucose was a one-person team from my alma mater UCLA.

EasyGlucose takes advantage of machine learning’s knack for spotting the signal in noisy data, in this case the tiny details of the eye’s iris. It turns out, as creator Brian Chiang explained in his presentation, that the iris’s “ridges, crypts, and furrows” hide tiny hints as to their owner’s blood glucose levels.

EasyGlucose presents at the Imagine Cup finals.

These features aren’t the kind of thing you can see with the naked eye (or rather, on the naked eye), but by clipping a macro lens onto a smartphone camera Chiang was able to get a clear enough image that his computer vision algorithms were able to analyze them. The resulting blood glucose measurement is significantly better than any non-invasive measure and more than good enough to serve in place of the most common method used by diabetics: stabbing themselves with a needle every couple of hours. Currently EasyGlucose gets within 7 percent of the pinprick method, well above what’s needed for “clinical accuracy,” and Chiang is working on closing that gap (a rough sketch of how accuracy figures like this are commonly quantified appears below). No doubt this innovation will be welcomed warmly by the community, as well as the low cost: $10 for the lens adapter, and $20 per month for continued support via the app.

It’s not a home run, or not just yet: Naturally, a technology like this can’t go straight from the lab (or in this case the dorm) to global deployment. It needs FDA approval first, though it likely won’t have as protracted a review period as, say, a new cancer treatment or surgical device. In the meantime, EasyGlucose has a patent pending, so no one can eat its lunch while it navigates the red tape. As the winner, Chiang gets $100,000, plus $50,000 in Azure credit, plus the coveted one-on-one mentoring session with Microsoft CEO Satya Nadella.

The other two Imagine Cup finalists also used computer vision (among other things) in service of social good. Caeli is taking on the issue of air pollution by producing custom high-performance air filter masks intended for people with chronic respiratory conditions who have to live in polluted areas. This is a serious problem in many places that cheap or off-the-shelf filters can’t really solve. It uses your phone’s front-facing camera to scan your face and pick the mask shape that makes the best seal against your face. What’s the point of a high-tech filter if the unwanted particles just creep in the sides? Part of the mask is a custom-designed compact nebulizer for anyone who needs medication delivered in mist form, for example someone with asthma. The medicine is delivered automatically according to the dosage and schedule set in the app — which also tracks pollution levels in the area so the user can avoid hot zones.
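On that “within 7 percent” figure: the article doesn’t specify how the error is measured, but a common yardstick for glucose monitors is the mean absolute relative difference (MARD) against fingerstick readings. The snippet below is a purely illustrative calculation with invented numbers, not EasyGlucose’s data or its actual evaluation method:

```python
# Hypothetical paired readings: (non-invasive estimate, fingerstick reference), in mg/dL
pairs = [(110, 104), (145, 150), (92, 98), (180, 171), (130, 135)]

def mard(readings):
    """Mean absolute relative difference vs. the reference method, as a percentage."""
    errors = [abs(estimate - reference) / reference for estimate, reference in readings]
    return 100 * sum(errors) / len(errors)

print(f"MARD: {mard(pairs):.1f}%")  # about 4.8% for these made-up values
```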
Finderr is an interesting solution to the problem of visually impaired people being unable to find items they’ve left around their home. By using a custom camera and computer vision algorithm, the service watches the home and tracks the placement of everyday items: keys, bags, groceries, and so on. Just don’t lose your phone, since you’ll need that to find the other stuff. You call up the app and tell it (by speaking) what you’re looking for; then, using the phone’s camera, it determines your location relative to the item you’re looking for, giving you audio feedback that guides you to it in a sort of “getting warmer” style, and a big visual indicator for those who can see it.

After their presentations, I asked the creators a few questions about upcoming challenges, since as is usual in the Imagine Cup, these companies are extremely early stage. Right now EasyGlucose is working well but Chiang emphasized that the model still needs lots more data and testing across multiple demographics. It’s trained on 15,000 eye images but many more will be necessary to get the kind of data they’ll need to present to the FDA.

Finderr recognizes all the images in the widely used ImageNet database, but the team’s Ferdinand Loesch pointed out that others can be added very easily with 100 images to train with. As for the upfront cost, the U.K. offers a 500-pound grant to visually-impaired people for this sort of thing, and they engineered the 360-degree ceiling-mounted camera to minimize the number needed to cover the home.

Caeli noted that the nebulizer, which really is a medical device in its own right, is capable of being sold and promoted on its own, perhaps licensed to medical device manufacturers. There are other smart masks coming out, but he had a pretty low opinion of them (not strange in a competitor, but there isn’t some big market leader they need to dethrone). He also pointed out that in their target market of India (from which they plan to expand later) it isn’t as difficult to get insurance to cover this kind of thing.

While these are early-stage companies, they aren’t hobbies — though admittedly many of their founders are working on them between classes. I wouldn’t be surprised to hear more about them and others from Imagine Cup pulling in funding and hiring in the next year.
Samsung spilled SmartThings app source code and secret keys

8:16am, 8th May, 2019
A development lab used by Samsung engineers was leaking highly sensitive source code, credentials and secret keys for several internal projects — including its SmartThings platform, a security researcher found. The electronics giant left dozens of internal coding projects on a GitLab instance hosted on a Samsung-owned domain, Vandev Lab. The instance, used by staff to share and contribute code to various Samsung apps, services and projects, was spilling data because the projects were set to “public” and not properly protected with a password, allowing anyone to look inside each project and access and download the source code.

Mossab Hussein, a security researcher at Dubai-based cybersecurity firm SpiderSilk who discovered the exposed files, said one project contained credentials that allowed access to the entire AWS account that was being used, including over a hundred S3 storage buckets that contained logs and analytics data. Many of the folders, he said, contained logs and analytics data for Samsung’s SmartThings and Bixby services, but also several employees’ exposed GitLab private tokens stored in plaintext, which allowed him to gain additional access, going from 42 public projects to 135 projects, including many private projects. Samsung told him some of the files were for testing but Hussein challenged the claim, saying source code found in the GitLab repository contained the same code as the SmartThings Android app published in Google Play on April 10. The app has since been updated.

“I had the private token of a user who had full access to all 135 projects on that GitLab,” he said, which could have allowed him to make code changes using a staffer’s own account. Hussein shared several screenshots and a video of his findings for TechCrunch to examine and verify. The exposed GitLab instance also contained private certificates for Samsung’s SmartThings iOS and Android apps. Hussein also found several internal documents and slideshows among the exposed files. “The real threat lies in the possibility of someone acquiring this level of access to the application source code, and injecting it with malicious code without the company knowing,” he said. Through exposed private keys and tokens, Hussein documented a vast amount of access that if obtained by a malicious actor could have been “disastrous,” he said.

A screenshot of the exposed AWS credentials, allowing access to buckets with GitLab private tokens. (Image: supplied)

Hussein, a white-hat hacker and data breach discoverer, reported the findings to Samsung on April 10. In the days following, Samsung began revoking the AWS credentials but it’s not known if the remaining secret keys and certificates were revoked. Samsung still hasn’t closed the case on Hussein’s vulnerability report, close to a month after he first disclosed the issue. “Recently, an individual security researcher reported a vulnerability through our security rewards program regarding one of our testing platforms,” Samsung spokesperson Zach Dugan told TechCrunch when reached prior to publication. “We quickly revoked all keys and certificates for the reported testing platform and while we have yet to find evidence that any external access occurred, we are currently investigating this further.” Hussein said Samsung took until April 30 to revoke the GitLab private keys. Samsung also declined to answer specific questions we had and provided no evidence that the Samsung-owned development environment was for testing.
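For context on how this kind of exposure gets spotted: GitLab’s REST API makes it easy to enumerate every project an instance has marked “public,” which is typically the first step of such an audit. The sketch below is a generic illustration against a placeholder instance URL, not a description of Hussein’s actual methodology:

```python
import requests

GITLAB_URL = "https://gitlab.example.com"  # placeholder; point at the instance being audited

def public_projects(base_url):
    """Yield every project the GitLab instance exposes with 'public' visibility."""
    page = 1
    while True:
        resp = requests.get(
            f"{base_url}/api/v4/projects",
            params={"visibility": "public", "per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        yield from batch
        page += 1

# List what an anonymous visitor can see; auditing the repositories themselves for
# credentials (AWS keys, tokens, certificates) would be the next step.
for project in public_projects(GITLAB_URL):
    print(project["id"], project["path_with_namespace"])
```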
Hussein is no stranger to reporting security vulnerabilities. He recently disclosed a data exposure at an anonymous social networking site popular among Silicon Valley employees — and found an exposed server belonging to scientific journal giant Elsevier. Samsung’s data leak, he said, was his biggest find to date. “I haven’t seen a company this big handle their infrastructure using weird practices like that,” he said.
Live transcription and captioning in Android are a boon to the hearing-impaired

2:57pm, 7th May, 2019
A set of new features for Android could alleviate some of the difficulties of living with hearing impairment and other conditions. Live transcription, captioning, and relay use speech recognition and synthesis to make content on your phone more accessible — in real time.

Announced today at Google’s I/O event in a surprisingly long segment on accessibility, the features all rely on improved speech-to-text and text-to-speech algorithms, some of which now run on-device rather than sending audio to a datacenter to be decoded. The first feature to be highlighted, live transcription, was already mentioned by Google before. It’s a simple but very useful tool: open the app and the device will listen to its surroundings and simply display any speech it recognizes as text on the screen. We’ve seen this in translator apps and devices, and in the meeting transcription highlighted yesterday at Microsoft Build. One would think that such a straightforward tool is long overdue, but in fact everyday circumstances, like talking to a couple of friends at a cafe, can be remarkably difficult for natural language systems trained on perfectly recorded single-speaker audio. Improving the system to the point where it can track multiple speakers and display accurate transcripts quickly has no doubt been a challenge.

Another feature enabled by this improved speech recognition ability is live captioning, which essentially does the same thing as above, but for video. Now when you watch a YouTube video, listen to a voice message, or even take a video call, you’ll be able to see what the person in it is saying, in real time. That should prove incredibly useful not just for the millions of people who can’t hear what’s being said, but also those who don’t speak the language well and could use text support, or anyone watching a show on mute when they’re supposed to be going to sleep, or any number of other circumstances where hearing and understanding speech just isn’t the best option.

Captioning phone calls is something Google CEO Sundar Pichai said is still under development, but the “live relay” feature they demoed on stage showed how it might work. A person who is hearing-impaired or can’t speak will certainly find an ordinary phone call to be pretty worthless. But live relay turns the call immediately into text, and immediately turns text responses into speech the person on the line can hear.

Live captioning should be available on Android Q when it releases, with some device restrictions. Live transcription is available now, though a warning states that it is currently in development. Live relay is yet to come, but showing it on stage in such a complete form suggests it won’t be long before it appears.
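Google hasn’t published the models behind these features, and they live inside Android itself rather than behind an API you can call from a script. But the basic shape of live transcription, feeding audio to an on-device recognizer in chunks and surfacing partial results as they firm up, can be sketched in Python with the open-source Vosk offline recognizer. Vosk is my stand-in here, not what Google uses:

```python
import json
import wave

from vosk import Model, KaldiRecognizer  # pip install vosk; model downloaded separately

model = Model("vosk-model-small-en-us-0.15")   # path to an offline acoustic model
wf = wave.open("conversation.wav", "rb")       # expects 16-bit mono PCM audio
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    chunk = wf.readframes(4000)
    if len(chunk) == 0:
        break
    if rec.AcceptWaveform(chunk):
        # A segment has been finalized, like a caption line settling on screen.
        print("final:", json.loads(rec.Result())["text"])
    else:
        # A live, still-changing guess, like the shifting text in a live transcript.
        print("partial:", json.loads(rec.PartialResult())["partial"])

print("final:", json.loads(rec.FinalResult())["text"])
```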
OnePlus CEO Pete Lau will discuss the future of mobile at Disrupt SF

12:47pm, 7th May, 2019
Founded in late 2013, OnePlus did the impossible, coming seemingly out of nowhere to take on some of the biggest players in mobile. The company has made a name for itself by embracing a fawning fanbase and offering premium smartphone features at budget pricing, even as the likes of Samsung and Apple routinely crack the $1,000 barrier on their own flagships. OnePlus’ history is awash with clever promotions and fan service, all while exceeding expectations in markets like the U.S., where fellow Chinese smartphone makers have run afoul of U.S. regulations. The company’s measured approach to embracing new features has won it a devoted following among Android users.

Over the past year, however, the company has looked to bleeding-edge technology as a way forward. OnePlus was one of the first to embrace in-display fingerprint sensors with last year’s 6T and has promised to be among the first to offer 5G on its handsets later this year. CEO Pete Lau formed the company with fellow Oppo employee Carl Pei. The pair have turned the company into arguably the most exciting smartphone manufacturer of the past decade. OnePlus has big plans on the horizon, too, including further expansion into the Indian market and the arrival of its first TV set in the coming year.

At Disrupt SF (which runs October 2 to October 4), Lau will discuss OnePlus’ rapid ascent and its plans for the future. Tickets are available now.
Marshall continues to impress with new retro portable speakers

10:47am, 6th May, 2019
Marshall, the headphone company and not the loudspeaker company of the same vintage, today announced two new portable speakers. Like the company’s previous offerings, these speakers ooze a retro vibe. The two new speakers, the Stockwell II and Tufton, share that look, but stand tall, literally and figuratively, apart from the rest of Marshall’s speakers as portable models with a vertical orientation, internal batteries, wireless capabilities and a rugged casing that should survive a trip outside.

The large Tufton impresses with clear, powerful sound even when on battery. The highs carry over a solid low-end. It’s heavy. This isn’t a speaker you want to take backpacking, but, if you did, the casing has an IPX4 water-resistant rating, so it’s tough enough to handle most weather. Marshall says the battery lasts up to six hours.

The Stockwell II is much smaller. The little speaker is about the size of an iPad Mini, though as thick as a phone book. The internal battery is good for four hours and the casing is still tough, though it sports an IPX2 rating, so it’s not as durable as the Tufton. The speaker is a bit smaller and so is the sound. The Stockwell II is a great personal speaker, but it doesn’t produce a pounding sound like the Tufton. Use the Stockwell II for a quiet campfire and the Tufton for a backwoods bonfire.

Sadly, these speakers lack Google Assistant or Amazon Alexa integration. Users either have to connect a device through a 3.5mm port or Bluetooth. I’ve been a fan of every Marshall speaker I’ve tried. For my money, they feature a great balance of sound and classic design. Each one I’ve tried lives up to the Marshall name and these two new speakers are no different. Portability doesn’t come cheap, though. These speakers cost a bit more than their stationary counterparts. The small Stockwell II retails for $249 while the large Tufton is $399.
Life-size robo-dinosaur and ostrich backpack hint at how first birds got off the ground

8:17pm, 2nd May, 2019
Everyone knows birds descended from dinosaurs, but exactly how that happened is the subject of much study and debate. To help clear things up, these researchers went all out and just straight up built a robotic dinosaur to test their theory: that these proto-birds flapped their “wings” well before they ever flew.

Now, this isn’t some hyper-controversial position or anything. It’s pretty reasonable when you think about it: natural selection tends to emphasize existing features rather than invent them from scratch. If these critters had, say, moved from being quadrupedal to being bipedal and had some extra limbs up front, it would make sense that over a few million years those limbs would evolve into something useful. But when did it start, and how?

To investigate, Jing-Shan Zhao of Tsinghua University in Beijing looked into Caudipteryx, a ground-dwelling dinosaur with feathered forelimbs that could be considered “proto-wings.” Based on the well-preserved fossil record of this bird-dino crossover, the researchers estimated a number of physiological metrics, such as the creature’s top speed and the rhythm with which it would run. From this they could estimate forces on other parts of the body — just as someone studying a human jogger would be able to say that such and such a joint is under this or that amount of stress. What they found was that, in theory, these “natural frequencies” and biophysics of the Caudipteryx’s body would cause its little baby wings to flap up and down in a way suggestive of actual flight. Of course they wouldn’t provide any lift, but this natural rhythm and movement may have been the seed which grew over generations into something greater.

To give this theory a bit of practical punch, the researchers then constructed a pair of unusual mechanical items: a pair of replica Caudipteryx wings for a juvenile ostrich to wear, and a robotic dinosaur that imitated the original’s gait. A bit fanciful, sure — but why shouldn’t science get a little crazy now and then? In the case of the ostrich backpack, they literally just built a replica of the dino-wings and attached it to the bird, then had the bird run. Sensors on board the device verified what the researchers observed: that the wings flapped naturally as a result of the body’s motion and vibrations from the feet impacting the ground.

The robot is a life-size reconstruction based on a complete fossil of the animal, made of 3D-printed parts, to which the ostrich’s fantasy wings could also be affixed. The researchers’ theoretical model predicted that the flapping would be most pronounced as the speed of the bird approached 2.31 meters per second — and that’s just what they observed in the stationary model imitating gaits corresponding to various running speeds.

As the researchers summarize: These analyses suggest that the impetus of the evolution of powered flight in the theropod lineage that lead to Aves may have been an entirely natural phenomenon produced by bipedal motion in the presence of feathered forelimbs.

Just how legit is this? Well, I’m not a paleontologist. And an ostrich isn’t a Caudipteryx. And the robot isn’t exactly convincing to look at. We’ll let the scholarly community pass judgment on this paper and its evidence (don’t worry, it’s been peer-reviewed), but I think it’s fantastic that the researchers took this route to test their theory.
A few years ago this kind of thing would have been far more difficult to do, and although it seems a little silly when you watch it (especially in gif form), there’s a lot to be said for this kind of real-life tinkering when so much of science is occurring in computer simulations.
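For readers who want intuition for why one particular running speed stands out, the underlying idea is resonance: the periodic forcing of a running gait excites passive flapping most strongly when its frequency lines up with the natural frequency of the wing-body system. Below is a toy Python sketch of that effect. Every number is invented for illustration (picked so the toy peak lands near the speed the researchers report) and none of it comes from the paper’s actual model:

```python
import numpy as np

# Toy model: treat a passive "wing" as a damped harmonic oscillator driven by
# the up-and-down forcing of a running gait. All values below are made up.
NATURAL_HZ = 4.6        # assumed natural frequency of the wing-body system
DAMPING_RATIO = 0.15    # assumed damping ratio
STRIDE_LENGTH_M = 0.5   # assumed: one gait cycle per half metre travelled

def relative_amplitude(drive_hz, natural_hz=NATURAL_HZ, zeta=DAMPING_RATIO):
    """Steady-state response amplitude of a driven, damped harmonic oscillator."""
    r = drive_hz / natural_hz
    return 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)

for speed in np.arange(1.0, 3.6, 0.5):          # running speed in m/s
    gait_hz = speed / STRIDE_LENGTH_M           # forcing frequency implied by the gait
    print(f"{speed:.1f} m/s -> relative flap amplitude {relative_amplitude(gait_hz):.2f}")
# The amplitude peaks where the gait frequency matches NATURAL_HZ, i.e. around 2.3 m/s here.
```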