Week in Tech: N is for Nougat, Now TV, and nano-cams

A Vine time for TV shows

Adult Swim has released Vine's first full TV show, the sensible-sounding Narg Nallin' Sclopio Peepio. If you're thinking that Vine's six-second limit means a pretty short show, you're right – but Adult Swim hasn't let itself be limited to six seconds, so the programme weighs in at a millennial-attention-span-testing 10 minutes. As Jon Porter says, "it's not entirely clear why the platform was chosen, other than due to its reach. With 200 million monthly active users Vine has a similarly sized user base to long-form video site Vimeo, but is dwarfed by the likes of YouTube". However, Vine did extend its maximum video length to 140 seconds last week, which suggests that longer videos may become more common on the service.

Bring on the brain cams

Fancy a camera so small that you can inject it directly into your brain? No, us neither, but scientists have invented one anyway. As Duncan Geere reports: "A team of researchers at the University of Stuttgart has developed a teeny-tiny little camera that could be used in medicine, security monitoring and for miniature robots." As much as we hate the phrase 'paradigm shift', it's a paradigm shift in the way small cameras can be made: the lens is just 0.1mm wide, and the entire system fits comfortably inside the needle of a standard syringe. According to its creators: "The unprecedented flexibility of our method paves the way towards printed optical miniature instruments such as endoscopes, fibre-imaging systems for cell biology, new illumination systems, miniature optical fibre traps, integrated quantum emitters and detectors, and miniature drones and robots with autonomous vision."

Do androids see electric sheep?

How's this for science: humans and robots see things completely differently, but still arrive at the same answers. Over to Duncan Geere again: "it turns out that when an AI looks at a picture, it sees totally different things to humans. In experiments conducted at Facebook and Virginia Tech, researchers found significant differences between what humans and computers looked at when asked a simple question about an image". The neural networks they tested didn't see what the humans saw, but "the neural networks still turned out to be pretty good at getting the answers to the questions right. Which raises the question of how they knew". And the question of what else the robots aren't telling us…