A new lineup of inductees to the much-coveted International Photography Hall of Fame was announced last Thursday for World Photography Day. Among the eight culture-shaping individuals was an unlikely but familiar face: the late Steve Jobs, co-founder and visionary of Apple Inc.
There is no question about the iPhone’s impact on the photography world. That tiny camera and sensor revolutionised the snapshot.
Cutting the imperfections from our final selection is something we as photographers are all familiar with. We’re used to striving for clean, crisp, tack-sharp images that can only be bettered by the next model in line. Some of us, though, have begun to embrace those imperfections, even, dare I suggest, invoke them with the *groan* Instagram filters.
Some wise and sombre words from English musician Brian Eno on struggling to embrace the odd quirks of the medium currently at the forefront of technology.
Digital imaging technology is steadily pursuing that line of perfection. New camera models now focus on refinements that only a few photographers require, leaving any remaining errors either intentional or devastating. A ruined digital file can’t be repaired as easily as a faded print.
Off the top of our Phogotraphy heads, the only obvious mainstream digital camera imperfection we can recall from recent years is the purple fringing on the iPhone 5. If you can think of any others that may one day be considered a unique feature, let us know in the comments section or via social media.
I’m going to come clean with you guys, I don’t even know what Google Photos is. Sure, I’m aware Google does a lot of stuff and in all likelihood has a cloud storage service for photographs that’s called Google Photos, but I had no idea it was such a big deal.
During its September 29th keynote, Google announced that Google Photos now holds 50 billion uploads consisting of photos, animations and videos.
That’s enough to cover several selfies of every human being alive with a few left over. I just can’t understand how the number got so high in such a short time.
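A quick back-of-the-envelope check shows why the “several selfies each” line holds up; note the world population figure below is our own rough assumption, not something from the keynote:

```python
# Sanity check on Google's 50 billion upload figure.
# A world population of ~7.3 billion is an assumed round number,
# not a figure from the keynote itself.
uploads = 50_000_000_000
population = 7_300_000_000

per_person = uploads / population
print(f"Roughly {per_person:.1f} uploads per person alive")
```

And of course only a fraction of the world actually uses the service, so the per-user number must be far higher still.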
Yesterday evening I was lucky enough to catch the tail end of BBC Radio 4’s Four Thought featuring Charles Leadbeater, British author of We Think and former political advisor to Tony Blair. The episode as a whole addressed the struggle we all have with ‘The Whirlpool Economy’ and how we can come to terms with working longer hours and achieving less.
Towards the end of the monologue, Leadbeater uses an analogy we are all very familiar with by now, relating the ubiquitous use of camera phones to the devaluation of photography as a whole. However, somewhat refreshingly, he flips the thought on its head and ends up calling the camera phone “a little empathy-making machine.”
It’s increasingly apparent that artificial intelligence’s inevitable ascension as the dominant species on our planet (and beyond) will not come, as some have predicted, in an instant, but as a slow, invisible growth. The latest advancement in AI comes in the subdued revelation by Facebook that it now has an algorithm that can tell us all apart from the backs of our heads. The announcement of DeepFace came and went mostly unnoticed.
The final algorithm was revealed and demonstrated by Facebook last week at the CVPR 2015 conference in Boston. It’s been reported that Yann LeCun, head of Facebook’s artificial intelligence division, said it achieved an 83% success rate after reviewing 60,000 public photographs of 2,000 people from Flickr and running them through a sophisticated neural network. The figure rises significantly, to 93.4%, if a frontal face is recognised, making it possibly as accurate as the human brain.
The algorithm works quite simply: it recognises silhouettes, clothes, hair colour and other distinguishing features by which a person may be identified, and compares them with other photographs. LeCun states that it easily recognises Mark Zuckerberg because he’s always wearing the same grey T-shirt.
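Facebook hasn’t published code, but the matching idea described above — turn each detected person into a feature vector and compare vectors across photos — can be sketched in a few lines. The feature values, names and threshold below are purely illustrative assumptions, not anything from Facebook’s system:

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative feature vectors: in a real system these would come from a
# neural network encoding silhouette, clothing, hair colour and so on.
gallery = {
    "zuckerberg": np.array([0.9, 0.1, 0.8]),  # e.g. always that same grey T-shirt
    "lecun":      np.array([0.2, 0.7, 0.3]),
}

def identify(query, threshold=0.9):
    # Compare the query against every known person and keep the best match;
    # fall back to "unknown" if nothing is similar enough.
    best_name, best_score = None, -1.0
    for name, vec in gallery.items():
        score = cosine_similarity(query, vec)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else "unknown"

print(identify(np.array([0.88, 0.12, 0.79])))  # → "zuckerberg"
```

The threshold is what separates “rises to 93.4% on frontal faces” from false matches: tighten it and you miss people, loosen it and you mistake strangers for friends.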
Thankfully, Yann LeCun recognises the romp to stardom AI is currently having and warns we must keep a watchful eye:
“There is little doubt that future progress in computer vision will require breakthroughs in unsupervised learning, particularly for video understanding. But what principles should unsupervised learning be based on?”
I for one would prefer not to be recognised by my behind; however, if this is the future our society holds, it’ll spur me on to dress better and certainly lose a few pounds to confuse those pesky Facebook neural networks.
Folks, we’ve officially come full circle. The gigantic advancements in technology made since Daguerre first fixed an image, Fox Talbot invented the reproducible image and Kirsch invented the pixel still have photographers like us grasping for our roots. “The Camera Obscura is Back.”
And like anything in this post-post-modern world, it needs a tacky sales spiel to sell it (we’ll get to that later).
Before there were photographs, we painted to make a record of people, places and ideas. Many artists would use a camera obscura to aid them in this process, as it provided a still frame to copy, trace over or interpret more easily than their own sight. Damn peripheral vision!
David Hockney pointed out that many masterpieces must have been created using this ‘old camera technology’, much to the horror of many art historians.
Then along came the 1800s and something called ‘fixing the image’. Scientists, chemists and hobbyists (there were no photographers back then, obviously) started experimenting with different materials that would take the light exposed onto the back of the camera obscura and fix it in place. Niépce was the first to do this successfully, in 1827, although modern photography is often attributed to Daguerre and Fox Talbot over a decade later.
Excited? Well, get ready to have your visions of grandeur dashed by an awful ’90s-style infomercial that not only teaches you how to suck eggs, but at the same time devalues the premise of the very idea it’s selling.
Two and a half years ago the web was abuzz with news that Mark Zuckerberg, CEO of Facebook, had struck a deal with the owner-founders of Instagram to buy the photo-sharing service for a billion dollars, made up of $FB stock options and hard cash. For the most part it was considered a very risky move by Facebook, as Instagram wasn’t even turning a profit. Then again, purchasing completely separate entities rather than innovating itself is something Facebook has made a name for.
The general sentiment at the time was dampened, as not long had passed since Yahoo had completely desecrated the once king of photo-sharing sites, Flickr, by rolling out a complete redesign to the dismay of its users. If one of the major players in social media can do that to Flickr, what will Facebook do to Instagram?
Connor Adams Sheets for The International Business Times summed up how much of the industry felt about the deal at the time:
“overpaying for companies like Instagram won’t help Facebook maintain its dominant market share, as a billion bucks for a fun (and admittedly useful) photo app represents a huge overestimation of how much the company is really worth.”
Oh, how wrong we all were. Today Facebook announced, via the brokers Citigroup, that its acquisition of the selfie, filtered, food-fest site is now estimated to be worth a very cool $35 billion. The maths involved has left a few commentators a little sceptical, but even if it’s only half that value, it is an impressive investment.
Next comes the real test. As you may recall, earlier this year Mark made another photography-related (yes, perhaps a little tenuous) acquisition by bringing on board the mobile messaging app WhatsApp for $19 billion! Let’s see what Citigroup have to say about that next quarter.
Surveillance just added another weapon to its growing arsenal: identification by wobble, or, as described in the recently published paper Egocentric Video Biometrics, by “a person’s gait.”
Using data compiled from videos created by GoPro cameras mounted on the helmets of 34 different subjects, researchers Yedid Hoshen and Shmuel Peleg of the Hebrew University of Jerusalem were able to identify unique signatures in the differentiating wobble from just four seconds of camera footage. This, they say, will compromise egocentric (body-mounted) camera wearers’ anonymity, although it could have some benevolent uses: a newly purchased camera could be tailored to recognise only your movements, which may prevent some thefts, or enable user analytics on video-sharing websites.
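The paper’s actual pipeline isn’t reproduced here, but the core intuition — a wearer’s gait imposes a periodic wobble on the footage, and the frequency profile of that wobble acts as a signature — can be sketched. The frame rate, synthetic motion signals and matching threshold below are all illustrative assumptions (the sine frequencies are chosen to sit exactly on FFT bins for this sketch):

```python
import numpy as np

FPS = 30          # assumed frame rate: four seconds -> 120 samples
DURATION_S = 4

def wobble_signature(vertical_motion):
    # Magnitude spectrum of the vertical camera displacement; a walker's
    # step rate shows up as a dominant peak in the 1-2 Hz region.
    spectrum = np.abs(np.fft.rfft(vertical_motion - np.mean(vertical_motion)))
    return spectrum / np.linalg.norm(spectrum)

def same_wearer(sig_a, sig_b, threshold=0.95):
    # Normalised correlation between two wobble signatures.
    return float(np.dot(sig_a, sig_b)) >= threshold

t = np.arange(FPS * DURATION_S) / FPS
alice_walk_1 = np.sin(2 * np.pi * 1.75 * t)        # ~1.75 Hz step wobble
alice_walk_2 = np.sin(2 * np.pi * 1.75 * t + 0.5)  # same gait, different phase
bob_walk     = np.sin(2 * np.pi * 1.25 * t)        # a slower gait

sig_a1, sig_a2, sig_b = map(wobble_signature, (alice_walk_1, alice_walk_2, bob_walk))
print(same_wearer(sig_a1, sig_a2))  # same person: True
print(same_wearer(sig_a1, sig_b))   # different person: False
```

The unsettling part is how little footage the signature needs: four seconds is enough for a stable spectrum, and no face ever appears in frame.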
The experiment has so far only been performed with baseball-cap-mounted GoPro cameras, but the researchers plan to expand the tests to include Google Glass and body-mounted surveillance cameras, such as those soon to be in use now that an order of 50,000 units for US police forces has been approved.
Perhaps we can finally learn the truth behind the Italian GoPro camera robbery, in which an armed robber enters a supermarket and terrifies the public whilst looting. If you’ve not seen that, you’re in for a thrill:
We’re one step closer to Completing the Circle, although I have to admit to not considering this method of surveillance before. Scary stuff or much needed improvements in tech? Leave your comments below.