
Saturday, April 21, 2018

Tour the Space Station in VR with This Amazing 3D, 360-Degree Video | Tech - Space.com

Follow on Twitter as @HarrisonTasoff
"The National Geographic Channel has revealed the first 3D, 360-degree video of space as a part of its new documentary series "One Strange Rock" reports 

We took a virtual tour with the astronauts aboard the International Space Station while hearing their thoughts on the enormity of space, and it left us speechless.

First-Ever 3D VR Filmed in Space | One Strange Rock


A special delivery arrived at the space station last November: a state-of-the-art Vuze VR camera. European Space Agency astronaut Paolo Nespoli brought the camera with him during his daily routine on the station. Nespoli received unique training on the device from series filmmaker Darren Aronofsky himself, who gave the Italian astronaut a crash course in VR filming via Skype. To experience the full impact of the video, watch it on your smartphone while wearing your favorite VR headset.

The video begins in low Earth orbit. An instrumental prelude plays as the space station approaches. The welcoming voice of retired Canadian astronaut Chris Hadfield relates how his 166 days in space changed his world view, both literally and metaphorically. Hadfield is soon joined by former NASA astronauts Mae Jemison, Mike Massimino and Nicole Stott, all of whom discuss their experience of Earth from the rarified vantage point of the space station. [The International Space Station: Inside and Out (Infographic)]

Nespoli carries the trusty camera through the tight quarters of the space station, providing viewers with a 360-degree perspective of life aboard the outpost in the sky. Wires, fixtures and equipment cover nearly every surface of the cabins, but that's hardly a problem when you can float past them in microgravity. Nespoli also recorded super-high-definition footage of NASA astronaut Peggy Whitson, the first woman to command the space station, including her final day in space at the end of Expedition 52. The sequences will appear in the series' final episode, which airs on Monday, May 28 at 10 p.m. EDT/9 p.m. CDT.
Read more...

Source: Space.com and National Geographic Channel (YouTube)


If you enjoyed this post, make sure you subscribe to my Email Updates!

Seven Artificial Intelligence Advances Expected This Year | Technology - Forbes

"Artificial intelligence (AI) has had a variety of targeted uses in the past several years, including self-driving cars" continues Forbes Technology Council.

Photo: Shutterstock

Recently, California changed the law that required driverless cars to have a safety driver. Now that AI is getting better and able to work more independently, what's next?
We asked seven technology experts from the Forbes Technology Council their thoughts on the advances and implementations in AI that they expect to see in the year ahead. All the responses touched on how AI can help humans now, instead of much further down the road. This is what they had to say.

1. Improved Patient Health Outcomes 

I expect that we will see an increased focus on improving health outcomes using artificial intelligence. Patients are producing significant amounts of health data with mobile devices and connected wearables, and providers' electronic health records generate enormous amounts of information. Applying artificial intelligence to information from both patients and providers can proactively identify health conditions that might otherwise go undetected until later. - Meghann Chilcott, OrderInsite, LLC
Read more...  

Source: Forbes  


If you enjoyed this post, make sure you subscribe to my Email Updates!

What You Need to Know About Artificial Intelligence | Parade

Photo: Kathleen McCleary
Kathleen McCleary, Contributor, says, "Artificial intelligence, the top job trend, is here to stay and it’s changing the face of work."
 
Photo: iStock
If you’ve recently chatted online with customer service, had an X-ray taken or applied for a loan, you’ve likely experienced A.I., including “chatbots,” diagnostic imaging machines and loan algorithms. But the new wave of technology doesn’t necessarily mean unemployment.

“People fear a lot of jobs will be destroyed, but the reality is jobs will change as people team up with technology,” says Andrew Chamberlain, Ph.D., chief economist with job search website Glassdoor. A recent report by McKinsey Global Institute (MGI) found that up to 32 percent of the U.S. workforce (166 million people) will have to move out of their current occupational categories to find work over the next 12 years, but they’ll be taking on different jobs, including some that never existed before.

“Everybody’s job is going to look different by 2030,” says Susan Lund, partner with MGI and an author of the report. Think back to 1980, before personal computers and the internet. PCs have created 19.5 million new jobs in the U.S., from software developers to semiconductor manufacturers. At the same time, 3.5 million jobs have dried up, including typists, secretaries and typewriter manufacturers. Still, the economy has gained 16 million jobs over the past 35 years, thanks to new technology...

Natalie Choate, director of media relations and partnerships for the Texas Tribune, worked for the organization for more than five years in fundraising and membership before moving over to her current job. “I was stepping out into the unknown,” she says. “It was a completely different job—and different jargon.” The Tribune was willing to invest in the time it took her to learn her new gig in order to keep her on staff. “I have very patient co-workers who helped me get from point A to point B,” she says.
Read more...  

Source: Parade


If you enjoyed this post, make sure you subscribe to my Email Updates!

Changing the game: Machine learning in healthcare | Healthcare IT News

"When EHRs can learn – gather and remember – what works best for each user, they can attain maximum efficiency" according to Paul Black, CEO of Allscripts.
 

Photo: Healthcare IT News (blog)

As we live in the new world of quality, value-based care, we must be able to draw more insights and conclusions from ever-increasing amounts of information. We have the data, now we must put it to work. When we combine all of this data with machine learning, we are equipped to make smarter decisions. We have the power to transform healthcare – from the way we use electronic health records to the way we predict and deliver care.  

A game changer for EHRs 
Most EHRs are built on technology that is 20 or 30 years old. Generally, EHRs have kept up with rapid changes in healthcare by making incremental improvements over time. But it is challenging to retrofit EHRs to take full advantage of new innovations.

EHRs must do more than store data. They should be smart enough to deliver the right information at the right time, at the point of care. When an EHR is powered by machine learning, it can pre-populate information based on usage patterns and deliver preference reminders, constantly surveilling trends by user and organization to create opportunities for more effective care...
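The "usage patterns" idea above can be sketched very simply. The following Python toy is not Allscripts' implementation (the article gives no technical detail); it only illustrates how even a basic frequency model over a clinician's past entries could drive pre-populated suggestions. The order names and log format are invented for illustration.

```python
from collections import Counter

# Hypothetical usage log: entries a clinician has ordered before.
# A real EHR model would be far richer; this just shows the idea of
# pre-populating defaults from usage patterns.
usage_log = ["metformin 500mg", "lisinopril 10mg", "metformin 500mg",
             "atorvastatin 20mg", "metformin 500mg"]

def suggest_defaults(log, top_n=2):
    # Rank past entries by frequency and surface the most common ones
    # as pre-populated suggestions at the point of care.
    return [entry for entry, _ in Counter(log).most_common(top_n)]

print(suggest_defaults(usage_log))  # most frequent entries first
```

A production system would weight recency, patient context and organization-wide trends rather than raw counts, but the shape of the problem is the same.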

A game changer for population health, predictive modeling 
Machine learning is also empowering us to analyze patient data at a level never before possible. We can now transform data into insights and actionable information.

Just think how a "data lake," where we are able to store millions of de-identified patient records and then structure and analyze that data to study problems that are meaningful to health care, could transform diabetes care, for example.

We now have the power to compare things like blood sugar levels, body mass index, age and other risk factors and analyze treatment outcomes...
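To make the cohort-comparison idea concrete, here is a toy Python sketch over hypothetical de-identified records. The field names, values and the binary "improved" outcome are all invented for illustration and bear no relation to any real data lake or EHR schema.

```python
# Hypothetical de-identified records; fields are illustrative only.
patients = [
    {"a1c": 8.9, "bmi": 31.2, "age": 58, "treatment": "A", "improved": True},
    {"a1c": 7.1, "bmi": 27.5, "age": 49, "treatment": "B", "improved": True},
    {"a1c": 9.4, "bmi": 33.0, "age": 63, "treatment": "A", "improved": False},
    {"a1c": 6.8, "bmi": 24.9, "age": 41, "treatment": "B", "improved": True},
]

def outcome_rate_by_treatment(records):
    # Group records by treatment and compare the share that improved --
    # the kind of cohort comparison the text describes.
    by_treatment = {}
    for r in records:
        by_treatment.setdefault(r["treatment"], []).append(r["improved"])
    return {t: sum(flags) / len(flags) for t, flags in by_treatment.items()}

print(outcome_rate_by_treatment(patients))  # {'A': 0.5, 'B': 1.0}
```

Real analyses would of course adjust for the risk factors mentioned above (blood sugar, BMI, age) rather than comparing raw rates, but the grouping-and-comparing pattern is the core of it.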

The way of the future 
...extraordinarily exciting set of capabilities today that didn't exist a decade ago. It enables computers to handle greater amounts of work than human beings can undertake, and will become increasingly important in this era of consumerization...
Read more...

Source: Healthcare IT News (blog)


If you enjoyed this post, make sure you subscribe to my Email Updates!

Friday, April 20, 2018

Machine-learning system processes sounds like humans do | MIT News

"Neuroscientists train a deep neural network to analyze speech and music" says Anne Trafton, MIT News Office.

MIT neuroscientists have developed a machine-learning system that can process speech and music the same way that humans do.
Photo: Chelsea Turner/MIT
Using a machine-learning system known as a deep neural network, MIT researchers have created the first model that can replicate human performance on auditory tasks such as identifying a musical genre.

This model, which consists of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks, was used by the researchers to shed light on how the human brain may be performing the same tasks.

“What these models give us, for the first time, is machine systems that can perform sensory tasks that matter to humans and that do so at human levels,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT and the senior author of the study. “Historically, this type of sensory processing has been difficult to understand, in part because we haven’t really had a very clear theoretical foundation and a good way to develop models of what might be going on.”

The study, which appears in the April 19 issue of Neuron, also offers evidence that the human auditory cortex is arranged in a hierarchical organization, much like the visual cortex. In this type of arrangement, sensory information passes through successive stages of processing, with basic information processed earlier and more advanced features such as word meaning extracted in later stages.

MIT graduate student Alexander Kell and Stanford University Assistant Professor Daniel Yamins are the paper’s lead authors. Other authors are former MIT visiting student Erica Shook and former MIT postdoc Sam Norman-Haignere. 

Modeling the brain
When deep neural networks were first developed in the 1980s, neuroscientists hoped that such systems could be used to model the human brain. However, computers from that era were not powerful enough to build models large enough to perform real-world tasks such as object recognition or speech recognition.

Over the past five years, advances in computing power and neural network technology have made it possible to use neural networks to perform difficult real-world tasks, and they have become the standard approach in many engineering applications. In parallel, some neuroscientists have revisited the possibility that these systems might be used to model the human brain.
Read more... 

Journal Reference:
Alexander J.E. Kell, Daniel L.K. Yamins, Erica N. Shook, Sam V. Norman-Haignere, Josh H. McDermott. A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy. Neuron, 2018; DOI: 10.1016/j.neuron.2018.03.044

Source: MIT News


If you enjoyed this post, make sure you subscribe to my Email Updates!

TensorFlow with JavaScript Brings Deep Learning to the Browser | InfoQ.com

Alexis Perrier, Data Scientist, informs, "At the recent TensorFlow Dev Summit 2018, Google announced the release of TensorFlow.js, a JavaScript implementation of TensorFlow, its open-source deep-learning framework. TensorFlow.js allows training models directly in the browser by leveraging the WebGL JavaScript API for faster computations."

Machine Learning in JavaScript (TensorFlow Dev Summit 2018)
 


TensorFlow.js is an evolution of deeplearn.js, a JavaScript library released by Google in August 2017. Deeplearn.js was born out of the success of the TensorFlow Playground, an interactive visualization of neural networks written in TypeScript.

TensorFlow.js has four layers: the WebGL API for GPU-supported numerical operations, the web browser for user interactions, and two APIs: Core and Layers. The low-level Core API corresponds to the former deeplearn.js library. It provides hardware-accelerated linear algebra operations and an eager API for automatic differentiation. The higher-level Layers API is used to build machine-learning models on top of Core. The Layers API is modeled after Keras and implements similar functionality. It also allows developers to import models previously trained in Python with Keras or TensorFlow SavedModels and use them for inference or transfer learning in the browser.

With TensorFlow.js, machine-learning models can be used in the browser in three ways: by importing pre-trained models and using them for inference only, by training models from scratch directly in the browser, or by using transfer learning to first adapt imported models to the user's context and then use these improved models for inference.

As Nikhil Thorat and Daniel Smilkov, members of the TensorFlow team, point out in their announcement video (embedded at the top of this post), running TensorFlow in the browser has several advantages: the infrastructure and set of requirements are simplified as the need for background API requests is removed; the available data is richer in nature thanks to newly accessible sensors, such as the webcam and microphone on computers or GPS and gyroscope on mobile devices; and the data also remains on the client side, which addresses privacy concerns.
Read more...

Source: InfoQ.com and TensorFlow Channel (YouTube)


If you enjoyed this post, make sure you subscribe to my Email Updates!

What Is Deep Learning and How Does it Relate to AI? | CMSWire

Photo: Erika Morphy
"Google’s AlphaGo made history in May 2017 when it defeated Ke Jie, the world’s reigning champion of the ancient Chinese game Go" summarizes Erika Morphy, New Orleans-based journalist.

Photo: Vlad Tchompalov

It was the first computer program to defeat a professional human Go player, much less a world champion. Later that year, Google introduced AlphaGo Zero, an even more powerful iteration of AlphaGo.

Anyone wanting to understand the difference between artificial intelligence and deep learning can start by understanding the difference between AlphaGo and AlphaGo Zero. With AlphaGo, Google trained the original AlphaGo to play by teaching it to look at data from the top players, said Avi Reichental, CEO of XponentialWorks. Within a short period of time it was able to beat almost all standing champions hands down, he said. But with AlphaGo Zero, instead of having an algorithm look at lots of data from other players, Google taught the system the rules of the game and let the algorithm learn how to improve on its own, Reichental said. The end result, he said, is a computational power unparalleled in speed and intelligence.

Without a doubt artificial intelligence is becoming more common in our daily and business lives. It is making appearances in voice assistants and chatbots, as well as in complex business applications. As it does, it is important to learn to distinguish among the different types of AI, such as deep learning.

Defining AI and Its Many Iterations 
Starting with the basics, AI is a concept of getting a computer or machine or robot to do what previously only humans could do, said Mark Stadtmueller, VP of Product Strategy at Lucd. Machine learning is a type of AI where algorithms are used to analyze data, he continued. “Machine learning analysis involves looking for patterns within the data and creating and refining a model/equation that best approximates the data pattern. With this model/equation, predictions can be made on new data that follows that data pattern.”

Neural networks are a type of machine learning in which brain neuron behavior is approximated to model many input values to determine or predict an outcome, Stadtmueller said. When many layers of neurons are used, it is called a deep neural network. “Deep neural networks have been very successful in improving the accuracy of speech recognition, computer vision, natural language processing and other predictive capabilities,” he said. When using deep neural networks, people refer to it as deep learning, Stadtmueller said. “So deep learning is the act of using a deep neural network to perform machine learning, which is a type of AI.”
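Stadtmueller's definitions can be made concrete with a toy forward pass in Python. This is purely illustrative: the weights below are arbitrary made-up numbers, not a trained model, but the structure (layers of weighted sums plus a nonlinearity, each layer feeding the next) is exactly what "deep neural network" refers to.

```python
import math

# A "layer" of neurons: each output neuron takes a weighted sum of all
# inputs, then applies a sigmoid activation (the brain-inspired part).
def layer(inputs, weights):
    return [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

def deep_network(x):
    # Three stacked layers make this a (very small) *deep* network:
    # each layer's output becomes the next layer's input.
    h1 = layer(x,  [[0.5, -0.2], [0.1, 0.8], [-0.4, 0.3]])  # 2 inputs -> 3 neurons
    h2 = layer(h1, [[0.7, -0.1, 0.2], [0.3, 0.5, -0.6]])    # 3 -> 2
    out = layer(h2, [[1.2, -0.9]])                           # 2 -> 1
    return out[0]  # a single prediction between 0 and 1

print(deep_network([0.9, 0.1]))
```

Training, which this sketch omits entirely, is the process of adjusting those weight values so the output matches known examples; with enough layers and data, that is deep learning.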
Read more...

Source: CMSWire


If you enjoyed this post, make sure you subscribe to my Email Updates!

Thursday, April 19, 2018

Do You Really Need An MBA? | Career Advice - Refinery29

Photo: Judith Ohikuare
"Mark Zuckerberg being hauled before Congress is a signal to some people that at 33 years old, the tech founder is finally 'growing up.' So, maybe it's also time to retire the idea that successful entrepreneurs should drop out of college to get ahead. (After all, companies like Facebook are called 'unicorns' for a reason.)" says Judith Ohikuare, Work & Money Writer at Refinery29.

Photo: Christy Kurtz
Those of us who didn’t start a multimillion-dollar company in our dorm rooms have to consider other paths to becoming business leaders. It's wise to think twice before spending tens, if not hundreds, of thousands of dollars on a master's in business, but it's also important to remember that an MBA isn't meant to be a prerequisite for starting a business at all. The 2018 Alumni Perspectives Survey from the Graduate Management Admission Council (GMAC) found that 79% of b-school alumni worked for another company and 10% were self-employed. So, this degree usually comes in handy for people who have dreams of being high-level business executives in a variety of industries.

If you're toying with the idea of getting an MBA but aren't sure if the time and financial commitment are worth it, here are some things to consider.

What Is It For?
Writing in Harvard Business Review a few years ago, executive coach Ed Batista said the three key uses for an MBA were: practical leadership and management skills, a job marketplace credential, and access to a vast alumni network. Those may sound like abstract rewards, but in some circumstances they can reap concrete benefits: MBA alums can generally expect higher salaries than other graduates, with a median base salary of $115,000, depending on job level and location.

How Much Will It Cost? 
There's no point being delicate about it: MBAs are expensive.

To attend a program at a school like Stanford, you can expect to exceed the $100,000 mark over your two years there. Financial aid, in the form of loans and fellowships, is available, but how much you get of either naturally depends on your individual assets. "On average, people receive $36,000 or $37,000 in fellowships per year. But that average is pretty meaningless because some people get full rides, and others get zero," Khan says. "If you work in a pretty low-income job, you're going to get a much higher financial aid package in terms of the fellowship proportion to loans."...

Is There Another Way In?
The gospel of education can sometimes make it seem like getting any degree, and as many as possible, is necessary to advance. But there's no point in wasting your time and money on a costly credential that yields little benefit. Khan says online MBA programs, an increasingly popular option, may be worthwhile for people who want to gain exposure to specific subject matter — like accounting, for example. (Just do your homework on online programs especially.) On the other hand, if you simply want to learn a skill, you can also enroll in an accounting class without going the full-on school route.
Read more...

Source: Refinery29


If you enjoyed this post, make sure you subscribe to my Email Updates!

Lack of security skills has become a drag on Australia’s digital transformation | CSO Australia

Photo: David Braue
Follow on LinkedIn | Twitter | Blog
"A lack of cybersecurity skills has forced more than half of Australian IT decision-makers to slow down their cloud rollouts, according to new research that has redoubled the urgency of strategies for building and deploying Australia’s cybersecurity capabilities" writes (CSO Online).
 
Photo: CSO Australia

As new initiatives woo cybersecurity talent, Australia’s cybersecurity workforce is falling behind global benchmarks – and cloud-first initiatives are suffering

The rush to the cloud was slowing across the board, according to a new McAfee survey of 1400 IT decision-makers that found the proportion of businesses with cloud-first strategies had dropped from 82 percent a year ago, to 65 percent now.

One in four companies has experienced data theft from the public cloud, while one in five said they have experienced an advanced attack against their public cloud infrastructure.

With cloud security estimated to rise from 27 percent of IT-security budgets to 37 percent within the next 12 months, McAfee Cloud Security Business Unit senior vice president Rajiv Gupta told CSO Australia, the figures suggest that customers were learning the hard way that cloud security is harder than many companies had anticipated when they began ambitious digital-transformation efforts.

Poor visibility was flagged as a significant issue – and vendors, Gupta said, are to blame. 

“We see a plethora of vendors claiming to be best of breed, but they have laid the effort of integrating all of these products into a cohesive whole at the feet of their customers.”

“But that is not their business; their business is producing sweaters, or cars, or managing financial instruments. We as an industry need to show that the different products we sell can exchange threat telemetry to function as a cohesive whole.”

Significantly, the problem seemed to be markedly worse in Australia, where 53 percent of respondents said problems with cloud security had forced them to slow down their cloud rollouts. This was well above the 30 percent figure in the UK, 37 percent in Canada, and 40 percent figure recorded globally – suggesting that the long-reported paucity of relevant security skills in Australia was taking its toll.

Just 10 percent of Australian companies said they do not have a cybersecurity skill shortage and are continuing with cloud adoption – well behind the 24 percent figure in the UK, 19 percent in the US and Japan, and 16 percent globally.
Read more... 

Source: CSO Australia


If you enjoyed this post, make sure you subscribe to my Email Updates!

Wednesday, April 18, 2018

This Online MBA Program Is Bringing Distance-Learning Into The Real World | MBA Distance - Learning - BusinessBecause

Photo: Amy Hughes
"Discovery learning techniques on Maastricht School of Management’s (MSM) Online MBA bring the advantages of campus-based courses to bear in the distance-learning space" reports Amy Hughes, Business Because Ltd.


Maastricht School of Management activates MBAs' learning through discovery learning techniques
Photo: BusinessBecause 
Why do people opt for full-time MBAs?

For some, it’s the immersive experience of postgraduate education that swings their decision. For others, it’s about networking—getting to know a cohort inside and outside the classroom on a full-time course. For others still, it’s the practical experience that many courses offer: the chance to apply their learning in real-time.

But imagine if you could get all of this from a part-time course—a course that you could complete, if you wanted to, entirely online.
This is what Maastricht School of Management (MSM) aims to deliver through its Online MBA, launched in 2017. The course eliminates the opportunity cost associated with traditional MBAs, as it doesn’t require students to leave work. CEO magazine ranks it in the ‘Gold Tier’ of online MBA programs (fourth in the world), a testament to the quality of the program.

The Online MBA at MSM is governed by the principle of ‘Discovery Learning’ which, according to course director Dr. Pascale Hardy, encourages the students to be active in their own education. 

“Unlike a traditional course where students obtain most new knowledge directly from course instructors via lectures and textbook-based discussions and assignments,” she explains, “the courses in the online program require that students learn through discovery—by researching, reading, undertaking online activities such as writing papers, and discussion board posts that demonstrate their knowledge, understanding, and their ability to apply [these things].”

Online MBA students have the advantage of learning while they work, and MSM’s Online MBA program actively encourages MBAs to apply the lessons they learn in their modules to their everyday working environment.

In line with MSM’s increasing commitment to innovation, this active engagement in the collection and collation of course information is supported by the latest developments in technology.

“The Maastricht School of Management Online MBA [uses] cutting-edge technology to support an innovative pedagogical framework in order to offer students the ultimate learning experience,” Pascale confirms. “Students participate in video conferencing sessions [hosted by Zoom Web Communications], during which they review and discuss the topics with their peers and instructor.” 
Read more... 

Source: BusinessBecause


If you enjoyed this post, make sure you subscribe to my Email Updates!