Viral News – from the top
https://viralnews1.top – Tue, 13 Nov 2018

Parents denied time with dying baby
https://viralnews1.top/parents-denied-time-with-dying-baby/ – Tue, 13 Nov 2018
Image copyright Getty Images
Image caption The parents were told they had to be supervised at all times when visiting their dying son

A dying baby's parents were given limited time to spend with him in his last few weeks of life due to failings by social workers.

Strict supervision orders were placed on the couple by York City Council amid a safeguarding investigation.

It meant the couple, who were eventually cleared of any wrongdoing, were only able to spend time with their son when other people were present.

The council apologised and has paid the family £2,000 for the distress caused.

When the baby, who had a range of health conditions, was taken to hospital with breathing difficulties, a doctor noted injuries to his ribs.

The family said these could have been caused by invasive, physical and medical interventions during a previous hospital stay.

'Horrifically stressful'

But because of the injuries, social workers began a safeguarding investigation and interim care orders were issued for their two other children to be looked after by grandparents.

When visiting their son in hospital, the parents had to be supervised at all times, either by other relatives or nursing staff, who were not always available to do so.

It meant that on one day they could not see their son at all and on others, they only had a few hours with him.

An investigation by the Local Government and Social Care Ombudsman, Michael King, found that social workers did not relax the instructions even when the boy's condition deteriorated, despite the council stating that the risk of any harm to the baby was low.

At no point did a social worker go to the hospital to see the situation for themselves, the report said.

Image copyright LGO
Image caption Ombudsman Michael King criticised the council for taking nearly a year to respond to the family's complaint about the situation

Mr King said that although the council could not be criticised for starting the action, the care plan did not consider the baby's emotional needs and more should have been done to review the situation.

He said: "This would have been a horrifically stressful time for the family, at a time when their world must have felt like it was falling apart."

At the final court hearing 11 weeks after their son had died, the council withdrew the care order in respect of their other children and said although the baby's injuries were unexplained, they could not be attributed to the parents.

The council said it "apologised unreservedly to the family" and fully accepted recommendations made by the ombudsman.

Here’s What Would Happen If We Switched Mars And Venus In The Solar System
https://viralnews1.top/heres-what-would-happen-if-we-switched-mars-and-venus-in-the-solar-system/ – Tue, 13 Nov 2018

At a meeting earlier this year, a group of experts took time out of their schedules to have an intriguing discussion – what if Mars and Venus swapped places?

The question was raised at the Comparative Climatology of Terrestrial Planets III meeting at the Lunar and Planetary Institute in Houston, Texas, in August, where researchers discussed the environments of rocky worlds in our Solar System and beyond.

But according to NASA, a thought experiment about switching our two neighboring planets was also discussed. Of course, it was just a bit of fun – as far as we know we haven’t invented a planet-moving machine yet – but there was some interesting science to come out of it.

Mars has about one-tenth the mass of Earth, whereas Venus's mass is close to Earth's. The former orbits within the Sun's current habitable zone, while the latter orbits just inside the zone's inner edge. Of course, neither looks habitable now – Mars has an average surface temperature of -60°C (-76°F), while temperatures reach 460°C (860°F) on Venus owing to its thick atmosphere. So what if we switched them?
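As a rough illustration of how orbital distance alone changes things, the standard blackbody equilibrium-temperature formula can be sketched in a few lines of Python. This example is mine, not the article's: the albedo and orbital distances are approximate published values, and the estimate deliberately ignores atmospheres and greenhouse warming.

```python
import math

def equilibrium_temp_k(distance_au, albedo):
    """Blackbody equilibrium temperature of a planet, ignoring any
    greenhouse effect: T = T_sun * sqrt(R_sun / 2d) * (1 - A)^0.25."""
    T_SUN = 5772.0         # solar surface temperature, K
    R_SUN_AU = 0.00465047  # solar radius in astronomical units
    return T_SUN * math.sqrt(R_SUN_AU / (2 * distance_au)) * (1 - albedo) ** 0.25

# Approximate Bond albedo of Mars and semi-major axes of the two orbits
mars_at_home  = equilibrium_temp_k(1.524, 0.25)  # Mars's own orbit
mars_at_venus = equilibrium_temp_k(0.723, 0.25)  # moved to Venus's orbit
```

Moving Mars from 1.524 AU to 0.723 AU raises the airless-body estimate from roughly 210 K (-63°C) to roughly 305 K (32°C) – comfortably above water's freezing point from distance alone.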

“Modern Mars at Venus’s orbit would be fairly toasty by Earth standards,” said Chris Colose, a climate scientist based at the NASA Goddard Institute for Space Studies, as quoted by Elizabeth Tasker for NASA.

Mars today has a thin atmosphere, blasted away by the Sun when it lost its magnetic field for unknown reasons. Were we to move Mars to Venus’ orbit today, it's unlikely the temperatures would be high enough to release enough carbon dioxide trapped on the planet to thicken the atmosphere much.

Even if it could be thickened, without a magnetic field Mars couldn’t cling onto its atmosphere, meaning the chances of liquid water existing would be slim. “I suspect it would just be a warmer rock,” said Colose.

As for Venus, interestingly its temperature doesn't depend much on the Sun; move it to the orbit of Mars and its temperature would remain largely the same, as its atmosphere is in equilibrium. Over a long time, however, the planet might start to cool. Otherwise, the only option is to move it beyond the orbit of Mars.

“It seems that simply switching the orbits of the current Venus and Mars would not produce a second habitable world,” wrote Tasker. But had Venus originally formed in the position of Mars, a planet of that size may have fared better at holding onto its atmosphere.

California bar shooting suspect's despicable actions condemned by Marine Corps top officer
https://viralnews1.top/california-bar-shooting-suspects-despicable-actions-condemned-by-marine-corps-top-officer/ – Tue, 13 Nov 2018

Three ways to avoid bias in machine learning
https://viralnews1.top/three-ways-to-avoid-bias-in-machine-learning/ – Mon, 12 Nov 2018

At this moment in history it’s impossible not to see the problems that arise from human bias. Now magnify that by compute and you start to get a sense for just how dangerous human bias via machine learning can be. The damage can be twofold:

  • Influence. If the AI said so it must be true… people trust outputs of AI, so if human bias is missed in the training it could compound the problem by infecting more people;
  • Automation. Sometimes AI models are plugged into a programmatic function, which could lead to the automation of bias.

But there is potentially a silver machine-learned lining. Because AI can help expose truth inside messy data sets, it’s possible for algorithms to help us better understand bias we haven’t already isolated, and spot ethically questionable ripples in human data so we can check ourselves. Exposing human data to algorithms exposes bias, and if we are considering the outputs rationally, we can use machine learning’s aptitude for spotting anomalies.

But the machines can’t do it on their own. Even unsupervised learning is semi-supervised, as it requires data scientists to choose the training data that goes into the models. If a human is the chooser, bias can be present. How the heck do we tackle such a bias beast? We will attempt to pick it apart.

The landscape of ethical concerns with AI

Bad examples abound. Consider the finding from Carnegie Mellon that showed that women were shown significantly fewer online ads for high-paying jobs than men were. Or recall the sad case of Tay, Microsoft’s teen slang Twitter bot that had to be taken down after producing racist posts.

In the near future, such mistakes could result in hefty fines or compliance investigations, a conversation that's already occurring in the U.K. parliament. All mathematicians and machine learning engineers should consider bias to some degree, but that degree varies from instance to instance. A small company with limited resources will often be forgiven for accidental bias as long as the algorithmic vulnerability is fixed quickly; a Fortune 500 company, which presumably has the resources to ensure an unbiased algorithm, will be held to a tighter standard.

Of course, an algorithm that recommends novelty T-shirts does not need nearly as much oversight as an algorithm that decides what dose of radiation to give to a cancer patient. It’s these high-stakes decisions that will become the most pronounced when legal liability enters the discussion.

It’s important for builders and business leaders to establish a process for monitoring the ethical behavior of their AI systems.

Three keys to managing bias when building AI

There are signs of existing self-correction in the AI industry: Researchers are looking at ways to reduce bias and strengthen ethics in rule-based artificial systems by taking human biases into account, for example.

These are good practices to follow; it’s important to be thinking proactively about ethics regardless of the regulatory environment. Let’s take a look at several points to keep in mind as you work on your AI.

1. Choose the right learning model for the problem.

There’s a reason all AI models are unique: Each problem requires a different solution and provides varying data resources. There’s no single model to follow that will avoid bias, but there are parameters that can inform your team as it’s building.

For example, supervised and unsupervised learning models have their respective pros and cons. Unsupervised models that cluster or perform dimensionality reduction can learn bias from their data set. If belonging to group A highly correlates to behavior B, the model can mix up the two. And while supervised models allow for more control over bias in data selection, that control can introduce human bias into the process.
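A toy, stdlib-only sketch of that conflation (entirely hypothetical data of my own): cluster on a single feature that happens to correlate with membership in group A, and the learned clusters line up with the group even though the group label is never shown to the algorithm.

```python
import random
random.seed(2)

# Hypothetical data: a behavioral feature that correlates strongly with
# membership in group A (label 1). The label is never used in training.
points = [(0, random.gauss(0.0, 1.0)) for _ in range(100)] + \
         [(1, random.gauss(4.0, 1.0)) for _ in range(100)]

# Naive 1-D two-means clustering on the feature alone.
c0, c1 = 0.0, 1.0
for _ in range(20):
    near0 = [x for _, x in points if abs(x - c0) <= abs(x - c1)]
    near1 = [x for _, x in points if abs(x - c0) > abs(x - c1)]
    c0, c1 = sum(near0) / len(near0), sum(near1) / len(near1)

# Fraction of points whose learned cluster matches the hidden group label
agreement = sum((abs(x - c0) > abs(x - c1)) == (g == 1)
                for g, x in points) / len(points)
```

With a strong feature-group correlation, `agreement` lands well above 90%: the "behavior" clusters are, in effect, group membership under another name.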

It’s better to find and fix vulnerabilities now than to have regulators find them later on.

Non-bias through ignorance — excluding sensitive information from the model — may seem like a workable solution, but it still has vulnerabilities. In college admissions, sorting applicants by ACT scores is standard, but taking their ZIP code into account might seem discriminatory. But because test scores might be affected by the preparatory resources in a given area, including the ZIP code in the model could actually decrease bias.
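The ZIP-code point can be made concrete with a small simulation of my own (all numbers hypothetical): ability is identically distributed in both areas, but applicants from the well-resourced ZIP get a test-prep boost. Ranking on raw scores alone rewards the boost; using the ZIP code to adjust for it recovers a fairer ranking.

```python
import random
random.seed(0)

# Hypothetical simulation: "true" ability is identically distributed in
# both areas, but prep in the well-resourced ZIP adds ~2 ACT points.
applicants = []
for zip_code, prep_boost in [("high_resource", 2.0), ("low_resource", 0.0)]:
    for _ in range(500):
        ability = random.gauss(24.0, 3.0)
        applicants.append((zip_code, ability + prep_boost))

def low_resource_share(score_fn, top_n=250):
    """Share of low-resource applicants among the top_n by score_fn."""
    ranked = sorted(applicants, key=score_fn, reverse=True)[:top_n]
    return sum(1 for z, _ in ranked if z == "low_resource") / top_n

# ZIP-blind: rank on raw score, so the prep boost masquerades as ability.
blind = low_resource_share(lambda a: a[1])
# ZIP-aware: subtract the area's average boost before ranking.
aware = low_resource_share(
    lambda a: a[1] - (2.0 if a[0] == "high_resource" else 0.0))
```

The "blind" ranking admits noticeably fewer low-resource applicants than their roughly 50% share of ability would warrant; the ZIP-aware ranking restores it. Ignoring the sensitive feature is not the same as being fair to it.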

You have to require your data scientists to identify the best model for a given situation. Sit down and talk them through the different strategies they can take when building a model. Troubleshoot ideas before committing to them. It’s better to find and fix vulnerabilities now — even if it means taking longer — than to have regulators find them later on.

2. Choose a representative training data set.

Your data scientists may do much of the leg work, but it’s up to everyone participating in an AI project to actively guard against bias in data selection. There’s a fine line you have to walk. Making sure the training data is diverse and includes different groups is essential, but segmentation in the model can be problematic unless the real data is similarly segmented.

It’s inadvisable — both computationally and in terms of public relations — to have different models for different groups. When there is insufficient data for one group, you could possibly use weighting to increase its importance in training, but this should be done with extreme caution. It can lead to unexpected new biases.

For example, if you have only 40 people from Cincinnati in a data set and you try to force the model to consider their trends, you might need to use a large weight multiplier. Your model would then have a higher risk of picking up on random noise as trends — you could end up with results like “people named Brian have criminal histories.” This is why you need to be careful with weights, especially large ones.
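The noise problem above is easy to demonstrate with a stdlib-only sketch (hypothetical numbers of my own): even when the true rate is identical everywhere, estimates from 40 samples swing far more wildly than estimates from 4,000, and a weight multiplier amplifies those swings rather than removing them.

```python
import random
random.seed(1)

# Hypothetical: the true rate of some behavior is 50% in every city.
TRUE_RATE = 0.5

def estimated_rate(n):
    """Estimate the rate from n random samples."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

def spread(n, trials=200):
    """Range of estimates seen across repeated samples of size n."""
    estimates = [estimated_rate(n) for _ in range(trials)]
    return max(estimates) - min(estimates)

# 40 samples (the Cincinnati case) are far noisier than 4,000; a large
# weight multiplier scales this noise up rather than removing it.
noisy, stable = spread(40), spread(4000)
```

Running this, the 40-sample estimates range over tens of percentage points while the 4,000-sample estimates stay within a few; any "trend" seen in the small group is mostly that spread.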

3. Monitor performance using real data.

No company is knowingly creating biased AI, of course — all these discriminatory models probably worked as expected in controlled environments. Unfortunately, regulators (and the public) don’t typically take best intentions into account when assigning liability for ethical violations. That’s why you should be simulating real-world applications as much as possible when building algorithms.

It’s unwise, for example, to use test groups on algorithms already in production. Instead, run your statistical methods against real data whenever possible. Ask the data team to check simple test questions like “Do tall people default on AI-approved loans more than short people?” If they do, determine why.
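A simple test question like the one above is a few lines of code. This sketch is mine (the loan records are hypothetical); it just groups an audit log and compares per-group default rates.

```python
from collections import defaultdict

# Hypothetical audit log of AI-approved loans: (group, defaulted).
loans = [
    ("tall", False), ("tall", True), ("tall", False), ("tall", False),
    ("short", False), ("short", False), ("short", True), ("short", False),
]

def default_rates(records):
    """Default rate per group: the 'simple test question' in code."""
    totals, defaults = defaultdict(int), defaultdict(int)
    for group, defaulted in records:
        totals[group] += 1
        defaults[group] += defaulted
    return {g: defaults[g] / totals[g] for g in totals}

rates = default_rates(loans)
```

If the rates diverge sharply between groups, that is the cue to determine why before regulators ask the same question.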

When you're examining data, you could be looking for two types of equality: equality of outcome and equality of opportunity. If you're working on AI for approving loans, result equality would mean that people from all cities get loans at the same rates; opportunity equality would mean that people who would have returned the loan if given the chance are given the same rates regardless of city. Without the latter, the former could still hide bias if one city has a culture that makes defaulting on loans common.
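On hypothetical data of my own, the two notions can disagree: both cities below are approved at identical overall rates (outcome equality holds), yet applicants who would have repaid are approved at very different rates (opportunity equality fails).

```python
# Hypothetical applicant records: (city, would_repay, approved).
# In city A defaulting is common; in city B everyone would repay.
apps = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, False),
    ("B", True, True), ("B", True, True), ("B", True, False), ("B", True, False),
]

def approval_rate(rows):
    return sum(1 for r in rows if r[2]) / len(rows)

def per_city(keep=lambda r: True):
    """Approval rate per city, optionally restricted by a predicate."""
    cities = sorted({r[0] for r in apps})
    return {c: approval_rate([r for r in apps if r[0] == c and keep(r)])
            for c in cities}

outcome = per_city()                         # approval rate, all applicants
opportunity = per_city(keep=lambda r: r[1])  # rate among would-repayers only
```

Here `outcome` reports 50% for both cities while `opportunity` reports 100% for A and only 50% for B: equal outcomes are masking unequal treatment of creditworthy applicants.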

Result equality is easier to prove, but it also means you’ll knowingly accept potentially skewed data. While it’s harder to prove opportunity equality, it is at least valid morally. It’s often practically impossible to ensure both types of equality, but oversight and real-world testing of your models should give you the best shot.

Eventually, these ethical AI principles will be enforced by legal penalties. If New York City’s early attempts at regulating algorithms are any indication, those laws will likely involve government access to the development process, as well as stringent monitoring of the real-world consequences of AI. The good news is that by using proper modeling principles, bias can be greatly reduced or eliminated, and those working on AI can help expose accepted biases, create a more ethical understanding of tricky problems and stay on the right side of the law — whatever it ends up being.

Plea to find prisoner released in error
https://viralnews1.top/plea-to-find-prisoner-released-in-error/ – Mon, 12 Nov 2018
Image copyright Humberside Police
Image caption Michael Kavanagh was on remand in the prison awaiting trial on weapons charges

A prisoner is on the run after being released from HMP Hull in error, Humberside Police has said.

Michael Kavanagh was on remand awaiting trial on charges, from June, of carrying an offensive weapon and of intent to cause grievous bodily harm.

He was released by mistake on Friday and was last seen wearing a dark Adidas hooded top, with grey jogging bottoms and blue Adidas trainers.

Anyone who spots Mr Kavanagh is urged not to approach him but to call police.

Supt Gary Hooks said: "Firstly I would like to appeal to Michael directly to hand yourself in to your nearest police station immediately."

"Anyone found supporting and harbouring him could be subject to prosecution for assisting an offender," he added.

HMP Hull is a Category B men's prison that originally opened in 1870 to hold both men and women. It has capacity for 1,044 prisoners.
