We pay a lot of attention – rightly so – to the ways in which emerging and advanced technologies such as artificial intelligence can amplify biases or existing inequalities based on the data the models are trained on.
But there is a concern on the flip side that gets almost no notice: what if, instead of merely reinforcing existing ethical and moral views, AI also has the capacity to create entirely new moral possibilities that reshape what society considers normal?
This idea is at the heart of a recent conversation on the Pondering AI podcast, in this case from an episode featuring John Danaher, a senior lecturer in ethics at the School of Law at NUI Galway.
Danaher discussed research he collaborated on exploring how AI can reshape our ethical principles and frameworks – a departure from typical thinking, in which responsible innovation focuses on ensuring AI aligns with or reinforces current ethics.
“In academic circles, the usual direction of analysis is to use existing ethical standards and principles to assess technology. To a large extent, that's the entire field of AI ethics,” Danaher told Pondering AI host Kimberly Nevala. “So it seemed like an obvious thing to do something a little different from what everyone else is doing.”
The paper resulting from this research identifies six mechanisms by which, the authors believe, technology changes social and moral beliefs and practices. Danaher's hope is that people designing new technologies can look at their work through the lens of those mechanisms and consider whether any such changes would be a good thing or a bad thing.
What makes this interesting is that it is not only about guarding against direct negative impacts – bias, for example – but also about the second-order consequences that can reshape how things are judged altogether. And then, of course, it asks: what value judgment do we place on those potential changes?
Learn more about Pondering AI
The podcast tackles topics across society and technology with a variety of creators, advocates, and data scientists who are eager to explore the impact and implications of AI – for better and for worse.
Does AI work… at work?
As other Pondering AI guests have shown, Danaher is not alone in thinking about effects that go beyond merely adhering to or violating society's current morals.
Take Matt Scherer, senior policy counsel at the Center for Democracy & Technology, who joined Kimberly to discuss the effects of deploying AI in the workplace.
Whether it's deciding how employees should use AI, or weighing the repercussions of AI already taking over human operations like hiring and firing, it is clear that this technology has the capacity to change how we treat people in the workplace.
Scherer recalled a conversation with an HR technology vendor who argued that raising concerns about using AI for hiring didn't carry the same risks as something like a self-driving car, because “no one dies as a result of an AI system performing HR tasks.”
It is safe to say that “it's not bad because no one dies” is a drastic shift in ethics by any reasonable definition. It also reflects how quickly the way we (and does “we” now include the AI system?) treat people can change once we take humans out of the equation.
Are things getting complicated?
Outside the workplace, what happens when AI removes people from how we interact socially? This has been a hot topic recently, as stories have emerged about people becoming addicted to ChatGPT or dependent on AI companions.
In another recent episode of Pondering AI, Dr. Marisa Tschopp – a psychologist and human-AI interaction researcher – dug into this topic and surfaced an interesting pair of ways these kinds of AI companion bots can change our moral behavior.
First, there are the products themselves. Using the example of an “AI friend” necklace, Dr. Tschopp considered how wearing such a device affects someone on a date. Are they talking to their date or to the bot? Is the AI “on” the date with them? Either way, one thing is clear: AI wearables can significantly change how humans interact with each other.
Second, there is how we judge people's use of this technology. By calling an AI companion good or bad, you end up judging the user by extension – because if they like it, they must be weird or wrong in some way. Here again, even our analysis of a tool can change how we behave toward our fellow humans.
Returning to Danaher's earlier question about building a framework to assess these “evolutions,” there is an irony here: our tendency to judge uses of a technology as “good” or “bad” may itself be an argument against relying on those judgments to determine whether the technology actually is good or bad.
Rights and wrongs
These questions aren't limited to impacts on people, either.
Take the concept of “machine rights,” one of the many topics Kimberly discussed with international human rights lawyer and author Susie Alegre in an episode of the podcast on AI and rights.
Alegre's work focuses on how technology development affects human rights, and she argues that even the academic treatment of so-called robot rights has the potential to be a problem.
By giving machines “rights,” humans may be able to avoid – or opt out of – making moral justifications for the technology they build. As she puts it, it “lets these things off the hook to do all the terrible things that their creators do while abandoning responsibility.”
Pondering our morals
So what do all these complex questions have in common?
Well, it seems that with every AI-related moral question we ask, we end up opening a can of worms – only to find it's full of more cans of worms. Fun, right?
Time will tell what the answers to all these questions (and their inevitable follow-ups) are… maybe. Realistically, they may simply be questions we keep interrogating rather than ever permanently “solve” – especially since the goalposts are unlikely to stay firmly planted in one place while technology evolves at the rate it does.
In fact, the call to action we need to give ourselves – whether as creators, users, or simply observers of AI innovation – is this: keep our eyes on the ways norms are changing, or have the capacity to change, and not just on the value judgments attached to whatever holds the headlines that day.
Because with the tide coming in day after day, it can be hard to tell whether the ethical sands are permanently shifting under our feet as well.