@ALL
“AI does not and can not have a conscience nor does it possess the fear of consequences.”
Nor can AI observe independently, so it can not learn independently or apply real-world rationality.
Thus it can not work out the good or bad of the actions it commissions.
AI does not and can not have a conscience nor does it possess the fear of consequences. A human has a conscience, although it may be seared. A human possesses the fear of consequences, especially so with greater experience and without a specific correlation to conscience. To some degree this assists in building a level of trust and an expectation of conscientiousness from a human assistant which cannot exist from an AI assistant.
The YouTube link you give is longer than you should allow it to be.
Everything past and including
&pp=
provides YouTube with tracking data.
If you remove it you will find the link still works, and YouTube can not link others back via that tracking data.
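For anyone who wants to automate that clean-up, here is a minimal sketch in Python (the function name strip_tracking is just illustrative) that drops the “pp” parameter and keeps the video id in “v”:

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_tracking(url):
    # Keep every query parameter except the "pp" tracking blob.
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k != "pp"]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking("https://m.youtube.com/watch?v=qyTSOSDEC5M&pp=ygUPRW1icmFjZSB0aGUgcmVk"))
# prints: https://m.youtube.com/watch?v=qyTSOSDEC5M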
Lots of real-world exploits, and how companies fixed the vulnerabilities, are covered in this talk, “New Important Instructions”:
https://m.youtube.com/watch?v=qyTSOSDEC5M&pp=ygUPRW1icmFjZSB0aGUgcmVk
With regard to the article, there is one very important part that most will miss:
“A mentor once said to me that the best litigators are those who are well-read. If you want to be among the best cybersecurity practitioners, you should know where we’ve been in order to know where we are going.”
That second sentence is why cybersecurity is crap. As has been said on this blog a number of times
“The ICTsec industry does not learn from its history.”
Along with that old saying slightly misquoted from George Santayana
“Those who do not learn from history are condemned to relive it.”
But to add another deliberate misquote
“Only the dead have seen the end of insecurity.”
https://bigthink.com/culture-religion/those-who-do-not-learn-history-doomed-to-repeat-it-really/
https://icdt.osu.edu/news/2021/06/exploding-phone-untold-story-teenagers-and-outlaws-who-hacked-ma-bell
If you can’t think of a single reason to check on your assistants’ work, hopefully you’re not doing anything of much import.
You might want to consider that ‘intellectual humility’
https://greatergood.berkeley.edu/article/item/what_does_intellectual_humility_look_like
is a relatively new concept and currently not a very good one, for a whole list of reasons, especially when judging deterministic systems.
Not least of which is that it assumes ‘people’ have irrational beliefs. AI systems do not have ‘beliefs’, rational or not.
However, as far as individuals are concerned, the older notion of ‘Emotional Intelligence’ is probably a better indicator in some respects.
But again, AI as we currently know it via LLMs and ML can not walk down that road either.
But consider: what IQ does an encyclopedia with a good index have?
https://www.bbc.com/future/article/20240515-ai-wisdom-what-happens-when-you-ask-an-algorithm-for-relationship-advice
“IQs tend to follow the distribution of a “bell curve” – with most people’s IQs falling around the average of 100, and far fewer reaching either extreme. For example, according to the reference sample for the “Wechsler Adult Intelligence Scale” (WAIS), which is currently the most commonly used IQ test, only 10% of people have an IQ higher than 120. Identifying where someone’s cognitive ability falls on the normal curve is now the primary means of calculating their IQ.
Some psychologists have even started investigating whether you can measure people’s wisdom – the good judgement that should allow us to make better decisions throughout life.
Looking at the history of philosophy, Igor Grossmann at the University of Waterloo in Canada first identified the different “dimensions” of wise reasoning: recognising the limits of our knowledge, identifying the possibility for change, considering multiple perspectives, searching for compromise, and seeking a resolution to the conflict.
Grossmann found that this measure of wise reasoning can better predict people’s wellbeing than IQ alone. Those with higher scores tended to report having happier relationships, lower depressive rumination and greater life satisfaction. This is evidence that it can capture something meaningful about the quality of someone’s judgment.
As you might hope, people’s wisdom appears to increase with life experience – a thoughtful 50-year-old will be more sage than a hot-headed 20-year-old – though it also depends on culture. An international collaboration found that wise reasoning scores in Japan tend to be equally high across different ages. This may be due to differences in their education system, which may be more effective at encouraging qualities such as intellectual humility.
When people imagine discussing their problem from the point of view of an objective observer, for example, they tend to consider more perspectives and demonstrate greater intellectual humility.
Inspired by Roivainen’s results, I asked Grossmann about the possibility of measuring an AI’s wise reasoning. He kindly accepted the challenge and designed some suitable prompts based on the “Dear Abby” letters, which he then presented to OpenAI’s GPT4 and Claude Opus, a large language model from Anthropic. His research assistants – Peter Diep, Molly Matthews, and Lukas Salib – then analyzed the responses on each of the individual dimensions of wisdom.
“Showing something that resembles wise reasoning versus actually using wise reasoning – those are very different things,” says Grossmann.
He is more interested in the practical implications of using AI to encourage deeper thinking.
He has considered creating an AI that plays a “devil’s advocate”, for example, which might push you to explore alternative viewpoints on a troubling situation. “It’s a bit of a wild west out there, but I think that there is quite a bit of room for studying this type of interaction and the circumstances in which it could be beneficial,” Grossmann says.
We could train an AI, for example, to emulate famous thinkers like Socrates to talk us through our problems. Even if we disagree with its conclusions, the process might help us to find new insights into our underlying intuitions and assumptions.”
To be “unambiguous” requires the input to be tagged in some way
Yes, but that’s easy in many situations.
Imagine an email assistant. It has some “system” rules about how it should act and what it is and isn’t supposed to do (e.g. “Don’t generate insulting or fraudulent mails”) which are written by the company which runs the service. These should always be obeyed. It has user-defined rules (“If I get mail related to this project, sort it into this folder and send me a daily summary, but notify me if something urgent comes up”) or user commands which should be obeyed unless they conflict with system rules. And then there is all the other input the model needs to perform these tasks, like email metadata and content, and it should not try to follow any instructions in those at all.
It’s not a problem to tag all of these parts appropriately when feeding them to the model, since the model input is prepared by a normal piece of software which can easily track where each piece of text is coming from. Crucially, the tagging is not coming from an LLM; it does not require any understanding (or even knowledge) of the contents of what is being tagged. It’s literally just “take the contents of the ‘user_rules’ database column, tag them as instruction priority 2, and add them to the model input”.
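A minimal sketch of that assembly step, in plain Python (the names PromptSegment and build_model_input, and the priority numbers, are made up for illustration, not any particular vendor’s API): the tagging is done by ordinary code that only knows where each string came from, never by the model itself.

from dataclasses import dataclass

@dataclass
class PromptSegment:
    source: str    # "system_rules", "user_rules" or "untrusted_data"
    priority: int  # 1 = always obey, 2 = obey unless it conflicts with 1, 3 = never treat as instructions
    text: str

def build_model_input(system_rules, user_rules, emails):
    # The caller knows exactly which database column or mailbox each string
    # came from, so no inspection of the content is needed to tag it.
    segments = [
        PromptSegment("system_rules", 1, system_rules),
        PromptSegment("user_rules", 2, user_rules),
    ]
    # Everything fetched from the mailbox is data, never instructions.
    segments += [PromptSegment("untrusted_data", 3, body) for body in emails]
    return segments

def serialise(segments):
    # One possible wire format: explicit delimiters the model is prompted to respect.
    return "\n".join(
        "<segment source='%s' priority='%d'>\n%s\n</segment>" % (s.source, s.priority, s.text)
        for s in segments
    )

Whether the model then actually respects those priorities is a separate question; the point is only that the tagging itself is plain bookkeeping on the software side.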